This write-up is an in-depth look at workloads. What are they? What does a workload really mean in the cloud computing universe? How are workloads classified? How many types are there?
This article answers all of these questions in detail, walking you through the different types of workloads with the help of examples.
So, without any further ado, let's get on with it.
1. What is a Workload in Cloud?
Simply put, an application or a service deployed on the cloud is a workload. The service could be a massive one comprising hundreds of microservices working in conjunction with each other, or a modest standalone service.
A workload is anything running on the cloud that consumes resources such as computing power. There are different types of workloads; I'll come to that shortly.
The term workload implies abstraction & portability. When a service is called a workload, it means it can be moved between different cloud platforms, or from on-prem to the cloud & vice versa, with minimal dependencies or hassle.
Container technology is a great enabler here, letting us move workloads around without breaking stuff.
The below diagram shows a workload deployed on the cloud run by multiple machines.
Here is a snapshot of the workload of an online browser-based game I built, deployed on Google Cloud.
A workload can have multiple versions when deployed on the cloud. Every time a workload is deployed, a new version of it is created, which you can see in the image.
Having different versions of the same workload helps with A/B testing. We can switch between different versions, shutting down the instances of one & spinning up the instances of another based on our needs.
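At its core, switching traffic between versions is a weighted traffic split. Here is a minimal Python sketch of that idea; the version names and the `route_request` helper are illustrative assumptions, not a real cloud API:

```python
import random

def route_request(versions, weights, rng=random):
    """Pick a workload version for an incoming request,
    weighted by the traffic share assigned to each version
    (an A/B-testing-style split)."""
    return rng.choices(versions, weights=weights, k=1)[0]

# Send 90% of traffic to the stable version, 10% to the new one.
versions = ["v1-stable", "v2-canary"]
weights = [0.9, 0.1]

sample = [route_request(versions, weights) for _ in range(10)]
```

Ramping the weights gradually from `[0.9, 0.1]` toward `[0.0, 1.0]` is, in effect, how instances of one version are drained while the other takes over.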
Now let’s move on to different types of workloads.
2. What are the Different Types of Workloads in the Cloud?
Workloads can be classified into several different categories based on their architecture, resource requirements, resource consumption patterns, user traffic patterns & so on.
I’ll begin with the resource requirements.
Classified by Resource Requirements
General Computing
Workloads requiring general computing power are typically web applications, web servers, distributed data stores, containerized microservices etc. They do not have any specific computational requirements & run easily on the default machine types offered by the cloud.
CPU Intensive Computation
These workloads have particularly high computational requirements. They are typically deep learning applications, highly scalable multiplayer games built to handle a large number of concurrent users, big data analytics, 3D modelling, video encoding etc.
Memory Intensive Computation
Memory intensive workloads need a significant amount of RAM to execute their tasks. These are typically distributed databases, caches, real-time streaming data analytics etc.
GPU Accelerated Computation
Workloads such as seismic analysis, computational fluid dynamics, autonomous vehicles, speech recognition require the power of GPUs along with the CPUs to run the accelerated tasks.
Storage Optimized Database Workloads
These workloads are primarily highly scalable NoSQL databases, in-memory databases, data warehouses etc.
Well, these were the workloads classified by resource requirements. Now let's have a look at the workloads classified by user traffic patterns.
Classified by User Traffic Patterns
Static Workloads
These are workloads where resource utilization is well known in advance; there are no surprises, no traffic spikes & stuff.
These kinds of workloads can be a utility deployed on the cloud used by a limited number of users in a private network, for instance, an organization-wide tax-calculation utility or a knowledge base on a certain topic.
Periodic Workloads
These workloads see utilization only at specific times, maybe a few days in a month, like an electricity bill payment app.
Serverless compute suits these kinds of applications best: there is no need to pay for idle instances; you pay only for the compute actually utilized.
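To see why pay-per-use wins for a periodic workload, here is a back-of-the-envelope comparison in Python. All prices are made-up assumptions for illustration, not actual cloud rates:

```python
# Illustrative prices only -- not real cloud pricing.
VM_COST_PER_HOUR = 0.05               # assumed always-on VM rate
SERVERLESS_COST_PER_SECOND = 0.00002  # assumed per-second compute rate

def vm_monthly_cost(hours_in_month=730):
    # An always-on VM bills for every hour, busy or idle.
    return VM_COST_PER_HOUR * hours_in_month

def serverless_monthly_cost(invocations, avg_seconds_per_invocation):
    # Serverless bills only for the compute actually consumed.
    return SERVERLESS_COST_PER_SECOND * invocations * avg_seconds_per_invocation

# A bill-payment app busy only a few days a month:
# say 200,000 invocations averaging half a second each.
print(round(vm_monthly_cost(), 2))                      # 36.5
print(round(serverless_monthly_cost(200_000, 0.5), 2))  # 2.0
```

The VM bills for the whole month of idle hours, while the serverless bill tracks actual usage, which is exactly the property that suits periodic traffic.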
Unpredictable Workloads
These workloads include popular apps like social networks, online multiplayer games, video & game streaming apps etc.
Traffic can spike exponentially, by any amount. Pokemon Go surpassed all traffic expectations, growing up to 50x the anticipated traffic.
Traffic spikes on social networks when any major global event occurs. In these kinds of scenarios, the auto-scaling ability of the cloud saves the day by dynamically adding additional instances to the fleet as & when required.
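The scaling decision behind this can be sketched as a simple target-utilization rule. The 60% target and the formula below are illustrative assumptions; real cloud autoscalers layer cooldowns, min/max bounds & smoothing on top of this:

```python
def desired_instances(current_instances, avg_cpu_pct, target_pct=60):
    """How many instances should the fleet run so that average CPU
    utilization moves back toward the target percentage?"""
    if current_instances == 0:
        return 1  # always keep at least one instance up
    # Ceiling division in pure integers: ceil(current * cpu / target).
    desired = -(-current_instances * avg_cpu_pct // target_pct)
    return max(1, desired)

# A global event spikes load: 10 instances running hot at 90% CPU.
print(desired_instances(10, 90))   # 15 -> scale out
# Traffic dies down: 15 instances idling at 20% CPU.
print(desired_instances(15, 20))   # 5  -> scale in
```

The same rule that adds instances during a spike also shrinks the fleet when traffic subsides, so you stop paying for capacity you no longer need.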
Then there are workloads that are hybrids of the types stated above. Well, there is no limit to the architectural complexity of scalable applications. We will talk about this in another article on designing scalable applications on the cloud.
For now, guys, this is pretty much it. If you liked the article, do share it with your folks. You can follow 8bitmen on social media & subscribe to the browser notifications to stay updated on the new content on the blog.
I'll see you in the next write-up.
More On the Blog
- Distributed Systems, Scalability & System Design #1 – Heroku Client Rate Throttling
- Zero to Software/Application Architect – Learning Track
- Java Full Stack Developer – The Complete Roadmap – Part 2 – Let’s Talk
- Java Full Stack Developer – The Complete Roadmap – Part 1 – Let’s Talk
- Best Handpicked Resources To Learn Software Architecture, Distributed Systems & System Design