Last Updated on October 8, 2023 by KnownSense
Deployment is the process of making microservices-based applications available and operational in a production environment. As part of your microservices strategy, one of the questions you will face is where you actually deploy and run your microservices. In this article we will look at how virtualization helps with microservices deployment.
Before we start looking at virtualization, let’s answer the question: why not use physical machines, i.e. physical servers, to run our microservices? The problem is that having loads of expensive physical servers that take up space just to run these tiny applications, these microservices, is a waste of resources such as CPU, memory, and storage. We could choose to run our entire microservices architecture on a couple of physical servers to reduce wastage and cost, but this violates our autonomous design principle: our individual microservices are no longer independently changeable and deployable. By sharing an environment, there might be conflicts between dependencies, and one microservice going wrong might affect other microservices running on the same physical machine. Not to mention that if one of our servers actually dies, it will take out half of our microservices architecture.
Virtualization
Virtualization is a technology that allows multiple virtual instances of computing resources, such as servers, storage, or network devices, to run on a single physical hardware infrastructure. It essentially creates a virtual layer between the hardware and the software, enabling greater flexibility, resource utilization, and efficiency. Here are some ways of adopting virtualization:
Virtual Machines

We can use virtualization in the form of virtual machines, where each virtual machine is basically a virtual server mimicking a physical server in software. When you log onto one of these virtual machines, it looks and behaves like a physical server once you are on the operating system. On top of this, you can get virtual machine management software that can manage hundreds of virtual machines.

Because we are now sharing the physical resources between these virtual machines far more efficiently, we can afford to run each microservice instance in its own virtual server. Once we have perfected our virtual machines, we can take snapshots and templates, which help both with scaling out when we need to introduce new microservice instances to our architecture and with general deployment. If something goes wrong, we can revert back to a previous snapshot. On top of this, we now have a lot of support and technologies around the concept of infrastructure as code, where we can write scripts containing code that deploys new instances of our microservices using virtual machine templates, and perform all sorts of other maintenance tasks on top of these virtual machines.

Even when we are using virtual machines, we still have to make sure we have physical backup hosts to span our virtual architecture over. The good news is that virtual machines are supported both on‑premise and within the cloud. The biggest advantage of virtual machines when deploying your microservices is that you still have your familiar operating system: the virtual machine is still running the operating system that you are used to.
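To make the infrastructure-as-code idea concrete, here is a minimal Terraform-style sketch that launches two virtual machine instances of one microservice from a pre-built machine image (the "template" described above). The image ID, instance type, and service name are all hypothetical, not taken from any real setup:

```hcl
# Sketch only: provision two VM instances of one microservice
# from a pre-baked machine image (template/snapshot).
resource "aws_instance" "orders_service" {
  count         = 2
  ami           = "ami-0abc123example"   # assumed image with the microservice installed
  instance_type = "t3.small"

  tags = {
    Name = "orders-service-${count.index}"
  }
}
```

Running a tool like `terraform apply` against such a definition is what replaces manually cloning and starting virtual machines by hand.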
Containers

Containers are another type of virtualization, and one that is actually even more efficient than virtual machines. Unlike virtual machines, which run an entire operating system inside each instance, containers only ever contain the dependencies needed to run your microservice, so they have a much smaller footprint in terms of resource usage. Because of this smaller footprint and the efficient use of the underlying hardware’s CPU, memory, and storage, we can run hundreds more containers, each one running an instance of our microservices. Thanks to this efficiency, containers also start up much faster than virtual machines.

The container world is also very rich in terms of support and compatibility. We can get pre‑existing base images that are compatible with our microservices technology stack, and we can also create our own pre‑configured images and store them centrally within an image repository. Over time, this makes the deployment and scaling out of our microservices architecture a lot easier. The other amazing thing I’ve seen with container technologies is that I can use one configuration file to run my entire microservices architecture on my development machine in complete isolation. So once you are over the learning curve of the container technologies, running your microservice instances in containers is probably the best way of deploying and running your microservices, both in terms of deployment and in terms of development: I can spin up an entire microservices architecture for development and testing using a single configuration file. The other good news is that, just as virtual machines have special management software to manage their instances, containers have management software in the form of orchestration engines, and these orchestration engines provide a lot of functionality.
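The "one configuration file" idea can be sketched with a Docker Compose-style file. The service names and image locations below are assumptions for illustration only:

```yaml
# Sketch only: one file spins up a small microservices
# architecture locally, in isolation, for development.
services:
  orders:
    image: registry.example.com/orders-service:1.0   # assumed image
    ports:
      - "8081:8080"
  payments:
    image: registry.example.com/payments-service:1.0 # assumed image
  orders-db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example                     # development only
```

A single `docker compose up` against a file like this starts every service and its backing database together on a development machine.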
Not only do they ensure that the right number of containers, i.e. the right number of instances of each type of microservice, are running, but they also provide load balancing and service registry functionality. This means all the stresses of scaling out our microservices architecture are handled by the orchestration engine, both in terms of spinning up the right containers containing the right software and in terms of routing and load balancing the traffic to those containers. Container technologies also support all the technologies around infrastructure as code: you can have scripts and code which spin up these images and deploy your microservices. On top of this, all container technologies and container orchestration engines are supported both on‑premise and within the cloud, and they have almost become the de facto standard for running microservices architectures. So I highly recommend that you investigate container technologies as part of your microservices strategy, because they offer a next level of resiliency and automation.
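As a sketch of what an orchestration engine is told to do, here is a minimal Kubernetes-style definition that keeps three instances of one microservice running and load-balances traffic across them. The names and image are hypothetical:

```yaml
# Sketch only: desired state for an orchestration engine.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                      # the engine keeps exactly 3 instances running
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0   # assumed image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service                      # load-balances traffic to the replicas
metadata:
  name: orders-service
spec:
  selector:
    app: orders-service
  ports:
    - port: 80
      targetPort: 8080
```

If a container crashes or a node dies, the engine notices the actual state no longer matches this desired state and starts replacements automatically.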
Cloud Deployment
Another option when it comes to deploying and running your microservices architecture is to use a cloud platform, such as AWS, Azure, or Google Cloud. But before you consider these options, you’re probably thinking: why not go with the on‑premise option?

Drawbacks of on‑premise
One of the reasons for not going with the on‑premise option is that our microservices architecture has a lot of moving parts that need to be hosted somewhere, within a server room or a data center. On top of this, because we are aiming for resiliency, we also need backup locations running active copies of our software architecture. Then, in addition to all this hardware, you need an infrastructure team to manage and maintain everything. As you can imagine, all this upfront setup results in upfront cost, not to mention the time required to set up all this hardware and all these locations, as well as to recruit the right number of people for your infrastructure team.
Advantages of Cloud Platform
Again, time results in cost. Obviously, if you already have dedicated locations and an infrastructure team, an on‑premise microservices architecture might be a viable option. However, if you use the cloud as a platform to host your microservices, you’ll find that the cloud platform will manage most of the hardware running your microservices in the background, offering many of its capabilities as managed, even serverless, services.

One of the models provided is Infrastructure as a Service (IaaS), where you basically deploy to virtual machines or containers, and you are only responsible for managing the operating system, the middleware, the runtime, and your applications and data. Everything else in the background is provided by the cloud platform, so you probably only need to worry about updating the operating system and the runtime used to run your microservices. The cloud platform manages all the hardware, and some off‑the‑shelf services, for you. For these off‑the‑shelf services, for example a security component, an API gateway, or a service registry, all you have to worry about is configuration.

If you want to free up your resources even more, you can hand over more management and maintenance responsibility to your cloud platform, including the operating system, the middleware, and the runtime, by going for the Platform as a Service (PaaS) option. Here you basically deploy your microservices and data to cloud‑specific black boxes, and you have no control over the runtime or the operating system running them. The advantage of this approach is that most cloud providers offer powerful auto‑scaling options, so when more performance is needed, your microservices architecture will automatically be scaled out for you. The main disadvantage is that you have to stay a step ahead of any automatic operating system, middleware, or runtime updates that your cloud provider might choose to roll out.
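To sketch how little you declare under PaaS, here is a minimal App Engine-style descriptor; the runtime name and scaling limits are assumptions. Everything not listed here (hardware, OS, middleware) is the platform's problem:

```yaml
# Sketch only: a PaaS deployment descriptor. You declare the runtime
# and a scaling policy; the platform manages everything underneath.
runtime: python312
automatic_scaling:
  min_instances: 1
  max_instances: 10
```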
Even so, most updates are normally backwards compatible and unlikely to take down your microservices architecture. If you are willing to be tied even more closely to your cloud platform, you could go for the Function as a Service (FaaS) option. In this scenario, your microservices are basically conceptual microservices made up of snippets of code, uploaded as independent functions that run within the cloud platform. I’m not a huge fan of this option because, firstly, it breaks your microservices application into independent functions, and you have to think about those functions at a conceptual level as an independent microservice. Secondly, most of these functions are platform specific in terms of the technologies and the languages they use, which means you basically get locked in and tied to that specific cloud provider. The cloud provider can also control and limit the concurrency of these functions, i.e. how many functions you have running at the same time and for how long. Losing this kind of control over how your applications actually run might become problematic when you need to performance‑tweak your microservices architecture. Another option is Software as a Service (SaaS), a distribution model in which an application is delivered through the cloud: the provider fully manages the application and makes it available over the Internet, in most cases through a classic web browser.
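To make the FaaS idea concrete, here is a minimal AWS Lambda-style handler sketch. The function name, the event shape, and the stubbed response are assumptions for illustration, not a real service:

```python
# Sketch only: under FaaS, a "microservice" shrinks to an independent
# function that the platform invokes per request.
import json


def handler(event, context):
    """Return a stubbed order record for the id passed in the event."""
    order_id = event.get("orderId", "unknown")
    # A real function would query a datastore here; we stub the result.
    body = {"orderId": order_id, "status": "CONFIRMED"}
    return {"statusCode": 200, "body": json.dumps(body)}
```

Calling `handler({"orderId": "42"}, None)` locally returns the same structure the platform would serialize as an HTTP response; note that concurrency and execution time limits stay under the provider's control.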
Hybrid Approaches
Remember, for your microservices architecture, you can always mix these models. You could have some parts of your architecture on‑premise and other parts in the cloud using IaaS, PaaS, FaaS, or SaaS. The end model doesn’t have to be a single option; you could have a hybrid architecture.
Conclusion
In conclusion, microservices deployment is a critical aspect of modern software development, and virtualization technologies, such as virtual machines and containers, offer efficient and flexible solutions to overcome the limitations of physical servers. Cloud platforms further streamline deployment, reducing infrastructure management complexities and facilitating scalability. A hybrid approach, incorporating diverse deployment models, allows organizations to tailor their microservices architecture to specific needs, ensuring adaptability and resource optimization.