Is containerization a barrier to SaaS adoption?
Containerization is a cloud adoption accelerator. But wouldn't it also be a barrier to adoption of the SaaS model, and ultimately delay all the benefits that can be expected from the cloud?
What is containerization?
Containerization is the encapsulation of software so that it can run smoothly on multiple infrastructure platforms. Among its many other advantages, containerization also makes it possible to achieve a more robust overall infrastructure.
Containerization differs from virtualization in how code moves from one environment to another. Virtualization became mainstream because it allows developers to work on their code without worrying about the environment. However, moving code from development to production often introduces bugs and other complications: the production virtual machine (VM) or operating system (OS) often includes other files that interfere with the new software. This can cause errors during deployment, which is painful in itself and can disturb other processes.
Containerization avoids these issues by letting developers work in a partitioned environment that matches production. This makes it possible to code and test with the relevant libraries, configuration files and dependencies.
The container does not depend on the operating system. As a result, it can be moved from one platform to another, or to the cloud, without running into deployment problems. Containers are small pieces of software with a smaller footprint and faster start-up time than standard virtual machines. They share the host machine's kernel, which makes servers more efficient.
Containerization of applications
An application container is an executable software package that combines application code with the required configuration files, environment variables, libraries, and dependencies. It does not include a copy of the operating system; instead, a container runtime runs on the host operating system and allows multiple containers to share that single OS.
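As a sketch, a minimal Dockerfile for a hypothetical Python application (`app.py` and `requirements.txt` are assumed file names, not from the original text) shows how code, dependencies, configuration and environment variables travel together in one package, without bundling a full operating system:

```dockerfile
# Base image provides the language runtime, not a full OS copy
FROM python:3.12-slim

WORKDIR /app

# Dependencies are baked into the image at build time
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code and its environment variables
COPY app.py .
ENV APP_ENV=production

CMD ["python", "app.py"]
```

The resulting image is the "executable software package" described above: it runs the same way on any host with a container runtime.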
Other required files may also be shared when multiple containers run on the same system. For example, binaries and libraries can be shared by several containers on a single operating system. Using one set of libraries for a whole range of containers reduces the resources needed to run the system. Keeping applications isolated in containers also reduces the risk of malicious code attacks.
Containerization of applications pairs naturally with microservices and distributed applications: each container operates independently of the others and uses minimal host resources, and each microservice communicates with the others through application programming interfaces (APIs).
Because containers are separate from their host operating system, containerized applications are more portable than traditionally deployed ones. They can easily be deployed on a range of platforms, including desktops, virtual machines, various operating systems, and cloud-based solutions. This versatility lets developers keep working with the processes and tools they know best.
Advantages of containerization
Containerization offers many benefits to developers. Among the improvements expected when moving to containerization:
Portability
Containers are executable software packages that do not depend on a host operating system, so they can be used consistently on any cloud service or platform.
Agility
Open-source runtimes combine simple tools with a universal approach, letting you target both Linux and Windows machines. Developers can use Agile, DevOps, and other tools to improve the development and deployment process.
Speed
Since containers are relatively small files, they require fewer server resources to run. You can reduce server and licensing fees while improving start-up times.
Fault isolation
Containerized software applications run independently of one another. If one container fails, the other applications are not impacted. This allows development teams to quickly identify and fix issues as they arise while avoiding downtime in the other applications.
Efficiency
Containers share the operating system kernel of their host machine and may share other requirements such as libraries. They are therefore smaller than full VMs and more efficient to run.
Ease of management
Container orchestration platforms can streamline containerized applications, tools, and other services. These platforms make it easy to install, update, scale, monitor, and debug containerized applications.
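As an illustration, a Kubernetes Deployment manifest (the service and image names below are hypothetical) is the kind of declarative description such platforms work from to install, scale and update a containerized application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # scale by changing one number
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.2.0   # update by changing the tag
        ports:
        - containerPort: 8080
```

The orchestrator continuously reconciles the running containers with this declared state, which is what makes updating, scaling and recovering so straightforward.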
Security
Containerized applications are isolated from each other and from the host operating system. Any malicious code or bug in a container remains contained and cannot affect the rest of the system. Security permissions can also be customized per application.
Weaknesses of containerization
The advantages of containerization seem so numerous that application containers may look like a perfect solution. They are not, and some drawbacks are also worth considering.
Security vulnerabilities
Some vulnerabilities in containerized applications have made it possible to reach the host kernel and endanger all the containers running on that server (whether physical or virtual). The isolation between applications therefore does not exempt you from ensuring "vertical", multi-level security. When using Docker as the containerization engine, for example, you must secure the containerized application as well as the registry, the Docker daemon and the host operating system.
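At the application layer, one common hardening step is to avoid running the containerized process as root, so that a compromised process has fewer privileges from which to attack the shared kernel. A sketch in a Dockerfile (user name, file names and base image are illustrative):

```dockerfile
FROM python:3.12-slim

# Create an unprivileged user so a compromised process
# does not run as root inside the container
RUN useradd --create-home appuser
USER appuser

WORKDIR /home/appuser
COPY --chown=appuser app.py .
CMD ["python", "app.py"]
```

This addresses only one of the layers mentioned above; the registry, the Docker daemon and the host operating system each need their own hardening.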
Uncontrolled use of resources
Lifecycle management is essential for containers. Containers can be brought into service and duplicated very quickly, so it is easy to consume a large amount of computing resources without really realizing it. This is not a problem if the containers that form the application are stopped or deleted when they are no longer needed; if they are not, scaling up a containerized application in the cloud can generate significant costs for the company.
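One way to keep consumption in check is to declare explicit resource ceilings per container, for example in a Compose file (service and image names are hypothetical; exact limit values depend on the workload):

```yaml
services:
  web:
    image: registry.example.com/web:1.2.0
    deploy:
      resources:
        limits:
          cpus: "0.50"     # at most half a CPU core
          memory: 256M     # hard memory ceiling for the container
```

Orchestration platforms offer similar quota mechanisms, but none of them replaces actually stopping and deleting containers that are no longer needed.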
Limits of portability
A Linux container cannot run on a Windows machine (and vice versa) unless there is an intermediate virtualization or emulation layer providing the corresponding OS.
Data persistence is more complex
Persistent data must be moved out of the application container, to the host system or to some other location providing a persistent file system. This risk of data loss is rooted in container design: containers are ephemeral, so once a container is deleted its data is gone for good unless it has been stored elsewhere.
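In practice this means declaring the data location explicitly, for example with a named volume in a Compose file (the service and volume names are illustrative):

```yaml
services:
  db:
    image: postgres:16
    volumes:
      # the named volume lives outside the container's writable layer
      # and survives deletion of the db container
      - dbdata:/var/lib/postgresql/data

volumes:
  dbdata:
```

Anything written outside the mounted path remains in the container's ephemeral layer and disappears with it.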
Containerization: Accelerating Cloud Adoption
Application containerization can therefore be seen as the natural, improved extension of virtualization. The qualities of containers described above make it possible to consider migrating applications to the cloud with confidence.
Containerization reduces start-up overhead and frees you from configuring a target operating system for every application, since they all share a single operating system kernel.
Containerization allows application code to be bundled with its configuration files, dependencies, and libraries. This single software package (the container) is isolated from the host operating system, which makes it autonomous and portable. It can therefore run on any platform or any cloud without problems.
This allows any CIO to remain confident when facing a cloud migration and its many dreaded side effects. From that standpoint, containerization is a real accelerator of cloud adoption.
Containerization, a barrier to the adoption of SaaS?
If containerization is an asset in the context of a migration, wouldn't it be a barrier to adopting serverless architectures and managed services?
The history of the Cloud is punctuated by several stages - very schematically:
- virtualization of storage (S3) and computing (EC2) spaces,
- database virtualization (managed DB, RDS, DynamoDB, Aurora, etc.)
- management of complete applications (container management), deployment and maintenance of complex processes and architectures on managed clusters,
- serverless architectures and SaaS
This history of cloud usage leads more and more toward fully serverless architectures offering computing services with automatic scaling. AWS Lambda is a good example for observing the side effects of containerization on how some users behave toward cloud technologies.
Some natural reluctance of users to adopt the Cloud remains strong, among others:
- don’t put all your eggs in one basket,
- keep the choice to go back,
- keep the choice of technology to be used,
- not being a prisoner of a technology,
- not being bound hand and foot with a supplier.
Implementing Lambda functions involves writing code that can only run in AWS. Obviously, when you no longer have full control over your code and its execution environment, the reluctance just mentioned is reinforced. Containerization reduces these fears, since users retain a degree of autonomy in their local environment and can then, at least in theory, deploy their code on any type of platform.
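The lock-in comes from the invocation model around the code more than from the code itself. A minimal sketch of a Lambda-style handler (the handler name and event fields are hypothetical) shows that the function body is plain Python that can still run and be tested outside AWS:

```python
import json


def handler(event, context):
    """AWS Lambda-style entry point: receives an event dict and a
    runtime context object, returns an API Gateway-style response."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }


# The same function can be invoked locally, with no AWS infrastructure:
if __name__ == "__main__":
    print(handler({"name": "CIO"}, None))
```

What is not portable is everything around this function: the trigger configuration, the permissions, and the managed scaling behaviour, which all exist only inside AWS.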
While migrating and running an application in the cloud is made much easier and safer by containers, it is obviously tempting to adopt a conservative approach and take no risks with further developments.
Using containers ultimately makes it possible to preserve the methods and technologies used before the cloud migration, rather than advancing further into the new services on offer. In this respect, containerization acts more as a brake than as a catalyst for SaaS adoption.
It is obviously a pity to deprive yourself of the full power of SaaS solutions because of these considerations, which are certainly significant but reflect a poor perception of reality.