If we quickly cover how the container ecosystem has evolved, we go back to 2012–2013, when containers became mainstream and a number of different container runtimes were introduced: for instance Rocket (rkt) by CoreOS, LXD by Canonical, and the Docker runtime for Docker containers, which, as we discussed, is the most widely adopted among developers.
The next two to three years were what we can call the container orchestration wars. We were now looking at containers running across different nodes, and different orchestration systems were introduced, like Docker Swarm by Docker, Apache Mesos, and Kubernetes, which Google introduced in 2014. With its wide adoption, Kubernetes is now clearly the winner.
Now we are more at a phase we can call Business Focus, where we are not only talking about Kubernetes and Docker containers but about additional capabilities on top of these orchestration systems and deployment workflows: how do I do CI/CD, how do I build microservices, how do I do serverless?
To be honest, this is also a great segue into the enterprise journey into containers. Phase one mostly starts with developers testing containers on their laptops, essentially to cut down the time and cost of their dev/test workloads. As folks use containers more frequently and get a sense of how easy they are to work with, that paves the way to the second phase, where containers reach the DevOps team. How do I take these containers running on a developer's machine and run them in production across a number of machines? Then you are faced with questions like how to monitor and operationalize them, and that is often where we see orchestration systems like Kubernetes come in and solve those problems.
Looking forward from there, we really see containers becoming part of the fabric: just one piece of the picture, part of the new, modern stack that you build on top of the capabilities they provide.
Now that we have set up the context, let’s talk about Oracle’s strategy. We are providing a container-based infrastructure that is integrated and open. It has orchestration and scheduling of containers built in, is based on a broad open ecosystem, can integrate with any CI/CD framework, and has management and operations built in. Additionally, it offers enhanced capabilities through platforms for serverless architectures and microservices.
Being open is a major goal and really important for us. The solutions we provide around the managed Kubernetes service use pure upstream open source components. Customers are always looking for portability with their cloud provider, so that whatever they deploy in-house can easily transition to the cloud as well.
You can access Container Engine for Kubernetes to define and create Kubernetes Clusters using the Console and the REST API. You can access the clusters you create using the Kubernetes command line (kubectl), the Kubernetes Dashboard, and the Kubernetes API. Container Engine for Kubernetes is integrated with Oracle Cloud Infrastructure Identity and Access Management (IAM), which provides easy authentication with native Oracle Cloud Infrastructure identity functionality.
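As a minimal sketch of what working with a newly created cluster looks like via kubectl (the deployment name and image here are illustrative placeholders, not part of the service itself), you might apply a manifest like this:

```yaml
# Illustrative Deployment manifest; the name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: nginx:stable
        ports:
        - containerPort: 80
```

After downloading the cluster’s kubeconfig, `kubectl apply -f hello-app.yaml` schedules these pods onto the worker nodes running in your tenancy, just as it would on any upstream Kubernetes cluster.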
We are not only taking part in conferences or speaking sessions but also investing engineering resources in open source community projects like Kubernetes under the umbrella of CNCF.
Lastly, this container-based infrastructure runs on our next-generation Oracle Cloud Infrastructure, so you get all the core features of the robust infrastructure that OCI is built on: high availability, consistent performance, transparent management, enterprise-grade security, and so forth!
The grey shaded area designates the functions that Oracle manages for customers, including an integrated Registry for image storage and the Container Engine (managed Kubernetes).
Within the Kubernetes management plane, Oracle manages the etcd and master nodes, which are set up in a highly available manner across multiple Availability Domains (ADs). Upgrades to new versions of Kubernetes will also be supported through the Container Engine dashboard. Cluster management options, like the creation of node pools and self-healing mechanisms for the cluster, will also be handled by the service itself.
The customer manages just the clusters/worker nodes that the managed service sets up for them, in their own OCI account/tenancy, shaded in blue above.
One thing to note here is that the clusters/worker nodes the customer creates reside in the customer’s tenancy and consume that tenancy’s resources. The master nodes and Registry are not part of that tenancy; they are managed by the Kubernetes service.
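Concretely, cluster and node-pool creation can also be driven from the OCI CLI rather than the Console. The sketch below assumes the `oci ce` command family; the OCIDs, names, and versions are placeholders for your own tenancy, and flag names may vary slightly between CLI versions:

```shell
# Sketch only: all <...> values are placeholders from your tenancy.
# Create a cluster (the control plane that Oracle manages):
oci ce cluster create \
  --name demo-cluster \
  --compartment-id <compartment-ocid> \
  --vcn-id <vcn-ocid> \
  --kubernetes-version <k8s-version>

# Add a node pool of worker nodes (created in your own tenancy):
oci ce node-pool create \
  --cluster-id <cluster-ocid> \
  --compartment-id <compartment-ocid> \
  --name demo-pool \
  --kubernetes-version <k8s-version> \
  --node-shape <shape>
```

This mirrors the split described above: the cluster create call provisions the Oracle-managed masters and etcd, while the node pool lands as compute instances billed to your tenancy.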
For enhanced serverless capabilities, we have a Functions as a Service platform, open sourced at FnProject.io, which allows developers to run applications by just writing code, without provisioning, scaling, or managing any servers; all of that is taken care of transparently by the cloud.
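To make that concrete, a typical Fn workflow using the fn CLI looks roughly like the following (the app and function names are illustrative, and deployment assumes an Fn server or managed endpoint is available):

```shell
# Scaffold a function from a runtime template
fn init --runtime python hello-fn
cd hello-fn

# Create an app and deploy the function into it
fn create app demo-app
fn deploy --app demo-app

# Invoke it; Fn handles provisioning and scaling of the underlying containers
fn invoke demo-app hello-fn
```

The developer only writes the function body; packaging it into a container and running it on demand is handled by the platform.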