Why we created Amethyst Platform

Vlad Calin
CEO of Amethyst Platform

Over the years, I have managed quite a few services, internal and external, for the different companies I worked for. At some point, the industry trend started to steer everybody towards containers, as they promised (and delivered, to some degree) a better development workflow, an end to "works on my machine" excuses across the team, more reproducible builds, and overall better stability in development, QA and production.

A little bit on containers

To understand what we do, I think it's useful to go over a little history of how things were and how they changed over time to reach the state they are in today.

Years ago, applications were simpler: mostly monoliths that could fit on a single server. Developers installed all the dependencies locally and ran the application on their own machines. There weren't that many moving parts because the tech ecosystem wasn't as developed as it is today, so things could work like that.

It was the same for the deployment process: get a server, install what you need on it, install the application, start it, and you're done. But as the ecosystem grew, more complex applications emerged, distributed systems became more widespread, and a single application needed to connect to and manage multiple external dependencies: the old approach was no longer enough.

The first step of evolution towards containers was the switch from installing all the dependencies and the application directly on a server to putting everything on virtual machines. This provided some advantages, mainly that you could build the virtual machine separately and then move it around from one server to another. It was still pretty slow and sluggish, but it was definitely an improvement over the industry standard at that time.

More and more companies switched to using VMs, and some of them even went a step further and embraced the cloud. But soon enough, a new solution emerged that would revolutionize the way companies think about services: containers.

What are containers?

You can think of containers as lightweight, faster VMs that run on the host operating system. VMs simulate everything, from disks and network interfaces to processors, while containers are pieces of software that isolate resources on the host machine and allow processes to run only within those isolated resources.

To make things clearer: they are applications that run in little sandboxes managed at a very low level by the operating system (mostly the Linux kernel). This is faster, more efficient from a resource-consumption point of view, and well isolated.

The problem

Over the years, I encountered more and more companies that made the switch to containerized applications and microservices. They had multiple smaller applications that needed to communicate with each other. Application architecture became more complicated and harder to maintain.

Deploying a single application and scaling it to meet increased traffic was becoming tedious because teams now needed to deploy across multiple servers and ensure that services could communicate properly. One service needed to reach another service on another server. As the number of services grew, the number of containers and servers grew with it.

Current solutions

Solutions that help manage containers and applications across a huge number of servers emerged, either from companies that made it their mission or from open source communities. The main type of solution I am going to talk about here is the kind that tackles managing containers across multiple hosts: container orchestration platforms.

There are open source solutions that abstract away the server resources: they accept a configuration file (or manifest) in which users describe the desired state of the application (what containers to run, execution parameters, resources, some networking options, etc.), and the orchestration platform handles the rest.

Two of the most popular solutions are Kubernetes and Nomad. Unfortunately, being highly technical solutions, they require highly technical and specialized teams to work with them. Managing a Kubernetes cluster often requires a dedicated team whose sole job is keeping clusters up and running. As you can imagine, this requires a huge investment from companies. Fortunately, more and more cloud providers are offering managed Kubernetes clusters, which reduces the overhead and cluster management costs for companies.

But these tools are not perfect: their functionality is very low level, and they expose low-level resources to get things done. For example, Kubernetes has Pod resources that describe a collection of containers, Service resources that describe how certain groups of containers can reach each other, ReplicaSet resources that control how many replicas of the same Pod are available in the cluster at any point in time, and the list goes on.

Applications are defined as combinations of these resources, configured to work together. In most cases, these resource definitions become very complex and require real effort to develop and maintain, just like regular code.
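To make this concrete, here is a minimal sketch of what such a resource definition can look like in Kubernetes: a Deployment (which in turn manages a ReplicaSet of Pods) plus a Service that makes the Pods reachable inside the cluster. The names, labels, image and ports are all illustrative, not from any real application.

```yaml
# Deployment: runs 3 replicas of a single-container Pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # illustrative image name
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: "500m"
              memory: 256Mi
---
# Service: routes cluster-internal traffic on port 80 to the Pods above.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Even this minimal example spans two resources and several levels of nesting, and a real application typically layers ConfigMaps, Ingress routes, volumes and role bindings on top of it.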

Another issue with solutions like Kubernetes is that they are written by technical people, for technical people. Most of the time they lack UIs and auxiliary tools such as logging and monitoring, and if you need something, you have to install it yourself. Systems like these are highly modular and require in-depth knowledge to build on and extend.

Introducing Amethyst Platform

We felt the need to fill a gap in the market: a container management platform that has a good UI and enables all the necessary services by default, with no extra configuration, while also offering a pleasant technical experience.

We created a platform that offers a more straightforward solution for container deployment, one that focuses on what you need to get done instead of handing you all the puzzle pieces and leaving you in charge of putting them together. You just tell us what you need to run, be it a batch job that runs at specific intervals or is triggered externally, a web service that needs to accept outside traffic at api.yourcompany.com, or an internal service that only specific containers can reach at http://service.internal:8080/.

We want to shift the focus in this space from very granular moving pieces to the bigger picture: the application you need to run, defined in terms of services, jobs, configuration and data. No more containers, networks, role bindings, persistent volume claims, ingress routes and whatnot.

We will help you deploy your application as a whole, not pieces that might or might not work well together.

We hope you enjoyed this post and found it useful. If you spot any errors, have any suggestions or just want to get in touch, you can reach us at [email protected]

Deploy your applications in minutes

We are setting things up! Leave us an email address to be the first to find out when we launch!

Or contact us directly at [email protected].