Over the past few years, microservice architectures and the cloud practices surrounding them have proliferated throughout the IT industry. As someone who was practicing IT in the early 2000s, I find the container revolution reminiscent of when VM infrastructure became the new normal. With any major shift in IT architectures, new and old software finds new use cases, and LINBIT®'s open-source software suite is no exception. This blog post will discuss how LINBIT's software solutions, new and old, are being used in conjunction with microservice architectures.
Different Approaches for Different Architectures
As is true of most things in IT, there’s more than one way to skin the proverbial cat. Also, I’m just one person (it’s true, Google it), and while I talk to many people about their storage requirements and projects, this is by no means intended as an exhaustive list of potential applications for LINBIT’s software. That said, I don’t write blogs for fun – or at least not purely for fun. I hope this post helps contextualize how some of LINBIT’s supported open-source offerings can be leveraged in microservice architectures. I will share some pros and cons for each approach.
LINSTOR Plugin for Docker and Docker Swarm
At one point, Docker was almost synonymous with containers. While other tools are available for building and running containers, Docker is still my go-to. Back around 2016, when Docker first announced it would officially support persistent volume plugins, LINBIT jumped on the task of creating a driver for DRBD® enabled persistent volumes, which has evolved over time and is now maintained by a community member with the support of LINBIT’s development team. Follow these links to the documentation and code repository.
Pros of approach
Many people using Docker in Swarm mode with LINSTOR®'s persistent replicated storage find it a satisfactory way to achieve HA for container-based workloads. There is some upside to keeping things simple when you don't need more complex or robust container orchestration. If you have dozens of containerized workloads, some of them needing persistence, rather than hundreds or thousands, this is a solution worth considering.
LINSTOR provisions block devices that are then mounted into the containers that request them. Block devices provide the best write performance when comparing file, block, and object storage. Docker and LINSTOR, therefore, are excellent choices for databases or any workload that writes frequently.
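To make that concrete, here is a minimal sketch of how creating and using a LINSTOR-backed Docker volume might look. The plugin name and the option names ("size", "replicas") are assumptions on my part, and the sketch assumes a LINSTOR cluster is already initialized on the Docker hosts, so verify the exact syntax against the documentation linked above.

```bash
# Install the LINSTOR volume plugin on each Docker host.
# (The plugin name here is an assumption -- check the linked documentation.)
docker plugin install linbit/linstor-docker-volume --grant-all-permissions

# Create a replicated volume backed by LINSTOR. The "size" and "replicas"
# option names are assumptions; the LINSTOR storage pool must already exist.
docker volume create --driver linbit/linstor-docker-volume \
    --opt size=10GB --opt replicas=2 pgdata

# Mount the volume into a database container just like any other Docker volume.
docker run -d --name postgres -v pgdata:/var/lib/postgresql/data postgres:16
```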
Cons of approach
Docker Swarm's support for multiple users, or for building a "self-service" style framework that lets developers run their own containers, is more cumbersome for non-admin users than something like Kubernetes. Non-admin users will also often need some familiarity with Docker and the command line to add microservices to a Docker Swarm cluster.
LINSTOR's integration with Docker is easy to install, but it requires a LINSTOR cluster to be initialized on the Docker hosts as a prerequisite, so familiarity with both Docker and LINSTOR is needed.
Docker Swarm assumes all volumes are accessible in a ReadWriteMany (RWX) mode, meaning multiple containers on different hosts can access them concurrently. LINSTOR provisions block devices, which are ReadWriteOnce (RWO) and therefore cannot be accessed by multiple containers concurrently. This mismatch prevents users from scaling container services beyond what their LINSTOR volumes allow. Luckily, LINSTOR will only complain rather than corrupt data.
LINSTOR CSI Driver for Kubernetes
Kubernetes has proven it's here to stay as the enterprise-grade container orchestration platform for running microservices across the many heterogeneous environments (public clouds, on-prem, dev laptops, etc.) that make up modern IT infrastructures. LINBIT's LINSTOR team has been integrating LINSTOR with Kubernetes since the Flex Volume plugin days and pivoted those development efforts into the LINSTOR CSI driver as soon as CSI support was announced. Piraeus, the upstream project for LINSTOR in Kubernetes, which is managed and developed by the LINBIT development team, was accepted into the CNCF sandbox in 2021.
Pros of approach
LINSTOR’s Kubernetes integration is easily deployed into Kubernetes via Helm and LINBIT’s LINSTOR Operator for Kubernetes. This handles the LINSTOR cluster initialization for you, so you don’t have to have a Ph.D. in LINSTOR to get started.
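To give you a feel for how little LINSTOR-specific knowledge is needed up front, here is a minimal sketch of deploying the operator with Helm and defining a replicated StorageClass. The chart repository URL, chart and release names, and StorageClass parameter names are assumptions from memory, so verify them against LINBIT's current documentation before using them.

```bash
# Add LINBIT's chart repository and deploy the LINSTOR Operator.
# (Repository URL and chart name are assumptions -- check LINBIT's docs.)
helm repo add linstor https://charts.linstor.io
helm repo update
helm install linstor-operator linstor/linstor --namespace linstor --create-namespace

# Define a StorageClass that provisions two-way replicated DRBD volumes.
# The provisioner is the LINSTOR CSI driver; the parameter names below are
# assumptions, so confirm them in the LINSTOR CSI documentation.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replicated
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"            # number of replicas (assumed parameter name)
  storagePool: "lvm-thin"   # an existing LINSTOR storage pool (assumed name)
allowVolumeExpansion: true
EOF
```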
LINSTOR's many features are exposed in a Kubernetes-native way. If you know how to take a Kubernetes VolumeSnapshot, you can ship snapshots of LINSTOR volumes offsite to S3-compatible storage backends without needing to know how to operate LINSTOR from the command line.
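For example, once a VolumeSnapshotClass referencing the LINSTOR CSI driver exists (and, for offsite shipping, a LINSTOR remote pointing at your S3 bucket has been configured), taking a snapshot is plain Kubernetes. The class and PVC names below are placeholders of my own.

```bash
# Snapshot a LINSTOR-backed PVC using the standard Kubernetes snapshot API.
# "linstor-snapshots" and "pgdata-pvc" are hypothetical names.
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pgdata-snap-1
spec:
  volumeSnapshotClassName: linstor-snapshots
  source:
    persistentVolumeClaimName: pgdata-pvc
EOF
```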
Cons of approach
Kubernetes is a far more complex system than something like Docker Swarm. While this complexity enables its many upsides, it can be overkill for smaller teams to manage. Public cloud providers offer managed Kubernetes services, but there is still an operational learning curve in managing and troubleshooting applications deployed into Kubernetes, and LINSTOR doesn't magically escape this fact.
Linux High Availability NFS Clusters
NFS is a well-known storage protocol in the IT industry. NFS has been around since the 1980s and is still very common in enterprise IT. Therefore, building out highly available NFS clusters using LINBIT's DRBD and the Linux HA Cluster stack (Pacemaker and Corosync) is a common request from LINBIT's clients (past and present). These clusters are built on COTS (commercial off-the-shelf) hardware and can be engineered to include the feature sets of spendier appliance-based NAS solutions.
LINBIT has also been working on a few projects geared towards the Linux HA cluster world, DRBD Reactor and LINSTOR Gateway, that can be used to create highly available NFS clusters. These tools are more tightly coupled to DRBD and LINSTOR than the more general-purpose Pacemaker framework.
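With LINSTOR Gateway, for instance, standing up an HA NFS export can come down to roughly a one-liner. The command syntax can differ between versions, and the resource name, service IP, and size below are placeholders, so treat this as a sketch and consult the LINSTOR Gateway documentation.

```bash
# Create a highly available NFS export backed by DRBD replication.
# "nfs-data" is a placeholder resource name; 192.168.122.222/24 is a
# placeholder virtual IP that fails over along with the export.
linstor-gateway nfs create nfs-data 192.168.122.222/24 500G

# List the configured exports to confirm the cluster is serving NFS.
linstor-gateway nfs list
```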
Pros of approach
NFS is a well-known protocol. Odds are that someone on your team is already familiar with its implementation and use. Clustering NFS using DRBD and Pacemaker isn't anything new, so there are few unknowns to uncover.
NFS is a shared file system that can be used to satisfy workloads requiring ReadWriteMany (RWX) access to a single persistent volume. This enables scaling out workloads versus only being able to scale up.
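In practice, RWX simply means any number of clients (or container hosts) can mount the same export at once. The virtual IP and export path below are placeholders:

```bash
# Run this on as many clients as you need -- each one sees the same shared
# data behind the cluster's floating virtual IP.
mount -t nfs 192.168.122.222:/srv/nfs/exports/data /mnt/shared-data
```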
Cons of approach
Supporting concurrent (RWX) access to persistent storage comes with a performance cost in the locking necessary to avoid corruption. This means certain workloads, such as databases or persistent message queues, might not perform as well as they would on a ReadWriteOnce (RWO) volume such as the block devices LINSTOR provisions and attaches.
Linux High Availability as a Container Orchestrator
If you're not interested in Kubernetes, Docker Swarm, or any of the popular container orchestration platforms out there, you can always build your own! LINBIT has supported Pacemaker since it replaced Heartbeat as the de facto cluster resource manager in the Linux HA Cluster stack. Pacemaker leverages the Open Cluster Framework (OCF), making it the go-to choice for people building custom cluster stacks on Linux systems.
Pacemaker already has support in its collection of resource agents for managing Docker or LXC containers, DRBD, and file systems. That is basically all you need to synchronously replicate storage and fail over containers between the systems participating in the cluster.
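A heavily simplified sketch of such a configuration using the pcs shell might look like the following. Resource agent parameters, promotable clone naming, and constraint syntax vary between Pacemaker and pcs versions, and the resource names, device path, mount point, and container image are placeholders of my own, so treat this as an outline rather than a ready-to-run configuration.

```bash
# DRBD resource "r0" as a promotable clone (one Promoted node at a time).
pcs resource create p_drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
    promotable promoted-max=1 promoted-node-max=1 clone-max=2 notify=true

# Mount the replicated device where the container expects its data.
pcs resource create p_fs_r0 ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/srv/app-data fstype=ext4

# The containerized workload itself, managed by the docker resource agent.
pcs resource create p_app ocf:heartbeat:docker \
    image=myapp:latest name=myapp run_opts="-v /srv/app-data:/data"

# Keep everything together and start it in the right order:
# promote DRBD, then mount the filesystem, then start the container.
pcs constraint colocation add p_fs_r0 with Promoted p_drbd_r0-clone
pcs constraint order promote p_drbd_r0-clone then start p_fs_r0
pcs constraint colocation add p_app with p_fs_r0
pcs constraint order p_fs_r0 then p_app
```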
Pros of approach
It's about as customizable as solutions get. The Open Cluster Framework has been well documented and supported by LINBIT for a very long time, so if something is missing from what already exists, LINBIT can help you create whatever it is you're looking for.
When you're clustering an application in Pacemaker, the application often isn't aware it's clustered. This means that if you're familiar with using the application in a single-system setting, most of that familiarity will still apply. Most of the differences come into play when performing maintenance without stepping on the cluster's proverbial toes and causing unintentional failovers (generally, putting the cluster into maintenance mode is good enough).
Cons of approach
You’re creating something custom, so finding resources on the internet won’t always be as straightforward as Googling an error message. However, if you’re a customer of LINBIT, you can shoot an email to your support contact, and we’ll get you the answers.
Conclusion
In short, there's no one-size-fits-all answer to how you should use LINBIT's open-source tools in your microservice architectures. However, we've got plenty of ideas and solutions that can get you there regardless of whether you've got thousands of persistent containers that require HA or only one.
This blog was inspired by a conversation I had with a user of LINBIT’s software. There’s likely a combination of tools I’ve not considered here, so if you’ve got ideas, I’d love to hear them in our public Slack channels or inbox.