30 November 2017
At re:Invent 2017, AWS announced AWS Fargate among many other new services. AWS describes Fargate as
A technology that allows you to use containers as a fundamental compute primitive without having to manage the underlying instances
I have experience with ECS and Kubernetes, and managing the underlying machines has never posed a problem for me. In Kubernetes, draining a node for maintenance or an update is achieved via the CLI:
kubectl drain <node name>
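In practice the drain is usually part of a small workflow: cordon and evict, do the maintenance, then make the node schedulable again. A sketch of that workflow, with a made-up node name, might look like this (`--delete-local-data` was the flag name at the time of writing; it assumes you accept losing emptyDir data on that node):

```shell
# mark the node unschedulable and evict its pods;
# --ignore-daemonsets is usually needed because DaemonSet pods
# would be rescheduled onto the node immediately anyway
kubectl drain my-node-1 --ignore-daemonsets --delete-local-data

# ...perform the maintenance or update on the machine...

# make the node schedulable again
kubectl uncordon my-node-1
```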
30 September 2017
Fix slow container shutdown
Today I was working in a dev environment where everything ran inside containers and docker-compose was used to define the services. Usually, when I work on my own projects this way, I don't have to rebuild containers every time I change my code for those changes to be reflected, but in this particular case there was no getting around it. Every time I changed the code I had to rebuild the container and run it again. The annoying part was that re-upping the container took so long because the stop process took such a long time. I investigated, and it turned out I had been running my app as the PID 1 process inside the container. A process running as PID 1 does not get the kernel's default signal handlers, so unless it installs its own handler, the SIGTERM sent by `docker stop` is ignored and Docker falls back to SIGKILL only after the grace period (10 seconds by default). How does this happen?
There are two ways of defining the process your container runs: with CMD, with ENTRYPOINT, or a combination of both. For example:
FROM alpine:3.6
CMD ["ping"]
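The form of CMD matters for shutdown speed. As a minimal sketch (the image tag and command are just examples): in the shell form, Docker wraps the command in `/bin/sh -c`, so the shell becomes PID 1 and does not forward SIGTERM to its child.

```Dockerfile
# Shell form: actually runs /bin/sh -c "ping localhost", so the
# shell is PID 1, swallows docker stop's SIGTERM, and the daemon
# SIGKILLs the container only after the grace period
FROM alpine:3.6
CMD ping localhost
```

Rewriting the last line in the exec form, `CMD ["ping", "localhost"]`, makes ping itself PID 1 so it receives the SIGTERM directly (it still has to handle the signal to exit promptly).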
16 September 2017
AWS Network Load Balancer
The other day I received an email from AWS with their latest announcements. Going through it, I saw that AWS now offers a new kind of ELB, namely the Network Load Balancer (NLB). In contrast to the ALB and the Classic ELB, the NLB is a layer 4 (transport) load balancer. When I clicked the link, the AWS page stated the following: "The Network Load Balancer for the Elastic Load Balancing service is designed to handle millions of requests per second while maintaining ultra-low latencies", and I thought to myself: why didn't AWS offer this ELB a year ago when I needed a network load balancer? :)
A year ago I worked on a project whose requirements were to capture NetFlow traffic and query CPEs (Customer Premises Equipment) for their status via the SNMP protocol. The SNMP part wasn't that big of a challenge, but NetFlow was. Having 10,000 CPEs, each with at least 10 interfaces, bombarding your backend is quite a challenge. One of the requirements was data integrity, since the data would be used commercially. Our initial design consisted of five EC2 instances running NetFlow parsers that would parse and forward (produce) the data to an Apache Kafka cluster. The following picture shows a simplified design
12 June 2017
Building and Shipping Docker images
Most of the time we don't have to deal with building and shipping Docker images because the community has already built quite a number of useful images for us. However, if you practice DevOps with Docker (you should), chances are you have to build custom images for your services. Maybe some developers in your organization have a service running on NodeJS, maybe others are writing Java APIs; for those services you obviously have to build your own images, and today I would like to describe two ways of building and shipping Docker images.
Note that I haven't used the word container in the paragraph above, although in many places online the word container is used interchangeably with image. That is not correct: a Docker container is a running instance of a Docker image, and a Docker image can be considered the "source code" for the container. Docker images are made up of union file systems layered over each other and are built step by step using a series of instructions. We will see this in action now by building an image in two ways.
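As a sketch of the most common workflow, building from a Dockerfile and shipping via a registry (the image name, tag and registry are placeholders, and the commands assume a running Docker daemon):

```shell
# build an image from the Dockerfile in the current directory
# and tag it with a name and version
docker build -t myorg/myservice:1.0 .

# authenticate against the registry (Docker Hub by default)
# and push the tagged image
docker login
docker push myorg/myservice:1.0
```

Where no registry is available, `docker save` and `docker load` can ship an image as a tarball instead.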
23 May 2017
DevOps seems to be gathering real interest from large organizations here in Germany. After talking to a couple of companies and asking what they specifically need, it seems to me that there is a widespread misconception of what DevOps actually is and how to implement it in a company.
Let me start by stating what DevOps is not:
- DevOps is not a replacement of Agile
If your company is not practicing Agile it will be more difficult to switch directly to DevOps.
- DevOps is not a replacement for ITIL
DevOps can coexist with ITIL processes, however, to support short lead times and higher deployment frequencies many areas of the ITIL processes need to become fully automated. The ITIL disciplines of service design, incident and problem management still remain relevant because DevOps requires fast detection and recovery when service incidents occur.
- DevOps is not just Automation
This is, in my opinion, probably the biggest misconception. Companies seem eager to bring people in to help with automation. I agree that the biggest part of a DevOps engineer's time should go into enabling automation (CI/CD, IaC, DR, etc.), but the work shouldn't end there. DevOps requires cultural norms and an architecture that allow the shared goals to be achieved throughout the IT value stream. This goes far beyond automation.
06 May 2017
Redis, Apache Kafka & RabbitMQ - When to use what
I recently had to present a design that consisted of Redis, Apache Kafka and RabbitMQ, among other things of course. At the presentation the obvious question came up: can't we just use one of these?
I understand that for novice users visiting the respective websites, it must be difficult to tell the differences. On the front page of the Redis website they state
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker.
On the Apache Kafka webpage you'll see
Publish & Subscribe to streams of data like a messaging system
and finally on RabbitMQ
RabbitMQ is the most widely deployed open source message broker
"Message" seems to be the keyword for all of them, but that doesn't tell the full story. Let's have a look at the details and at example scenarios where one would choose one over the other.
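Even their stock CLI tools hint at the different models. As a rough sketch (keys, topics, routing keys and payloads are made up, and each command assumes the respective server is running locally):

```shell
# Redis as an in-memory store/cache: set a key, read it back
redis-cli SET user:42:name "alice"
redis-cli GET user:42:name

# Kafka: append a record to a durable, replayable topic log
# (the console producer reads lines from stdin)
echo "page_view" | kafka-console-producer.sh \
    --broker-list localhost:9092 --topic events

# RabbitMQ: publish a message through an exchange, from which
# it is routed to queues and consumed (then gone)
rabbitmqadmin publish exchange=amq.default \
    routing_key=tasks payload="resize-image-42"
```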