Photo: cc-by-sa 3.0 Thomas Wegner/Berlin Buzzwords (no changes made)

Last week was a busy week. Endocode was spread out across Europe, listening and contributing to the newest buzz. While some of our team were in Amsterdam at GCP Next, finding out what’s new in Kubernetes 1.3, Endocode’s Thomas Fricke was contributing at Berlin Buzzwords. It was the seventh time that Berlin Buzzwords brought interesting and inspiring people together to listen to keynotes and talks, ask questions, and discuss ideas and experiences.
We care about containers! You too? Well then, we should get together! Google & CoreOS are hosting half-day events to share the State of Containers with you. That’s exciting already. And right in the middle of that excitement, showing you how to implement the technology in real-life projects, are Google partners Xebia & Endocode. The events will take place in Amsterdam, Brussels, Stockholm, Hamburg and Berlin between March 14th and March 18th.
In our last blog post we gave you a short introduction to Linux namespaces. Part 2 goes deeper into user namespaces and the problems that Linux containers still face today; among them, resource accounting and container privileges are top culprits. Currently, processes on the host may still share some resource accounting with processes inside containers. How many processes a single user may own across containers is just one of many open questions.
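To make the user-namespace part concrete: on Linux, the kernel exposes each process’s UID mapping in /proc/&lt;pid&gt;/uid_map. A minimal sketch for inspecting it (the helper name is ours, assuming a Linux host with /proc mounted):

```python
# Sketch: inspect the user-namespace UID mapping of the current process.
# Assumes a Linux host with /proc mounted; parsing is illustrative.

def read_uid_map(pid="self"):
    """Parse /proc/<pid>/uid_map into (inside_uid, outside_uid, count) tuples."""
    entries = []
    with open(f"/proc/{pid}/uid_map") as f:
        for line in f:
            inside, outside, count = (int(x) for x in line.split())
            entries.append((inside, outside, count))
    return entries

if __name__ == "__main__":
    for inside, outside, count in read_uid_map():
        print(f"UIDs {inside}..{inside + count - 1} in this namespace "
              f"map to {outside}..{outside + count - 1} on the host")
```

In the initial user namespace this file holds the identity mapping (`0 0 4294967295`); inside an unprivileged user namespace you would typically see your host UID mapped to root, which is exactly the privilege question raised above.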
Containers are lightweight virtualization tools that give processes the illusion of separation and isolation. They are not a security technology, but they do offer some isolation, such as of filesystem and network operations, using Linux namespaces. However, as more containers are deployed, we continue to find problems that need to be addressed. Among them, resource accounting and container privileges are top culprits. For now we will give you a quick overview of Linux namespaces.
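To see namespaces in action: every process’s namespace memberships are visible as symlinks under /proc/&lt;pid&gt;/ns, and two processes share a namespace exactly when the corresponding links point to the same kernel identifier. A minimal sketch (the helper name is ours, assuming a Linux host):

```python
import os

def namespaces(pid="self"):
    """Map namespace type (e.g. 'net', 'mnt', 'uts') to the kernel's
    identifier string, e.g. 'net:[4026531992]', for the given process."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}

if __name__ == "__main__":
    for ns_type, ident in sorted(namespaces().items()):
        print(ns_type, "->", ident)
```

Comparing `namespaces("self")` against `namespaces(other_pid)` shows which namespaces a containerized process has unshared from the host, and which it still shares.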
In part 2 of this series, we learned about Docker and how you can use it to deploy the individual components of a stream processing pipeline by containerizing them. In the process, we also saw that it can get a little complicated. This part will show how to tie all the components together using CoreOS. We already introduced CoreOS in part 1 of this series, so go back and take a look if you need to familiarize yourself.
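As a taste of what tying the components together looks like on CoreOS: its fleet scheduler takes ordinary systemd unit files with extra scheduling hints. A hypothetical sketch of a unit for one containerized component (the unit name, image and ports are our illustrative assumptions, not taken from the post):

```ini
# zookeeper@.service -- hypothetical fleet unit for one Zookeeper node
[Unit]
Description=Zookeeper node %i
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill zookeeper-%i
ExecStartPre=-/usr/bin/docker rm zookeeper-%i
ExecStart=/usr/bin/docker run --name zookeeper-%i -p 2181:2181 zookeeper:3.4
ExecStop=/usr/bin/docker stop zookeeper-%i

[X-Fleet]
Conflicts=zookeeper@*.service
```

The `[X-Fleet]` section is the scheduling hint: `Conflicts` keeps two instances of the template from landing on the same machine, which is how the cluster spreads its nodes.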
Building a stream processing pipeline with Kafka, Storm and Cassandra – Part 2: Using Docker Containers
In case you missed it, part 1 of this series introduced the applications that we’re going to use and explained how they work individually. In this post, we’ll see how to run Zookeeper, Kafka, Storm and Cassandra clusters inside Docker containers on a single host. We’re going to use Ubuntu 14.04 LTS as the base operating system.

Introducing Docker

Docker is a software platform used for the packaging and deployment of applications, which are then run on a host operating system in their own isolated environment.
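To illustrate what packaging an application means in Docker terms: a Dockerfile describes the image layer by layer. A hypothetical minimal sketch for a Zookeeper image on the Ubuntu 14.04 base mentioned above (package name and paths are our assumptions, not the series’ actual Dockerfile):

```dockerfile
# Hypothetical minimal image sketch; not the actual Dockerfile from this series.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y zookeeperd
EXPOSE 2181
CMD ["/usr/share/zookeeper/bin/zkServer.sh", "start-foreground"]
```

You would build it with `docker build -t my-zookeeper .` and start a container with `docker run -p 2181:2181 my-zookeeper`, giving the process its own isolated environment on the host.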
Building a stream processing pipeline with Kafka, Storm and Cassandra – Part 1: Introducing the components
When done right, computer clusters are very powerful tools. They can bring great advantages in speed, scalability and availability. But the extra power comes at the cost of additional complexity. If you don’t stay on top of that complexity, you’ll soon become bogged down by it all and risk losing all the benefits that clustering brings. In this three-part series, we’re going to explain how you can simplify the setup and operation of a computing cluster.