Docker containers for Dummies

This is an article for those who still think Docker and containers are not for them. I will simplify a lot, but I want you to understand how important what we now have at hand really is; if you are someone who already knows Docker, this article will bore you.

Just one year ago I was one of those confused people who did not know Docker. At the recent events we have taken part in at Ackstorm (#AzureDevOpsDay, #AzureBootCamp, AWS BCN, #OpenExpo, etc.), before each talk we always ask who does not know Docker. A brave share of the attendees always raise their hands; adding those whose modesty keeps them from admitting it, I figure the number at about half the audience. Honestly, I understand: so much noise, so much information, so many revolutionary promises that remain just that, promises.

But this Docker thing is serious, and it is worth knowing where it comes from and what we can expect from it. We are also an official Docker partner, and we became one without any hesitation. There is no other way.

Containers, Linux kernel isolation

Containers are almost as old as Linux itself; well, as old as kernel version 2.6.24 (8 years ago). They simply allow us to group a collection of processes and treat them with certain parameters and limits. In other words, they let us tell Linux: "Hey! From now on this OS/application runs separately, and from this directory."
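You can see this kernel isolation without Docker at all. Below is a minimal sketch assuming a Linux host with util-linux's `unshare` available and unprivileged user namespaces enabled (not all distributions allow this):

```shell
# Run `ps` inside fresh PID and mount namespaces: the process sees only
# itself, even though no Docker is involved. This is pure kernel isolation,
# the same mechanism containers are built on.
unshare --user --map-root-user --pid --fork --mount-proc ps aux \
  || echo "user namespaces are not enabled on this host"
```

Inside those namespaces, `ps` lists a single process with PID 1, exactly as it would inside a container.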

Docker, containers with GIT: in support of programmers

You have to hand it to the Docker folks for the brilliant idea of combining this with Git to build a complete system. Git is another beast worth knowing, one that can turn a development department upside down, and it is also the current standard for source and version control (Azure just announced that it will support it and actively contribute).

Created by Linus Torvalds to manage the Linux kernel code (today about 9.8 million lines of code and more than 12 million revisions), and since 2005 more than 12,000 developers and over 1,200 companies have been involved. Nothing more and nothing less.

The important thing is that Git lets a programmer manage changes in their code, commit by commit. Write a program and track every changed line at all times, decide that this code works, roll back, check the differences between what I wrote yesterday and what I wrote today. And stay in sync with many other developers working on the same code as me.
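The workflow just described can be sketched in a few commands. The file name, identity and commit messages here are made up for the example:

```shell
# Create a throwaway repository and version a file change by change.
mkdir demo && cd demo
git init -q
git config user.email "dev@example.com"   # identity for this demo repo only
git config user.name "Demo Dev"
echo 'print("v1")' > app.py
git add app.py
git commit -qm "first version"
echo 'print("v2")' > app.py
git diff                                  # see exactly which lines changed
git commit -aqm "second version"
git log --oneline                         # read the history, commit by commit
```

`git diff` shows the difference between yesterday's code and today's, and `git log` walks the history back, which is precisely the ability Docker borrowed for operating systems.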

Docker's creators thought: "What's next? What if I could version an operating system, and build and manage it as if it were code?" Every application I install, every configuration I add, every change to my operating system, I manage as if it were under Git. I can compare versions, apply changes, and roll back.
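This idea takes concrete form in a Dockerfile, Docker's recipe format: each instruction produces a layer, much like a Git commit, and changing one line only rebuilds the layers that come after it. The base image, package and file names below are illustrative assumptions, not taken from the article:

```dockerfile
# Each instruction creates one layer; layers are cached and diffed, so editing
# a later line reuses everything above it: version control for an OS.

# The base operating system (the first "commit")
FROM ubuntu:22.04

# Install an application on top (a second layer)
RUN apt-get update && apt-get install -y --no-install-recommends python3

# Add our own code (a third layer)
COPY app.py /app/app.py

# What to run when the container starts
CMD ["python3", "/app/app.py"]
```

Built with `docker build -t my_application .` and started with `docker run my_application`, the resulting image runs identically on a developer's laptop and on a production server.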

Why are they so important? Why have they crept into system administration?

Docker was invented with developers in mind. It gave them the ability to manage operating systems as if they were code. But it began to be used in production because, basically, the same thing that worked on the developer's PC could be launched, built layer by layer, on production servers. Using Docker on production servers greatly simplifies the management of critical systems and, almost without meaning to, this technique is displacing other infrastructure-as-code languages. In particular, the boom has come because the major infrastructure providers (Cloud virtual machines and related services) now offer it, making it possible, for the first time in history, to change infrastructure provider as easily as logging into a new machine and running "docker run my_application"... and it works. Yes, it works... it really works!

It’s really essential!

But now comes the good part: this year is the year of Containers as a Service, and that shows Docker has triumphed. AWS EC2 Container Service, Google Container Engine and Azure Container Service are services based entirely on Docker, and on top of that, the likes of Docker Cloud (formerly Tutum) are nothing less than the first services that let me launch my containers at a provider without using virtual machines or platform services, with all the performance and scalability advantages that entails, plus pay-per-use.
