RedDot Ruby Conf 2014

Deploying Ruby applications with Docker

Bryan Helmkamp


Excerpt of the automatic transcription of the video produced by YouTube.

Good morning. I got in at five o'clock yesterday morning from New York, 33 hours door-to-door, so I need you guys to bear with me. Good morning! Thank you, that's for my benefit. Okay, cool. So my name is Bryan, and today we are going to talk about shipping Ruby apps with Docker. I started a company called Code Climate. Has everybody heard of Code Climate here? Oh wow, that's awesome, okay, cool. If you'd like to help fill the gap between the first show of hands and the second, come see me afterwards and we'll talk about that.

Okay, so we're going to talk about Docker. Just to gauge people's familiarity, so I can tune how detailed I get: who's heard of Docker? Okay, almost everybody. How many people have installed it and played around with it, either locally or somewhere else? I would say that's about a third. And how many of you are running it in production? Like five, okay, cool. So I'm really excited about Docker.

I think Docker is going to be something that all of you are using within the next few years, and my goal in this presentation is to paint the picture of why that is and get you excited about it too. We'll also look at what the options are for getting started with it today.

So Docker is a generic way to run any service anywhere. The "anywhere" comes with a little bit of an asterisk: it means any Linux service, so it won't work for, say, Mac or Windows services. But if you're deploying on Linux, which I imagine many of you are, you can use Docker. It's a generic way to ship things around, and that's where the name comes from: it's a shipping metaphor. If you think of cargo containers, Docker is based on that metaphor: you put your application into a cargo container, then you hand it to the ops team and they can run it anywhere. So it's really two big pieces: container-based virtualization, combined with a generic packaging format that you can use for any service. We'll get to exactly what that means in just a little bit.

Container-based virtualization is what makes Docker awesome. This is the big DevOps idea of the last year or two, and I think it's going to be the big DevOps idea for the next couple of years.

It kind of snuck up on a lot of people, myself included: a year and a half ago I didn't really know anything about container-based virtualization. But the Docker folks threw a conference a couple of weeks ago, with a bunch of presentations from people who are using container-based virtualization, and apparently everybody who operates at serious scale has been using it for a while. All of Google runs on container-based virtualization, and pretty much all of Twitter and Facebook. The people dealing with DevOps at scale are pretty far along in migrating their stuff to container-based virtualization, and I think that's one of the big reasons why it's going to become so prevalent.

And it scales down as well as up: you don't need to be running nearly as many servers as Google for this to be valuable. We're going to look at how it can be useful even if you only have one project, or if you're just developing on your laptop today. But you know that if you use an approach like this, it does scale up; Google is about as big as it gets, right? So you can start now and grow with it.

So there's a project that's been around for five or so years called LXC, which just stands for Linux Containers, and this is kind of how Docker got its start.

For a while Docker was a wrapper: a higher-level, easier-to-use layer on top of LXC, with LXC doing the hard stuff under the hood. LXC itself is built on other layers, but Docker was sort of the user interface. The way I see it, it's chroot on steroids: you get a shared kernel and isolated resources. You can only be running one version of the Linux kernel when you're using container-based virtualization; there's no "okay, I'm going to use this kernel with these kernel modules over here, and then something completely different over there." You can do that with a full-VM sort of system, which a lot of people run on things like AWS; on AWS you can pick your kernel. You can't do that with container-based virtualization. But it turns out that if you're using a modern kernel, that's fine for almost everything. Me personally, I don't really want to be picking kernels; I'm not smart enough to actually do that. I just want a good kernel that works, so I can deal with things at a somewhat higher level.

As for the resources, when you're using container-based virtualization like LXC or Docker, they are isolated: you get a sandbox for resources like PIDs, networking, and files.

The file system part of this is like the chroot part: the root in your container is not the root of your host operating system. And the PIDs are different: PID 1 within your container has nothing to do with PID 1 on the underlying host operating system. You can also do all sorts of fancy things with the way you configure the networking for each container: you can give it access to the host's networking through a network bridge, you can say it doesn't have any networking at all, or you can do port mapping. And you can set resource limits, assigning CPU shares and memory limits to your containers to control how they behave when they're competing for resources.
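The networking modes and resource limits described above map directly onto `docker run` flags. A hedged sketch follows; the image name and port numbers are just examples, and the exact flag spellings have varied across Docker versions:

```shell
# Port mapping: expose container port 80 as port 8080 on the host.
docker run -d -p 8080:80 nginx

# Share the host's network stack directly (no isolation).
docker run -d --net=host nginx

# Give the container no networking at all.
docker run -d --net=none nginx

# Resource limits: a relative CPU weight and a hard memory cap,
# used when containers compete for the host's resources.
docker run -d --cpu-shares=512 --memory=256m nginx
```

These commands assume a running Docker daemon and pull the image from the registry if it isn't present locally.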

Docker is from a company that used to be called dotCloud. They had a platform as a service that was akin to Heroku, and they ran it for a while, but the conclusion they came to was: "you know, we're probably never going to be more popular than Heroku." What they realized was that the technology they had developed under the hood to run that platform was more valuable than the service itself. So they made a pretty drastic tack and open sourced that technology as Docker, the next evolution of the underpinnings of their platform as a service, all under a very permissive license, and gave it away. They said: what we're going to try to become instead is a unified standard for how people use container-based virtualization, and if we can do that, we'll figure out how to make a lot of money doing it. Then they rebranded the company to Docker Inc.

The open source project is written in Go and consists of two primary components: the user-facing component, which is a command-line interface, and a server daemon that you run, which provides a REST API spoken over HTTP, and that has some really nice advantages. Under the hood, Docker sits at the top, and there are a few pieces beneath it that make this work.

On one side there's AUFS, which stands for "another union file system"; that's the component Docker uses to store Docker images. The actual file systems that make up the applications you're going to run need to be stored somewhere, and AUFS is a layered file system that makes that efficient. It used to be that Docker would talk to LXC, which we looked at a little earlier, and LXC uses cgroups and namespaces, kernel-level features that are really the weapons-grade stuff that makes this work; I think they're the hardest components to get absolutely right. But more recently, Docker has switched to a component they open sourced called libcontainer, which obviates the need for them to use LXC. Right now you can switch back and forth, and libcontainer is the default; it seems like libcontainer is the direction everything is going, which just simplifies the stack a little bit.
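The AUFS layering mentioned above can be poked at from the CLI. A sketch, assuming a running Docker daemon; the image name is just an example:

```shell
# Pull an image, then list its stacked read-only layers.
# Containers add only a thin writable layer on top, so many
# containers can share the same base image on disk.
docker pull ruby:2.1
docker history ruby:2.1

# docker info reports, among other things, which storage
# driver the daemon is using (e.g. aufs).
docker info
```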

Okay, so what do you get when you run your Ruby application within a Docker container? Isolation, which we already talked about. And it's sort of ephemeral, in the sense that if you're running a Rails app, the process you're going to be serving your requests from, whether it's nginx with Passenger or Unicorn or something like that, doesn't really need to keep anything beyond the lifecycle of that process. We've kind of moved to these pseudo-share-nothing architectures where we connect to a database and write things there, but the processes themselves are supposed to be disposable, and Docker works really well with that. It does try not to delete anything that you might need later, but basically you can throw away containers.
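The disposability described above is visible in everyday usage. A hedged sketch (image name and paths are illustrative):

```shell
# Run a one-off Ruby process in a throwaway container; --rm removes
# the container's writable layer as soon as the process exits.
docker run --rm ruby:2.1 ruby -e 'puts "disposable"'

# Anything that must outlive the container belongs outside it,
# e.g. a database or a volume mounted from the host.
docker run --rm -v /host/data:/data ruby:2.1 \
  ruby -e 'File.write("/data/out.txt", "kept")'
```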

One of the big advantages of container virtualization versus, say, full-VM virtualization is the low CPU and memory overhead. Think about it: say you fire up VirtualBox on your laptop, and then think about how many VMs you can run simultaneously. Each of those needs, what, half a gigabyte of RAM at a minimum to run a full system, and they're a number of gigabytes each on disk, so you can run a few of them and that works fine. But with Docker you can run hundreds, because you're not actually storing full copies of all those file systems and booting full instances of an operating system. It's more like running a process, a special magical process basically, than running an entirely new operating system.

Containers also boot up much more quickly. If you're booting a regular VM, you have to wait for the whole boot sequence; that's just the way it works. But with container virtualization and Docker, the time it takes to boot an isolated container is measured in milliseconds: you basically can't perceive how long it takes to start up a new Docker container or throw one away. You also get small images thanks to the AUFS layering we talked about, and all of this lets everything run at very, very high density, which means saving a bunch of money and also a bunch of time, because if you can run all of your infrastructure on half as many servers, that makes everybody's life easier. There are always going to be things that need to be dealt with on a per-server basis, and having fewer servers is a win.

So Docker has two main concepts: images and containers. This is a little tricky at first, and I kept getting them confused, but an image is a saved version of something that can be booted.

When I say image, you can think of it like a package: a Docker image is like a Docker package. And when you run that package, you get a container, which is like a process. A container consists of a root file system, which could be any Linux
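The image-as-package, container-as-process distinction can be sketched with a build recipe. The talk doesn't show one, so this is a minimal hypothetical Dockerfile for a Rack-based Ruby app; the base image, file names, and the assumption of a `config.ru` are all illustrative:

```dockerfile
# Dockerfile: the recipe that builds an image (the "package").
FROM ruby:2.1
WORKDIR /app

# Install gems first so this layer is cached between code changes.
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

# The process a container (the "process") runs when booted from the image.
CMD ["bundle", "exec", "rackup", "-o", "0.0.0.0"]
```

Running `docker build -t myapp .` produces the image once; each subsequent `docker run myapp` boots a fresh, disposable container from it.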

[ ... ]

Note: the remaining 5,403 words of the full transcription have been omitted to comply with YouTube's "fair use" rules.