RubyConf 2015

Developing security-critical applications with Ruby

Tom Macklin




Excerpt from the automatic transcription of the video generated by YouTube.

- Alright, well welcome everybody to my talk. Thanks for coming. Hope everyone's having a good conference. I know I am. Is everybody learning a lot? - [Voiceover] Yeah, real good. - Excellent. I try to leave a few minutes when I do talks because I learn so much in conferences I wanna talk about the stuff I'm learning in other people's talks more than what I came to talk about.

So if I go off on a side note about something that I heard, the last talk I was at was phenomenal. But anyway, I hope you guys get a lot out of my talk today. Before I say anything else, let me get this disclaimer out of the way. I work for the Naval Research Laboratory, but my talk today is my opinions based on my professional experience and my personal experience.

My opinions don't represent those of the U.S. Navy, the U.S. government, anything like that. As a matter of fact, if you do any research you'll probably find that there are a lot of people in the government who disagree with me on a lot of things. Also, another disclaimer.

I say 'we' a lot when I talk because I have a really close knit team, and it's an awesome team. And we argue about stuff, we don't always agree, but when I say 'we', I'm not talking about big brother or all the developers I work with. I'm just kind of subconsciously referring to the fact that we try to make as many decisions as we can as a team.

So I apologize in advance when I say 'we'. So enough about that. A little about me. I consider myself a good programmer. Not a great programmer, but a good programmer, and I like to keep things simple. I study a martial art called aikido, and in aikido we have a lot of sayings, and one of the sayings we have is that an advanced technique is just a simple technique done better.

And I like to apply that not just in martial arts, but in all aspects of my life, and programming is no exception. So everything I do, everything I talk about, the underlying theme is keep things as simple as you possibly can. So just a little bit about this Naval Research Lab thing.

It was started in 1923 by Congress at the recommendation of this guy, Thomas Edison, who said we needed a naval research lab, and so we have one. And the group I work in, the Systems Group, has come up with some pretty cool technology; most notably, the onion router Tor came out of NRL.

And a lot of the foundational technologies in virtual private networking were developed by Cathy Meadows and Ran Atkinson, two doctors at NRL. The Vanguard Space Satellite Program came out of NRL, which was America's first satellite program. Of course, Sputnik was first, out of the Soviet Union.

And there was a great paper from 1985 called Reasoning About Security Models. It was written by Dr. John McLean, who's my boss's boss's boss's boss's boss's boss. But anyway, it's a great paper. It talks about System Z, and if you're into academics it's a really cool theory about security.

So all that said, my talk is not about anything military related. It's not academia. It's not buzzword bingo. I had a really cool buzzword bingo slide, but I took it out because CeCe's was way better. So anyway, what am I going to be talking about? Well, I wanna spend some time unpacking what I mean by security critical.

Like we just heard in the last talk, people throw phrases around, and it means different things to different people. So I want to unpack what I mean by it. Sorry about that. I also wanna work through a use case. Now this use case isn't an actual use case, but it's kind of a composite of experiences I've had.

So it borrows from systems I've worked on and developed in, but it's not actually representative of any system we've ever built. But the main reason I'm here is this last point, next steps. We've got a lot of initiatives we're interested in pursuing to improve our ability to use Ruby in security critical applications.

And some of them we know how to do well. Others we have an idea how we'd do it, but we probably wouldn't do it well. And others we know we can't do. And so if anything you see on my next step slides rings a bell with you, please come talk to me after the talk because we're interested in getting help from people who wanna do cool stuff with security in Ruby.

So anyway, there was a great talk that I saw that influenced my thinking about this subject with Ruby. Back in 2012, I was at a conference called Software Craftsmanship North America. I really recommend you go sometime, if you haven't. It's a great conference.

But Uncle Bob gave this talk called Reasonable Expectations of the CTO. You probably haven't seen it; it's on Vimeo. If you haven't seen it, look it up. I'm not gonna summarize it for you, but watch it. And as you watch it, just add security to the list of problems that systems have.

It's very applicable to the security problem as well, and it rings even more true today than when he gave the talk in 2012. So when we talk about computer security, one of the things we talk about a lot is assurance. And 'assurance' is usually used as a verb: it's something that I do to assure you that everything is gonna be OK, that there's no problem.

Well, when I talk about assurance, I'm not talking about telling you everything is gonna be OK because what's the first thing you think when I tell you everything's gonna be OK? Something's wrong. So I don't want to assure you of anything. What I wanna do is talk about giving you assurances that allow you to make a decision of your own.

And even if you don't like the assurances that you get when you do a security analysis on something, at least you know where you stand, and that's really useful. So when I talk about assurances, I'm not trying to tell you everything's gonna be OK. I'm talking about evidence.

We've all seen this chart before, and whether you're trying to make money or make the world a better place or solve a security problem, this chart is not avoidable to my knowledge. And when we go about solving a security problem, we bump into it, too. And we look at it and go, well, we got a few choices.

We can do something really clever that's gonna outsmart the attackers. We could go buy a really cool library that's gonna provide us all this super awesome security and solve all of our problems. Or we could hire some outside consultant who's gonna assure us that everything's gonna be OK.

Well, don't do any of that 'cause attackers are really, really clever. They're more clever than me, they're more clever than you, and what's more, there are lots of them, and they have lots of time. You build a feature, it's on to the next feature. They are out there hammering on your stuff day after day, sometimes teams of them, if you're unlucky enough to be a target, and most of you aren't.

But we're going to make mistakes in our code. It's just a fact of life. There are going to be bugs. There are going to be security bugs. So I'm gonna talk about what we can do to defend ourselves. A key point I wanna make today is that a security-critical system should have the right security controls, in the right places, and with the right assurances.

Let me say that again. A security-critical system should have the right security controls, in the right places, and with the right assurances. Now I like to do that with architecture. We construct architecture, and a lot of times when we're building code, the principles that make code awesome are the same principles that make code secure.

We wanna reduce complexity. We wanna localize functionality. We wanna improve test coverage, things like that. But also we wanna make sure we have the right controls in the right places. A firewall at the front door isn't gonna keep bad guys out, just like the guy with a gun in your server room isn't gonna keep hackers out of your server.

So you've gotta not only consider architecture of your code and design and test coverage, but you also need to think about what controls you're using where. And how, more specifically, we layer those controls in our system. So some of these acronyms you may not recognize.
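The layering idea he's describing can be sketched in a few lines of Ruby. This is a hypothetical illustration, not code from the talk: every name here is invented, and real systems would use proper authentication and authorization libraries. The point is just that each control is small, lives in one place, and a request must pass through every layer in order.

```ruby
# Minimal sketch of layered security controls (all names hypothetical).
# Each lambda enforces exactly one control, so each can be reasoned
# about and tested on its own.
CONTROLS = [
  ->(req) { req[:user] ? req : (raise "unauthenticated") },          # who are you?
  ->(req) { req[:role] == :admin ? req : (raise "unauthorized") },   # may you do this?
  ->(req) { req[:payload].size < 1024 ? req : (raise "too large") }  # is the input sane?
].freeze

def handle(request)
  # The request threads through every control before any work happens;
  # any layer can stop it by raising.
  CONTROLS.reduce(request) { |req, control| control.call(req) }
  "ok: #{request[:payload]}"
end

handle(user: "tom", role: :admin, payload: "ping")  # passes all three layers
```

The ordering matters, too: the cheap identity check runs before anything touches the payload, which is one way of putting "the right controls in the right places" into practice.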

[ ... ]

Note: the remaining 3,918 words of the full transcript have been omitted to comply with YouTube's "fair use" rules.