DjangoCon 2015

How to test the performance of modern applications

Dustin Whittle

Presentation

Video

Transcript

Excerpt from the automatic transcript of the video generated by YouTube.

Alright, thanks everyone for joining. This is performance testing for modern apps, and we're going to be talking about some tools of the trade for testing performance on the server side, but also about understanding client-side performance. I have a ton of content and there are a bunch of notes at the bottom of my slides; I'll make them all available online, so if I go a little bit fast, know that there are plenty of notes for later.

The reality is that top engineering organizations treat performance not as a nice-to-have but as a critical feature of their product, because they understand that it has a direct impact on their business's bottom line. Most of the time developers don't really think about this until they go to launch, and I'm here to help change that. You can find out a little bit about me on Twitter at @dustinwhittle or at dustinwhittle.com.

So why does performance matter? Microsoft found that Bing searches that were two seconds slower resulted in a

4.3 percent drop in revenue per user, and when Mozilla shaved 2.2 seconds off their landing page experience, Firefox downloads increased 15.4 percent: they got 60 million more downloads just because the page was a bit faster. Making Barack Obama's website sixty percent faster increased donation conversions by fourteen percent. But the most impressive metric I've come across is that decreasing the end-user latency of Amazon.com's retail operations by 100 milliseconds resulted in a one percent improvement in revenue. So whether it's Yahoo, Shopzilla, AOL, or Amazon.com, all of these engineering organizations understand that ultimately performance directly impacts the bottom line.

So the question is: how fast is fast enough? At 0.1 seconds, or 100 milliseconds, it feels instantaneous, like flipping a page in a book, so you should really strive to keep your load times in that range. At one second the user can still think seamlessly, but after 10 seconds you really start to lose the attention of your users. There have been a bunch of performance studies on attention spans in applications and user experiences, and what they show is that performance really is key to a great user experience. I think everyone has

probably had the experience where you go to check out in an e-commerce store, you click the checkout button, and it just sits there for a long time; you very quickly start to lose faith, not knowing whether to just wait it out or whether you're going to get charged twice. So again, how fast is fast enough? 100 milliseconds feels instantaneous, like flipping a page in a book; between 100 and 300 milliseconds the delay is perceptible, so your users are going to start to notice; and after about one second you really start to interrupt the user's flow. Users expect a site to load in two seconds, and after three seconds forty percent will abandon your site; this comes from a Nielsen performance survey done a long time ago. The reality is

that mobile applications are even worse. This is really hard to do because modern applications are really complex: when you have 100 microservices talking to each other and you're making calls to external providers (a shipping provider, a payment processing provider, a fraud detection provider), it's really hard to have great performance. So with application complexity exploding, how do you manage this and how do you test for it? I think most companies treat performance, and really uptime, as a critical differentiator, and if you look at some of the major enterprise companies, they all treat uptime as a critical metric. I think we've all encountered this: if you provide a service to others, it's enforced by an SLA. So what's the goal here? The goal is to treat performance

as a feature, and we're going to talk about some tools of the trade for performance testing to do exactly that.

The first thing I like to start with is understanding your baseline performance. Raw Python is going to be pretty fast; when you start out with Django you have a framework layer that's going to add a bit of overhead, and then your application stack is going to add a bit more on top of that. What you really want to understand is your baseline performance on the specific hardware you're going to run in production: what's the performance of a static asset served without Python, what's the performance of a Python hello world (a very simple script), what's the overhead of your framework, and then what's your actual application? Oftentimes what you'll find is that the business transactions in your application have very different performance: the home page is going to be highly cached, whereas the checkout process is going to talk to a bunch of third-party providers, so it's going to be inherently slower. You should understand the performance of each one of these transactions and how it affects your users. I like to do that by measuring the static threshold, the hello world baseline, and the application benchmark.

If you've ever installed Apache, you've probably bumped into Apache Bench (ab); if not, you can apt-get install apache2-utils

on most Linux platforms and it becomes available. Apache Bench is a very simple tool for benchmarking the performance of applications; it isn't specific to Django or even to Python, and you can use most of these tools across any application platform. Apache Bench is crude and dead simple, so if you just want to get an idea of how fast a particular transaction is, like your home page, it's really easy to test with a single concurrent user. In this case what you're seeing is Apache Bench with -c for concurrency: I want to test one user going as fast as possible for 10 seconds against our acme demo app. So we run Apache Bench with a concurrency of one, a time of 10 seconds, and -k for keep-alive, so there's no delay between the requests, and what you'll get back is a response that looks like this: you get the requests per second, which is a very useful metric, but more important is the latency, the average response time per transaction.
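As a sketch, the two ab invocations described here might look like this (the hostname is a placeholder; substitute the URL of the transaction you want to measure):

```shell
# Single concurrent user, as fast as possible for 10 seconds;
# -k enables HTTP keep-alive so there is no reconnection delay
# between requests. The hostname is a placeholder.
ab -c 1 -t 10 -k http://acme-demo-app.com/

# The same test with 10 concurrent users: requests per second
# goes up, and the mean "Time per request" climbs with it.
ab -c 10 -t 10 -k http://acme-demo-app.com/
```

In the output, compare "Requests per second" against the mean "Time per request": capacity planning is about finding where the first plateaus and the second starts to climb.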

The balance in figuring out how much capacity your infrastructure can support is understanding when you max out your requests per second and the latency starts to rise. You can always serve more transactions, just very slowly, and you don't want your users waiting ten seconds just because there are a thousand of them showing up, so you really need to understand that balance. With Apache Bench it's really easy to start increasing the concurrency: in this case I'm going to test 10 users, so we just change the concurrency level to 10 and test again for 10 seconds. What you'll see is that we get more requests per second, in this case 65 requests per second, but the time per request has gone up to an average of 151 milliseconds.

Now, Apache Bench is great: it's quick and dirty and it makes it very easy to load test a server, but I prefer Siege. You can apt-get install siege on most platforms, or use port or brew to install siege. It's

pretty straightforward and has a very similar format: you can run siege with -c for a concurrency of 10 users and for a time of 10 seconds. Now, you'll notice that in these examples I'm load testing one endpoint, only the home page, which is only so useful. We get metrics very similar to Apache Bench: for ten concurrent users, about 65 transactions per second with an average response time of about 150 milliseconds.
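A sketch of the equivalent Siege run (hostname again a placeholder; note that Siege's -t flag takes a unit suffix, e.g. 10S for ten seconds):

```shell
# 10 concurrent users for 10 seconds; -b is benchmark mode,
# which removes Siege's default delay between requests.
# The hostname is a placeholder.
siege -c 10 -t 10S -b http://acme-demo-app.com/

# Siege can also replay a list of URLs (one per line) instead
# of hammering a single endpoint, via its -f/--file option.
siege -c 10 -t 10S -b -f urls.txt
```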

You can keep increasing the concurrency until you max it out, really until you see the latency start to skyrocket. What you want to understand is the maximum requests per second the machine can sustain before the average response time starts to increase. Now, this is fine for a very simple application, but most of the time applications aren't one endpoint; they're many different endpoints with different functionality: the home page, login, logout, add to cart, the checkout process, orders, and so on. So here's a quick tip for crawling the entire application to discover all the URL endpoints, because most of us inherit applications rather than building them from scratch every

time, so you often don't know all the functionality that exists. There's a tool called sproxy, a transparent HTTP proxy: basically, it lets you make requests through the proxy, and it will log each URL that you request. So if you want to interact with an application and crawl all of its endpoints, it's very easy to use sproxy and wget to emulate a search engine spider, and the goal here is to find all the URLs of the application. It starts with sproxy itself: all it does is run an HTTP proxy on port 9001, and every URL we access gets written to a urls.txt file. The next thing we do is use wget, which has a spider mode that emulates a search engine spider: it goes to the home page and recursively crawls all of the links. So if you want a quick and easy way to discover all the functionality of an application, you can run sproxy and wget to discover all the public URLs in your application. At the end we sort the output so we have a unique list of URLs, and you end up with something that looks like this (this is just a simple e-commerce application that I'm using, but there's
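A rough sketch of the crawl described here. The port 9001 and the urls.txt output follow the talk, but the exact sproxy flags vary by build and are assumptions; check your sproxy's help output. The hostname is a placeholder.

```shell
# Start sproxy, logging every requested URL. The -p/-o flags
# are assumed here; consult sproxy's own help for your build.
sproxy -p 9001 -o urls.txt &

# Crawl the site through the proxy using wget's spider mode:
# recursive, nothing saved to disk, all traffic via sproxy.
http_proxy=http://localhost:9001 wget --recursive --spider http://acme-demo-app.com/

# Reduce the log to a unique, sorted list of URLs.
sort -u urls.txt > urls-unique.txt
```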

[ ... ]

Note: the remaining 4,943 words of the full transcript have been omitted to comply with YouTube's "fair use" rules.