
Apache attacked by a "slow loris"


Posted Jun 24, 2009 20:49 UTC (Wed) by aliguori (subscriber, #30636)
In reply to: Apache attacked by a "slow loris" by mjthayer
Parent article: Apache attacked by a "slow loris"

It's not sockets that are the issue here, it's threads. Apache uses one thread per request and limits itself to a finite number of threads. This means that Apache is bound to a fixed number of simultaneous requests regardless of processor/memory/bandwidth resources. This is a design flaw, and it is due to the use of synchronous I/O routines to read data from the socket. It's an unfortunately common problem in network-facing servers.

The Right Thing to do is to use a state machine to process incoming requests so that a single thread can handle many requests at once. You will then be bound by processor/memory/bandwidth.
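The event-driven approach described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the names `on_ready` and `buffers` are mine, not from any real server): one thread, a readiness loop, and a per-connection buffer acting as the "state machine", so slow clients tie up a few bytes of memory rather than a whole thread.

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll/kqueue where available

# Listening socket on an ephemeral localhost port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, data=None)

buffers = {}  # per-connection request buffer: the connection's "state"

def on_ready(key):
    sock = key.fileobj
    if key.data is None:            # listener is readable: accept
        conn, _ = sock.accept()
        conn.setblocking(False)
        buffers[conn] = b""
        sel.register(conn, selectors.EVENT_READ, data="conn")
    else:                           # client data: advance this connection's state
        buffers[sock] += sock.recv(4096)
        if b"\r\n\r\n" in buffers[sock]:   # full request header received
            sock.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
            sel.unregister(sock)
            sock.close()
            del buffers[sock]

# Drive two "slowloris-style" clients that dribble their requests one
# byte at a time; the single server thread still serves both.
port = listener.getsockname()[1]
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(2)]
pending = {c: b"GET / HTTP/1.0\r\n\r\n" for c in clients}

responses = {}
while len(responses) < len(clients):
    for c, data in list(pending.items()):   # feed one byte per client
        if data:
            c.send(data[:1])
            pending[c] = data[1:]
    for key, _ in sel.select(timeout=0.01):  # process whatever is ready
        on_ready(key)
    for c in clients:                        # collect finished responses
        if c not in responses and not pending[c]:
            c.settimeout(1.0)
            try:
                responses[c] = c.recv(1024)
            except socket.timeout:
                pass

for c in clients:
    c.close()
listener.close()
sel.close()
```

Note that no thread ever blocks on a half-finished request: a connection that stalls simply stays registered with its partial buffer until data arrives.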

You could still mount a slowloris attack against IIS, but it becomes a traditional DoS because you have to be able to exhaust the web server's resources with your local resources. What makes Apache vulnerable here is that you don't need a lot of resources on the client to exhaust Apache's resources, regardless of how much more powerful the server is.



Apache attacked by a "slow loris"

Posted Jun 24, 2009 21:05 UTC (Wed) by mjthayer (guest, #39183) [Link]

You are right of course. It does seem rather ridiculous to me to pack all the TCP data going to the port into thousands of sockets just so that it can be unpacked again by a heavy-weight polling syscall, but that is a different subject.

All modern OSes offer alternative

Posted Jun 25, 2009 4:23 UTC (Thu) by khim (subscriber, #9252) [Link]

Have you seen the surveys? Do you know WHY nginx is growing so fast? Do you even know WHAT nginx is? It's a caching web server. It can serve static web pages and protect the "real" server (often an Apache server) from a slowloris attack. And it DOES NOT use "a heavy-weight polling syscall" - all modern operating systems offer an alternative...

P.S. The real motivation was not to fight the slowloris attack - it was to reduce server load when talking with thousands of dial-up clients. Think about it: if you have a huge number of very slow clients, the dynamics are the same! Server processes or threads are tied up for minutes while they serve "feature-rich" pages to clients who only consume 1Kb per second. Apache was unable to cope with this (the nginx author tried to fix it for years), so a new web server was born. And as the statistics show, real admins who are in charge of real sites know all about this problem. Tempest in a teapot...

All modern OSes offer alternative

Posted Jun 25, 2009 4:31 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

Why do you need an entirely new web server? Couldn't you do the same thing with a caching reverse proxy like Varnish? That way, you only need to configure one set of servers.

All modern OSes offer alternative

Posted Jun 25, 2009 4:45 UTC (Thu) by khim (subscriber, #9252) [Link]

> Why do you need an entirely new web server?
Because you need solutions, not buzzwords? I've explained why you need two servers below. Without involving the "real" web server, you can serve static pages (icons, images, etc.) via sendfile(2) - and this is important for real-world servers.
> Couldn't you do the same thing with a caching reverse proxy like Varnish?
You can call your frontend server a "web server", a "web accelerator" or any other term, but if your frontend is "heavily threaded, with each client connection being handled by a separate worker thread" then you have just added complexity without any benefit. What will happen to your frontend if you have 50,000 clients with open connections? Nginx can handle such a load on a medium-sized system.
> That way, you only need to configure one set of servers.
You still need to configure the server. Nginx was designed from the ground up to do two things and do them well:
1. Serve static pages.
2. Work as http-accelerator.
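The two roles above fit in one short configuration. This is a hypothetical minimal sketch (paths and the backend port are illustrative, the directives themselves are standard nginx ones): static assets come straight from disk via sendfile, everything else is proxied to the backend.

```nginx
server {
    listen 80;

    # Role 1: serve static assets directly from disk.
    location /static/ {
        root     /var/www;
        sendfile on;
    }

    # Role 2: act as an HTTP accelerator in front of the real server.
    location / {
        proxy_pass http://127.0.0.1:8080;  # backend Apache (example port)
    }
}
```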

All modern OSes offer alternative

Posted Jun 25, 2009 5:43 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

> Because you need solutions, not buzzwords?
This coming from somebody who's hawking a specific product as the solution to a whole class of problems? I don't think I'm the one who has to worry about buzzwords here.
> You still need to configure the server.
Reading the nginx webpage, it appears you can configure nginx as a caching reverse proxy. That's fine. My issue is that you pretend it's the only game in town when really, any caching reverse proxy will do. (And feature sets may differ; Varnish, for example, appears to have a more sophisticated load balancer.)

Also, I can't fathom why you would want your web accelerator serving content on its own. A caching reverse proxy setup is the only one that makes sense to me: that way, you have one place to configure what's served: the back-end servers. Because the back-end servers already mark what's static and what's not (via cache-control HTTP headers), you shouldn't have to do anything special to push static content to the front-end server, and the reverse proxy asking the back-end servers once in a while for some static content won't make a difference in the overall load.
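The mechanism described above can be sketched with two hypothetical back-end responses (header values are illustrative): the first, a static asset, tells the reverse proxy it may be cached for a day; the second, a dynamic page, forbids caching, so the proxy passes it through.

```http
HTTP/1.1 200 OK
Content-Type: image/png
Cache-Control: public, max-age=86400

HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: private, no-store
```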

All modern OSes offer alternative

Posted Jun 26, 2009 12:21 UTC (Fri) by tcabot (subscriber, #6656) [Link]

I can imagine different cases requiring different solutions. In one case (I think the one you're thinking of) the bulk of what gets served is generated by the back-end servers, and the "static" assets are smaller by comparison - site icons, CSS, JS, etc. In that case you're right: a proxy is the way to go, because you're concentrating control and configuration in one place.

On the other hand, let's say that your site serves massive quantities of "interesting" image files (which I understand was the original use case for nginx). In that case the server needs to be extremely efficient because the working set is so large that a cache wouldn't do much good.

Horses for courses.

Varnish is the answer

Posted Jun 26, 2009 13:04 UTC (Fri) by dion (guest, #2764) [Link]

Varnish only allocates a worker thread once the entire request has been received from the client, so simply putting Varnish in front of the web server under attack will defeat the attack.

If all you want to do is to mitigate a Slow Loris attack then just move your web server to a different port and start Varnish with the default configuration on port 80.
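The setup described above amounts to a single command. This is a deployment sketch, not a hardened configuration (the backend port is an example; `-a` and `-b` are varnishd's standard listen-address and backend flags):

```shell
# Backend web server has been moved to port 8080;
# Varnish listens on port 80 and proxies to it.
varnishd -a :80 -b 127.0.0.1:8080
```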

nginx vs slowloris

Posted Jul 4, 2009 17:18 UTC (Sat) by gvy (guest, #11981) [Link]

Vitya, linux.kiev.ua is running nginx + apache-1.3 and it still suffers badly from slowloris in my tests. Could you please elaborate on how to set things up so as to protect dynamic paths? I could only come up with proxying stuff with at least some fixed minimal timeout (which is not an option for lots of pages), and googling 'nginx slowloris' doesn't yield anything particularly useful to me.
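One option worth testing for the question above (not something the thread itself confirms): nginx has standard timeout directives that bound how long a slow client may take to deliver its request, which caps how long a slowloris connection can linger. The values here are illustrative.

```nginx
server {
    listen 80;
    client_header_timeout 10s;  # drop clients that dribble the request line/headers
    client_body_timeout   10s;  # ...or the request body
    keepalive_timeout     15s;

    location / {
        proxy_pass http://127.0.0.1:8081;  # backend Apache (example port)
    }
}
```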

PS: nginx really is an excellent static httpd/reverse proxy; anyone running a moderately busy site should consider looking into it. It can drop the number of Apache instances by an order of magnitude, together with the RAM they occupy.


Copyright © 2024, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds