
Apache attacked by a "slow loris"


Posted Jun 24, 2009 14:22 UTC (Wed) by mjthayer (guest, #39183)
Parent article: Apache attacked by a "slow loris"

I wonder whether sockets are the right thing for an application like Apache, which handles such massive numbers of connections. It ought to be possible for it to handle all TCP packets sent to its port itself, asynchronously, on a single-threaded event loop. I don't know if the APIs are available for that sort of thing, though, and it would probably need some TCP handling libraries (see the recent article on the middle-layer design anti-pattern...)



Apache attacked by a "slow loris"

Posted Jun 24, 2009 14:49 UTC (Wed) by hppnq (guest, #14462) [Link]

That would not solve anything, but simply create another consumable, scarce resource.

Apache attacked by a "slow loris"

Posted Jun 24, 2009 15:13 UTC (Wed) by mjthayer (guest, #39183) [Link]

I'm sure you're right, but I don't quite follow. What is the scarce resource here? Obviously several threads can be spread over several CPUs, but otherwise why are multiple threads handling sockets superior to one thread handling incoming packets asynchronously?

Apache attacked by a "slow loris"

Posted Jun 24, 2009 15:49 UTC (Wed) by hppnq (guest, #14462) [Link]

You would run out of memory, I guess. My comment was mostly inspired by the observation that the DoS is caused by Apache's rather relaxed handling of the HTTP protocol, which in principle makes the lower-level data processing irrelevant. But of course the DoS would appear differently.

You may want to take a look at network channels, by the way: your idea is really not crazy at all. :-)

Apache attacked by a "slow loris"

Posted Jun 28, 2009 20:23 UTC (Sun) by pphaneuf (guest, #23480) [Link]

The scarce resources here are TCP ports and memory. Maybe you can make things more memory efficient, or add memory, but the number of TCP ports is both fixed by TCP itself and inconveniently small.

No matter how you implement it, there's a fixed cap on the number of TCP connections per IP address. You could add IP addresses to a server, but that would waste another precious resource, since under normal usage most web servers can't even handle the maximum number of connections possible from a single IP.

Apache attacked by a "slow loris"

Posted Jun 28, 2009 21:59 UTC (Sun) by dlang (guest, #313) [Link]

TCP ports don't get used up on servers when you have lots of connections.

the limit is that you cannot duplicate the (source IP, source port, destination IP, destination port) tuple within about a two-minute period.

when connecting to a server, the destination port and destination IP are fixed, so a client can make enough connections that no further connections are possible from that source IP, but that doesn't hurt anyone else.
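A back-of-the-envelope version of that limit (the numbers are illustrative; the exact ephemeral-port range varies by OS):

```python
# With the server's IP and port fixed, each connection from one client
# IP must use a distinct source port, so the four-tuple rule caps a
# single client IP at roughly the number of usable source ports.
total_ports = 2**16    # 16-bit port field: 65536 values
reserved = 1024        # low ports are conventionally not used as source ports
print(total_ports - reserved)   # ~64512 simultaneous connections per client IP
```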

that also isn't the attack that's happening here.

Apache attacked by a "slow loris"

Posted Jun 28, 2009 20:43 UTC (Sun) by pphaneuf (guest, #23480) [Link]

Oh, and Squid uses a single thread with non-blocking I/O (asynchronous), yet it is affected, and IIS uses a thread-per-connection (if I recall correctly), just like Apache with the threaded MPM, but it is unaffected.

I think the key difference is how timeouts are handled. If the 300-second timeout in Apache is reset at every header (or worse, at every packet received, which could be one character each!), then you could stretch a connection out for a long time. Maybe lighttpd and IIS do something like give you 300 seconds to get all the headers in, and after that, too bad, you're cut off (freeing the TCP port for another connection). You could still mount some sort of DoS attack, but the attacker would have to keep it up much more intensively so that very few legitimate clients slip through, whereas Slowloris eventually reaches 100% effectiveness against affected servers with next to zero difficulty for the attacker.
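The distinction being guessed at here can be sketched in a few lines (a hypothetical illustration using Python's asyncio, not Apache's or lighttpd's actual code): bound the *total* time allowed for the headers, rather than restarting a timer on every byte received.

```python
import asyncio

async def read_headers_with_deadline(reader, deadline=300):
    # One overall deadline for the whole header phase: however slowly
    # bytes trickle in, the clock never resets, so a slowloris-style
    # client is cut off after `deadline` seconds at the latest.
    async def read_all_headers():
        headers = b""
        while not headers.endswith(b"\r\n\r\n"):
            chunk = await reader.read(1)
            if not chunk:          # connection closed early
                break
            headers += chunk
        return headers
    return await asyncio.wait_for(read_all_headers(), timeout=deadline)
```

The vulnerable pattern, by contrast, would apply the timeout to each read individually, so every byte the attacker dribbles in buys another full timeout period.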

Apache attacked by a "slow loris"

Posted Jun 28, 2009 22:05 UTC (Sun) by dlang (guest, #313) [Link]

this is exactly the problem. they have one timeout variable that's used for many different things, and while some of the things need long timeouts, others don't, and could be set much shorter.

hopefully this will force the apache team to tackle this issue and separate the timeouts, but from the article it sounds like they are not responding well.

they are right that the basic attack approach of having a botnet of servers connect to an apache server and tie it up is an old attack that has been possible forever. fixing the timeout issues will not address that, and even after fixing the timeouts attackers can still kill an apache server by making legitimate requests that take time to process. but fixing the timeouts will go a long way towards leveling the playing field again; right now it's tilted heavily in favor of the attackers.

Apache attacked by a "slow loris"

Posted Jun 29, 2009 1:20 UTC (Mon) by njs (guest, #40338) [Link]

> Oh, and Squid uses a single thread with non-blocking I/O (asynchronous), yet it is affected

The original report said that Squid was affected, but the Squid maintainers can't reproduce it (http://www.squid-cache.org/bugs/show_bug.cgi?id=2694); looks like a mistake in the original report to me.

> and IIS uses a thread-per-connection (if I recall correctly)

The article here claims that IIS does not use thread-per-connection, but rather some sort of asynchronous state-machine design (like lighttpd or squid) plus a thread pool to parallelize that state-machine.

> Maybe lighttpd and IIS do something like give it 300 seconds to get all the headers, and after that, too bad, you're just cut off

No -- they just handle the slow connection as normal. The difference is that for them, an idle connection costs a few bytes of memory describing that connection, and one can easily have thousands of these data structures sitting around without anyone noticing. For Apache, an idle connection ties up an entire server process, and for various reasons you can't have thousands of server processes sitting around.

Apache attacked by a "slow loris"

Posted Jun 24, 2009 15:05 UTC (Wed) by epa (subscriber, #39769) [Link]

It has often been pointed out that TCP seems a poor basis for a stateless protocol like HTTP. (To be fair, the HTTP headers can be quite big and you need to POST data sometimes, making a request too big to fit in a single UDP datagram. But it's interesting to wonder what the world would look like if Tim B-L had chosen the other path...)

Apache attacked by a "slow loris"

Posted Jun 25, 2009 16:25 UTC (Thu) by iabervon (subscriber, #722) [Link]

HTTP is actually quite stateful: each connection is in the state of expecting responses to particular requests; it's really handy not having to invent TCP in order to figure out what response goes to what request, what's coming back when and where, what you're still waiting for, what you should give up on and start over requesting, etc. Once you have any state at all, it's much easier to use TCP than it is to use UDP and deal with the state at a higher level. Simply getting responses routed back to requesters through firewalls and NAT with UDP is a pain and requires a lot of protocol-specific analysis in a lot of devices.

HTTP is "stateless" only in that you often return to the default state, not in that you never leave the default state. UDP is only really appropriate for cases where you don't care if your message is received and you won't get a response to it.

Apache attacked by a "slow loris"

Posted Jun 24, 2009 15:35 UTC (Wed) by mjthayer (guest, #39183) [Link]

Actually, is there a way to get at the raw TCP data on a given port other than via a raw IP socket?

Apache attacked by a "slow loris"

Posted Jun 24, 2009 15:59 UTC (Wed) by mjthayer (guest, #39183) [Link]

See hppnq's answer above...

Apache attacked by a "slow loris"

Posted Jun 24, 2009 20:49 UTC (Wed) by aliguori (subscriber, #30636) [Link]

It's not sockets that are the issue here, it's threads. Apache uses one thread per request and limits itself to a finite number of threads. This means that Apache is bound to a fixed number of simultaneous requests regardless of processor/memory/bandwidth resources. This is a design flaw, and it is due to the use of synchronous I/O routines to read data from the socket. It's an unfortunately common problem in network-facing servers.

The Right Thing to do is to use a state machine to process incoming requests so that a single thread can handle many requests at once. You will then be bound by processor/memory/bandwidth.
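As a toy illustration of that state-machine approach (the names here are invented, not Apache's or IIS's internals), per-request parsing state can live in a small object that an event loop feeds as bytes arrive:

```python
from enum import Enum, auto

class State(Enum):
    HEADERS = auto()
    DONE = auto()

class RequestParser:
    """Per-connection parsing state; one thread can own thousands."""
    def __init__(self):
        self.state = State.HEADERS
        self.buf = b""

    def feed(self, data):
        # Called by the event loop whenever bytes arrive for this
        # connection; returns True once the request line and headers
        # have been fully received.
        self.buf += data
        if self.state is State.HEADERS and b"\r\n\r\n" in self.buf:
            self.state = State.DONE
        return self.state is State.DONE
```

A slow client simply leaves its parser sitting in `State.HEADERS`; no thread blocks waiting for it.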

You could still mount a slowloris attack against IIS, but it becomes a traditional DoS because you have to exhaust the web server's resources with your local resources. What makes Apache vulnerable here is that you don't need a lot of resources on the client to exhaust Apache's resources, regardless of how much more powerful the server is.

Apache attacked by a "slow loris"

Posted Jun 24, 2009 21:05 UTC (Wed) by mjthayer (guest, #39183) [Link]

You are right of course. It does seem rather ridiculous to me to pack all the TCP data going to the port into thousands of sockets just so that it can be unpacked again by a heavy-weight polling syscall, but that is a different subject.

All modern OSes offer alternative

Posted Jun 25, 2009 4:23 UTC (Thu) by khim (subscriber, #9252) [Link]

Have you seen the surveys? Do you know WHY nginx is growing so fast? Do you even know WHAT nginx is? It's a caching web server. It can serve static web pages and protect the "real" server (often an Apache server) from a slowloris attack. And it DOES NOT use "a heavy-weight polling syscall" - all modern operating systems offer an alternative...

P.S. The real motivation was not to fight the slowloris attack - it was to reduce server load when talking to thousands of dial-up clients. Think about it: if you have a huge number of very slow clients, the dynamics are the same! Server processes or threads are tied up for minutes while they serve "feature-rich" pages to clients who only consume 1KB per second. Apache was unable to cope with this (the nginx author tried to fix it for years), so a new web server was born. And as the statistics show, real admins in charge of real sites know all about this problem. Tempest in a teapot...

All modern OSes offer alternative

Posted Jun 25, 2009 4:31 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

Why do you need an entirely new web server? Couldn't you do the same thing with a caching reverse proxy like Varnish? That way, you only need to configure one set of servers.

All modern OSes offer alternative

Posted Jun 25, 2009 4:45 UTC (Thu) by khim (subscriber, #9252) [Link]

Why do you need an entirely new web server?
Because you need solutions, not buzzwords? I've explained why you need two servers below. Without a "real" web server you can serve static pages (icons, images, etc.) via sendfile(2) - and this is important for real-world servers.
Couldn't you do the same thing with a caching reverse proxy like Varnish?
You can call your frontend server a "web server", a "web accelerator", or any other term, but if your frontend is "heavily threaded, with each client connection being handled by a separate worker thread", then you have just added complexity without any benefit. What will happen to your frontend if you have 50,000 clients with open connections? Nginx can handle such a load on a medium-sized system.
That way, you only need to configure one set of servers.
You still need to configure the server. Nginx was designed from the ground up to do two things and do them well:
1. Serve static pages.
2. Work as http-accelerator.

All modern OSes offer alternative

Posted Jun 25, 2009 5:43 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

Because you need solutions, not buzzwords?
This coming from somebody who's hawking a specific product as the solution to a whole class of problems? I don't think I'm the one who has to worry about buzzwords here.
You still need to configure the server.
Reading the nginx webpage, it appears you can configure nginx as a caching reverse proxy. That's fine. My issue is that you pretend it's the only game in town when really, any caching reverse proxy will do. (And feature sets may differ; Varnish, for example, appears to have a more sophisticated load balancer.)

Also, I can't fathom why you would want your web accelerator serving content on its own. A caching reverse proxy setup is the only one that makes sense to me: that way, you have one place to configure what's served: the back-end servers. Because the back-end servers already mark what's static and what's not (via cache-control HTTP headers), you shouldn't have to do anything special to push static content to the front-end server, and the reverse proxy asking the back-end servers once in a while for some static content won't make a difference in the overall load.

All modern OSes offer alternative

Posted Jun 26, 2009 12:21 UTC (Fri) by tcabot (subscriber, #6656) [Link]

I can imagine different cases requiring different solutions. In one case (I think the one you're thinking of) the bulk of what gets served is generated by the back-end servers, and the "static" assets are smaller by comparison - site icons, css, js, etc. In that case you're right: a proxy is the way to go, because you're concentrating control and configuration in one place.

On the other hand, let's say that your site serves massive quantities of "interesting" image files (which I understand was the original use case for nginx). In that case the server needs to be extremely efficient because the working set is so large that a cache wouldn't do much good.

Horses for courses.

Varnish is the answer

Posted Jun 26, 2009 13:04 UTC (Fri) by dion (guest, #2764) [Link]

Varnish only allocates a worker thread once the entire request has been received from the client, so simply slapping Varnish in front of the webserver under attack will defeat the attack.

If all you want to do is to mitigate a Slow Loris attack then just move your web server to a different port and start Varnish with the default configuration on port 80.
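A minimal command along those lines (illustrative only; port 8080 for the relocated backend is an assumption - `-a` sets Varnish's listen address and `-b` names the backend to proxy to):

```shell
# Move the real web server to, say, port 8080, then put Varnish on 80.
varnishd -a :80 -b localhost:8080
```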

nginx vs slowloris

Posted Jul 4, 2009 17:18 UTC (Sat) by gvy (guest, #11981) [Link]

Vitya, linux.kiev.ua is running nginx + apache-1.3 and it's rather depressed by slowloris in my tests. Could you please elaborate on how to cook things up so as to protect dynamic paths? I could only come up with proxying stuff for at least some fixed minimal timeout (which is not an option for lots of pages) and googling 'nginx slowloris' doesn't yield anything particularly useful to me.

PS: nginx is really an excellent static httpd/reverse proxy; anyone running a moderately busy site should consider looking into it. It can drop the number of apache instances by an order of magnitude, together with the RAM they occupy.

Apache attacked by a "slow loris"

Posted Jun 25, 2009 4:29 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

So you're talking about having Apache implement TCP in userspace? That makes no technical sense whatsoever. The kernel implementation is thoroughly debugged, mature, patched regularly, and faster to boot. Apache will have to maintain just as much state as it does today, and moving TCP to userspace solves nothing.

A "socket" is just a handle to a tiny bit of state information describing the connection, and of course it's the right abstraction. It's what the protocol is specified to use, and in-order, streamed delivery is the perfect medium for HTTP anyway.

The real problem here is what Apache does after it reads data from a socket. Recall that both lighttpd and IIS use sockets (just like every other network daemon on Earth), and they are not vulnerable to this attack.

The counter to this attack is simple, really, and it's conceptually the same as a counter to a SYN flood: only commit your resources when the remote party has committed his own. The problem here is how to shoehorn that idea into Apache's model, which commits resources (in this case, processes) very early.

Here's one uninformed idea: accept connections and read HTTP requests in one master process, asynchronously. Only when a complete request has been read, send the file descriptor of the connection to a worker; the actual handoff can be achieved using an SCM_RIGHTS message over a Unix domain socket.

Apache attacked by a "slow loris"

Posted Jun 27, 2009 15:06 UTC (Sat) by dmag (guest, #17775) [Link]

> So you're talking about having Apache implement TCP in userspace?

No, having a generic library for TCP in user space.

> That makes no technical sense whatsoever.

Please read the linked article. http://lwn.net/Articles/169961/

> The kernel implementation is thoroughly debugged, mature, patched regularly,

Agreed.

> and faster to boot.

No. Right now, the driver gets the packet, later the kernel gets around to looking at it (maybe doing checksums, etc.), and then much later userspace requests it. If each of these happens on a different CPU, you will waste thousands (tens of thousands?) of instructions because of CPU caches and data-locking issues.

To become a better programmer, read this: http://lwn.net/Articles/250967/
It's really long, but pay attention to the parts where loops get 10x faster just by re-arranging data structures.

> The real problem here is what Apache does after it reads data from a socket.

Agreed. Bypassing TCP sockets is icing on the cake once you've solved the current bottleneck.

> Here's one uninformed idea: accept connections and read HTTP requests in one master process, asynchronously.

But why does that process *have* to be Apache? Just put another web server in front of it (Nginx, Varnish, etc.). Apache is really a "big and expensive" single-threaded application server (mod_php, mod_passenger, mod_perl, etc.). In fact, Apache isn't especially good at serving static files either. I have to admit, Nginx has the best architecture (but consider Varnish if caching is a big win).

It's like when you go to the big warehouse stores, and before you get to the front of the line, some guy with a scanner has already scanned your cart, and all you do is pay at the register without waiting.


Copyright © 2024, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds