Category Archives: security

Whatsupp?

Funny that.  Just a couple of weeks ago, I wrote:

The spy in your ‘puter or ‘phone … Some of that is P2P communications software like Microsoft’s skype or Facebook’s whatsapp, that should be prime vehicles for Aussie-style targeted espionage.

Suppose you’re a government spy agency that has leaned on whatsapp to introduce your spyware.  You want to get everyone to update to a version with the spyware.  How do you go about it?  How about an announcement of a serious security flaw in earlier versions to persuade everyone who might have something to hide to make the upgrade?

As reported, the whatsapp flaw went much deeper than merely spying on whatsapp traffic (as per my earlier comment): it was used to install Pegasus, some of the world’s most sophisticated spyware, developed by the Israeli company NSO and sold to government agencies for total surveillance of dangerous elements such as dissidents and human rights lawyers.  The Reg article quotes a comment that kind-of summarises it:

NSO Group has been bragging that it has no-click install capabilities for quite some time. The real story here is that WhatsApp found the damn thing.

— Eva (@evacide)

Indeed.  Pegasus wasn’t new, and was thought to have been distributed by more conventional means (and no doubt was, to less-than-paranoid users).  How did they make the connection between it and a critical whatsapp bug?  One might speculate there was more to this story than is being told!

A good day to bury other security/spyware news?  Golly, what a coincidence that Thrangrycat was also just announced.  The perfect way to bury malware (something over and above the official lawful intercept, the wiretapping required of them by the US Government) in Cisco routers, switches and firewalls, so deeply that future upgrades won’t dislodge it.

Wicked speculation: could it be the amount of work they’ve had to devote to supporting US Government spying requirements that caused Cisco to fall behind an unencumbered Huawei?

Quis Custodiet Ipsos Custodes?

With the controversy over the US and its allies adopting Huawei kit generating more heat than light, I think perhaps it’s time to don my mathematician’s hat and take a look at what could and couldn’t really be at stake here.  Who could be spying on us, and how?

Much of the commentary on this is on the level of legislating the value of pi.  That is to say, it runs into a fundamental conflict with basic laws of nature.  At the heart of this is Trump’s ranting about China spying on us: the idea that a 5G router (or any other infrastructure component) could spy on his intelligence services’ communications is on the level of worrying about catching cold from reading my blog because I sneezed while writing it.

At least, a router acting on its own.  A router in collaboration with other agents could plausibly be a different story, but more on that later.

To set the scene, I can recommend Sky’s historical perspective: Huawei’s 5G network could be used for spying – while the West is asleep at the wheel.  This looks back to the era of British domination of the world’s communications infrastructure, and how we successfully used that to eavesdrop on German wartime communications.  It also traces the British company involved, which was bought by Vodafone in 2012.

Taking his lesson from history, Sky’s correspondent concludes that if the Brits and the Americans could do it (the latter a longstanding conspiracy theory more recently supported by the Snowden leaks[1]), then so could the Chinese.  Of Huawei (a private company), he says:

[founder] Ren Zhengfei … has said his firm does not spy for China, and that he would not help China spy on someone even if required by Chinese law.

Personally, I’m inclined to believe him.

But it may also be a promise he is unable to keep, even if he wants to. The state comes before everything.

which might just be plausible, with the proviso that it would risk destroying China’s world-leading company and a powerhouse of its economy.

But the historical analogy misses one crucial difference in the modern world.  Modern encryption.  Maths that emerged around the 1980s (despite the US government’s strenuous efforts to suppress it), that continues to evolve, and that is now routinely used online, ensures that traffic passing through Huawei-supplied infrastructure carries exactly zero information of the kind historically used to break cyphers such as (famously) the Enigma.  Encryption absolutely defeats the prospect of China doing what Britain and America did.  And – particularly since Snowden[1] – encryption is increasingly widely deployed, even for data whose security is of very little concern, such as a blog at wordpress.org.

Unless of course the encryption is compromised elsewhere.  The spy in your ‘puter or ‘phone.  Or the fake certificate that enables an imposter to impersonate a trusted website or correspondent.  These are real dangers, but none of them is under Huawei’s (let alone the Chinese government’s) control or influence.

Looking at it another way, there’s a very good reason your online banking uses HTTPS – the encrypted version of HTTP.  It’s what protects you from criminals listening in and stealing your data, and gaining access to your account.  The provenance of the network infrastructure is irrelevant: the risk you need to protect against is that there is any compromised component between you and your bank.  Which is exactly what encryption does.
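For the curious, you can watch that protection at work from a command line.  A minimal sketch (the bank hostname is just a placeholder): the client checks that whatever answers presents a certificate signed by a trusted authority for the name it asked for, and refuses to talk otherwise.

$ openssl s_client -connect www.examplebank.com:443 -servername www.examplebank.com </dev/null 2>/dev/null | grep -E 'subject=|issuer=|Verify return code'
# A genuine endpoint shows the bank's own subject, a recognised issuer, and
# "Verify return code: 0 (ok)" (assuming your openssl knows your system trust
# store).  A compromised box in the path can relay the bank's certificate but
# cannot forge one, so anything it substitutes fails this check (and the
# equivalent check your browser performs).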

So why is the US government attacking Huawei so vigorously, not merely banning its use there but also threatening its allies with sanctions?  I can see two plausible explanations:

  1. Pure protectionism.  Against the first major Chinese technology company to be not merely competitive with but significantly ahead of its Western competitors in a field.  And against the competitive threat of 5G rollout giving Europe and Asia a big edge over the US.
  2. The US intelligence agencies’ own spying on us.

OK, having mooted (2), it’s time to return to my earlier remark about the possibility of a router collaborating with another agent in spying on us.  The spy in your ‘puter or ‘phone.  There’s nothing new about malware (viruses, etc) that spies on you: for example, it might log keypresses to steal your passwords (this is why financial institutions routinely make you enter some part of your credentials using mouse and menus rather than the keyboard – it makes it much harder for malware to capture them).  Or alternatively, an application (like a mailer, web browser, or video/audio communication software) encrypts, but inserts the spy’s key alongside the legitimate users’ keys: this is essentially what the Australian government legislated for to spy on its own citizens.
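That second, Aussie-style option needs no exotic technology at all.  A minimal illustration with PGP (the addresses are made up): adding an extra recipient is a one-line change, and the result looks no different to the intended recipient unless she goes looking.

# normal: encrypt to the intended recipient only
$ gpg --encrypt --recipient alice@example.org message.txt
# "assistance and access", Aussie-style: quietly add the agency's key too
$ gpg --encrypt --recipient alice@example.org --recipient agency@spooks.example message.txt
# either way the output is message.txt.gpg; only the recipient list differs,
# and it takes a deliberate inspection to spot the extra key:
$ gpg --list-packets message.txt.gpg | grep keyid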

But such malware, even when installed successfully and without your knowledge, has a problem of its own.  How to “phone home” its information without being detected?  If it makes an IP connection to a machine controlled by the attacker, that becomes obviously suspicious to a range of tools in a techie’s toolkit.  Or for non-techie users, your antivirus software (unless that is itself a spy).  So it’ll have a pretty limited lifetime before it gets busted.  Alternatively, if it ‘phones home’ low-level data without IP information (that’ll look like random line noise to IP tools if they notice it at all), the network’s routers have nowhere to send it, and will just drop it.
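For the techies, a sketch of the kind of check I mean (Linux tools; the interface and address are purely illustrative):

# list established connections and the processes that own them
$ ss -tnp state established
# or with the older toolkit
$ netstat -tnp | grep ESTABLISHED
# anything talking to a peer you can't account for merits a closer look,
# e.g. a capture of traffic to just that address:
$ tcpdump -i eth0 host 203.0.113.99

A spy that shows up in lists like these doesn’t stay secret for long, which is exactly the problem described above.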

This smuggling of illicit or compromised data to a clandestine listener is where a malicious router might conceivably play a role.  But for that to happen, the attacker needs a primary agent: that spy in your ‘puter or ‘phone.  If anyone’s intelligence service has spyware from a hostile power, they have an altogether more serious problem than a router that’ll carry or even clone its data.

And who could install that spy?  Answer: the producers of your hardware or software.  Companies like Microsoft, Apple, Google and Facebook have software installed on most ‘puters and ‘phones.  Some of that is P2P communications software like Microsoft’s skype or Facebook’s whatsapp, that should be prime vehicles for Aussie-style targeted espionage.  If anyone is in a position to spy on us and could benefit from the cooperation of routers to remain undetected, it’s the government who could lean on those companies to do its bidding.  I’m sure the companies aren’t happy about it, but as the Sky journalist said of Huawei, “it may also be a promise he is unable to keep, even if he wants to. The state comes before everything”.

China’s presence in any of those markets is a tiny fraction of what the US has.  Could it be that the NSA made Huawei an offer they couldn’t refuse, but they did refuse and the US reaction is the penalty for that?  It’s not totally far-fetched: there’s precedent with the US government’s treatment of Kaspersky.

And it would certainly be consistent with the US government’s high-pressure bullying of its allies.  The alternative explanation to pure protectionism is that they don’t want us to install equipment without NSA spyware!  The current disinformation campaign reminds me of nothing so much as Bush&Blair’s efforts to discredit Hans Blix’s team ahead of the Iraq invasion.

[1] I’m inclined to believe the Snowden leaks.  But I’m well aware that anything that looks like Intelligence information might also be disinformation, and my inclination to believe it would then hint at disinformation targeted at people like me.  So I’ll avoid rash assumptions one way or t’other.  Snowden’s leaks support a conspiracy theory, but don’t prove it.

Pretty Good Phishing

PGP is not broken.  It has long been the best framework most of us have for digital identity, and a secure means of communication.

Sadly the same cannot be said for certain popular PGP tools, nor for vast numbers of tutorials out there.  The usage we enjoyed and became accustomed to for a quarter century will now lead at best to confusion, and at worst to mistakes that could defeat the entire purpose of PGP and leave users wide open to spoofing.  That applies both to longstanding users who understand it well, and to the newbie who has read and understood a tutorial.

The underlying problem is that 32-bit (8 hex character) key IDs are comprehensively broken.  The story of that is told at evil32.com, by (I think) the people who originally demonstrated the issue.  It’s developed further since I last paid attention to it (and drew my colleagues’ attention to the need to stop using those 32-bit key IDs), in that an entire ‘shadow strong set’ has now been uploaded to the keyservers.  Those imposters were revoked by the evil32 folks, but with the idea being out there, anyone could now repeat that exercise and generate their own fake identities and fake Web of Trust.  And when a real malefactor does that, they’ll have the private keys, so there’ll be no-one to revoke them.

Let’s take a look at a recent sequence of events, when I rolled a release candidate for an Apache software package, and PGP-signed it.  Bear in mind, this is all happening in a techie community: people who have been happily using PGP for years.

[me] Signs a software bundle, uploads it with the signature to web space.
[colleague] Checks the software, comes back with a number of comments.  Among them:

- Key B87F79A9 is listed as "revoked: 2016-08-16" in key server

Where does that come from?  I take great care of my PGP keys, and I certainly don’t recollect revoking that one.  To have revoked it, someone needs to have had access to both my private key and my passphrase, which is kind-of equivalent to having both the chip and the PIN to use my bank card (and that’s ignoring risks like someone tampering with my post on its way from the bank).  This is … impossible … alarming!

Yet this is exactly what happens if you RTFM:

% gpg --verify bundle.asc
gpg: Signature made Sun 16 Apr 2017 00:00:14 BST using RSA key ID B87F79A9
gpg: Can't check signature: public key not found

We don’t have the release manager’s public key ( B87F79A9 ) in our local system. You now need to retrieve the public key from a key server.

% gpg --recv-key B87F79A9
gpg: requesting key B87F79A9 from HKP keyserver pgpkeys.mit.edu
gpg: key B87F79A9: public key "Nick Kew <me>" imported
gpg: Total number processed: 1
gpg:           imported: 1

That’s a paraphrased extract from a real tutorial (which I intend to update, if no-one else gets there first).  It was fine when it was written, but now imports not one but two keys.  Here they are:

$ gpg --list-keys B87F79A9
 pub 4096R/B87F79A9 2011-01-30
 uid Nick Kew <niq@apache...>
 uid Nick Kew (4096-bit key) <nick@webthing...>
 sub 4096R/862BA082 2011-01-30

pub 4096R/B87F79A9 2014-06-16 [revoked: 2016-08-16]
 uid Nick Kew <niq@apache...>

Both appear to be me; one is really me, the other an imposter from the evil32 set.  It’s easy to see when we know what we’re looking for, but could be confusing if unexpected!

The problem goes away if we use 64-bit Key IDs, or (nowadays strongly recommended) the full 160-bit (40 character) fingerprint.  It is computationally infeasible for anyone to impersonate that, and indeed, they haven’t.

$ gpg --fingerprint B87F79A9
 pub 4096R/B87F79A9 2011-01-30
 Key fingerprint = 3CE3 BAC2 EB7B BC62 4D1D 22D8 F3B9 D88C B87F 79A9
 uid Nick Kew <niq@apache...>
 uid Nick Kew (4096-bit key) <nick@webthing...>
 sub 4096R/862BA082 2011-01-30

pub 4096R/B87F79A9 2014-06-16 [revoked: 2016-08-16]
 Key fingerprint = C74C 8AA5 91CB 3766 9D6F 73C0 2DF2 C6E4 B87F 79A9
 uid Nick Kew <niq@apache...>

The imposter’s fingerprint is completely different from mine.  It’s not PGP that’s broken, it’s the use of 32-bit/8-character key IDs in our tools, our tutorials, and our minds, that’s at fault.

However, the problem is a whole lot worse than that.  It’s not just my key (and everyone else’s in the Strong Set at the time of the evil32 demo) that has an imposter, it’s the entire WoT.  Let’s see if WordPress will let me present these side-by-side if I truncate the lines a bit.  The commandline used here is

$ gpg --list-sigs [fingerprint] |egrep ^sig|cut -c14-50|sort|uniq|head -5

which lists me:

My Key                                   Imposter
010D6F3A 2012-04-11  dirk astrath (mo    010D6F3A 2014-08-05  dirk astrath (mo
02D1BC65 2011-02-07  Peter Van Eynde     02D1BC65 2014-08-05  Peter Van Eynde
0AA3BF0E 2011-02-06  Christophe De Wo    0AA3BF0E 2014-08-05  Christophe De Wo
16879738 2011-02-07  Markus Reichelt     16879738 2014-08-05  Markus Reichelt
1DFBA164 2011-02-07  Bernhard Wiedema    1DFBA164 2014-08-05  Bernhard Wiedema

The first field there shows the culprit 8-hex-char Key IDs of my signatories and their evil32 doppelgangers.  The only clue is in those dates, which would be easy to overlook.  Otherwise we have a complete imposter WoT.   Those IDs offer no more security than a checksum (such as MD5 or SHA) if used without due care, and without a chain of trust right back to the user’s own signature (which is something you probably don’t have if you’re not a geek).

There are a lot of tools and tutorials out there that need updating to prevent this becoming yet another phisher’s playground.  Tools should not merely stop displaying 8-character key IDs, they shouldn’t even accept them.  I don’t think mere disambiguation is enough when an innocent user might thoughtlessly just select, say, the first of competing options.
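In the meantime there’s a little self-defence to be had from gpg’s own configuration, and from the habit of quoting full fingerprints rather than short IDs.  A sketch (the fingerprint is mine, from the listing above):

# in ~/.gnupg/gpg.conf:
# never display 8-character IDs
keyid-format long
# show full fingerprints in listings
with-fingerprint

# and fetch or refer to keys by full fingerprint, never by a short ID:
$ gpg --recv-key 3CE3BAC2EB7BBC624D1D22D8F3B9D88CB87F79A9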

I’ve already been diving into some of those tutorials where I have write access to update them, but the task is complicated by having to work in the context of a document that deals with more than just the one thing, and without adding too much complexity for readers.  So I decided to work through the story here first!

Under attack

Yesterday morning I woke up to several hundred (or was it thousand?) messages from the online contact form on my website.  They came from what was clearly an automated dumb probe: all within a few minutes just before 4 a.m.  The probe had tried filling different fields with all kinds of payloads: fishing Unix paths, fishing Windows paths, escaped and unescaped commandline sequences including shellshock, SQL injection attacks, Javascript/XSS fragments, attempts to send mail or proxy HTTP.  Oh, and some fragments whose potential purpose eludes me.

OK, no big deal: just a few minutes of my time.  Dumb bots attack websites all the time.  Whatever vulnerabilities my server has (and I’m sure there are some), that kind of bot probing my contact form is no threat – except insofar as it could become a DoS.

This morning, another 740 messages.  From an even briefer probe: all at 03:59 and 04:00.  Checked the IP they all came from, and firewalled it off.  With a DROP rule, of course.  If it recurs from elsewhere, I’ll have to take a view on whether this approach can be extended or is useless.
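For anyone wondering, that kind of firewalling is a one-liner (the address here is a stand-in for the real culprit):

$ iptables -I INPUT -s 203.0.113.45 -j DROP
# DROP rather than REJECT: the probe gets silence and a timeout,
# not a helpful rejection that confirms something is listening here.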

If I can be arsed, maybe I’ll stay up and tail the log tonight, starting 03:50 or so.  Wonder if the perpetrator can be pwned while in action?  On second thoughts, maybe not at that hour, doubly not after the couple of pints I regularly enjoy on a Thursday evening.

A little secret

Yahoo admits to a billion customer records being compromised.  The numbers are staggering, but the news of the exploit is mundane.

Doubtless the raw numbers are very largely inactive accounts.  People who long since stopped using Yahoo accounts.  People who signed up with some other company that subsequently got borged by Yahoo.  People who once signed up to access some service but never used the accounts.   Etcetera.  Just as with social media numbers (even just the number of followers of this humble blog), they’re to be taken with a big pinch of salt.

Nevertheless, that’s a billion signups.  Allowing for fakes and duplicates, that might be a nine-digit number of real people who once answered security questions.  That’s a bunch of answers that, unlike passwords, travel with the user across multiple services, not just online but also those you might access by other means such as the ‘phone or even face-to-face.  The name of your first pet or your primary school is no more secure than the classic mother’s maiden name.

And now a billion such records have leaked.  Give or take: we don’t know how many users ever were genuine, nor how many such questions and answers each genuine user disclosed.

So what does it mean if you’re one of the billion?  If someone wants to steal your identity, your security questions and answers have passed from the realm of something they have to research to something easily automated.  Well, we don’t know that for certain, but it’s certainly a risk that can no longer be dismissed.

You’d better change your security questions everywhere that matters.  What do you mean, you can’t remember which questions you signed up to Yahoo with twenty years ago?  Don’t tell me you can’t change the city of your birth, or the initials of your first lover.  Oh dear [shakes head].

And even if you’re not one of the billion, you may already have started to get the phishing emails purporting to be from yahoo (or others) about changing passwords.

I’ve argued here before that security questions are not fit for purpose.  Perhaps the Yahoo leak might help persuade the world to stop using them for things that matter!

Public wifi menace

A couple of days ago, I was looking up a bus timetable from my ‘phone.  All perfectly mundane.

The address I thought I wanted failed: I don’t have it bookmarked and I’ve probably misremembered.  So I googled.

Google failed too, with a message about an invalid certificate.  WTF?  Google annoyingly[1] use https, and here was something in the middle presenting a bogus certificate.  Who is sitting in the middle?  Surely they can’t really be eavesdropping: with browsers issuing strong warnings, they’re never going to catch anything sensitive.  Must be just a hopelessly misconfigured network.

I don’t care if someone watches as I look up a bus time, I just want to get on with it!  But it’s not obvious with android how I can override that warning and access google.  Or even access an imposter: if it doesn’t give me the link I wanted from google, nothing lost!

So has my mobile network screwed up horribly?  Cursing at the hassle, I go into settings and see it’s picked up a wifi network.  BT’s public stuff: OpenZone, or something like that (from memory).  This is BT, or someone on their network, playing sillybuggers.  Just turn wifi off and all works well again as the phone reverts to my network.

Except, now I have to remember to re-enable wifi before doing anything a bit data-intensive, like letting the ‘phone update itself, or joining a video conference.  All too easy to forget.

Hmm, come to think of it, that broken network is probably also what got between me and the bus timetable in the first place.  That wasn’t https.

[1] There are good reasons to encrypt, but search is rarely one of them.  Good that google enables it (at least if you trust google more than $random-shady-bod), but it’s a pain that they enforce it.

Identity and Trust

Folks who know me will know that I’ve been taking an interest for some time in the problems of online identity and trust:

  • Passwords (as we know them today) are a sick joke.
  • Monolithic certificate authorities (and browser trust lists) are a serious weakness in web trust.
  • PGP and the Web of Trust remain the preserve of geekdom.
  • People distrust and even fear centralised databases.  At issue are both the motivations of those who run them, and security against intruders.
  • Complexity and poor practice opens doors for phishing and identity theft.
  • Establishing identity and trust can be a nightmare, to the extent that a competent fraudster might find it easier than the real person to establish an identity.

I’m not a cryptographer.  But as mathematician, software developer, and old cynic, I have the essential ingredients.  I can see that things are wrong and could so easily be a whole lot better at many levels.  It’s not even a hard problem: merely a more rational deployment of existing technology!  Some time back I thought about setting myself up in the business of making it happen, but was put off by the ghost of what happened last time I tried (and failed) to launch an innovative startup.

Recently – starting this summer – I’ve embarked on another mission towards improving the status quo.  Instead of trying to run my own business, I’ve sought out an existing business doing good work in the field, to which I can hope to make a significant contribution.  So the project’s fortunes tap into my strengths as techie rather than my weaknesses as a Suit.

I should add that the project does rather more than just improve the deployment of existing technology, as it significantly advances the underlying cryptographic framework.  Most importantly it introduces a Distributed Trust Authority model, as an alternative to the flawed monolithic Certificate Authority and its single point of failure.  The distributed model also makes it particularly well-suited to “cloud” applications and to securing the “Internet of Things”.

And it turns out, I arrived at an opportune moment.  The project has been single-company open source for some time and generated some interest at github.  Now it’s expanding beyond that: a second corporate team is joining development and I understand there are further prospects.  So it could really use a higher-level development model than github: one that will actively foster the community and offer mutual assurance and protection to all participants.  So we’ve put it forward as a candidate for incubation at Apache.  The proposal is here.

If all goes well, this could be the core of my work for some time to come.  Here’s hoping for a big success and a better, safer online world.

Saved from Visa

I’ve written before about the Fraudster’s Friend, misleadingly named “Verified by Visa”.  Most directly in my post Phished by Visa, though Bullied by Visa perhaps also deserves a mention.

Today I went to place an order with Argos, who I’ve used several times before and who have always – in contrast to some of their competitors – delivered very efficiently.  This time, alas, the shopping process has become significantly more hassle, and they’ve introduced the VBV cuckoo into the process.  But I was pleased to note that, when I came to the VBV attack, Firefox flagged it up as precisely what it is: an XSS attack, and in the context of secure data (as in creditcard numbers) a serious security issue.

I hope Firefox does that by default, rather than just with my settings.  Though that would be courageous: it means taking the blame from the unwashed masses who think VBV serves their interests, when it stops working for them.  Doing the Right Thing against an enemy with ignorance on its side has a very bad history in web browsers: in the late 1990s Microsoft killed off the opposition by exposing its users to a whole family of “viruses”, in a move designed to make correct behaviour a loser in the market (specifically, by violating MIME standards documented since 1992 as security-critical).

Alas, while Firefox saved me from the evil phishing attack, the combination of that and other Argos website trouble pushed me to a thoroughly insecure and less than convenient medium: the telephone.  Bah, Humbug.

To phish, or not to phish?

I recently had email telling me my password for $company VPN is due to expire, and directing me to a URL to update it.

Legitimate or phishing?  Let’s examine it.

It follows the exact form of similar legitimate emails I’ve had before.  Password expires in 14 days.  Daily updates decrementing the day count until I change it.  So far so good.

However, it’s directing me to an unfamiliar URL: https://$company.okta.com/.   Big red flag!  But $company outsources a range of admin functions in this manner, so it’s entirely plausible.

It appears to come from a legitimate source.  But since all $company email is outsourced to gmail, the information I can glean from the headers is limited.  How much trust can I place in gmail’s SPF telling me the sender is valid?
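What I can do is pull up the raw message (“Show original” in gmail) and see what google’s own receiving servers concluded about the sender.  A sketch, with the filename obviously a placeholder:

# save the raw message, then look at the authentication headers gmail adds
$ grep -iE '^(Authentication-Results|Received-SPF):' original_msg.eml
# spf=pass / dkim=pass for the claimed sending domain says the mail really did
# come through that domain's authorised servers.  That rules out a crude
# forgery, but not a lookalike domain or a compromised legitimate sender.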

A look on $company’s intranet fails to find anything relevant (though in the absence of a search function I probably wouldn’t find it anyway without a truly gruelling trawl).  OK, let’s google for evidence of a legitimate connection between $company and okta.com.  I’ve resolved similar problems to my own satisfaction that way before, both for $company and for other such situations (e.g. here or here), but the hurdle for a $company-VPN password – even one I’m about to change – has to be high.

Googling finds me only inconclusive evidence.  There’s a linkedin page for $company’s sysop, only it turns out he’s moved on and the linkedin page is just listing both $company and okta skills in his CV.  There’s a PDF at $company’s website with instructions for setting up some okta product (though it’s one of those that insults you with big cuddly pictures of selecting a series of menu options without actually saying anything non-obvious).

Hmmm …

OK, maybe I can get okta.com to prove itself, with the kind of security question your bank asks when you ‘phone it.  Let’s use okta’s “Password Reset”.  I expect it’ll send a one-off token I can use to set a new password.  If legit, that’ll work; if not then the newly-minted password is worthless and I just abandon it.  But no such thing: instead of sending me such a token, it tells (emails) me:

Your Okta account is configured to use the same password you currently use for logging in to your organization’s Windows network. Use your Windows account password to sign in to Okta. Please use the password reset function in Windows to reset your password.

Well, b***er that.  Windows account password?  Windows network?  I have no such thing, and neither does $company expect me to.  I expect $company may have a few windows boxes, but they’re certainly not the norm.  No doubt it just means the LDAP password I’m supposed to be changing, but if I know that then why should I be asking it for password reset?  Bah, Humbug!

One more thing to try before a humiliating request for help over something I should be able to deal with myself.  Somewhere in my gmail I can dig up previous password reset reminders, with a URL somewhere on $company’s own intranet.  Try that URL.  Yes, it still works, and I can reset my VPN password there.  All that investigation for … what?

Well, there’s a value to it.  Namely the acid test: does the daily password reminder stop after I’ve reset the password?  If it’s genuine then it shares information with $intranet and knows I’ve reset my password.  If it’s a phish then it knows nothing.  So now I’m getting some real evidence: if the password reminders stop then it’s genuine.

They do stop.  So I conclude it is indeed genuine.

Unless it’s so ultra-sophisticated that it’s been warned off by my having visited the site and used password reset, albeit unsuccessfully.  Waiting to try again in a few months?  Hmmm ….

Well, if $company hasn’t outsourced it then the intranet-based password reset will continue to work next time.  If it doesn’t work next time then there’s one more piece of evidence it’s genuine.

Defending against shell shock

I started writing a longer post about the so-called shell shock, with analysis of what makes a web server vulnerable or secure.  Or, strictly speaking, not a webserver, but a platform an attacker might access through a web server.  But I’m not sure when I’ll find time to do justice to that, so here’s the short announcement:

I’ve updated mod_taint to offer an ultra-simple defence against the risk of shell shock attacks coming through Apache HTTPD, versions 2.2 or later.  A new simplified configuration option is provided specifically for this problem:

    LoadModule taint_module modules/mod_taint.so
    Untaint shellshock

mod_taint source and documentation are at http://people.apache.org/~niq/mod_taint.c and http://people.apache.org/~niq/mod_taint.html respectively.

Here’s some detail from what I posted earlier to the Apache mailing lists:

Untaint works in a directory context, so can be selectively enabled for potentially-vulnerable apps such as those involving CGI, SSI, ExtFilter, or (other) scripts.

This goes through all Request headers, any PATH_INFO and QUERY_STRING, and (just to be paranoid) any other subprocess environment variables. It untaints them against a regexp that checks for “()” at the beginning of a variable, and returns an HTTP 400 error (Bad Request) if found.

Feedback welcome, indeed solicited. I believe this is a simple but sensible approach to protecting potentially-vulnerable systems, but I’m open to contrary views. The exact details, including the shellshock regexp itself, could probably use some refinement. And of course, bug reports!
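As a quick sanity check (the path is purely illustrative, and this assumes the two-line configuration above is active for it), a shellshock-style probe should be bounced with that 400 while normal requests pass:

# a classic shellshock probe puts "() {" in a request header such as User-Agent;
# with Untaint shellshock active, that should be refused before any script runs
$ curl -si -A '() { :;}; /usr/bin/id' http://localhost/cgi-bin/somescript | head -1
HTTP/1.1 400 Bad Request
# while an ordinary request to the same script goes through untouched
$ curl -si http://localhost/cgi-bin/somescript | head -1
HTTP/1.1 200 OK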
