Bruce Schneier

Schneier on Security

A blog covering security and security technology.

Friday Squid Blogging: Bottled Water Plus Squid

Only in Japan:

Bandai toy company from Japan has finally realized that bottles of water just aren't cute. As Japan is the cute capital of the world, this just wouldn't do. To fix the problem, they developed these adorable floating squids that can be added to any bottle of water. Thank god for Japanese innovation. Of course, they're only available in Japan, but at least they're affordable at only $6 each.

Posted on July 17, 2009 at 4:09 PM


Pepper Spray–Equipped ATMs

South Africa takes its security seriously. Here's an ATM that automatically squirts pepper spray into the face of "people tampering with the card slots."

Sounds cool, but these kinds of things are all about false positives:

But the mechanism backfired in one incident last week when pepper spray was inadvertently inhaled by three technicians who required treatment from paramedics.

Patrick Wadula, spokesman for the Absa bank, which is piloting the scheme, told the Mail & Guardian Online: "During a routine maintenance check at an Absa ATM in Fish Hoek, the pepper spray device was accidentally activated.

"At the time there were no customers using the ATM. However, the spray spread into the shopping centre where the ATMs are situated."

Posted on July 17, 2009 at 1:04 PM


Privacy Salience and Social Networking Sites

Reassuring people about privacy makes them more, not less, concerned. It's called "privacy salience," and Leslie John, Alessandro Acquisti, and George Loewenstein -- all at Carnegie Mellon University -- demonstrated this in a series of clever experiments. In one, subjects completed an online survey consisting of a series of questions about their academic behavior -- "Have you ever cheated on an exam?" for example. Half of the subjects were first required to sign a consent warning -- designed to make privacy concerns more salient -- while the other half were not. Also, subjects were randomly assigned to receive either a privacy confidentiality assurance or no such assurance. When the privacy concern was made salient (through the consent warning), people reacted negatively to the subsequent confidentiality assurance and were less likely to reveal personal information.

In another experiment, subjects completed an online survey where they were asked a series of personal questions, such as "Have you ever tried cocaine?" Half of the subjects completed a frivolous-looking survey -- "How BAD are U??" -- with a picture of a cute devil. The other half completed the same survey with the title "Carnegie Mellon University Survey of Ethical Standards," complete with a university seal and official privacy assurances. The results showed that people who were reminded about privacy were less likely to reveal personal information than those who were not.

Privacy salience does a lot to explain social networking sites and their attitudes towards privacy. From a business perspective, social networking sites don't want their members to exercise their privacy rights very much. They want members to be comfortable disclosing a lot of data about themselves.

Joseph Bonneau and Soeren Preibusch of Cambridge University have been studying privacy on 45 popular social networking sites around the world. (You may not have realized that there are 45 popular social networking sites around the world.) They found that privacy settings were often confusing and hard to access; Facebook, with its 61 privacy settings, is the worst. To understand some of the settings, they had to create accounts with different settings so they could compare the results. Privacy tends to increase with the age and popularity of a site. General-use sites tend to have more privacy features than niche sites.

But their most interesting finding was that sites consistently hide any mentions of privacy. Their splash pages talk about connecting with friends, meeting new people, sharing pictures: the benefits of disclosing personal data.

These sites do talk about privacy, but only on hard-to-find privacy policy pages. There, the sites give strong reassurances about their privacy controls and the safety of data members choose to disclose on the site. There, the sites display third-party privacy seals and other icons designed to assuage any fears members have.

It's the Carnegie Mellon experimental result in the real world. Users care about privacy, but don't really think about it day to day. The social networking sites don't want to remind users about privacy, even if they talk about it positively, because any reminder will result in users remembering their privacy fears and becoming more cautious about sharing personal data. But the sites also need to reassure those "privacy fundamentalists" for whom privacy is always salient, so they have very strong pro-privacy rhetoric for those who take the time to search them out. The two different marketing messages are for two different audiences.

Social networking sites are improving their privacy controls as a result of public pressure. At the same time, there is a counterbalancing business pressure to decrease privacy; watch what's going on right now on Facebook, for example. Naively, we should expect companies to make their privacy policies clear to allow customers to make an informed choice. But the marketing need to reduce privacy salience will frustrate market solutions to improve privacy; sites would much rather obfuscate the issue than compete on it as a feature.

This essay originally appeared in the Guardian.

Posted on July 16, 2009 at 6:05 AM


Laptop Security while Crossing Borders

Last year, I wrote about the increasing propensity for governments, including the U.S. and Great Britain, to search the contents of people's laptops at customs. What we know is still based on anecdote, as no country has clarified the rules about what their customs officers are and are not allowed to do, and what rights people have.

Companies and individuals have dealt with this problem in several ways, from keeping sensitive data off laptops traveling internationally, to storing the data -- encrypted, of course -- on websites and then downloading it at the destination. I have never liked either solution. I do a lot of work on the road, and need to carry all sorts of data with me all the time. It's a lot of data, and downloading it can take a long time. Also, I like to work on long international flights.

There's another solution, one that works with whole-disk encryption products like PGP Disk (I'm on PGP's advisory board), TrueCrypt, and BitLocker: Encrypt the data to a key you don't know.

It sounds crazy, but stay with me. Caveat: Don't try this at home if you're not very familiar with whatever encryption product you're using. Failure results in a bricked computer. Don't blame me.

Step One: Before you board your plane, add another key to your whole-disk encryption (it'll probably mean adding another "user") -- and make it random. By "random," I mean really random: Pound the keyboard for a while, like a monkey trying to write Shakespeare. Don't make it memorable. Don't even try to memorize it.
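If pounding the keyboard doesn't feel random enough, you can let the operating system's cryptographically secure random number generator do the work. A minimal sketch in Python (whether your particular product accepts a passphrase in this format is an assumption):

import secrets

throwaway_key = secrets.token_urlsafe(48)   # 48 random bytes: a 64-character passphrase
print(throwaway_key)
# Use this as the new "user's" passphrase (Step One), send it to your
# confidant (Step Two), then destroy every local copy (Step Three).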

Technically, this key doesn't directly encrypt your hard drive. Instead, it encrypts the key that is used to encrypt your hard drive -- that's how the software allows multiple users.

So now there are two different users named with two different keys: the one you normally use, and some random one you just invented.

Step Two: Send that new random key to someone you trust. Make sure the trusted recipient has it, and make sure it works. You won't be able to recover your hard drive without it.

Step Three: Burn, shred, delete or otherwise destroy all copies of that new random key. Forget it. If it was sufficiently random and non-memorable, this should be easy.

Step Four: Board your plane normally and use your computer for the whole flight.

Step Five: Before you land, delete the key you normally use.

At this point, you will not be able to boot your computer. The only key remaining is the one you forgot in Step Three. There's no need to lie to the customs official; you can even show him a copy of this article if he doesn't believe you.

Step Six: When you're safely through customs, get that random key back from your confidant, boot your computer and re-add the key you normally use to access your hard drive.

And that's it.
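To make the mechanics concrete, here is a toy Python model of the whole protocol: a per-disk master key wrapped separately under each user's passphrase. It is a sketch of the idea only, not the on-disk format of PGP Disk, TrueCrypt, or BitLocker.

# Toy model of the key-slot idea behind Steps One through Six.
# Requires the third-party "cryptography" package.
import base64
import hashlib
import secrets

from cryptography.fernet import Fernet

def user_key(passphrase: str) -> bytes:
    # Real products stretch passphrases with a slow KDF (PBKDF2, scrypt);
    # a bare SHA-256 keeps this toy short.
    return base64.urlsafe_b64encode(hashlib.sha256(passphrase.encode()).digest())

master_key = Fernet.generate_key()             # the key that actually encrypts the disk

normal_pass = "correct horse battery staple"   # the passphrase you normally use
random_pass = secrets.token_urlsafe(48)        # Step One: unmemorable by construction

# Each "user" is just the master key wrapped under that user's passphrase.
slots = {
    "normal": Fernet(user_key(normal_pass)).encrypt(master_key),
    "random": Fernet(user_key(random_pass)).encrypt(master_key),
}

# Steps Two and Three: hand random_pass to your confidant, destroy your copies.
# Step Five: before landing, delete the slot for the passphrase you know.
del slots["normal"]

# The disk is now unbootable by you: no passphrase you remember unwraps master_key.
# Step Six: after customs, your confidant returns random_pass and you recover it.
recovered = Fernet(user_key(random_pass)).decrypt(slots["random"])
assert recovered == master_key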

This is by no means a magic get-through-customs-easily card. Your computer might be impounded, and you might be taken to court and compelled to reveal who has the random key.

But the purpose of this protocol isn't to prevent all that; it's just to deny customs any possible access to your computer. You might be delayed. You might have your computer seized. (This will cost you any work you did on the flight, but -- honestly -- at that point that's the least of your troubles.) You might be turned back or sent home. But when you're back home, you have access to your corporate management, your personal attorneys, your wits after a good night's sleep, and all the rights you normally have in whatever country you're now in.

This procedure not only protects you against the warrantless search of your data at the border, it also allows you to deny a customs official your data without having to lie or pretend -- which itself is often a crime.

Now the big question: Who should you send that random key to?

Certainly it should be someone you trust, but -- more importantly -- it should be someone with whom you have a privileged relationship. Depending on the laws in your country, this could be your spouse, your attorney, your business partner or your priest. In a larger company, the IT department could institutionalize this as a policy, with the help desk acting as the key holder.

You could also send it to yourself, but be careful. You don't want to e-mail it to your webmail account, because then you'd be lying when you tell the customs official that there is no possible way you can decrypt the drive.

You could put the key on a USB drive and send it to your destination, but there are potential failure modes. It could fail to get there in time to be waiting for your arrival, or it might not get there at all. You could airmail the drive with the key on it to yourself a couple of times, in a couple of different ways, and also fax the key to yourself ... but that's more work than I want to do when I'm traveling.

If you only care about the return trip, you can set it up before you return. Or you can set up an elaborate one-time pad system, with identical lists of keys with you and at home: Destroy each key on the list you have with you as you use it.

Remember that you'll need to have full-disk encryption, using a product such as PGP Disk, TrueCrypt or BitLocker, already installed and enabled to make this work.

I don't think we'll ever get to the point where our computer data is safe when crossing an international border. Even if countries like the U.S. and Britain clarify their rules and institute privacy protections, there will always be other countries that will exercise greater latitude with their authority. And sometimes protecting your data means protecting your data from yourself.

This essay originally appeared on Wired.com.

Posted on July 15, 2009 at 12:10 PM


Data Leakage Through Power Lines

The NSA has known about this for decades:

Security researchers found that poor shielding on some keyboard cables means useful data can be leaked about each character typed.

By analysing the information leaking onto power circuits, the researchers could see what a target was typing.

The attack has been demonstrated to work at a distance of up to 15m, but refinement may mean it could work over much longer distances.

These days, there's lots of open research on side channels.

Posted on July 15, 2009 at 6:17 AM


Poor Man's Steganography

Hide files inside pdf documents: "embed a file in a PDF document and corrupt the reference, thereby effectively making the embedded file invisible to the PDF reader."
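The linked trick hides the attachment by corrupting its reference inside the PDF's name tree. A cruder variant with a similar effect, sketched here in Python, simply appends the payload after the document's final %%EOF marker: readers locate the real document structure from its cross-reference data and ignore the trailing bytes, though strict viewers may offer to "repair" the file. The filenames in the commented-out example are hypothetical.

MARKER = b"\n%hidden-payload\n"

def hide_in_pdf(pdf_path: str, payload_path: str, out_path: str) -> None:
    # Append the payload after the existing PDF; the document still opens normally.
    with open(pdf_path, "rb") as f:
        pdf_bytes = f.read()
    with open(payload_path, "rb") as f:
        payload = f.read()
    with open(out_path, "wb") as f:
        f.write(pdf_bytes + MARKER + payload)

def extract_from_pdf(stego_path: str) -> bytes:
    # Recover everything after the marker.
    with open(stego_path, "rb") as f:
        data = f.read()
    return data.split(MARKER, 1)[1]

# hide_in_pdf("report.pdf", "secret.zip", "report_with_payload.pdf")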

Posted on July 14, 2009 at 1:48 PM


Gaze Tracking Software Protecting Privacy

Interesting use of gaze tracking software to protect privacy:

Chameleon uses gaze-tracking software and camera equipment to track an authorized reader's eyes to show only that one person the correct text. After a 15-second calibration period in which the software essentially "learns" the viewer's gaze patterns, anyone looking over that user's shoulder sees dummy text that randomly and constantly changes.

To tap the broader consumer market, Anderson built a more consumer-friendly version called PrivateEye, which can work with a simple Webcam. The software blurs a user's monitor when he or she turns away. It also detects other faces in the background, and a small video screen pops up to alert the user that someone is looking at the screen.

How effective this is will mostly come down to usability, but I like the idea of a system that detects whether anyone else is looking at my screen.
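As a rough illustration of the shoulder-surfing alert, here is a minimal sketch using OpenCV's stock Haar face detector and an ordinary webcam. It is an approximation of the idea, not the actual Chameleon or PrivateEye code.

import time

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

try:
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 1:
            print("Second face in view: blur the screen here.")
        elif len(faces) == 0:
            print("User looked away: blank the screen here.")
        time.sleep(0.5)   # poll a couple of times per second
finally:
    cam.release()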

Slashdot story.

EDITED TO ADD (7/14): A demo.

Posted on July 14, 2009 at 6:20 AM


North Korean Cyberattacks

To hear the media tell it, the United States suffered a major cyberattack last week. Stories were everywhere. "Cyber Blitz hits U.S., Korea" was the headline in Thursday's Wall Street Journal. North Korea was blamed.

Where were you when North Korea attacked America? Did you feel the fury of North Korea's armies? Were you fearful for your country? Or did your resolve strengthen, knowing that we would defend our homeland bravely and valiantly?

My guess is that you didn't even notice, that -- if you didn't open a newspaper or read a news website -- you had no idea anything was happening. Sure, a few government websites were knocked out, but that's not alarming or even uncommon. Other government websites were attacked but defended themselves, the sort of thing that happens all the time. If this is what an international cyberattack looks like, it hardly seems worth worrying about at all.

Politically motivated cyber attacks are nothing new. We've seen UK vs. Ireland. Israel vs. the Arab states. Russia vs. several former Soviet Republics. India vs. Pakistan, especially after the nuclear bomb tests in 1998. China vs. the United States, especially in 2001 when a U.S. spy plane collided with a Chinese fighter jet. And so on and so on.

The big one happened in 2007, when the government of Estonia was attacked in cyberspace following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial. The networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and -- in many cases -- shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.

It was hyped as the first cyberwar, but after two years there is still no evidence that the Russian government was involved. Though Russian hackers were indisputably the major instigators of the attack, the only individuals positively identified have been young ethnic Russians living inside Estonia, who were angry over the statue incident.

Poke at any of these international incidents, and what you find are kids playing politics. Last Wednesday, South Korea's National Intelligence Service admitted that it didn't actually know that North Korea was behind the attacks: "North Korea or North Korean sympathizers in the South" was what it said. Once again, it'll be kids playing politics.

This isn't to say that cyberattacks by governments aren't an issue, or that cyberwar is something to be ignored. The constant attacks by Chinese nationals against U.S. networks may not be government-sponsored, but it's pretty clear that they're tacitly government-approved. Criminals, from lone hackers to organized crime syndicates, attack networks all the time. And war expands to fill every possible theater: land, sea, air, space, and now cyberspace. But cyberterrorism is nothing more than a media invention designed to scare people. And for there to be a cyberwar, there first needs to be a war.

Israel is currently considering attacking Iran in cyberspace, for example. If it tries, it'll discover that attacking computer networks is an inconvenience to the nuclear facilities it's targeting, but doesn't begin to substitute for bombing them.

In May, President Obama gave a major speech on cybersecurity. He was right when he said that cybersecurity is a national security issue, and that the government needs to step up and do more to prevent cyberattacks. But he couldn't resist hyping the threat with scare stories: "In one of the most serious cyber incidents to date against our military networks, several thousand computers were infected last year by malicious software -- malware," he said. What he didn't add was that those infections occurred because the Air Force couldn't be bothered to keep its patches up to date.

This is the face of cyberwar: easily preventable attacks that, even when they succeed, only a few people notice. Even this current incident is turning out to be a sloppily modified five-year-old worm that no modern network should still be vulnerable to.

Securing our networks doesn't require some secret advanced NSA technology. It's the boring network security administration stuff we already know how to do: keep your patches up to date, install good anti-malware software, correctly configure your firewalls and intrusion-detection systems, monitor your networks. And while some government and corporate networks do a pretty good job at this, others fail again and again.

Enough of the hype and the bluster. The news isn't the attacks, but that some networks had security lousy enough to be vulnerable to them.

This essay originally appeared on the Minnesota Public Radio website.

Posted on July 13, 2009 at 11:45 AM


Strong Web Passwords

Interesting paper from HotSec '07: "Do Strong Web Passwords Accomplish Anything?" by Dinei Florêncio, Cormac Herley, and Baris Coskun.

ABSTRACT: We find that traditional password advice given to users is somewhat dated. Strong passwords do nothing to protect online users from password stealing attacks such as phishing and keylogging, and yet they place considerable burden on users. Passwords that are too weak of course invite brute-force attacks. However, we find that relatively weak passwords, about 20 bits or so, are sufficient to make brute-force attacks on a single account unrealistic so long as a "three strikes" type rule is in place. Above that minimum it appears that increasing password strength does little to address any real threat. If a larger credential space is needed, it appears better to increase the strength of the user IDs rather than the passwords. For large institutions this is just as effective in deterring bulk guessing attacks and is a great deal better for users. For small institutions there appears little reason to require strong passwords for online accounts.
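To put the "20 bits or so" figure in perspective, here is a quick back-of-the-envelope calculation. The attacker model (three online guesses per lockout window, one window per day against a single account) is an illustration, not the paper's exact analysis.

guesses_per_lockout = 3        # a "three strikes" rule
lockouts_per_year = 365        # assume the attacker gets a fresh window daily

for bits in (20, 30, 40):
    keyspace = 2 ** bits
    per_window = guesses_per_lockout / keyspace
    per_year = guesses_per_lockout * lockouts_per_year / keyspace
    print(f"{bits}-bit password: {per_window:.1e} per window, "
          f"{per_year:.1e} per year of daily attempts")

Even at 20 bits, a throttled online attacker has roughly a one-in-a-thousand chance per year of daily guessing against one account; extra bits buy very little against this threat, which is the paper's point.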

Posted on July 13, 2009 at 5:38 AM


Friday Squid Blogging: Humboldt Squid Caught Off Seattle

A hundred-pounder.

They're still moving north.

Posted on July 10, 2009 at 4:45 PM


Lost Suitcases in Airport Restrooms

Want to cause chaos at an airport? Leave a suitcase in the restroom:

Three incoming flights from London were cancelled and about 150 others were delayed for up to three hours, while the army's bomb squad carried out its investigation, before giving the all-clear at about 5pm.

Passengers were told to leave the arrivals hall, main check-in area at the terminal building, the food courts and shops, and gather at safety areas outside.

The scare led to major traffic disruption around the airport, with tailbacks stretching back about a mile. Some passengers faced lengthy walks to the airport after being dropped off by shuttle bus from the city centre.

Oddest quote is from a police spokesperson:

"Inquires are under way to establish how the luggage came to be located within the toilets."

My guess is that someone left it there.

I'd suggest this as a good denial-of-service attack, but certainly there is a video camera recording of the person bringing the suitcase into the airport. The article says it was left in the "domestic arrivals area." I don't know if that's inside airport security or not.

Posted on July 10, 2009 at 12:45 PM


Making an Operating System Virus Free

Commenting on Google's claim that Chrome was designed to be virus-free, I said:

Bruce Schneier, the chief security technology officer at BT, scoffed at Google's promise. "It's an idiotic claim," Schneier wrote in an e-mail. "It was mathematically proved decades ago that it is impossible -- not an engineering impossibility, not technologically impossible, but the 2+2=3 kind of impossible -- to create an operating system that is immune to viruses."

What I was referring to, although I couldn't think of his name at the time, was Fred Cohen's 1986 Ph.D. thesis where he proved that it was impossible to create a virus-checking program that was perfect. That is, it is always possible to write a virus that any virus-checking program will not detect.
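The argument is a diagonalization: any candidate detector can be fed a program that consults the detector about itself and then does the opposite. Here is a toy Python rendering of that idea; it is a sketch, not Cohen's formal construction.

# Suppose someone claims a perfect detector is_virus(source) -> bool.
# The program below defeats any such claim.

CONTRARY_SOURCE = """
def run(is_virus, my_source):
    if is_virus(my_source):
        return "did nothing harmful"    # labelled a virus -> behaves benignly
    else:
        return "replicated itself"      # labelled clean -> spreads
"""

def naive_detector(source: str) -> bool:
    # Stand-in "perfect" detector; any real candidate fails the same way.
    return "replicate" in source

namespace = {}
exec(CONTRARY_SOURCE, namespace)
print(namespace["run"](naive_detector, CONTRARY_SOURCE))
# The detector flags the program as a virus, so it behaves harmlessly: a
# false positive.  Flip the detector's answer and the program spreads
# instead: a miss.  No detector escapes this dilemma for every program.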

This reaction to my comment is accurate:

That seems to us like he's picking on the semantics of Google's statement just a bit. Google says that users "won't have to deal with viruses," and Schneier is noting that it's simply not possible to create an OS that can't be taken down by malware. While that may be the case, it's likely that Chrome OS is going to be arguably more secure than the other consumer operating systems currently in use today. In fact, we didn't take Google's statement to mean that Chrome OS couldn't get a virus EVER; we just figured they meant it was a lot harder to get one on their new OS - didn't you?

When I said that, I had not seen Google's statement. I was responding to what the reporter was telling me on the phone. So yes, I jumped on the reporter's claim about Google's claim. I did try to temper my comment:

Redesigning an operating system from scratch, "[taking] security into account all the way up and down," could make for a more secure OS than ones that have been developed so far, Schneier said. But that's different from Google's promise that users won't have to deal with viruses or malware, he added.

To summarize, there is a lot that can be done in an OS to reduce the threat of viruses and other malware. If the Chrome team started from scratch and took security seriously all through the design and development process, they have the potential to develop something really secure. But I don't know if they did.

Posted on July 10, 2009 at 9:44 AM


Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.