Showing posts with label Security/Privacy.

Thursday, November 24, 2011


Anti-theft?

The navigation system in my car has an anti-theft feature that’s interesting, in that it relies entirely on a sort of herd immunity. The system is installed in the car’s dashboard, so pulling it out takes some effort. Easy for a pro, to be sure, but it’s not like one of those units that sits on top, which a thief can just grab and run with.

When it’s first powered on after installation, the owner has the option of setting a password. If a password is set and the unit is ever disconnected from the battery, as it would be if it were stolen (or, of course, when the car battery is replaced, or when servicing the car requires disconnecting the battery), the password has to be entered in order for the device to be used again. The only way to recover from a forgotten password is to have the manufacturer reset the system — and they will, one presumes, take some measures to ensure that you hadn’t simply boosted it.

The interesting thing about this mechanism is that there’s no way for a thief to know whether or not a password is set. This anti-theft feature does nothing to actually prevent theft, but only to prevent the use of the system after it’s stolen. That’s only a deterrent if the thief knows two things: that this model has this feature and that almost all owners set a password (so that the likelihood of stealing a usable unit is too low to be worth the trouble).

Setting a password does absolutely nothing for your own device’s security — once it’s stolen, no thief will come put it back when he finds that he can’t use it nor sell it. Rather, we all depend on the widespread knowledge, at least among thieves, that everyone sets one. If I opt out, I’m covered by the rest of you. But if too many people opt out, then no one’s unit is safe.

And there is a big downside to setting a password: when your battery’s disconnected for service, if you’ve forgotten the password (which you used only once, maybe several years ago), your nav system becomes a brick.

Perhaps all in-dash navigation systems use this mechanism, and thieves are well aware of that (and new thieves soon will be). I wonder, though, how many owners choose not to set a password.

Tuesday, June 21, 2011


Misconceptions about DKIM

I chair the DKIM working group in the IETF. The working group is finishing up its work, about ready to publish an update to the DKIM protocol, which moves DomainKeys Identified Mail up the standards track to Draft Standard.

DKIM is a protocol that uses digital signatures to attach a confirmed domain name to an email message (see part 7, in particular). DKIM started from a simple place, with a simple problem statement and a simple goal:

  • Email messages have many addresses associated with them, but none are authenticated, so none can be relied on.
  • Bad actors — spammers and phishers — take advantage of that to pretend they are sending mail from a place (a domain name) the recipient might trust, in an attempt to fool the recipient.
  • If we can provide an authenticated domain name, something that’s confirmed and that a sender can’t fake, then that information can be used as part of the delivery system, as part of deciding how to handle incoming mail.

It’s important to note that mail signed with DKIM isn’t necessarily good mail, nor even mail from a good place. All we know is that mail signed with DKIM was digitally signed by a specified domain. We can then use other information we have about that domain as part of the decision to deliver the message to the user’s inbox, to put it in junk mail, to subject it to further analysis or to skip that analysis, and so on.

“Domain example.com signed this message” is just one of many pieces of information that might help decide what to do.
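
To make that concrete, here’s a rough sketch of my own (nothing in DKIM specifies this; the reputation table, scores, and thresholds are all made up) of how a delivery system might combine the verified signing domain with the other things it knows:

REPUTATION = {"example.com": 0.9, "spammy.example.net": 0.1}   # hypothetical reputation data

def disposition(dkim_pass, signing_domain, content_score):
    # content_score: output of other spam analysis, 0.0 (certainly spam) to 1.0 (certainly clean)
    score = content_score
    if dkim_pass:
        # a verified signing domain contributes whatever reputation we have for it;
        # unknown domains get a neutral value
        score += REPUTATION.get(signing_domain, 0.5)
    if score >= 1.2:
        return "deliver to inbox"
    if score >= 0.7:
        return "subject to further analysis"
    return "junk folder"

print(disposition(True, "example.com", 0.6))         # deliver to inbox
print(disposition(True, "spammy.example.net", 0.6))  # subject to further analysis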

But some people — even some who have worked on the development of the DKIM protocol — miss the point, and put DKIM in a higher position than it should be. Or, perhaps more accurately, they give it a different place in the email delivery system than it should have.

Consider this severely flawed blog post from Trend Micro, a computer security company that should know better, but doesn’t:

In a recently concluded discussion by the [DKIM Working Group], some of those involved have decided to disregard phishing-related threats common in today’s effective social engineering attacks. Rather than validating DKIM’s input and not relying upon specialized handling of DKIM results, some members deemed it a protocol layer violation to examine elements that may result in highly deceptive messages when accepted on the basis of DKIM signatures.

The blog post describes an attack that takes a legitimately signed message, alters it in a way that does not invalidate the DKIM signature (taking advantage of some intentional flexibility in DKIM), and re-sends the message as spam or phishing. The attacker can add a second From header, so that the message appears to the user to come from a trusted domain even though the DKIM signature belongs to a different one.

The attack sounds bad, but it really isn’t, and the Trend Micro blog’s conclusion that failure to absolutely block this makes DKIM an EVIL protocol (their words) is not just overstated, but laughable and ridiculous. It completely undermines Trend Micro’s credibility.

Here’s why the attack is overstated:

  1. It relies on the sender’s ability to get a DKIM signature on a phishing message, and assumes the message will be treated as credible by the delivery system.
  2. It ignores the facts that delivery systems use other factors in deciding how to handle incoming messages and that they will downgrade the reputation score of a domain that’s seen to sign these sorts of things.
  3. It ignores the fact that high-value domains, with strong reputations, will not allow the attackers to use them for signing.
  4. The attack creates a message with two From lines, and such messages are not valid. It ignores the fact that delivery systems will take that into account as they score the message and make their decisions.

Apart from that, the blog insists that the right way to handle this attack would be to have DKIM go far beyond what it’s designed to do. Rather than just attaching a confirmed domain name to the message, DKIM would, Trend Micro says, now have to check the validity of messages during signature validation. Yes, that is a layer violation. Validity checking is an important part of the analysis of incoming email, but it is a separate function that’s not a part of DKIM. All messages, whether DKIM is in use or not, should be checked for being well-formed, and deviations from correct form should increase the spam score of a message. That has nothing to do with DKIM.
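
As a small, hypothetical illustration of that separation (this is my own sketch, not part of any DKIM implementation), a delivery system can do the well-formedness check with ordinary message parsing, entirely outside of signature verification:

from email import message_from_string

def from_header_penalty(raw_message):
    # RFC 5322 allows exactly one From header; more than one is malformed,
    # and that should raise the message's spam score.
    msg = message_from_string(raw_message)
    from_headers = msg.get_all("From") or []
    return 10 if len(from_headers) > 1 else 0   # the penalty value is made up

raw = "From: trusted@example.com\nFrom: attacker@evil.example\nSubject: hi\n\nhello\n"
print(from_header_penalty(raw))   # 10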

In fact, the updated DKIM specification does address this attack, and suggests things that delivery systems might do in light of it. But however good that advice might be, it’s not mandated by the DKIM protocol, because it belongs in a separate part of the analysis of the message.


Others have also posted rebuttals of the Trend Micro blog post. You can find one here, at CircleID, and look in the comments there for pointers to others.

Friday, June 03, 2011


Trusted identities

The U.S. Postal Service (USPS) now has a way to do a change of address online, on their web site. Nicely, the whole process uses https (SSL/TLS), so the transaction is encrypted, which is good.

On the first page, you select whether it’s a permanent change or a temporary one, and specify the dates.

On the second page, you select whether the change is for an individual or a whole family.

On the third page, you give the old and new addresses.

On the fourth page, you get this:

For your security, please verify your identity using a credit card or debit card. We’ll need to charge your card $1.00.
[? Help]

To prevent Fraud, we need to verify your identity by charging your card a $1.00 fee. The card’s billing address must match your current address or the address you’re moving to.

If you click the ? Help link, here’s what it tells you:

Identity Verification — Credit/Debit Card

In order to verify your identity, we process a $1 fee to your credit/debit card. The card’s billing address must match either the old or new address entered on the address entry page. This is to prevent fraudulent Change of Address requests.

Please note that the Internet Change of Address Service uses a high level of security on a secure server.

I have a few problems with this:

  1. They’re asking for credit card information in a transaction where no one expects it. They’re assuring you that it’s secure, but how does one know? This is a classic phishing tactic.
  2. They’re assuming you have a credit card to give them. Lots of people don’t have credit cards. I know some.
  3. They’re charging you a dollar to change your address online, a mechanism that’s surely cheaper for them than to have you walk into the post office to do it. That’s nuts.

To be sure, they do have to do something to make sure that people don’t change each other’s addresses as pranks, or worse. But do they really need to charge you a dollar for it? They could make a charge and then rescind it. They could give you an alternative to use a bank account, and verify it the way PayPal does, by making a withdrawal of a few cents and then depositing it back. That would also help for people who have no credit cards, but do have bank accounts — still not everyone, but it’s something.

Or you can just say, Eff this; I’m not giving the post office my credit-card information and paying them a dollar for what I can do for free, and then go into the office and waste a clerk’s time on it.

This is why there are proposals for secure identity verification. The U.S. National Institute of Standards and Technology (NIST) has an initiative called National Strategy for Trusted Identities in Cyberspace (NSTIC) that covers this sort of thing. Whether or not NSTIC is the right answer, we need to get to where we have this kind of verification available, without having to hack the credit-card system for it.

Monday, February 14, 2011


Government oversight of the Internet

Now that the protests in Egypt have led to a change in leadership — an outcome that seemed inevitable for a while, though now-former-President Mubarak denied that it would happen — I want to go back and look at a key event during the last few weeks, when the Egyptian government disconnected the country from the Internet.

It appears that removing an entire country from the internet is surprisingly easy, by making changes in a system known as the border gateway protocol (BGP). This system is used by ISPs and other organisations to connect to each others’ networks, so the Egyptian government just had to order ISPs to alter the BGP routing tables to make external connections impossible.

Looking at BGP data we can confirm that according to our analysis 88 per cent of the ‘Egyptian internet’ has fallen off the internet, reports Andree Tonk of BGPmon, a site dedicated to monitoring changes in the BGP. A recent report for the OECD cited the BGP as a weak point in online infrastructure that needs to be secured — a prediction that seems to have now come true.
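
To make the mechanism a bit more concrete, here’s a toy sketch of my own (it isn’t real BGP, just an illustration of what withdrawing route announcements does; the prefixes are made up): once the prefixes covering a country’s address space are no longer announced, routers elsewhere have no entry for them, and traffic to those addresses simply has nowhere to go.

import ipaddress

# Toy routing table: announced prefix -> next hop (illustrative values only)
routes = {
    ipaddress.ip_network("41.128.0.0/10"): "upstream-1",
    ipaddress.ip_network("196.200.0.0/13"): "upstream-2",
}

def lookup(address):
    addr = ipaddress.ip_address(address)
    matches = [prefix for prefix in routes if addr in prefix]
    if not matches:
        return None   # no route: the destination is unreachable from here
    return routes[max(matches, key=lambda p: p.prefixlen)]   # longest prefix wins

print(lookup("41.130.10.1"))                        # "upstream-1" while the route is announced
del routes[ipaddress.ip_network("41.128.0.0/10")]   # the "withdrawal"
print(lookup("41.130.10.1"))                        # None: nowhere to send the traffic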

As the report makes clear, it’s not technically difficult, at least not for a relatively small country with a relatively centralized connection to the Internet. And we see countries such as China and Iran using similar techniques to do more selective blocking (the latter has, I understand, responded to the events in Tunisia and Egypt by joining the former in blocking access to blog sites such as this one). The issue isn’t technical, but one of policy: is the government allowed to cut off the Internet?

Of course, in countries where the government answers to no authority but itself, the answer is always Yes. But what about in the U.S., where the government was limited, at least through the end of the 20th century, to abiding by its constitution, legislation, and a judicial system?

For one answer to that question, we can look to Senator Joe Lieberman of Connecticut, who, along with Senators Susan Collins (Maine) and Tom Carper (Delaware), introduced legislation to enhance the security and resiliency of the cyber and communications infrastructure of the United States.

The Protecting Cyberspace as a National Asset Act of 2010, S.3480 (here’s a PDF of the latest version as of this writing) was introduced last June and was entirely replaced by Senator Lieberman in December (you have to go to the bottom of page 197 of the PDF to see the new version). The December version was reported to the Senate from the Committee on Homeland Security and Governmental Affairs, which Mr Lieberman chairs (and on which his cosponsors sit). It’s now on the Senate’s legislative calendar. (The corresponding House bill is H.R.5548.)

The bill, if it should become law, would create a new operational entity within [the Department of Homeland Security]: the National Center for Cybersecurity and Communications (NCCC).

The NCCC would be led by a Senate-confirmed Director, who would regularly advise the President regarding the exercise of authorities relating to the security of federal networks. The NCCC would include the United States Computer Emergency Response Team (US-CERT), and it would lead federal operational efforts to protect public and private sector networks. The NCCC would detect, prevent, analyze, and warn of cyber threats to these networks.

The bill creates, in addition to the NCCC, quite a number of offices, councils, task forces, and programs, some of which make sense and some of which probably don’t. It creates the Office of Cyberspace Policy, whose Director is appointed by and reports to the President. It creates the Federal Information Security Taskforce, comprising executives and representatives from more than a dozen government agencies. And so on.

The entire bill is quite extensive, running well over 200 pages. And what’s frightening about it is that it puts the U.S. government right in the middle of the operation and management of the Internet within the United States and its territories — and keep in mind how central U.S. operations and U.S.-based services are to the Internet as a whole. It’s difficult to understand the effect that all this new administration will have on the operation of the Internet within the U.S., and the effect that it could have if it’s mismanaged, if it tries to respond to perceived threats, if it’s affected by right-wing zealots or other dubious elements that inhabit the U.S. political community.

I have read the bill’s summary, along with parts of the bill itself, but haven’t had time to read the whole bill yet. It’s not clear how bad it could be, nor, indeed, whether it will be bad at all... but I’m very skeptical of the result of putting such a large set of deep layers of U.S. government bureaucracy in the middle of the operation and management of the Internet. And I’m deeply worried about giving authority to make operational decisions to people who have insufficient technical knowledge to understand the ramifications of those decisions, who may have political or ideological motivations that do not coincide with what’s best for the Internet, and who can implement their decisions without the checks-and-balances oversight that protects us in other parts of our lives.

I have lots more reading to do.

Thursday, February 10, 2011


Foiling offline password attacks

Jarno, at F-Secure — an excellent Finnish anti-malware company — has posted a nice analysis of encoding password files. Because he assumes some knowledge of the way things work, I’ll try to expand a bit on that here. Some of this has been in these pages before, so this is a review.

A cryptographic hash algorithm is a mathematical algorithm that will take some piece of data as input, and will generate as output a piece of data — a number — of a fixed size. The output is called a hash value, or simply a hash (and it’s sometimes also called a digest). The algorithm has the following properties:

  1. It’s computationally simple to run the algorithm on any input.
  2. It’s computationally infeasible to find two different inputs, however similar, that produce the same hash (it is collision resistant).
  3. Given a hash value, it’s computationally infeasible to determine an input that will generate that hash (it is preimage resistant).
  4. Given an input, it’s computationally infeasible to choose another input that gives the same hash (it has second preimage resistance).

Cryptographic hash algorithms go by names like MD5 (for Message Digest) and SHA-1 (for Secure Hash Algorithm), and they’re used for many things. Sometimes they’re used to convert a large piece of data into a small value, in order to detect modifications to the data. They’re used that way in digital signatures. But sometimes they’re just used to hide the original data (which might actually be smaller than the hash value).
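
A quick demonstration, using SHA-256 from Python’s standard library: the output is always the same fixed size, and even a one-character change in the input gives a completely different, unrelated hash.

import hashlib

for text in (b"rumplestiltskin", b"rumplestiltskim"):
    digest = hashlib.sha256(text).hexdigest()
    print(text, "->", digest)   # both digests are 32 bytes (64 hex characters), and nothing alike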

Unix systems used to store user names and passwords in a file called /etc/passwd, with the passwords hashed to hide (obfuscate) them. A standard attack was to find a way to get a copy of a system’s /etc/passwd file, and try to guess the passwords offline. If you know what hash algorithm they’re using, that’s easy: guess a password, hash it, then look in the /etc/passwd file to see if any user has that hash value for its password.
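
Here’s a sketch of that attack (my own illustration, using SHA-256 in place of the old Unix crypt() function, with made-up users and passwords):

import hashlib

# "stolen" stands in for the contents of a snatched password file: user -> hashed password
stolen = {
    "alice": hashlib.sha256(b"password").hexdigest(),
    "bob":   hashlib.sha256(b"correct horse battery staple").hexdigest(),
}

for guess in ("123456", "password", "qwerty", "letmein"):
    h = hashlib.sha256(guess.encode()).hexdigest()   # hash each guess once...
    for user, stored in stolen.items():              # ...and compare it against every user's entry
        if h == stored:
            print(user, "uses the password", repr(guess))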

Nowadays, most systems have moved away from storing the passwords that way, but there are still services that do it, there are still ways of snatching password files, and the attack’s still current. Jarno’s article looks at some defenses.

Salting the hashed passwords involves including some other data along with the password when the hash is computed, to make sure that two different users who use the same password will have different hashes in the password file. That prevents the sort of global attack that says, Let’s hash the word ‘password’, and see if anyone’s using that. Of course, if the salt is discoverable (it’s the user name, or something else that’s stored along with the user’s information), users’ passwords can still be attacked individually.
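
A sketch of what the salt buys you (again my own illustration, not Jarno’s code): with a random per-user salt stored alongside the hash, two users with the same password get different hash values, and each guess has to be re-hashed separately for every user.

import hashlib, os

def hash_password(password, salt):
    return hashlib.sha256(salt + password.encode()).hexdigest()

users = {}
for name in ("alice", "bob"):
    salt = os.urandom(16)                                   # random salt, stored with the hash
    users[name] = (salt, hash_password("password", salt))   # same password for both...

print(users["alice"][1] == users["bob"][1])   # ...but the stored hashes differ: False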

Even using individual attacks, it’s long been easy to crack a lot of passwords offline: we know that a good portion of people will use one of the 1000 or so most popular passwords (password, 123456, and so on), and it never has taken very long to test those. Even if that only nets the attacker 5% of the passwords in the database, that’s pretty good. But now that processors are getting faster, it’s feasible to test not only the 1000 most popular passwords, but tens or hundreds of thousands. All but the best passwords will fall to a brute-force offline attack.

The reason offline attacks are important is that most systems have online protections: if, as an attacker, you actually try to log in, you’ll only be allowed a few tries before the account is locked out and you have to move on to another. But if you can play with the password file offline, you have no limits.

Of course, the best defense is for a system administrator to make sure no one can get hold of the system’s or the service’s password file. That said, one should always assume that will fail, and someone will get the file. Jarno suggests the backup defense of using different salt values for each user and making a point of picking a slow hash algorithm. The reasoning is that it doesn’t make much difference if it takes a few hundred milliseconds for legitimate access — it doesn’t matter if a login takes an extra quarter or half second — but at a quarter of a second per attempt, it will be much harder for an attacker to crack a bunch of passwords on the system.

Just two small points:

First, Jarno recommends specific alternatives to SHA-1, but he doesn’t have it quite right. PBKDF2 and HMAC are not themselves hash algorithms. They are algorithms that make use of hash algorithms within them. You’d still be using SHA-1, but you’d be wrapping complexity around it to slow it down. That’s fine, but it’s not an alternative to SHA-1.
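
A short sketch of that point: with Python’s standard library, PBKDF2 can be run over SHA-1 itself, the slowdown coming from the iteration count rather than from a different hash algorithm. (The password, salt, and iteration count here are just examples.)

import hashlib, os

salt = os.urandom(16)
# Still SHA-1 underneath, but iterated many times so each guess costs the attacker real work.
derived = hashlib.pbkdf2_hmac("sha1", b"hunter2", salt, 200000)
print(derived.hex())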

The same is the case for bcrypt, only worse: bcrypt uses a non-standard hash algorithm within it. I would not recommend that, because the hash algorithm hasn’t been properly vetted by the security community. We don’t really know how its cryptographic properties compare with those of SHA-1.

Second, Jarno suggests that as processors get faster, the hashing can be changed to maintain the time required to do it. He’s right, but that still leaves an exposure: because the server doesn’t have the passwords (only the hashes of the passwords), no hash can be changed until the user logs in. If the system doesn’t lock out unused accounts periodically, those unused accounts become weak points for break-ins over time.

That said, this is sound advice for system administrators and designers. And perhaps at least a little interesting to some of the rest of you.

Thursday, January 13, 2011


Badges? I don’t have to show you any stinking badges.

Bruce Schneier points out this paper (pdf here) that analyzes the ease of getting fake law enforcement credentials and then using them successfully.

Today, badges convey that the bearer is granted the authority to enforce laws established by a governmental or quasi-governmental entity and are cherished by law enforcement officers. The issue is that there are over 17,000 law enforcement agencies in the United States all with different badges and credentials issued to their personnel. This is not including, the over 70 different federal law enforcement agencies that issue badges and/or credentials. And in there lies the problem. How do you know a cop is a cop?

The most common response to that question is usually, if they walk like a cop. Talk like a cop. And look like a cop, then they are a cop. The assumption is further built upon when being presented with a badge and identification card. But that is not always the case.

It’s probably not surprising that they found it easy and cheap to obtain fake badges, and that those badges then gave them pretty much unimpeded access to whatever they wanted. In other words, real badges provide little real security.

As someone who’s worked in secured facilities, where badges are required for access, I can say that how well all of this works very much depends upon how the badges are used, and the bottom line is that it’s useless to expect reasonable enforcement by having people look at others’ badges. They will too often fail to look, and when they do look they will be unable to detect the fakes.

But not all setups rely only on visual inspection. At one facility, a guard eyed your badge at the gate, but that was designed only to block tourists — to keep arbitrary curious people from wandering onto the premises. But to actually get into the building, everyone had to pass a badge reader and enter an identification code on a keypad, a two-factor authentication process (something I have, and something I know). The reader validated the badge, making it harder to get a fake through. The identification code made sure that I matched the badge — not just that I bore a passing resemblance to the guy in the photo, but that I actually knew the code that was stored in the database for that particular badge.

That defeats attempts to clone a badge, or to put arbitrary information (or none at all) on the magnetic strip. Getting through that system would require compromising an individual and specifically stealing or copying his credentials and obtaining the corresponding identification code... or, alternatively, finding a way to get in that bypasses the badge readers. There might have been such a way, but none was apparent to me.
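
Here’s a toy sketch of that kind of two-factor check (my own illustration, not any real access-control product; the badge ID, PIN, and database are made up): the reader looks the badge up in a database, and the keypad code has to match what’s stored for that particular badge.

import hashlib, hmac, os

def hash_pin(pin, salt):
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100000)

salt = os.urandom(16)
badges = {
    "BADGE-0042": {"holder": "J. Example", "salt": salt, "pin_hash": hash_pin("1234", salt)},
}

def admit(badge_id, pin):
    record = badges.get(badge_id)
    if record is None:
        return False                          # unknown (or cloned, unregistered) badge
    candidate = hash_pin(pin, record["salt"])
    return hmac.compare_digest(candidate, record["pin_hash"])   # "something I know"

print(admit("BADGE-0042", "1234"))   # True
print(admit("BADGE-0042", "9999"))   # False: the badge alone isn't enough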

Once inside, we’re back to the visual checks again, so one can wander unimpeded through much of the building. But the process at the entrances repeats for access to certain areas of the building, again requiring either badging in (with reader and ID code) or opening a lock with a combination unique to that area.

I don’t think there’s any way to completely get rid of visual inspection of credentials, but we have to minimize it. The public is especially vulnerable, in cases where someone dresses like a cop and has something that looks like a badge. But for official buildings and other secured areas we do have alternatives, and we should be using them rigorously.

Sunday, January 09, 2011


More on search warrants and electronic data

Varying a bit from this item, last week the California state Supreme Court decided that police can seize and search a mobile device that a suspect has with him when he’s arrested.

This differs from the first decision in a couple of ways. For one thing, the former was by the U.S. Sixth Circuit Court of Appeals, a court that covers Michigan, Ohio, Kentucky, and Tennessee; California is covered by the Ninth Circuit, and the Sixth Circuit’s decision is not binding there. For another, this decision is by a state court, not a federal one, so it applies in the state of California only.

But more significantly, this is specifically about things that someone who’s arrested has on his person at the time of arrest. The decision is based on a more general rule that police are allowed to examine whatever a suspect has when he’s arrested:

Under U.S. Supreme Court precedents, this loss of privacy allows police not only to seize anything of importance they find on the arrestee’s body ... but also to open and examine what they find, the state court said in a 5-2 ruling.

The majority, led by Justice Ming Chin, relied on decisions in the 1970s by the nation’s high court upholding searches of cigarette packages and clothing that officers seized during an arrest and examined later without seeking a warrant from a judge.

As in many other cases, this highlights a need to be clear that data storage devices and devices that can access online information are not like cigarette packages and clothing. I don’t think any of us doubt that the police can and should look for cocaine hidden in a cigarette pack, or a switchblade in the back trouser pocket. But if I’m carrying my laptop when I’m arrested, do they have reasonable access to all my stored email and other personal and financial information?

The minority of two justices say no, as do I:

The dissenting justices said those rulings shouldn’t be extended to modern cell phones that can store huge amounts of data.

Monday’s decision allows police to rummage at leisure through the wealth of personal and business information that can be carried on a mobile phone or handheld computer merely because the device was taken from an arrestee’s person, said Justice Kathryn Mickle Werdegar, joined in dissent by Justice Carlos Moreno.

They argued that police should obtain a warrant - by convincing a judge that they will probably find incriminating evidence - before searching a cell phone.

The courts need to sort out these differences, and set up a legal understanding of where personal effects end and private data begins. Unfortunately, the current U.S. Supreme Court does not have the composition to come up with a reasonable answer to that question.

Friday, January 07, 2011


Downloading photos

Over time, I’ve run across a few people who have posted photos to Flickr, set the Flickr option to disable downloading, and then been dismayed to find that people were saving copies of their photos anyway. They told Flickr not to allow downloading, and people can, apparently, still download.

Some of these people had set the option a long time ago. With recent changes, Flickr has made it a little clearer that this isn’t a security feature, but even so, people don’t understand what’s going on, and how others can still download their photos.

Here’s how the Flickr setting works:

You’re looking at a photo on Flickr, and you view a specific size (click on the photo or click the Action pull-down, then select view all sizes). Above the photo in the view all sizes screen is the license information and a list of sizes, and between them it says Download and supplies a download link. Also, if you right-click (Mac: ctrl-click) on the photo, there’ll be a save image selection on the menu.

If the owner of the photo has disabled downloading, the Download line will say The owner has disabled downloading of their photos, and there will be no save image option on the pop-up menu (on some browsers it may be there, but it won’t work).

What’s important to understand is that that’s all the option does: it removes the ability to save the image (photo) using the standard browser interfaces. That is, it makes it less convenient to save the image.

But any image that’s displayed by your browser has an img tag in the HTML source, and that tag has the URL for the image. For example, if you look at this photo that I’ve posted to Flickr, and then view the HTML source for the page,[1] you’ll see the following:

<div id="allsizes-photo">
<img src="http://farm4.static.flickr.com/3136/3047554534_723e36b41f_b.jpg">
</div>

You can put that URL into your browser and get directly to the image. Of course, it doesn’t matter, because you can just right-click the image on the all-sizes page and save it. But if I had disabled downloading, when you looked at the page source you would see this:

<div id="allsizes-photo">
<div class="spaceball" style="height:768px; width: 1024px;"></div>
<img src="http://farm4.static.flickr.com/3136/3047554534_723e36b41f_b.jpg">
</div>

That extra line, the one with class="spaceball", is what blocks the right-click from being able to download the photo. But the URL is still there, and the URL still works. Anyone can still download my photo by going to the HTML source and finding the URL for it there. It would be very easy to write a Firefox add-on that would do this automatically and re-enable a save option on the pop-up menu, and it wouldn’t surprise me if someone had already written one. I haven’t looked, because I don’t really care to download everyone’s Flickr photos.
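
For instance, a few lines of Python outside the browser do the same thing (a sketch of my own; the page URL here is hypothetical, and Flickr’s markup has no doubt changed since this was written):

import re
import urllib.request

page_url = "http://www.flickr.com/photos/example/3047554534/sizes/l/"   # hypothetical all-sizes page
html = urllib.request.urlopen(page_url).read().decode("utf-8")

# Find the img tag inside the allsizes-photo div, spaceball or no spaceball.
match = re.search(r'<div id="allsizes-photo">.*?<img src="([^"]+)"', html, re.DOTALL)
if match:
    urllib.request.urlretrieve(match.group(1), "photo.jpg")   # fetch the image directly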

Here’s Flickr’s warning about this:

Enabling this setting also places deterrents to discourage downloading of your other sizes. (And we really do mean discourage. Please understand that if a photo can be viewed in a web browser, it can be downloaded.)

Is that sufficient? Clearly not; people are still surprised when they find that their photos are freely accessible to anyone who can get to the pages to view them. But if the web browser can retrieve the photos to show them, then they can be saved — if nothing else, they’re saved in the user’s browser cache, and a savvy user can snag them thence.

I’ve been talking about Flickr, specifically, but there’s nothing here that’s really specific to Flickr. It’s true on any web site: anything a user can view, the user can save.

There is an exception to that: there are photo sites that use Flash to show the photos. Flash is a browser plug-in that runs programs that are sent from the web server. The Flash program that displays the photos does it in a way that the browser itself is unaware of (only Flash sees it), so the browser never has the photo, nor even the URL to it. Unless someone can hack the Flash program, there’s no way the user can save the photo directly.

But even in this case, a user can capture a screen image while the photo is being displayed. The Mac’s Preview program makes it easy; use the File -> Take Screen Shot option in Preview’s menu. In Windows, pressing the Print Screen key on the keyboard will copy the screen image to the clipboard, and you can then paste it into a program such as Paint, PowerPoint, or PhotoShop. There are also plenty of other programs (I like Hypersnap, but many others are fine) that give you more flexibility.

In other words, again, anything a user can view, the user can save.

So, in general, we get back to advice that you’ve seen many times in these pages: If you want something to be private, don’t put it on the Internet.


[1] It’s easy, from the browser’s menu: in Firefox, use View -> Page Source; in Chrome, View -> Developer -> View Source; in Safari, View -> View Source; in Internet Explorer, View -> Source.

Tuesday, December 14, 2010


Security of auto control systems

While we’re on the joint subject of cars and security, I should dredge up this item that I’ve had hanging about for a few months. It’s from Ars Technica, and reports that researchers have hacked into the control systems of cars because those systems are often not secured:

The tire pressure monitors built into modern cars have been shown to be insecure by researchers from Rutgers University and the University of South Carolina. The wireless sensors, compulsory in new automobiles in the US since 2008, can be used to track vehicles or feed bad data to the electronic control units (ECU), causing them to malfunction.

Earlier in the year, researchers from the University of Washington and University of California San Diego showed that the ECUs could be hacked, giving attackers the ability to be both annoying, by enabling wipers or honking the horn, and dangerous, by disabling the brakes or jamming the accelerator.

The new research shows that other systems in the vehicle are similarly insecure. The tire pressure monitors are notable because they’re wireless, allowing attacks to be made from adjacent vehicles. The researchers used equipment costing $1,500, including radio sensors and special software, to eavesdrop on, and interfere with, two different tire pressure monitoring systems.

The pressure sensors contain unique IDs, so merely eavesdropping enabled the researchers to identify and track vehicles remotely. Beyond this, they could alter and forge the readings to cause warning lights on the dashboard to turn on, or even crash the ECU completely.

The earlier work, from May, said that there was some security built into the system, but it was insufficient. Still, someone needed access to the inside of the car at some point, to plug into the On-Board Diagnostics (OBD-II) port under the dashboard. Once they could do that, they could reprogram the workings of the car — an example given in the earlier article suggests a program that might wait until the car was going at 80mph, and then disable all the brakes.

With the newer work, attacking the wireless tire-pressure monitors, there’s the danger of attacks from the outside that take advantage of the wireless system. The researchers show how to track cars using that, but if more of the control system is exposed to wireless attacks, things can get very bad, indeed.

It boggles my mind that anyone could put any sort of control system into a vehicle and not secure it. The technology to do secure communication among parts of a system is well known, inexpensive, efficient, and effective, and there’s really no excuse for cutting corners there.
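
For example, even Python’s standard library is enough to sketch the idea (this is just an illustration of the principle, not any real automotive protocol): give each sensor a secret key shared with the ECU and append a message authentication code to every reading, so forged or altered readings are rejected. A real design would also need replay protection and careful key management, but none of it is exotic.

import hashlib, hmac, os

KEY = os.urandom(16)   # in a real system, provisioned at manufacture and shared with the ECU

def send_reading(sensor_id, pressure_kpa):
    msg = f"{sensor_id}:{pressure_kpa}".encode()
    tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + tag

def receive_reading(frame):
    msg, tag = frame.rsplit(b"|", 1)
    expected = hmac.new(KEY, msg, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("forged or corrupted reading")   # reject anything that doesn't verify
    sensor_id, pressure = msg.decode().split(":")
    return sensor_id, int(pressure)

print(receive_reading(send_reading("front-left", 220)))   # ('front-left', 220)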

Monday, December 13, 2010


Security of auto ignition systems

New Scientist tells us that the encryption between electronic key fobs and car ignition systems has been cracked in many cases. The reason is that most car manufacturers are using weak and/or home-grown encryption:

A device fitted within the key fob of a modern car broadcasts an encrypted radio signal to the car as the driver starts the vehicle. If the signal is recognised by the car’s receiver, it responds by sending an encrypted signal to the engine control unit (ECU), which allows the car to start. If the driver tries using the incorrect car key fob, the ECU locks down the engine.

For over a decade, immobilisers have played a crucial role in reducing car theft, says Nohl. But the proprietary encryption keys used to transmit data between the key fob, receiver and engine are so poorly implemented on some cars that they are readily cracked, Nohl told the Embedded Security in Cars conference, in Bremen, Germany, last month.

Last year he took just 6 hours to uncover the algorithm used to create the encryption key in a widely used immobiliser — the Hitag 2 made by Dutch firm NXP Semiconductors — making it easy to de-immobilise any car using that algorithm. And in 2005 Ari Juels of RSA Labs in Cambridge, Massachusetts, and researchers at Johns Hopkins University in Baltimore, Maryland, took under an hour to crack an encryption system sold by US technology firm Texas Instruments.

Juels says that these cracks were possible because the proprietary algorithms that the firms use to encode the cryptographic keys shared between the immobiliser and receiver, and receiver and engine do not match the security offered by openly published versions such as the Advanced Encryption Standard (AES) adopted by the US government to encrypt classified information. Furthermore, in both cases the encryption key was way too short, says Nohl. Most cars still use either a 40 or 48-bit key, but the 128-bit AES — which would take too long to crack for car thieves to bother trying — is now considered by security professionals to be a minimum standard. It is used by only a handful of car-makers.

This dovetails with the advice that computer security experts usually give:

  1. Use only modern, well known cryptographic algorithms, which have been thoroughly tested for their security properties.
  2. Use them with modern parameters, including currently accepted key lengths.
  3. Use only mature, well tested implementations. Resist the temptation to write your own.

On point 1, security is not enhanced by using a secret algorithm. Quite the opposite: open, well known algorithms have been scrutinized by the top experts in the field, and are, to the best of our knowledge, secure and impractical to break. That fabulous and super-secret algorithm you came up with may pass your security testing, but it’s almost always the case that once algorithms like that get into use, we find weaknesses in them and they fall fairly quickly.

It’s arrogant to claim that you can devise a better algorithm than the collective great minds of the world can. And if you can, yours should stand up to the public scrutiny needed for an assurance that the algorithm is really better. If secrecy is a necessary part of the algorithm, be assured that it will fail.

Point 2 should be obvious: computing capabilities speed ahead, and for most modern encryption algorithms, the key length directly relates to the time required to break the encryption. If we don’t increase key lengths over time, faster, more powerful processors will crack our systems.

On point 3, we have to remember that encryption algorithms are complex and particular. Bugs in the implementation can result in huge holes in the security... and bugs are inevitable in any implementation. Using a mature implementation, one that’s already been thoroughly debugged, gives the best chance of avoiding such holes. Coding an implementation yourself is a sure way to a weak crypto suite.
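
As a sketch of what using a mature implementation looks like in practice (this assumes the third-party Python package cryptography, and is only an illustration, not a recommendation of any particular product), a handful of lines gets you authenticated AES encryption with a 128-bit key, written and reviewed by people who do this for a living:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # a modern key length, per point 2
aesgcm = AESGCM(key)                        # a vetted AES-GCM implementation, per points 1 and 3

nonce = os.urandom(12)                      # never reuse a nonce with the same key
ciphertext = aesgcm.encrypt(nonce, b"unlock", b"vehicle-id-1234")   # data plus associated data
print(aesgcm.decrypt(nonce, ciphertext, b"vehicle-id-1234"))        # b'unlock'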

It seems counter-intuitive, but with modern encryption, the more public the algorithms and the implementations are, the better the security picture looks.

Wednesday, December 01, 2010


Et tu, NY Times?

Can you stand one more item about airport security screening? The New York Times published an editorial supporting the scanning machines last week (on opt out day).

What’s remarkable is how out of character the editorial is for the Times. I was surprised that they, who usually staunchly support civil and constitutional rights, favour the machines. That’s unexpected, but not remarkable: the Times and I don’t always agree, even if we usually do. The remarkable part is what appears to be the main bit of their argument: they seem to like the scanners mostly because the Republicans don’t.

In their eagerness to pin every problem in America on President Obama, prominent Republicans are now blaming his administration for the use of full-body scanners and intrusive pat-downs at airports. Those gloved fingers feeling inside your belt? The hand of big government, once again poking around where it should not go.

Mike Huckabee, the former governor of Arkansas and a Republican presidential hopeful, called the scanners and the pat-downs a humiliating and degrading, totally unconstitutional intrusion, in an interview on Fox News. If the president thinks such searches are appropriate, Mr. Huckabee said, he should subject his wife, two daughters and mother-in-law to them. Gov. Chris Christie of New Jersey said the Transportation Security Administration had gone too far, and Gov. Rick Perry of Texas suggested T.S.A. agents be sent to the Mexican border, where he said, absurdly, that we need security substantially more than in our airports.

OK, Governor Perry was talking nonsense, yeah. But, hey, despite the fact that I think Mike Huckabee is a bonehead who usually isn’t worth listening to, this time he’s right: the machines and the molestation are humiliating, degrading, and in violation of the fourth amendment’s guarantee against unreasonable search. That level of invasiveness would only be acceptable with probable cause — a reasonable suspicion of wrongdoing. We accept scans of our baggage, perhaps somewhat reluctantly, but treating every passenger as though she were caching a weapon in her underwear, with not even the slightest reason to think it so, goes beyond what we do in America, at least heretofore.

Our Constitution is there to protect us from abuses by authority. And whether the constitution is violated by a right-wing war criminal, or a president who we’d like to think is on our side, it’s wrong and we have to stand against the violation. When we see abuse of power, we have to call it what it is and rein it in before it goes too far to stop.

And, of course, it doesn’t help that this abuse was prompted by a ridiculous situation: a guy smuggled some crappy explosives aboard in his underwear, managed only to burn himself in intimate places before being subdued, and was arrested when the undamaged plane landed. His father had warned us about him, but the warnings weren’t taken seriously enough. Oh, and at least some reports say that the new scanning machines wouldn’t have detected what he was carrying anyway.

The Times is right that the sort of profiling that some of the opponents suggest isn’t the right answer either. But the Times is wrong to suggest that the abuses are individual problems that were merely handled in a ham-handed way. There’s clearly a pervasive pattern of bad policy and worse implementation, and both need to be fixed.

Tuesday, November 30, 2010


One more on airport screening

While passengers and crew alike have their naked images scrutinized and their genitals fondled, in Salon’s Ask the Pilot column Patrick Smith tells us that the people who clean and fuel the airplanes, and other ground crew with access to the secure parts of the airports, are not screened at all:

And by contradictory, here’s some blockbuster news: Although the X-ray and metal detector rigmarole is mandatory for pilots and flight attendants, many other airport workers, including those with regular access to aircraft — to cabins, cockpits, galleys and freight compartments — are exempt. That’s correct. Uniformed pilots cannot carry butter knives onto an airplane, yet apron workers and contract ground support staff — cargo loaders, baggage handlers, fuelers, cabin cleaners, caterers — can, as a matter of routine, bypass TSA inspection entirely.

These people are investigated when they’re hired, of course:

All workers with airside privileges are subject to fingerprinting, a 10-year criminal background investigation and crosschecking against terror watch lists. Additionally they are subject to random physical checks by TSA. But here’s what one apron worker at New York’s Kennedy airport recently told me:

All I need is my Port Authority ID, which I swipe through a turnstile. The ‘sterile area’ door is not watched over by any hired security or by TSA. I have worked at JFK for more than three years now and I have yet to be randomly searched. Really the only TSA presence we notice is when the blue-shirts come down to the cafeteria to get food.

We certainly know that people cleared with background checks and such can be corrupted, even those who go through rigorous processes and are well paid: consider Aldrich Ames, for example. We have to assume that anyone can fall victim to corruption, coercion, or blackmail... or can, say, be kidnapped and have his credentials stolen and used by someone else. If security screening has any value:

  1. Everyone who enters the secure area from outside must be screened.

    That includes ground crew, that includes flight crew (yes, pilots too), and that includes the screening staff themselves.

  2. People must be screened every time they cross the boundary.

    It’s not sufficient to check them when they come to work in the morning, and assume that’s good for the day. Someone can go through screening, pop out, and pick up a bomb or a gun. If they can go back in without screening, we’re allowing an enormous hole in the system.

You will say — pilots will say — that the pilots can crash the planes anyway, so what’s the point of checking them at all? Well, we don’t need to stop them from bringing in butter knives and fingernail clippers... but, then, we don’t need to stop regular passengers from doing that either. But there are other people in the cockpit who might stop an unarmed pilot from crashing the plane. And let’s not assume that the threat model involves just the crashing of a single plane: perhaps a pilot might be given bomb materials to bring into the airport so that others — who have been screened — can pick them up inside the secure perimeter and blow up many planes, or a large portion of the airport.

Screening does limit the threat, but only if we screen reasonably (stopping a dull butter knife but not a sharpened pencil, for example, is just silly), and only if we screen everyone.

Or else let’s admit that the screening isn’t the answer, and focus on other things. Does anyone really advocate tossing out the screening entirely?

Monday, November 29, 2010


On full-body scans and air-travel safety

To go along with opt out day last week — a loosely organized day in which air travelers were encouraged to resist the new body-scanner machines, and in which some travellers did some special things to protest the feel your genitals ‘pat down’ that has been set up as an apparently punitive alternative — the New York Times published one of their Room for Debate columns on the topic.[1]

I agree particularly with what Bruce Schneier and Rafi Sela have to say about it. That won’t surprise many, and it’s also not surprising that I have comments about the segments by Arnold Barnett and David Ropeik. Both present the standard false dichotomy, asking whether you’d rather be scanned and be safe, or opt out and let a bomber onto your plane.

Professor Barnett:

But then I remember a basic question. What if the 9/11 terrorists had been thwarted at the security checkpoint? We would have been spared not only the worst terrorist attack in American history, but probably two wars that have gone on for nearly a decade.

Mr Ropeik:

Flying any time soon? Would you rather the T.S.A. folks kept their hands off your body, or terrorist bombs off your plane? Seems like a simple choice, but as National Opt-Out Day looms, it’s worth considering why what seems like a simple question isn’t.

They both give us the same, basic argument: the scanners will thwart the terrorists. The arguments start with the presumption that they make us safer. The trouble is that there’s no clear evidence that they do. It’s not a choice between scanning and terrorist bombs. There are other inspection and investigation techniques, there are other ways to block the terrorists, and it’s easily arguable that these scanners are neither sufficiently effective to make us willing to submit to them, nor sufficiently cost effective for the expenditure to be worthwhile.

Mr Ropeik goes on to say this:

Compare those concerns against the reduced fear of being bombed on a plane. The emotional sharpness of September 11, 2001, and our fears, have faded. The threat hasn’t. We’ve had a shoe bomber, an underwear bomber, liquid bombers. But since our fear has ebbed, feeling coerced into taking a radiation risk or having our privacy invaded carries more weight.

He mentions three bombing plots. He doesn’t mention how many were successful: none. The shoe guy and the underwear guy were prevented, by older inspections that didn’t involve x-ray scanners, from bringing effective bombs aboard. What they did manage to get on board didn’t work, and they were apprehended in the process of trying to make them do something, anything. These weren’t screening failures, but screening successes. If we stop someone from bringing a sword into the plane and he has to try to make do with a plastic knife from the cafeteria, we don’t call that a failure, and suggest cavity searches to find plastic cafeteria-knives.

The liquid bombers, whether or not their plot would have worked, were stopped by investigation before the fact. They never made it to the security checkpoints at all, so screening was irrelevant to that case. Our security money should be going to more of these sorts of investigations, so we’re aware of the terrorist plots before they reach the airports, the train stations, the shopping malls, or the sports arenas.

Professor Barnett dismisses criticism without really addressing it:

The critics instead make two other arguments. The first is that the backscatter machines are ineffective because they cannot detect explosives in body cavities. The second is that recent security measures are very literally reactive: after the Shoe’s Bomber’s effort, the liquid-explosives plot, and the Underwear Bomber’s attempt, procedures were adopted to respond to these specific menaces. Indeed, the Yemeni bombs have led to a ban on printer cartridges in carry-on luggage.

All he has to say about either argument is, The more options we take ‘off the table’ from the terrorists, the more they are driven to more desperate plots that are less likely to succeed. But the point isn’t that we shouldn’t be taking terrorist options off the table. The point is that we have limited resources to throw at the problem, and we should be using them wisely.

A security mechanism that’s expensive and intrusive, that violates some of the basic rights we hold dear, that causes delays in travel and anger among the travellers, and that has an enormous potential for abuse by the authorities who administer it... had better be sufficiently effective to be worth all the expense, intrusion, violation, delay, anger, and abuse.

These machines don’t appear to meet that requirement.


[1] I have not yet experienced either the machines or the enhanced pat downs. I’ll report in these pages if and when I do.

Wednesday, October 27, 2010


Free speech and phone calls

Earlier this afternoon, I got a phone call on the house phone, and let it go to the answering machine (I use Google Voice and Skype while I’m working). The machine’s in another room, but I had no trouble hearing the message clearly, for as long as I was willing to listen before I went in to kill it.

The message was of someone SHOUTING, shouting about politics. Shouting about my congressguitarist, and why I shouldn’t vote for him.[1] He did this, he didn’t do that, he eats babies raw, and whatnot.

Shouting, did I say shouting? And, of course, it was a recorded message, the better to save the shouter’s voice for the next round.

This was probably the most egregious violation of my privacy I’ve ever encountered in a phone call that’s protected by the U.S. Supreme Court.

We have a do not call list in the U.S., federally mandated and implemented by the Federal Communications Commission. When Congress set that up, it came with a great deal of controversy about free speech rights, and how it would interfere with free speech. And, so, there are a few (or several, depending upon how you reckon those terms) categories of calls that are exempt, and may be made to anyone, even if her number be on the list:

  1. Calls from organizations with which you have established a business relationship.
  2. Calls for which you have given prior written permission.
  3. Calls which are not commercial or do not include unsolicited advertisements.
  4. Calls by or on behalf of tax-exempt non-profit organizations.

The first two categories come with mechanisms to remove yourself from the organizations’ call lists, and to revoke any permission you’ve given. But those last two categories, put there with free speech in mind, cover religious and political organizations, and such organizations can call you whenever they like, with no requirement that they allow you to opt out.

And, of course, during these last weeks before our elections, as you might imagine, the calls come, as we say, fast and furious. This one was particularly furious.

The battle is long lost, as the Supreme Court has repeatedly set free-speech boundaries well beyond where even I would set them, but this one is a complete mystery to me. I don’t agree that spending money should be equated to speech. I don’t agree that corporations should be given free-speech rights comparable to those of individuals.

But I really don’t see how anyone’s free-speech rights should allow them access to my home. I can’t accept that your right to speak freely includes any right to invade my privacy in order to do it.

I’m sympathetic to the concern over setting up situations where an organization blocks calls without your understanding what they’re blocking. But this isn’t that. I am putting my own numbers on the do-not-call list, and it is my choice not to receive calls from political or religious organizations. I should have that right, and no one should be allowed to force his way, or his electrons, into my home.


[1] Sorry, but I already have: as I noted the other day, I voted last week with an absentee ballot.

Tuesday, October 26, 2010


More on Internet cafés and public networks

For my readers who aren’t terribly fond of the entries tagged technology, please stick with this one. It’s important.

Do you log into web sites from public computers, even though I advised against it four years ago? That post only scratched the surface, really: it just talked about using public computers. These days, most people have their laptops with them, and they connect them to the public wireless networks in the cafés.

Most of those networks are unencrypted. That means that you don’t have to enter a key or a password when you access the network. You just select the network name (or let your computer snag it automatically), go to a web page in your browser, and get redirected to some sort of login and/or usage-agreement screen on the network you’ve connected to. Once you click through that, you’re on the Internet.

Suppose there are twenty people in there using that particular network. All twenty of them are sending and receiving stuff through the air. How is it that I only get my stuff, and you only get yours, and we don’t see each other’s, nor the web pages of the other eighteen users? It must be that my web pages are beamed straight to me, and yours to you, right?

No. In fact, everything that everyone sends and receives is out there for all twenty computers to see. But each of our computers is given an IP address, each data packet contains the address that the packet is being sent to... and all of our well behaved computers just look at the addresses and ignore any packets that aren’t meant for them.

Computers do not have to be well behaved. Any computer in the café — or near enough to hear the wireless signals — can see everything that everyone is sending to and receiving from the network. Because the network isn’t encrypted, it’s all out there, in the clear, visible to all who care to be badly behaved.

But we aren’t completely unprotected: we have something called TLS (or SSL, depending upon the version). When the web site’s address, the URL, begins with https, your communication with that web site is encrypted and safe from eavesdropping, even if the network itself isn’t. Perhaps you don’t care who sees you reading the New York Times, but you want to be protected when you visit your bank online. Use http for the Times and https for the bank, and all is well.

And that’s important, because most web authentication just has you send your username and password openly from your browser to the web site. Anyone could snoop your ID and password as you logged in, if your connection to the web site wasn’t encrypted. But that https saves you.

But wait: I have a New York Times account, and I’ve logged into the Times web site (using https). Every time I visit the site, it knows who I am. Even when I just go to http://www.nytimes.com/ ! How does it know that, when I’m not logging in all the time?

Web sites use things called browser cookies to remember stuff about you. A cookie is a short, named bit of data that the web site sends and asks your browser to keep. Later, when you return, the web site asks if you have a cookie with a particular name, and if you do, your browser sends it. For web sites that you log into, such as your bank and the Times, the login (session) cookie is sent every time your browser touches the web site. Every time I click on another Times article, my Times session cookie is sent again. Every time I go to another page on my bank’s site, my bank’s session cookie is sent again.

My bank is set up securely, as is my credit card site, as is Gmail, as is PayPal: every contact from the login screen until I’m logged out is through https. It’s all encrypted. Not only is my password encrypted when I log in, but the session cookie that the site gives me is encrypted too, every time I send it.

The New York Times, though, doesn’t work that way: only the login itself uses https. Once it gives me the session cookie, everything switches back to http, and there’s no encryption. When I click on an article and my browser sends my cookie again, anyone in the café can grab it.

Now, the cookie doesn’t contain my password, so no one can get my password this way. But as long as I stay logged in, and the cookie is valid, anyone who has that cookie can masquerade as me. If they send my cookie to the New York Times, it will treat them as though they were me, as though they had logged in with my password.
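
If you’re curious whether a site you use protects its session cookie this way, the thing to look for is the cookie’s Secure flag: a cookie marked Secure will only ever be sent over https, so it can’t leak onto an open network like this. Here’s a rough sketch of a check, assuming Python and the requests library; example.com stands in for whatever site you care about, and you’d want to run it against a page from a logged-in session to see the cookies that matter:

    import requests

    # Fetch a page and report which of the cookies the site hands out are
    # marked Secure (https-only) and which would be sent in the clear.
    resp = requests.get("https://www.example.com/")
    for cookie in resp.cookies:
        status = "Secure (https only)" if cookie.secure else "NOT secure"
        print(f"{cookie.name}: {status}")

Most browsers will show you the same flag in their cookie viewer, with no code at all.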

Of course, it’s not just the New York Times that does this. Amazon does it. So do eBay, Twitter, Flickr, Picasa, Blogger, and Facebook. So do many other sites where you can buy and sell things. (All the airline sites I’ve checked do it right, using https after login.) That means that if you use Facebook while you’re at Panera, someone else can borrow your Facebook session cookie and be you, until you log out. If you stop by Starbucks and get on eBay, someone else can use your cookie to make bids from your account.

There’s some protection at some sites. Amazon, for example, will let the cookie thief browse around as you, but will want your password before placing an order... assuming you didn’t enable one-click purchasing. And depending upon the options you have set, eBay might or might not ask for your password when the thief places a bid. But Facebook and Twitter are certainly wide open, here.

To try to increase awareness of this, a guy named Eric Butler has created a Firefox add-on called Firesheep, which will make it trivial for anyone, even someone who knows nothing about the technical details of this stuff, to be a cookie thief and pretend she’s you on Facebook, or Twitter, or Blogger, or the New York Times. Eric isn’t trying to abet unethical or criminal behaviour; he’s trying to push the popular web sites, whose users will be targets of these sorts of attacks, to fix their setups and use https for everything whenever you’re logged in.

So here’s an expanded form of the warning: Don’t do private stuff on public networks, unless you’re absolutely sure your sessions are encrypted. If you don’t know how to be sure, then err on the side of caution.

Monday, October 04, 2010

.

A couple of things about Stuxnet

There’s a recently discovered (within the last few months) computer worm called Stuxnet, which exploits several Windows vulnerabilities (some of which were patched some time ago) as it installs itself on people’s computers. It largely replicates through USB memory sticks, and not so much over the Internet (though it can replicate through storage devices shared over networks). And it’s something of an odd bird. Its main target isn’t (at least for now) the computers it’s compromised, and it’s not trying to enslave the computers to send spam, collect credit card numbers, or mount attacks on web sites.

It’s specifically designed to attack one particular industrial automation system by Siemens, and it’s made headlines because of how extensive and sophisticated it is. People suspect it’s the product of a government, aimed at industrial sabotage — very serious stuff.

The folks at F-Secure have a good Q&A blog post about it.

There are two aspects of Stuxnet that I want to talk about here. The first is one of the Windows vulnerabilities that it exploits: a vulnerability in .lnk files that is triggered simply by having a specially crafted Windows shortcut display its icon:

This security update resolves a publicly disclosed vulnerability in Windows Shell. The vulnerability could allow remote code execution if the icon of a specially crafted shortcut is displayed. An attacker who successfully exploited this vulnerability could gain the same user rights as the local user. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.

Think about that. You plug in an infected USB stick, and you look at it with Windows Explorer. You don’t click on the icon, you don’t run anything, you don’t try to copy it to your disk... nothing. Simply by looking at the contents of the memory stick (or network drive, or CD, or whatever), while you study the icon and think, Hm, I wonder what that is; I’d better not click on it, you’ve already let it infect your computer. And since most Windows users prior to Windows 7 ran with administrator rights, the worm could get access to anything on the system.

You need to make sure this security update is on your Windows systems.

The other aspect is interesting from a security point of view. From the F-Secure Q&A:

Q: Why is Stuxnet considered to be so complex?
A: It uses multiple vulnerabilities and drops its own driver to the system.

Q: How can it install its own driver? Shouldn’t drivers be signed for them to work in Windows?
A: Stuxnet driver was signed with a certificate stolen from Realtek Semiconductor Corp.

Q: Has the stolen certificate been revoked?
A: Yes. Verisign revoked it on 16th of July. A modified variant signed with a certificate stolen from JMicron Technology Corporation was found on 17th of July.

I’ve talked about digital signatures before, at some length. When the private keys are kept private, digital signatures that use current cryptographic suites are, indeed, secure. But...

...anyone who has the private key can create a spoofed signature, and if the private keys are compromised the whole system is compromised. When one gets a signing certificate, the certificate file has both private and public keys in it. Typically, one installs the certificate, then exports a version that only contains the public key, and that certificate is made public. The original certificate, containing the private key, has to be kept close.

But it’s just a file, and anyone with access to it can give it to someone else. Shouldn’t, but can. If you can compromise an employee with the right level of access, you can snag the private key and make unauthorized “authorized” signatures.
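
To see why possession of the file is everything, here’s a toy sketch using Python’s cryptography package. It’s a stand-in for any signing setup (real code signing layers certificates and trust chains on top of this), and the point is that the signature verifies perfectly no matter whose hands the private key is in:

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # The legitimate owner generates a key pair; the private half must stay private.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Anyone holding private_key (owner or thief) can sign whatever they like.
    driver = b"contents of a malicious driver"
    signature = private_key.sign(driver, padding.PKCS1v15(), hashes.SHA256())

    # Verification only proves the signer held the key, not that they should have.
    public_key.verify(signature, driver, padding.PKCS1v15(), hashes.SHA256())
    print("signature verifies")   # no exception raised, so it checks out

Revoking the certificate, as Verisign did here, is the only recourse once the key is loose, and by then the signed malware is already circulating.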

In most cases, it’s far easier to find corruptible (or unsophisticated) people than it is to break the crypto. And if the stakes are high enough, finding corruptible people isn’t hard at all. The Stuxnet people may well have a host of other stolen certs in their pockets.

Thursday, September 23, 2010

.

HDCP master key cracking

It’s managed to stay out of the general press, mostly — probably because it’s geeky, it’s hard to explain what it really means, and it’s not likely to affect anything any time soon — but the tech press has been covering the cracking of the HDCP master key. But even PC Mag got it wrong at first, having to correct their article.

To see what it does mean, it helps to back up a bit. If you have a TV made in the last few years, look at the back, where all the associated components can plug in. Especially if your TV is high-definition, you’ll have quite a mass of sockets back there.

Originally, televisions just got their signals off the air, using antennas. The only connectors on those TVs, if any, were for antenna wires. By the time cable and VCRs came along, they just plugged into the TV through the 75-ohm antenna connector, you tuned your TV to the appropriate channel (usually 3 or 4), and the signal on the wire looked to the TV just like a broadcast station. The quality was only OK, but there wasn’t any better quality available anyway, so it didn’t matter.

Eventually, TVs started to sprout other connectors, and components had outputs to match. One yellow RCA connector brought a two-wire composite video signal to the TV, and you no longer had to worry about what the tuner did on channels 3 or 4. Then we got S-Video — Super Video, quite an advance in quality at the time — which came on a four-wire cable with round 4-pin connectors.

S-Video served us through the 1980s, but in the ’90s we went to component video cables: three separate cables, colour-coded red, green, and blue, that carried the picture as separate brightness and colour signals. This was the highest quality yet, and remains the best available for analogue TV.

Of course, each of those only brings in video, so two audio cables (left and right channel) are also needed. Five connectors for each component-video input can really clutter up the back of the TV. And, as I said, all that just works for analogue signals. What do we do for high-definition digital stuff?

For that, we have Digital Visual Interface (DVI), available on some TVs but largely used for computer displays, and High-Definition Multimedia Interface (HDMI). Modern televisions will have two to four HDMI inputs, so users can connect several HD components, such as cable boxes, digital video recorders, game systems, Internet streaming boxes, and Blu-ray disc players. HDMI carries audio, as well as video, so no extra audio cables are needed.

With digital data comes the ability to make perfect copies of source material. There’s no quality lost within the components or through the transmission medium, as there is with analogue data, and what comes out of the TV end of the HDMI cable is exactly the same as what was sent out of the cable provider’s office, sent by the streaming company, or burned onto the Blu-ray disc. The industry needed to prevent users from storing the stream and retaining — and distributing — perfect copies of the content.

That’s where High-bandwidth Digital Content Protection (HDCP) comes in. It’s a system that was developed by Intel, and it ensures that the digital content is encrypted during transmission and can only be decrypted by a licensed device at the other end. Further, the encryption negotiation involves assuring the sending device that the receiving device is licensed, so the data won’t even be sent in the first place if there’s a non-approved device connected. And approved devices promise not to do things the industry doesn’t want them to do.

The system is designed so that each device (not each individual device, but each model) has its own key, generated from a master key, which is used during the negotiation. If someone manages to get a rogue device licensed, or modifies a licensed device to break the rules, the Digital Content Protection company can revoke that device’s key. Other devices will, once they’ve received the revocation information, refuse to send to the compromised device.

OK, so what’s been cracked?

The crackers have, perhaps by analyzing the data from a few dozen licensed devices, generated a master key that can create device keys, allowing a device to negotiate as a licensed device, get a digital data stream that it can decrypt, and circumvent the revocation system. Intel has confirmed that this is real.

This is a big deal. But it’s not a big deal immediately, and it is limited. For one thing, it does not mean that people will be able to copy Blu-ray discs: the HDCP encryption is just dealing with the protocol between devices, and has nothing to do with how the data is encoded at the source (onto discs, or whatever). To copy the content, one has to play the disc and capture the HDMI stream.

For another thing, it’s currently impractical to do all this in software, so someone has to create a piece of hardware that uses this cracked master-key system. That’s clearly possible, and perhaps likely, but it means that we’re not going to have a couple of college students writing HDMI copying code in their dorm room.

It’s also the case that Intel knew about this weakness in the HDCP system at least as long ago as 2001 (before HDMI), when Crosby et al. wrote a paper on it, A Cryptanalysis of the High-bandwidth Digital Content Protection. From the abstract:

We describe a practical attack on the High Bandwidth Digital Content Protection (HDCP) scheme. HDCP is a proposed identity-based cryptosystem for use over the Digital Visual Interface bus, a consumer video bus used in digital VCRs, camcorders, and personal computers. Public/private key pairs are assigned to devices by a trusted authority, which possesses a master secret. If an attacker can recover 40 public/private key pairs that span the module of public keys, then the authority’s master secret can be recovered in a few seconds. With the master secret, an attacker can eavesdrop on communications between any two devices and can spoof any device, both in real time. Additionally, the attacker can produce new key pairs not on any key revocation list. Thus the attacker can completely usurp the trusted authority’s power. Furthermore, the protocol is still insecure even if all devices’ keys are signed by the central authority.
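
The span condition in that abstract is plain linear algebra: each device’s private keys are a linear function of its public key-selection vector, derived from a master matrix. Here’s a toy sketch in Python with sympy that shows the shape of the recovery. I’m using tiny 4x4 matrices and a small prime modulus purely for readability, where real HDCP uses 40-element vectors with arithmetic mod 2^56, so treat it as an illustration of the idea, not the actual attack:

    from sympy import randMatrix

    p, n = 1_000_003, 4                        # toy prime modulus and dimension
    modp = lambda X: X.applyfunc(lambda e: e % p)

    # The licensing authority's master secret: a random symmetric matrix.
    M = randMatrix(n, n, 0, p - 1, symmetric=True)

    # Each licensed device gets a public vector v and private keys u = M*v.
    # Suppose an attacker extracts (v, u) pairs from n devices whose public
    # vectors span the space (the paper's "key pairs that span the module").
    V = randMatrix(n, n, 0, p - 1)             # columns: the public vectors
    while V.det() % p == 0:                    # make sure they really do span
        V = randMatrix(n, n, 0, p - 1)
    U = modp(M * V)                            # columns: the matching private keys

    # Recovering the master secret is just solving M = U * V^(-1) (mod p).
    M_recovered = modp(U * V.inv_mod(p))
    print(M_recovered == M)                    # True: the master secret is out

With the recovered matrix, an attacker can compute the key any pair of licensed devices would agree on, and can mint key sets that appear on no revocation list, which is exactly what the paper warned about.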

In 2001, it was theoretical, and Intel did nothing to address it. Now, it’s real, and they threaten legal action against anyone who takes advantage of it. I am not a lawyer, but they seem lacking in due diligence, don’t they?

Personally, I would like to see HDCP fall; it’s a terrible nuisance. As with many of these sorts of data-protection technologies, as with any sort of DRM system, HDCP gets in the way of legitimate, normal usage of licensed devices. It makes it difficult or impossible to interconnect multiple devices. There can be random negotiation errors that show up without warning, preventing devices from working — not great if you’ve scheduled the recording of a high-definition program on your licensed DVR while it’s connected to your licensed TV, and something goes wrong.

In general, I don’t support the use of any technology that stops people from doing legitimate things with products they’ve legitimately purchased. No copy-protection scheme has stood the test of time, and they’ve only caused problems for the legitimate users. I hope this is the beginning of the end of HDCP... not now, and not soon enough... but soon.

Friday, September 17, 2010

.

Kindle and security

Wednesday, I talked about Amazon’s email-in service, which lets you send documents to your Kindle by email. The nicest part of it for me is the PDF conversion feature, but you can, in general, send any personal documents you like, with or without conversion to AZW.

The way it works is this:

When you buy your Kindle, it’s automatically registered to your Amazon account, so ebooks that you buy there are pushed to the Kindle for you. You also get an email address at kindle.com (and also free.kindle.com), and documents you send there are sent on to your Kindle — free if they’re sent by WiFi, and for a small fee if they’re sent over 3G (if you want to make sure you’re not charged, you can send things only to the free.kindle.com address).

You can control who’s allowed to send stuff to your Kindle by listing the authorized email addresses at the Manage Your Kindle page, or through the settings on the Kindle itself, and the only address that’s authorized by default is the one you use for your Amazon account. Makes sense.

But here’s the thing: there’s no password or other security, other than the sender’s email address. You may or may not know this, but it’s trivial for anyone to send email using someone else’s email address. Anyone who knows my email address can guess that I might use that same address on Amazon, and the address to send to at kindle.com defaults to the left-hand side of that address. So it would not be hard for anyone to send stuff to my Kindle, whether I want them to or not, and whether I want what they’re sending or not.
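
To see why the sender’s address is no protection at all, here’s a minimal sketch in Python of what a spammer’s script amounts to. Every name in it (the addresses, the mail server) is a made-up placeholder; the point is simply that nothing in the mail system checks that the From line is true:

    from email.message import EmailMessage
    import smtplib

    msg = EmailMessage()
    msg["From"] = "victim@example.com"          # nothing verifies this claim
    msg["To"] = "victim@free.kindle.com"        # guessed from the address above
    msg["Subject"] = "A document for your Kindle"
    msg.set_content("Enjoy!")
    msg.add_attachment(b"%PDF-1.4 junk ...", maintype="application",
                       subtype="pdf", filename="totally-legitimate.pdf")

    # Any mail server willing to relay the message will pass it along as-is.
    with smtplib.SMTP("mail.example.com") as server:
        server.send_message(msg)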

So what? If people want to send me free ebooks, why is that a problem?

It’s a problem we’re all aware of: spam. Because it’s not just ebooks that can be sent; PDFs, MS Word documents, and plain text can all be sent, as well as pictures and other images. Imagine getting a kindle-ful of advance-fee fraud scams, Viagra ads, and pornographic images. And then imagine paying for those, if you have a 3G Kindle (I don’t, so it’s all free over WiFi).

The good thing is that Amazon’s Manage Your Kindle page lets you do three things that help here:

  1. set the maximum charge allowed for any one document sent to your Kindle,
  2. change the email addresses that can send to your Kindle, and
  3. change your Kindle’s email address.

Because I never want to accept any charges, I’ve set the maximum charge to zero. I’ve also removed the authorization for my regular email address, and authorized only an email address that no one knows. And, most importantly, I’ve changed the email address of my Kindle to something unguessable, essentially a strong password.

I recommend that everyone do the same (except perhaps for the maximum charge, if you want to be able to send things yourself that you’ll be charged for). At the least, everyone should change her Kindle’s email address to something that isn’t likely to be a target for spammers, and that means something long and relatively ugly.
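
If you want the new address to be genuinely unguessable rather than merely obscure, let a random generator pick it for you. A quick sketch in Python (the free.kindle.com suffix is just for illustration, and Amazon may limit which characters or lengths it accepts, so adjust to taste):

    import secrets

    # Sixteen hex characters is roughly 64 bits of randomness: long and ugly,
    # which is exactly what you want in an address spammers have to guess.
    local_part = secrets.token_hex(8)
    print(local_part + "@free.kindle.com")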

I’m sure that Amazon does spam filtering on kindle.com, but we all know how much gets by the spam filters, in general. I can’t wait until Kindle spam joins email spam, Facebook spam, Twitter spam, and the rest.

Thursday, September 02, 2010

.

Usage issues with OAuth

OAuth — a proposed open standard for delegated authorization — fills a significant gap in cross-application authorization. It’s common in a world of myriad web-based services for one service you use to want to access another service you use, in order to make things better or easier for you.

For example, you might keep contacts in your mail service, and you might want your photo service to see if people you’re in contact with have photos that you might share. We’ve generally done that sort of thing in one of two ways:

  1. Manual: you go to your mail service and tell it to export your contacts to a file on your computer, and then you go to your photo service and tell it to import contacts from that file.
  2. Automatic: you give your user name and password for your mail service to the photo service, and it logs in and reads your contacts directly.

It should be pretty clear that the manual mechanism is annoying — perhaps to the point of being infeasible for inexperienced users (the Barry’s mother problem) — and that the automatic mechanism is risky, handing your login credentials over to someone else. You might trust your photo service, but what if there’s a problem that results in your email login credentials getting stolen?

Some folks developed an alternative mechanism, called OAuth. Essentially, it works like this, in the example case above (a rough sketch in code follows the list):

  1. While you’re using your photo service, you click a link that says something like See if your contacts have anything to share.
  2. The photo service redirects your browser to your mail service and tells it what it wants to do.
  3. Your mail service shows you a screen asking you to approve the action.
  4. If you approve it, your browser is given a token, and is redirected back to the photo service.
  5. Your browser gives the token to the photo service, which uses it to connect to your mail service and retrieve your contact list.
  6. Your photo service shows you the shared photos from your contacts, just as you wanted.
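
Here’s a rough sketch of that dance from the photo service’s side, in Python. Every URL and identifier in it is invented for the example, and I’ve written it in the shape of the newer OAuth 2.0 authorization-code flow for brevity; the OAuth 1.0a flow that services like Twitter use adds request tokens and request signing, but the structure is the same: redirect out, user approval, token back, narrowly scoped access.

    import requests
    from urllib.parse import urlencode

    # All of these names and URLs are invented for the example.
    AUTHORIZE_URL = "https://mail.example.com/oauth/authorize"
    TOKEN_URL = "https://mail.example.com/oauth/token"
    CONTACTS_URL = "https://mail.example.com/api/contacts"
    CLIENT_ID = "frobozz-photo-service"
    REDIRECT_URI = "https://photos.example.com/oauth/callback"

    def authorization_redirect():
        # Step 2: send the browser to the mail service, saying exactly what
        # we want: read-only access to contacts, nothing more.
        params = {"response_type": "code", "client_id": CLIENT_ID,
                  "redirect_uri": REDIRECT_URI, "scope": "contacts.read"}
        return AUTHORIZE_URL + "?" + urlencode(params)

    def handle_callback(code):
        # Steps 4 and 5: the user approved (step 3), the browser came back
        # with a short-lived code, and we trade it for an access token.
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code", "code": code,
            "client_id": CLIENT_ID, "redirect_uri": REDIRECT_URI})
        token = resp.json()["access_token"]
        # The token is good only for what the user approved: reading contacts.
        contacts = requests.get(CONTACTS_URL,
                                headers={"Authorization": "Bearer " + token})
        return contacts.json()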

The advantages of this scheme are:

  1. It’s simple for you; you just click on the Do this for me link on one service, see the approval screen from the other service, then see the result of the action back on the first service.
  2. You can verify that the intermediate approval screen is legitimate, because all the security indicators work. The screen really does come from the second service (the mail service, here), and you interact with it directly.
  3. Your login credentials are only sent to the service you’re logging into, and are never given to anyone else.
  4. You authorize only the actions needed by the requesting service, and no more. If you give away your credentials, the recipient can do anything. But with OAuth, you can authorize the photo service to read (and not change) your contact list, but not your mail. You can authorize a send to a friend service to send email on your behalf just once, but not to read your existing mail. And you can authorize an archive service to read your mail always, but not allow it to send mail.

The only problem with the system involves presenting you with an approval screen that makes sense to you, and making sure that you don’t approve something you shouldn’t (or don’t mean to). Let’s look at that:

In this example, the approval screen should say something like this:

The Frobozz Photo Service would like read access to your contacts. Do you approve this request?

Yes        No

But suppose, instead, that the photo service asked for more access than it needed, either carelessly or, perhaps, maliciously. Suppose you got an approval message like this:

The Frobozz Photo Service would like full access to your contacts. Do you approve this request?

Yes        No

Or like this:

The Frobozz Photo Service would like full access to your account. Do you approve this request?

Yes        No

Would you realize that there was something wrong? Would you know it was asking for more access than it needed? Would you say No? Would my mother?

At the recent IETF meeting, I brought this up with the OAuth chairs, Area Directors, and others, and stressed that a key part of making OAuth work is getting the user interfaces right, and that we can’t leave that to arbitrary implementations. Even though the IETF doesn’t usually do user interfaces, in this case the UI is an integral part of the security of the system. We need a document that lays this out and makes it clear what the potential problems are, perhaps working with UI experts and giving advice on ways to handle it (and ways not to).

And today, the folks at F-Secure point out a case in the wild. Twitter has been using OAuth for a while, and now requires it as the only way applications can get access to Twitter accounts. That’s good, but see this real example:

[Screenshot: an OAuth authorization request from Twitter, asking the user to Allow or Deny an application access and update authorization]

Is the person who clicked on whatever caused this...

  1. ...expecting such an authorization request?
  2. ...aware of what it means to approve access and update authorization?

Would that person know enough to say Deny? A great many Internet users — perhaps the majority — are not aware, and would choose Allow, giving their Twitter account over to a spammer, just because they happen to like Lady Gaga and think that this is associated with her, and perhaps that it will give them access to her tweets, music, videos, or whatever.

F-Secure wants a quick way to rescind the authorization, once the victim realizes what’s happening, and that’s a fine demand. Yet, would most users even make the connection between their granting of this authorization for Lady Gaga and the appearance of unrelated spam tweets from their own Twitter accounts? Would they even know what they should rescind, even if they knew that they could (and how)? I doubt it.

As good an idea as OAuth is, this is a real sticking point. We have no idea how to solve it, but we must try.

Wednesday, July 21, 2010

.

Another reason I don’t “like” Facebook

My anti-spam colleague from Microsoft, Terry Zink, posts about a Facebook-related scam that’s being sent around. You see that one of your Facebook friends likes an odd-looking web page. You click on the link, and you find that it is, indeed, a questionable site, and one that’s probably going to try to load malware onto your computer. Maybe you proceed, and get infected; maybe you’re smart enough to leave without damage.

And, of course, Facebook isn’t responsible for what your friends like.

Ah, but there’s the trick. Probably, your friend didn’t click anything to say she liked it at all. Probably, she did what you, a smart Facebook user, did: she left as soon as she saw that it was garbage. But this is when it becomes a Facebook problem:

Of course, the fact that I now clicked on the link now has it showing up in my Facebook Friends’ newsfeed. Apparently, I now like the xxx link. I know this because a friend pinged me this morning alerting me to the fact that this occurs. So, if you click on this link, my friends, you will automatically like this link and it will show up in your Friends’ newsfeed.

That Facebook allows this to happen — allows a web site to be set up to auto-like itself when a logged-on Facebook user visits it — is a Facebook problem, and is another example of why I dislike Facebook.[1]

Of course, many users use the like feature so promiscuously as to make it useless. For them, auto-liking doesn’t matter, because their Facebook friends can’t (and don’t care to) put any stock in what they like. But for other users, things like this can be a real drawback — at best, making others think you’re a moron, and at worst, drawing others into the scam, and luring them into malware infections.

That’s why I prefer telling people what I like and why, in my own style, and dislike binary like flags.


[1] And I use like, here, in the traditional English sense, as well as the Facebook sense.