
Hell Freezes Over

A security bug was found in djbdns, and Daniel Bernstein has paid his promised security bug bounty for the first time. More details about the bug are on bugtraq.

Date: 4 Mar 2009 01:34:21 -0000
From: D. J. Bernstein
To: dns@list.cr.yp.to
Subject: djbdns<=1.05 lets AXFRed subdomains overwrite domains

If the administrator of example.com publishes the example.com DNS data
through tinydns and axfrdns, and includes data for sub.example.com
transferred from an untrusted third party, then that third party can
control cache entries for example.com, not just sub.example.com. This is
the result of a bug in djbdns pointed out by Matthew Dempsky. (In short,
axfrdns compresses some outgoing DNS packets incorrectly.)

Even though this bug affects very few users, it is a violation of the
expected security policy in a reasonable situation, so it is a security
hole in djbdns. Third-party DNS service is discouraged in the djbdns
documentation but is nevertheless supported. Dempsky is hereby awarded
$1000.

The next release of djbdns will be backed by a new security guarantee.
In the meantime, if any users are in the situation described above,
those users are advised to apply Dempsky's patch and requested to accept
my apologies. The patch is also recommended for other users; it corrects
the bug without any side effects. A copy of the patch appears below.

---D. J. Bernstein
Research Professor, Computer Science, University of Illinois at Chicago

--- response.c.orig 2009-02-24 21:04:06.000000000 -0800
+++ response.c 2009-02-24 21:04:25.000000000 -0800
@@ -34,7 +34,7 @@
uint16_pack_big(buf,49152 + name_ptr[i]);
return response_addbytes(buf,2);
}
- if (dlen <= 128)
+ if ((dlen <= 128) && (response_len < 16384))
if (name_num < NAMES) {
byte_copy(name[name_num],dlen,d);
name_ptr[name_num] = response_len;
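Why 16384? DNS compression pointers pack the target offset into 14 bits; the two high bits of the 16-bit field are the pointer tag, which is where the 49152 (0xC000) in the code above comes from. A name recorded at an offset of 16384 or beyond cannot be represented, and the arithmetic silently corrupts the outgoing packet, which is why the patch simply stops remembering names once response_len reaches 16384. A minimal sketch of the failure mode (mine, not part of the advisory):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    unsigned offset = 20000;                        /* a name offset >= 16384 */
    uint16_t packed = (uint16_t)(49152 + offset);   /* 0xC000 + offset, truncated to 16 bits */
    /* prints 3616 (0x0e20): both the pointer tag bits and the offset are lost */
    printf("packed = %u (0x%04x)\n", packed, packed);
    return 0;
}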

Anti-Debugging Series - Part IV

In this final part of the anti-debugging series we’re going to discuss process and thread block based anti-debugging. Processes and threads must be maintained and tracked by the operating system. In user space, information about processes and threads is held in memory in structures known as the process information block (PIB), the process environment block (PEB), and the thread information block (TIB). These structures hold data pertinent to the operation of a particular process or thread, and they are what many of the API based anti-debugging methods we discussed previously actually read.

When a debugger or reverse engineer tries to get aggressive and hook calls to anti-debugging related APIs, we can move lower than the API and directly access the process and thread information to detect the attached debugger. By sidestepping the operating system’s provided methods for querying process and thread information, we can effectively bypass some API based hooking techniques used in anti-anti-debugging efforts.

For example, in part II of this anti-debugging series I demonstrated how to make a call to IsDebuggerPresent() to detect if the debugger present process flag is set.

Prototype: BOOL WINAPI IsDebuggerPresent(void); 

if (IsDebuggerPresent()) {
    //Debugger Detected - Do Something Here
} else {
    //No Debugger Detected - Continue
}

If we analyzed what this API call actually does we would notice that it reads a flag from the PEB which indicates the presence of a debugger. Instead of directly calling the API, it is possible to emulate what the IsDebuggerPresent() function does and directly query the PEB ourselves.

The first step in analyzing data within the PEB structure is to locate the PEB structure in memory. To do this we can use a number of different methods, some more direct and low level than others. The method that is easiest to grasp is to call the function NtQueryInformationProcess with a second parameter of ProcessBasicInformation. This returns a pointer to the process information block (PIB) for the requested process. Once we have access to this PIB structure we look at the PebBaseAddress member to determine the base address of the PEB. Finally, we look at the boolean member BeingDebugged to return the same result that would be returned had we called the function IsDebuggerPresent().

The following code demonstrates our example:

hmod = LoadLibrary(L"Ntdll.dll");
_NtQueryInformationProcess = GetProcAddress(hmod, "NtQueryInformationProcess");

hnd = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, GetCurrentProcessId());
status = (_NtQueryInformationProcess) (hnd, ProcessBasicInformation, &pPIB, sizeof(PROCESS_BASIC_INFORMATION), &bytesWritten);

if (status == 0 ) {
  if (pPIB.PebBaseAddress->BeingDebugged == 1) {
    MessageBox(NULL, L"Debugger Detected Using PEB!IsDebugged", L"Debugger Detected", MB_OK);
  } else {
    MessageBox(NULL, L"No Debugger Detected", L"No Debugger Detected", MB_OK);
  }
}
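As noted above, some methods are more direct and lower level than others. For illustration, here is a sketch of the most direct route (my own code and naming, 32-bit MSVC inline assembly): skip the API entirely and read the BeingDebugged flag straight out of the PEB, whose address the TEB stores at FS:[30h].

int IsDbgPresentViaPEB(void)
{
  int result;
  __asm {
    mov EAX, FS:[30h]           //FS:[30h] holds the linear address of the PEB
    movzx EAX, byte ptr [EAX+2] //PEB offset 0x02 is the BeingDebugged flag
    mov [result], EAX
  }
  return result;
}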

There are a number of different ways that we can query the PEB and TIB blocks to detect the presence of a debugger. Let’s divert from the straightforward and instead look at a novel and interesting method of detecting a debugger specifically designed to operate under Windows Vista. In Vista, when a process is started without a debugger present, the main thread environment block contains a pointer to a Unicode string referencing a system DLL such as kernel32.dll. If the process is started under a debugger, that system DLL name is replaced with the Unicode string “HookSwitchHookEnabledEvent”. Thus if we know that the process we are trying to protect is running within a Windows Vista environment, we can use this technique to determine if the process was launched from within a debugging environment.

To use this technique, the anti-debugging function should first check that it is running on the Windows Vista operating system (a sketch of such a version check appears after the detection code below). After confirming the operating system revision, the technique locates the TIB, which can be accessed as an offset of segment register FS, as in the following code:

void* getTib()
{
  void *pTib;
  __asm {
    mov EAX, FS:[18h] //FS:[18h] is the location of the TIB
    mov [pTib], EAX
  }
  return pTib;
}

Once the location of the TIB is found, the pointer at offset 0xBFC from the start of the TIB is read and checked. If that pointer refers to the address immediately following it (offset 0xC00 from the TIB start, i.e. four bytes past the pointer’s own location), we then read the string at offset 0xC00 and compare it to the Unicode string “HookSwitchHookEnabledEvent”. Checking the pointer first ensures that a string is actually located at the referenced address and serves as a second level of assurance for the accuracy of this method. If we pass this final test we can be sure that our process running under Windows Vista was started from within a debugger.

wchar_t *hookStr = _TEXT("HookSwitchHookEnabledEvent");
wchar_t **strPtr = (wchar_t **)((char *)getTib() + 0xBFC); //pointer slot at TIB+0xBFC

int delta = (int)(*strPtr) - (int)strPtr;
if (delta == 0x04) { //pointer targets TIB+0xC00, the string right after it
   if (wcscmp(*strPtr, hookStr) == 0) {
      MessageBox(NULL, L"Debugger Detected Via Vista TEB System DLL PTR", L"Debugger Detected", MB_OK);
    } else {
      MessageBox(NULL, L"No Debugger Detected", L"No Debugger", MB_OK);
    }
} else {
   MessageBox(NULL, L"No Debugger Detected", L"No Debugger", MB_OK);
}
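For completeness, here is a sketch of the Vista revision check mentioned earlier (my own code and naming; GetVersionEx was the documented way to read the OS version at the time, and Vista reports version 6.0, as does Server 2008, so a production check might also inspect the product type):

BOOL IsWindowsVista(void)
{
  OSVERSIONINFO vi;
  ZeroMemory(&vi, sizeof(vi));
  vi.dwOSVersionInfoSize = sizeof(vi); //must be set before the call
  if (!GetVersionEx(&vi))
    return FALSE;
  return (vi.dwMajorVersion == 6) && (vi.dwMinorVersion == 0);
}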

In the four parts of this series we have discussed classes of anti-debugging methods, a few basic API based anti-debugging techniques, some slightly more advanced API techniques, and finally two methods that directly access process and thread information to detect the presence of a debugger.

Instead of continuing this series in blog format, I’ve decided to release a paper outlining the details of nearly 35 different anti-debugging methods. I’ll be presenting the paper (and associated slides) at the SOURCE Boston 2009 security conference which starts March 11th, 2009 and finishes up March 13, 2009. The paper and presentation are geared towards developers and software engineers who may not understand the assembly dump of some anti-debugging code but can understand what I’ve presented to you thus far.

The pre-registration rates for SOURCE Boston end on February 28, 2009. So get your ticket at a discount while you still can! It’s going to be a fantastic conference with some of the best information security topics and presenters in the industry.

Additionally, as a speaker, I’ve been given one ticket at half price to do with as I choose. As of yet I haven’t given it away. If anyone would like a half off ticket to SOURCE Boston and can attend please let me know. I’ll get the discount code over to you ASAP. I look forward to seeing you all at the conference, please come up and say hello!

How To Protect Your Users From Password Theft

Monster.com recently disclosed yet another major breach that compromised the personal data of over 1.3 million users. This is not unlike the previous breach in August 2007, though the attack vector was likely different. From a notice on their website (emphasis mine):

We recently learned our database was illegally accessed and certain contact and account data were taken, including Monster user IDs and passwords, email addresses, names, phone numbers, and some basic demographic data. The information accessed does not include resumes.

Considering the well-known tendency to use the same password on multiple websites, compounded with the fact that Monster pledged a comprehensive security review after the first breach, it’s just embarrassing that they are still storing passwords in the clear.

So let’s talk about how to properly store passwords for a web application.

Use a one-way cryptographic hash

Don’t store your passwords in the clear! If you do, an attacker just needs to find one SQL Injection vulnerability and he’s got the password for every one of your users. The idea behind using a one-way algorithm is that the hash value can’t be reversed to “decrypt” the password. So how does authentication work? When a user attempts to login, you apply the same one-way algorithm to convert the user-provided password into the hash value, and then compare the two hashes. If they match, then the user-provided password was correct. At no time is the password ever stored in the clear.

Often, developers will hear the advice “use a hash” and interpret that as “run the plaintext password through MD5 or SHA-1 and store the result.” But that only solves part of the problem — the part about using an irreversible algorithm. It doesn’t protect against pre-computation. Let’s say you’ve used SHA-1 to hash your passwords, and your USERS table looks like this in the database:

USER          PASSWORD_HASH
admin         5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
bob           fbb73ec5afd91d5b503ca11756e33d21a9045d9d
jim           7c6a61c68ef8b9b6b061b28c348bc1ed7921cb53

So if you wanted to obtain the original passwords you’d have to run a dictionary or brute force attack, hashing all possible password options with SHA-1 and comparing the output to the stored hashes. This would take a long time but eventually you’d figure some of them out. But what if you already had a list of all 8-character permutations and their corresponding SHA-1 hashes? Now all you have to do is look up the hashes, rather than computing them on-the-fly. This is the idea behind rainbow tables.

An attacker with a SHA-1 rainbow table covering 8-character alphanumeric combinations would quickly look up those three hashes and obtain the original passwords of “password”, “p4ssword”, and “passw0rd” respectively.
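If you want to reproduce those table entries yourself, here is a minimal sketch (my own example, not from the post) using OpenSSL’s one-shot SHA1() helper; compiled with -lcrypto, it prints the same digest shown for the admin account:

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int main(void)
{
    const char *pw = "password";
    unsigned char md[SHA_DIGEST_LENGTH];
    int i;

    SHA1((const unsigned char *)pw, strlen(pw), md);
    for (i = 0; i < SHA_DIGEST_LENGTH; i++)
        printf("%02x", md[i]);   /* 5baa61e4...7ee68fd8 */
    printf("\n");
    return 0;
}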

Use a salt

The best defense against pre-computation of raw hashes is salting. To salt a password, you append or prepend a random string of bits to the plaintext password and hash the result. You then store the salt value alongside the hash so that it can be used by the authentication routine. Look in the /etc/shadow file of any modern Unix system and you’ll see something like this:

user1:$1$lKorlp4C$RD5TSM6PaZ6oaWRVUuXT40:13740:0:99999:7:::
user2:$1$qOmA0CUm$I6IdbZDTDl6B6m7s77VPe1:13650:0:99999:7:::
user3:$1$nIEInNo5$PSxcLtvGIJArL8r2AQl74.:13749:0:99999:7:::

Let’s look at the “user1” entry in the example above, paying attention to the second field, which contains a bunch of alphanumeric characters separated by dollar signs. The first token, 1, is a version number identifying the hash scheme (here, md5crypt). The second token, lKorlp4C, is the salt. The third token, RD5TSM6PaZ6oaWRVUuXT40, is the one-way hash that was calculated using lKorlp4C as the salt.

When the user attempts to login, the system passes the user-provided password along with the stored salt into the hash routine (in this case, md5crypt), and compares the result to the stored hash.
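In code, that verification step can be sketched with the POSIX crypt() routine, whose glibc implementation understands the $1$ (md5crypt) format shown above. This is my own minimal illustration, not code from the post; link with -lcrypt:

#define _XOPEN_SOURCE
#include <string.h>
#include <unistd.h>   /* declares crypt() on glibc */

/* shadow_field is the full "$1$lKorlp4C$RD5TSM6PaZ6oaWRVUuXT40" string;
   crypt() parses the "$1$salt$" prefix out of it, re-hashes the supplied
   password with that salt, and emits the same format for comparison. */
int check_password(const char *supplied, const char *shadow_field)
{
    char *computed = crypt(supplied, shadow_field);
    return computed != NULL && strcmp(computed, shadow_field) == 0;
}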

Each bit of salt used doubles the amount of storage and computation required for a pre-computed table. For instance, if we used one bit of salt — either 0 or 1 — the rainbow table would have to account for two variations of every password. Eight bits of salt require 2^8, or 256 variations of every password. Use a sufficiently large salt and pre-computation becomes infeasible. For example, the md5crypt utility uses 48 bits of salt (and for an extra layer of protection, it runs 1000 iterations of MD5 to slow down dictionary attacks).

There are a couple of common mistakes that people make with regard to salting. First, don’t use the same salt every time. If you do, you’re not really increasing the search space, because the attacker only has to account for a single salt value. Second, don’t worry about protecting the salt values; they’re not secrets. The added security is derived not from the secrecy of the salt but from the amount it increases the resources required for pre-computation.

If you have OpenSSL installed you can play around with various salt mechanisms and see what the output looks like:

$ openssl passwd -h
Usage: passwd [options] [passwords]
where options are
-crypt             standard Unix password algorithm (default)
-1                 MD5-based password algorithm
-apr1              MD5-based password algorithm, Apache variant
-salt string       use provided salt
-in file           read passwords from file
-stdin             read passwords from stdin
-noverify          never verify when reading password from terminal
-quiet             no warnings
-table             format output as table
-reverse           switch table columns

$ openssl passwd -1 password
$1$LH1SwzJI$0ho4XuPVfGlbWIcNuGIap/
$ openssl passwd -1 password
$1$eAUtQOBh$GlvJwVsyb8In5KKkvnR0E0
$ openssl passwd -1 password
$1$PgaSiWTy$ElLh6uy83Y6T4Y70AGmV20

A quick Google search shows that there is a lot of confusion about salting.

But wait, now my password recovery feature won’t work

What’s that? You say your application has one of those “Forgot My Password” features where a user can type in their username and their current password will be sent to the e-mail address on file? Clearly, that requirement depends on passwords being stored either in the clear or using a reversible mechanism such as symmetric encryption.

The answer here is to redesign your password recovery feature. Don’t let an unnecessary requirement force you into poor security practices. If you must e-mail a password, generate a temporary password that’s only valid for a short time period, and require the user to login immediately and select a new password. This obviates the need to retrieve the original, forgotten password.
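Here is a sketch of the temporary-password generation step (my own example, POSIX-flavored, with hypothetical names). The essential point is to draw the password from a cryptographically strong source rather than a predictable PRNG, and to store only its salted hash with a short expiry:

#include <stdio.h>

/* Fill out[0..n-1] from /dev/urandom, mapped to [A-Za-z0-9]; out must
   hold n+1 bytes.  The slight modulo bias is acceptable for a
   short-lived token.  Returns 0 on success, -1 on failure. */
int make_temp_password(char *out, size_t n)
{
    static const char alphabet[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    unsigned char buf[64];
    size_t i;
    FILE *f;

    if (n >= sizeof(buf) || (f = fopen("/dev/urandom", "rb")) == NULL)
        return -1;
    if (fread(buf, 1, n, f) != n) { fclose(f); return -1; }
    fclose(f);
    for (i = 0; i < n; i++)
        out[i] = alphabet[buf[i] % (sizeof(alphabet) - 1)];
    out[n] = '\0';
    return 0;
}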

Why not just use symmetric encryption?

Instead of storing passwords in the clear, you could encrypt them using a symmetric algorithm such as AES and have the application encrypt/decrypt as needed. While this solves the plaintext storage problem, it creates a new problem: key management. Where do you store the key? How often does it change? How many people have access to it? What do you do if/when the key is compromised? And so on. The tradeoff really isn’t worth it for something that’s more elegantly solved with salted hashes.

Layered defenses

While you’re rethinking password storage, it might be a good time to consider other common flubs such as password complexity and brute-force protections.

In conclusion

  • Storing passwords in the clear puts your users at unnecessary risk if (when) your application database is compromised
  • Use salted hashes instead of storing passwords in a recoverable format
  • Password recovery mechanisms can be implemented without needing to obtain the original password
  • As with any aspect of security architecture, use layered defenses

Have fun refactoring!

How Boring Flaws Become Interesting

One of the great challenges for consumers of static analysis products, particularly desktop tools, is dealing with the large flaw counts. You have to wade through the findings to decide what to fix and when, which can be a daunting task. At Veracode, we continuously update our analysis engine to aggressively reduce false positives, thereby enabling our customers to more efficiently triage their results. Even so, it’s not unusual for customers to ask for clarification on certain flaws as they prioritize fixes.

The other day, we ran into an example that ended up being much more interesting than it appeared. The flaw category was Insecure Temporary Files, and the question was “should I really care about this?” The flaw we identified was in a Java application, and the offending line was something like this:

tmpFile = java.io.File.createTempFile(deploymentName, ".war");

I know what you’re thinking. You think the rest of this post is about how createTempFile() uses java.util.Random instead of java.security.SecureRandom to generate filenames, and since Random is seeded with the system time, you can work backwards to figure out the seed and use it to predict all future temporary files. That’s not it, so keep reading!

We couldn’t remember specifically what was so bad about createTempFile(), aside from using a non-cryptographic PRNG, so we checked the Java API for clues:

Creates a new empty file in the specified directory, using the given prefix and suffix strings to generate its name. … To create the new file, the prefix and the suffix may first be adjusted to fit the limitations of the underlying platform. If the prefix is too long then it will be truncated, but its first three characters will always be preserved. If the suffix is too long then it too will be truncated, but if it begins with a period character ('.') then the period and the first three characters following it will always be preserved. Once these adjustments have been made the name of the new file will be generated by concatenating the prefix, five or more internally-generated characters, and the suffix.

This behavior was verified with a quick test program:

$ for i in `seq 1 10`; do java createTempFile; done
/tmp/prefix53363suffix
/tmp/prefix200suffix
/tmp/prefix53898suffix
/tmp/prefix26801suffix
/tmp/prefix13687suffix
/tmp/prefix2221suffix
/tmp/prefix28661suffix
/tmp/prefix61720suffix
/tmp/prefix23104suffix
/tmp/prefix29833suffix

OK, that looks about right. It does what it says it does. One of my colleagues quickly raised the question, what happens if the generated filename already exists? So he generated /tmp/prefix0suffix through /tmp/prefix65535suffix and ran the test program again.

$ for i in `seq 1 10`; do java createTempFile; done
/tmp/prefix65536suffix
/tmp/prefix65537suffix
/tmp/prefix65538suffix
/tmp/prefix65539suffix
/tmp/prefix65540suffix
/tmp/prefix65541suffix
/tmp/prefix65542suffix
/tmp/prefix65543suffix
/tmp/prefix65544suffix
/tmp/prefix65545suffix

Uh-oh, not good. So not only does createTempFile() use a pretty small search space, but when it exhausts that space, it degrades to being 100% predictable? Decompiling the relevant portion of JRE 1.6.0_07, we can see exactly how the filenames are constructed:

private static File generateFile(String s, String s1, File file)
    throws IOException
{
    if(counter == -1)
        counter = (new Random()).nextInt() & 0xffff;
    counter++;
    return new File(file, (new StringBuilder()).append(s).append(Integer.toString(counter)).append(s1).toString());
}

public static File createTempFile(String s, String s1, File file)
    throws IOException
{
    ...
    File file1;
    do
        file1 = generateFile(s, s2, file);
    while(!checkAndCreate(file1.getPath(), securitymanager));
    return file1;
}

What this tells us is that createTempFile() is actually worse than we thought. Notice that counter is only ever assigned a random value once. As soon as it has that first random value, it simply increments from that point forward. The reason we didn’t get sequential output on our first test run was because we ran the test program 10 times, initializing counter each time. Had we put the loop inside the program, it would have generated a sequential list (try it yourself if you don’t believe me).

As luck would have it, Sun actually just fixed this problem in their latest release, Java 6 Update 11. Amazing that it went so long without being discovered. The updated function looks like this:

private static File generateFile(String s, String s1, File file)
    throws IOException
{
    long l = LazyInitialization.random.nextLong();
    if(l == 0x8000000000000000L)
        l = 0L;
    else
        l = Math.abs(l);
    return new File(file, (new StringBuilder()).append(s).append(Long.toString(l)).append(s1).toString());
}

If you’re wondering, the same bug is present in IBM Java 6 SR2, but it’s been fixed in SR3.

Returning to the original question that led us down this rathole, we came to the conclusion that yes, these types of flaws ARE worth fixing. Predictability and security rarely go hand in hand.

10th Anniversary of the Cyberspace Underwriters Laboratories

It was 10 years ago this week that Tan from the L0pht wrote Cyberspace Underwriters Laboratories to describe a vision of third party testing and certification of computer hardware and software.

Tan’s vision got one step closer this week when CWE and SANS issued the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors. Finally there is consensus about what the worst software security flaws are. This is an important step because minimum due care for a software producer can be defined as preventing the most dangerous programming errors from being delivered to their customers.

This is the approach forward-thinking companies like Barclays are using with their software providers. Barclays uses Veracode’s testing services to get third-party validation that their software providers aren’t delivering code with dangerous programming errors included.

Will Pelgrin, CSO of New York State, and Jim Routh, CISO of the Depository Trust & Clearing Corporation (another Veracode customer), this week released Application Security Procurement Language. This is language that can be inserted into a software acquisition contract that requires the software provider to show that they used due care to remove the Top 25 dangerous programming errors.

Adam O’Donnell blogged on ZDNet that he, too, sees the pieces falling into place for Tan’s vision to be realized:

If software purchasers start demanding that software is delivered with a minimum of defects, various third-party firms will have to become involved to provide independent measurement of a product’s security profile. This is similar to the “Cyberspace Underwriter’s Lab” model discussed by the l0pht crew 10 years ago this week. In the absence of a single third party, look to product offerings like Veracode, Coverity, and Fortify as well as services from groups mentioned in the twitter improvement plan posted earlier this week. This combination of software metrics, purchasing requirements, and third party validation will eventually make the majority, but not all, of these issues a thing of the past.

Eradicating the Top 25 doesn’t eliminate all software risk, but it will drastically reduce the number of vulnerabilities in our information systems. Let’s all use our buying power to demand better from our software vendors, service providers, and outsourcers. We have the list and the testing capability to keep them honest.


Underwriters Laboratories is a trademark of Underwriters Laboratories

CWE/SANS Top 25 Most Dangerous Programming Errors

Today is a very exciting day for software security. The CWE/SANS Top 25 Most Dangerous Programming Errors is being released. I was one of the 41 contributors to the Top 25 Errors.

The list of possible programming errors that can end up causing a vulnerability in an application is immense. The MITRE Common Weakness Enumeration (CWE) has grown to 700 entries. They are all valid programming errors, but some are so obscure or low severity that it isn’t even worth inspecting for them in most software. When a list grows big, oftentimes the important items get diluted.

From my consulting days at @stake I learned that my customers didn’t want an exhaustive list of everything that could be a problem in their software. They only wanted to know about the things “worth fixing”. This is where some judgment comes in. You have to weigh the threat space: what weaknesses are getting exploited by attackers, and the severity: what weaknesses, if exploited, are likely to cause a serious security breach.

The Top 25 Errors are the flaws so prevalent and severe that no software should be delivered to customers without some evidence that the software does not contain these errors. Customers should start demanding that evidence today.

The first piece of good news is that a group of software security experts agreed on a list of the Top 25. The second is that these issues are well understood, and the list comes with advice on how to avoid these errors and fix them when discovered. I also learned from my consulting days that no one wants to hear about problems without a solid plan to fix them. I am also happy to report that a significant majority of these problems can be found using automation: either static or dynamic analysis.

Veracode has been focused on finding the security weaknesses that matter the most to our customers whether they are enterprises like financial services companies deciding on what software to purchase or software vendors striving to produce better software. We have always focused on finding the problems that are “worth fixing” and considered the rest noise. Veracode’s automated testing services are able to test for 64% of the Top 25 programming errors today. We are certain we will be able to raise that percentage in the future. Yet, it will be impossible to reach 100% with technology alone so there is a real need for developers and testers to understand how to avoid and discover these problems.

The ideal as I see it is for developers and testers to be trained on the Top 25 programming errors, for automation to be brought to bear on finding as many of the Top 25 as quickly and easily as possible, and for humans to focus on manual inspection and testing to fill the gap. Then customers need to demand evidence that the Top 25 has been eradicated from the software they use and purchase. Here is the list:

CATEGORY: Insecure Interaction Between Components

  • CWE-20: Improper Input Validation
  • CWE-116: Improper Encoding or Escaping of Output
  • CWE-89: Failure to Preserve SQL Query Structure (aka ‘SQL Injection’)
  • CWE-79: Failure to Preserve Web Page Structure (aka ‘Cross-site Scripting’)
  • CWE-78: Failure to Preserve OS Command Structure (aka ‘OS Command Injection’)
  • CWE-319: Cleartext Transmission of Sensitive Information
  • CWE-352: Cross-Site Request Forgery (CSRF)
  • CWE-362: Race Condition
  • CWE-209: Error Message Information Leak

CATEGORY: Risky Resource Management

  • CWE-119: Failure to Constrain Operations within the Bounds of a Memory Buffer
  • CWE-642: External Control of Critical State Data
  • CWE-73: External Control of File Name or Path
  • CWE-426: Untrusted Search Path
  • CWE-94: Failure to Control Generation of Code (aka ‘Code Injection’)
  • CWE-494: Download of Code Without Integrity Check
  • CWE-404: Improper Resource Shutdown or Release
  • CWE-665: Improper Initialization
  • CWE-682: Incorrect Calculation

CATEGORY: Porous Defenses

  • CWE-285: Improper Access Control (Authorization)
  • CWE-327: Use of a Broken or Risky Cryptographic Algorithm
  • CWE-259: Hard-Coded Password
  • CWE-732: Insecure Permission Assignment for Critical Resource
  • CWE-330: Use of Insufficiently Random Values
  • CWE-250: Execution with Unnecessary Privileges
  • CWE-602: Client-Side Enforcement of Server-Side Security

Anti-Debugging Series - Part III

It’s time for part three in the Anti-Debugging Series. With this post we will stay in the realm of “API based” anti-debugging techniques but go a bit deeper into some techniques that are more complex and significantly more interesting. Today we will analyze one method of detecting an attached debugger, and a second method that can be used to detach a debugger from our running process.

Advanced API Based Anti-Debugging

There are a number of functions and API calls within the Windows operating system that are considered internal to the operating system and thus not documented well for the average developer. Many of these functions have undergone extensive research and reverse engineering to understand how they operate and what can be achieved using them. One such poorly documented API function is NtQueryInformationProcess, which is used to retrieve information about a target process. The function prototype looks like the following:

NTSTATUS WINAPI NtQueryInformationProcess(
    __in HANDLE ProcessHandle,
    __in PROCESSINFOCLASS ProcessInformationClass,
    __out PVOID ProcessInformation,
    __in ULONG ProcessInformationLength,
    __out_opt PULONG ReturnLength
);

This function resides within Ntdll.dll but is not declared in the standard Windows headers or exposed through an import library (it is, however, exported by the DLL, which is what makes the technique below possible). Because of this, we must use run-time dynamic linking to gain access to the functionality. Run-time dynamic linking is the dynamic loading of a library and the mapping of functions within the library to a function pointer, allowing them to be called and executed. To load our function, “NtQueryInformationProcess”, we first call LoadLibrary("ntdll.dll") and then execute GetProcAddress(HMODULE, "NtQueryInformationProcess") to receive a pointer to our required function.

typedef NTSTATUS (WINAPI *pfnNtQIP)(HANDLE, UINT, PVOID, ULONG, PULONG);
HMODULE hmod;
pfnNtQIP _NtQueryInformationProcess;
hmod = LoadLibrary(L"ntdll.dll");
_NtQueryInformationProcess = (pfnNtQIP)GetProcAddress(hmod, "NtQueryInformationProcess");

Note: The dynamic linking method is slightly different when using C++ due to declaration differences.

Once we have a function pointer to the NtQueryInformationProcess function, we can call the API. The function call takes five parameters, the first two of which are the most interesting to our anti-debugging efforts. The first parameter is a HANDLE to the target process that we wish to interrogate. Since we are trying to determine information about our own process, we will use a HANDLE that points to ourselves; a HANDLE value of -1 is the pseudo-handle for the current process. The second parameter is a value indicating what type of information is being requested from the target process. In the Microsoft MSDN documentation there are four documented values for this parameter: ProcessBasicInformation (0), ProcessDebugPort (7), ProcessWow64Information (26), and ProcessImageFileName (27). There are other undocumented values that can be passed in, some of which allow for interesting anti-debugging techniques; however, we will focus on ProcessDebugPort (7). When used as the second parameter, this value causes the function to return a DWORD, through the address in the third parameter, indicating the DebugPort that is currently available for the process. If a non-zero value is returned, indicating that a DebugPort exists, we can be sure that a debugger is attached and act accordingly.

DWORD retVal = 0;
status = (_NtQueryInformationProcess) ((HANDLE)-1, 0x07 /*ProcessDebugPort*/, &retVal, sizeof(retVal), NULL);

if (retVal != 0) {
    //Debugger Detected - Do Something Here
} else {
   //No Debugger Detected - Continue
}

The second anti-debugging method we will look at today also uses run-time dynamic linking of the Ntdll.dll library along with GetProcAddress() to gain access to the NtSetInformationThread function. This function’s primary purpose is to modify thread-specific data for a targeted thread.

NTSTATUS NTAPI NtSetInformationThread(
    __in HANDLE ThreadHandle,
    __in THREAD_INFORMATION_CLASS ThreadInformationClass,
    __in PVOID ThreadInformation,
    __in ULONG ThreadInformationLength
);

For the anti-debugging use of this function we are again only interested in two particular parameters. The first parameter is a handle to the thread we wish to target and the second parameter is the particular information we wish to modify on the target thread. To get a handle to our current thread we make a call to GetCurrentThread(). We will submit that as the first parameter and the enum value for ThreadHideFromDebugger, 0x11, as the second parameter. If a debugger is attached and we pass 0x11 to NtSetInformationThread, our process will immediately detach any attached debugger and terminate the process.

typedef NTSTATUS (NTAPI *pfnNtSIT)(HANDLE, UINT, PVOID, ULONG);
HMODULE lib = LoadLibrary(L"ntdll.dll");
pfnNtSIT _NtSetInformationThread = (pfnNtSIT)GetProcAddress(lib, "NtSetInformationThread");

(_NtSetInformationThread) (GetCurrentThread(), 0x11 /*ThreadHideFromDebugger*/, 0, 0);

MessageBox(NULL, L"Debugger Detached", L"Debugger Detached", MB_OK);

Many of the Microsoft API calls are intentionally not well documented to discourage their use or abuse. In this case we can make calls to two undocumented functions within the Ntdll.dll library to achieve our goals of detecting or detaching a debugger from our process. There are a number of other methods of API based anti-debugging; feel free to comment about them below.

Stay tuned for our next installment as we touch on process and thread block anti-debugging.

Tallying Twitter’s Application Security Best Practice Violations

If you were paying attention the last few days, you’ve probably read about the wave of attacks launched against the popular Twitter service. It started over the weekend, with a series of phishing attacks sent to unsuspecting Twittizens via Direct Message. Then, on Monday morning, Fox News announced Bill O’Riley (sic) was gay, CNN anchor Rick Sanchez tweeted that he was high on crack, and the Barack Obama transition team decided to raise a few bucks using affiliate referral links to survey websites. All told, 33 celebrity accounts were compromised before Twitter caught on and took control of the hacked accounts.

Naturally, people wanted to know how it was done. A Twitter blog entry provided some vague detail:

The issue with these 33 accounts is different from the Phishing scam aimed at Twitter users this weekend. These accounts were compromised by an individual who hacked into some of the tools our support team uses to help people do things like edit the email address associated with their Twitter account when they can’t remember or get stuck.

What’s interesting about that paragraph is that the celebrity account hacks were not related to the phishing attacks, as one might assume, and they had nothing to do with an exploitable vulnerability in the Twitter app itself. Just a case of somebody getting hold of an admin account. Ho-hum.

Tonight, the “hacker” explained to Wired Magazine how he did it. I’ll try to summarize the attack, but you might have to read it several times because it’s subtle and complicated. Ready? Brace yourself… He used a dictionary attack to brute force a password.

Continue reading here after you’ve picked yourself up off the floor. Here’s the money quote:

The hacker, who goes by the handle GMZ, told Threat Level on Tuesday he gained entry to Twitter’s administrative control panel by pointing an automated password-guesser at a popular user’s account. The user turned out to be a member of Twitter’s support staff, who’d chosen the weak password “happiness.”

Now let’s consider the application security best practices that Twitter could have followed when designing their service, any of which would have foiled the attack.

  • Password complexity. In case you were wondering, the only restriction on Twitter passwords is a minimum length of six characters. No mixed case, no numbers, no special characters, none of that. Although they do encourage you to “Be tricky!”
  • Brute-force protections. Clearly there’s no account lockout mechanism, unless of course “happiness” was at the top of the word list. While there is no perfect solution to brute force attacks, it would appear Twitter didn’t even try (see the sketch after this list).
  • Segregation of administrative functionality. I don’t want to underestimate the amount of effort required to segregate the admin interface. That being said, the attack would’ve failed if Twitter admins had to perform privileged functions via a dedicated internal interface.
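For illustration, here is a minimal lockout sketch (my own code, not Twitter’s; the names are hypothetical): after a handful of consecutive failures, refuse further attempts for a cooldown window.

#include <stdbool.h>
#include <time.h>

#define MAX_FAILURES 5
#define LOCKOUT_SECS (15 * 60)

struct account {
    int failures;          /* consecutive failed logins */
    time_t locked_until;   /* 0 when not locked */
};

bool login_allowed(const struct account *a)
{
    return time(NULL) >= a->locked_until;
}

void record_attempt(struct account *a, bool success)
{
    if (success) {
        a->failures = 0;
    } else if (++a->failures >= MAX_FAILURES) {
        a->failures = 0;
        a->locked_until = time(NULL) + LOCKOUT_SECS;
    }
}

Even something this crude makes an online dictionary attack impractically slow, though a real deployment would also consider per-IP throttling so lockouts can’t be abused for denial of service.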

Any others? Leave them in the comments.

In all fairness, it’s hard to make security a top priority in ANY company, much less a startup with overworked non-security-aware developers using an agile methodology with tight iterations (making some educated guesses here about Twitter). Ideally you want to start prioritizing security before you become an attractive target. Twitter missed the boat on that one, but I bet they’re paying attention now.

Anti-Debugging Series - Part II

Welcome back to the series on anti-debugging. Hopefully you have your debugger and development environment handy, as we are about to dive into the first round of anti-debugging code. In the first post of this series we discussed six different types of anti-debugging techniques that are in common use today. To refresh, the classification buckets that we chose to use are:

  • API Based Anti-Debugging
  • Exception Based Anti-Debugging
  • Process and Thread Block Anti-Debugging
  • Modified Code Anti-Debugging
  • Hardware and Register Based Anti-Debugging
  • Timing and Latency Anti-Debugging

Basic API Anti-Debugging

We’ll continue this series of posts by going into a bit more depth on the easiest of the API based anti-debugging techniques. An application programming interface (API) is used to support requests made from other applications for resources or functionality within a target service or library. In our case we will be primarily focused on the Microsoft Windows operating system API. There are a number of calls built directly into the operating system API that make detection of a debugger possible. Minor differences in thread and process metadata are present when a process is run within a debugger, and these calls typically facilitate a process or thread examination technique in order to determine if the target thread has a debugger attached.

When learning about anti-debugging, a developer will typically first be introduced to the IsDebuggerPresent() function. This function analyzes the process block of a target process to determine if the process is running under the context of a debugging session. We’ll save the details of how this actually works for a later article; however, suffice it to say that the target process has a flag that will contain a non-zero value if the process is being debugged. This flag is queried and returned when IsDebuggerPresent() is called. A very basic debugging detection routine would be to call this function and execute different code paths based on the response.

Prototype: BOOL WINAPI IsDebuggerPresent(void); 

if (IsDebuggerPresent()) {
    //Debugger Detected - Do Something Here
} else {
    //No Debugger Detected - Continue
}

We could also use the API function CheckRemoteDebuggerPresent(). Contrary to first thought, this function does not target a process on a remote machine, nor does it even require that it target a process remote to itself. The call can use a parameter pointing to itself to determine if it is running inside of a debugger. In the example below we pass in a handle to our current process by calling the GetCurrentProcess() function along with a variable to hold the return value from the CheckRemoteDebuggerPresent() call.

Prototype: BOOL WINAPI CheckRemoteDebuggerPresent(__in HANDLE hProcess,
           __inout PBOOL pbDebuggerPresent);

BOOL pbIsPresent = FALSE;
CheckRemoteDebuggerPresent(GetCurrentProcess(), &pbIsPresent);
if (pbIsPresent) {
    //Debugger Detected - Do Something Here
} else {
    //No Debugger Detected - Continue
}

While these two methods are probably the easiest and most straightforward methods of anti-debugging, they are also the most likely to be understood by a person wishing to bypass them. We can mix it up a bit and use a call to OutputDebugString() instead. OutputDebugString() is typically used to output a string value to the debugging data stream, where it is then displayed by the debugger. Because of this, OutputDebugString() acts differently depending on whether a debugger is attached to the running process. If a debugger is attached, the function will execute normally and no error state will be registered; however, if there is no debugger attached, LastError will be set by the process. To execute this method we set LastError to an arbitrary value of our choosing and then call OutputDebugString(). We then check GetLastError(); if our error code remains untouched, the call succeeded, which tells us a debugger is attached.

Prototype: void WINAPI OutputDebugString(__in_opt  LPCTSTR lpOutputString);

DWORD Val = 123;
SetLastError(Val);
OutputDebugString(L"random");
if(GetLastError() == Val) {
    //Debugger Detected - Do Something Here
} else {
    //No Debugger Detected - Continue
}

These three methods are the basic starting point for a developer wishing to implement anti-debugging into their code base. The methods are so simple they could even be implemented as macros making a call quick and easy. Numerous other API based detection methods exist with a vast array of complexity. In the next post in this series we will discuss slightly more advanced API anti-debugging techniques that will make reverse engineering and debugging even more difficult.
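That macro idea might look something like the following sketch (my own naming; a real implementation would do something subtler than exiting, and would mix the three checks above):

#include <windows.h>

// Drop-in check that can be scattered throughout a code base.
#define ANTI_DEBUG() \
    do { \
        if (IsDebuggerPresent()) \
            ExitProcess(1); /* or corrupt state, take a decoy path, ... */ \
    } while (0)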

Major Break in MD5 Signed X.509 Certificates

Jacob Appelbaum and Alexander Sotirov just gave a presentation at the Chaos Communications Congress in Germany. They have implemented a practical MD5 collision attack on X.509 certificates. All major browsers accept MD5 signatures on certs even though MD5 has been shown to have the collision problem for almost two years now. If you can generate your own X.509 certificates you can perform perfect MITM attacks on SSL. They went one better and generated an intermediate certificate authority certificate so they could sign their own certificates. This way they only need to perform the attack once and can create as many valid certificates as they want.

Six certificate authorities are still using MD5 signing: RapidSSL, FreeSSL, TrustCenter, RSA Data Security, Thawte, and verisign.co.jp. They are not going to be happy about this new attack. The researchers decided to target RapidSSL because they were able to better predict some of the certificate fields (serial number and time) thanks to the way RapidSSL issues certificates. They were able to perform the required computations with 200 PlayStation 3s in one to two days. It’s estimated to be the equivalent of 8,000 Intel cores or $20,000 on Amazon EC2.

They ask the question, “Can we trust anything signed with a cert issued by a CA that signed with MD5 signatures in the last couple of years?” The affected CAs have been notified and are going to switch to SHA-1. The researchers also ask, “Why did it take an implemented attack to get the CAs to switch to SHA-1?” After all, the attack has been known for almost two years now. We used the slogan “Making the theoretical practical since 1992” at L0pht Heavy Industries to highlight the need to implement attacks to get some organizations to improve their security. It is a bit sad to see that in 2008, demonstration is still necessary.

The researchers were worried about repercussions by the CAs that might want to gag them. They had Mozilla and Microsoft sign NDAs that they wouldn’t tell the CAs about the problem until they could give their presentation. They think researchers should consider NDAs with vendors for protection.

You can see a demo of their forged cert here:
https://i.broke.the.internet.and.all.i.got.was.this.t-shirt.phreedom.org

They purposely dated the cert to expire on 9/1/2004 so you need to back date your machine for it to be validated properly.

Full details: http://www.phreedom.org/research/rogue-ca/
