Archive

Posts Tagged ‘Windows 2008’

The mystery of the missing ‘MSS:’ settings on Windows 2008

November 22nd, 2010 15 comments
Screenshot from Group Policy Editor

The MSS: settings used to be here...

I recently got involved in a project where I defined the Baseline Security settings for Windows and Linux. I used the settings provided by the Center for Internet Security (CIS).

We decided on the following approach:

  • Based on the CIS templates we created a baseline document specific to our company
  • I, in my security role, created a Nessus .audit file, so we could audit compliance to our own baseline with Seccubus
  • The Windows administrator created GPOs to apply the settings.

When creating the GPOs we made a strange discovery: the settings that are normally marked as MSS: in the category Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options do not appear in a domain whose functional level is Windows 2008.

This made us wonder: have these settings become irrelevant? And if not, how can we still set them, preferably via Group Policy?

The settings are not irrelevant, as for example Peter van Eeckhoutte’s blog points out. Windows 2008 does not forward IPv4 packets that have source routing enabled, but it does accept them if the machine is the final destination. For IPv6, however, Windows 2008 forwards these packets by default.

So if the settings are not irrelevant, how can we apply them when they are not in the Group Policy Editor? For this purpose we created an .adm file, which can be loaded into the Group Policy Editor as a Classic Administrative Template. Read more…
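As a sketch of what such a template can look like, here is a minimal .adm fragment for one of the MSS settings, DisableIPSourceRouting. The category and string names are our own invention; the registry key and value are the ones this MSS setting maps to:

```
CLASS MACHINE
CATEGORY !!MSSCategory
  POLICY !!DisableIPSourceRouting
    KEYNAME "SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
    EXPLAIN !!DisableIPSourceRouting_Help
    PART !!SourceRoutingLevel DROPDOWNLIST REQUIRED
      VALUENAME "DisableIPSourceRouting"
      ITEMLIST
        NAME !!AllowAll      VALUE NUMERIC 0
        NAME !!NoForwarding  VALUE NUMERIC 1
        NAME !!DropAll       VALUE NUMERIC 2
      END ITEMLIST
    END PART
  END POLICY
END CATEGORY

[strings]
MSSCategory="MSS (Custom)"
DisableIPSourceRouting="MSS: (DisableIPSourceRouting) IP source routing protection level"
DisableIPSourceRouting_Help="0 = allow source routed packets, 1 = do not forward source routed packets, 2 = drop all incoming source routed packets."
SourceRoutingLevel="Protection level"
AllowAll="No additional protection, source routed packets are allowed"
NoForwarding="Medium, source routed packets are not forwarded"
DropAll="Highest protection, source routing is completely disabled"
```

Note that the IPv6 behaviour mentioned above has its own counterpart of this value, under SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters.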

Citrix Edgesight 5.2 vs Memory Allocation within WOW64

February 9th, 2010 2 comments

xenapp

Recently we started evaluating Citrix EdgeSight on an environment we are currently building, consisting of XenApp 5 on Windows 2008 x64 and XenDesktop 4 farms.

After the installation of the EdgeSight agent, a number of applications running within a Java Virtual Machine suddenly stopped functioning, throwing the error “Could not launch the java virtual machine”.
These Java apps tried to allocate quite some memory using Java arguments such as -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=35 -XX:NewRatio=2 initial-heap-size="32m" max-heap-size="1024m".

After some investigation a colleague (Hugo Trippaers) found out, using the memtest32.exe tool, that only 0.9 GB of memory was allocatable on our Citrix XenApp machines, while our other servers happily reported 1.5 GB of allocatable memory within WOW64. (The physical machine is an HP DL380 G6 with 48 GB of memory, which should surely be enough?)

After some deeper digging using memalloc.exe, I discovered substantial differences in memory allocation between our XenApp servers with the EdgeSight agent installed and servers without it.

XenApp servers with Edgesight Agent 5.2 SP1 x64: memalloc.exe with edgesight
XenApp Servers without edgesight: memalloc.exe – without edgesight

The main difference here is all the Citrix hooks being loaded (see below).
These apparently consume so much address space that it was not possible for Java to allocate enough memory.

For more insight on WOW64, see: http://blogs.msdn.com/gauravseth/archive/2006/04/26/583963.aspx

By default, 32-bit applications within WOW64 can leverage the full 4 GB of address space available, which is not possible on a native 32-bit system because of the separation of kernel and user space.
Applications need to be compiled with /LARGEADDRESSAWARE (Visual Studio: http://msdn.microsoft.com/en-us/library/wz223b1z(VS.80).aspx) or patched using editbin (http://bilbroblog.com/wow64/hidden-secrets-of-w0w64-ndash-large-address-space/) to fully use the 4 GB available; otherwise they can only allocate about 1.6 GB of memory.
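As a sketch (app.exe is a placeholder name), marking an existing 32-bit binary large address aware from a Visual Studio command prompt, and verifying the change afterwards:

```
rem mark the binary as large address aware
editbin /LARGEADDRESSAWARE app.exe

rem verify: the optional header values should now include
rem "Application can handle large (>2GB) addresses"
dumpbin /headers app.exe | findstr /i "large"
```

This only helps, of course, if the application itself behaves correctly with pointers above the 2 GB boundary.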

We will open a case with Citrix on this; to be continued.

Citrix hooks being loaded when EdgeSight is installed:
Read more…

CA will not start… What do you mean, cannot download CRL…

January 20th, 2010 3 comments

As part of my work I was installing a Microsoft PKI infrastructure with two tiers: a root CA and an issuing CA.

Since the root CA is in another domain than the issuing CA, it took some fiddling and tweaking with my CDP and AIA extensions, but that is another blog post altogether.

I knew I was in for some fun when the following happened:

  • I installed my Issuing CA and generated the certificate request
  • I issued the request to my Root CA and generated the Issuing CA certificate
  • I tried to install the Issuing CA certificate and got the following error:
Cannot verify certificate chain. Do you wish to ignore the error and continue? The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2146885613)

My first reaction was to call one of the network guys and notify him that I needed HTTP access from the Issuing CA to the CDP location. But while on the phone, I decided to try it anyway, and to my surprise I was actually able to manually pull down the CRL.

Intrigued, I decided to check a few things:

  • I could download the CRL from both CDP locations with Internet Explorer
  • I could open the downloaded CRLs
  • I could telnet to port 80 of both webservers
  • I could telnet to port 80, manually issue the GET /crl/CRLname.crl HTTP/1.0 command, and get data back

O.K., what is going on here… Let’s open PKI View, which is included in Windows 2008 and Vista and can be downloaded for Windows 2000 and 2003.

It seemed that PKI View was in agreement; it too could not download the CRL from the CDP locations.

PKI view shows "Unable To Download" for both CDP locations


This sent me on a wild goose chase, until I came across the following advice:

But, as stated, I would use certutil to get the “best” answer on how my configuration is doing.
Certutil -verify -urlfetch “certfile.cer” will check *every* CDP and AIA URL (including OCSP) and tell you how they are all doing *at that specific instance in time* since it goes to the URLs immediately.
Brian

I exported the Issuing CA certificate from the certificate database of the Root CA and ran the command against it, and this is what I found:

E:\>certutil -verify -urlfetch <certfile>.cer
Issuer:
CN=Root CA
Subject:
CN=Issuing CA
Cert Serial Number: 115d5f6400020000000b
<snip>

—————-  Certificate AIA  —————-
Verified “Certificate (0)” Time: 0
[0.0] http://IIS1.domain1.local/crl/Root-CA.crt

Verified “Certificate (0)” Time: 0
[1.0] http://IIS2.domain1.local/crl/Root-CA.crt

—————-  Certificate CDP  —————-
Wrong Issuer “Base CRL (13)” Time: 0
[0.0] http://IIS1.domain1.local/crl/Root-CA.crl

Wrong Issuer “Base CRL (13)” Time: 0
[1.0] http://IIS2.domain1.local/crl/Root-CA.crl

<snip>
E:\>

So while PKI View and the other error messages I was getting all pointed to the most common cause, it actually turned out that the CRL did get downloaded, but was not signed by the key belonging to what the system believes is the Root CA certificate.

Root cause

Inspection of the CRLs generated and the Root certificates installed showed what had caused the problem. In order to test the CDP extensions I had reissued the Root CA certificate, causing the Root CA to have three active certificates. Each with a different key.

This CA has three CA certificates


When validating the Issuing CA certificate, validation would end at the last certificate issued, however the CA still signs its CRLs with the key pair of the first certificate.
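A quick way to confirm such a key mismatch (the file names below are examples) is to dump both the CRL and the CA certificates with certutil and compare the CRL’s Authority Key Identifier extension against each certificate’s Subject Key Identifier:

```
rem dump the CRL; note the "Authority Key Identifier" extension
certutil -dump Root-CA.crl

rem dump a Root CA certificate; note its "Subject Key Identifier"
certutil -dump Root-CA.crt
```

The CRL will only validate against the certificate whose Subject Key Identifier matches the CRL’s Authority Key Identifier.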

I guess there is nothing left for me to do but to reinstall the entire chain.

BUG (and work around): Persistent routing issue on Win2k8 clusters

October 9th, 2009 No comments

Another good (should I say brilliant?) piece of information from our colleague Elianne van der Kamp.

Yesterday we discovered an issue with Windows 2008 clusters: manually added persistent routes disappear from the active routes table when a cluster group containing an IP address resource is taken offline (or fails over).

This issue is documented here. The same article also describes a workaround for when you have multiple gateways on multiple NICs: change your route add command from e.g. <route add 10.1.0.0 mask 255.255.255.0 10.1.0.1 -p> to <route add 10.1.0.0 mask 255.255.255.0 0.0.0.0 if 25>.

With this second command you bind the route to the interface instead of an ip-address. And since it is now bound to a local device any cluster failover will leave the route in the routing table.

However, this will not solve the issue we discovered yesterday: we are using two gateways ‘behind’ the same interface, so binding the route to the interface will not help here.

Example: interface 18 has address 192.168.251.36, mask 255.255.255.0, gateway 192.168.251.1, with the added route 192.168.250.0 mask 255.255.255.0 192.168.251.3 -p.

When an IP address is taken offline (fails over), the active route 192.168.250.0 255.255.255.0 192.168.251.3 will be removed.

By accident we found out that adding the interface to the route solves this new issue (thanks to our colleague Enrico). So our new route command will have to look like this:

<Route add 192.168.250.0 mask 255.255.255.0 192.168.251.3 if 18>. This will leave the route in the active routes table.

Why does this work? And is it reliable?

Since we couldn’t find any google/Microsoft hits on this particular issue, we had to do a little registry digging.

The standard command <Route add 192.168.250.0 mask 255.255.255.0 192.168.251.3 -p> just adds the persistent route to the registry, which triggers the ‘bug’.

However, the new command <Route add 192.168.250.0 mask 255.255.255.0 192.168.251.3 if 18> also makes 14 changes in the cluster part of the registry, telling it that this route is bound to the adapter and should be left behind on the local server in case of a failover.

So it looks pretty reliable. We did lots of reboots and failovers on the cluster and the routes seem pretty persistent now.
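For anyone who wants to repeat the registry digging: routes added with -p are stored as value names under the standard TCP/IP parameters key, which can be inspected before and after a failover:

```
rem plain persistent routes (route add ... -p) end up here
reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\PersistentRoutes
```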

Timeline of the SMB2 vulnerability

October 6th, 2009 No comments

While researching the SMB2 vulnerability I decided to make a timeline. It really shows how devastating a 0-day can be in the wrong hands.

  • 7 September: Laurent Gaffié releases PoC code on his blog
  • 8 September: The news is picked up by SANS ISC; HD Moore ports the exploit to Metasploit; Microsoft confirms the existence of the flaw and releases an advisory
  • 9 September: The BSOD exploit is published on Milw0rm
  • 15 September: A working remote code execution exploit is released in Immunity CANVAS
  • 18 September: A working remote code execution exploit is released for Metasploit; Microsoft releases a tool to disable SMB2
  • 9 October: Microsoft announces a patch

At the time of writing Microsoft had not released a patch. I will continue to update this post.

A tool to disable SMB2 is here. Instructions on how to disable SMB2 manually are in the workaround section of this advisory.
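For reference, the manual workaround boils down to disabling SMB2 on the server and client side. A sketch, with the registry value and service names as documented by Microsoft for Windows Server 2008 (use at your own risk):

```
rem server side: disable SMB2 in the Server service, then restart it
reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v Smb2 /t REG_DWORD /d 0 /f
net stop server
net start server

rem client side: disable the SMB2 redirector driver
sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
sc config mrxsmb20 start= disabled
```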

On the 9th of October Microsoft announced a patch for this issue and the IIS FTP issue.

Get rid of Event ID 5156: The Windows Filtering Platform has allowed a connection

October 5th, 2009 3 comments

When you install McAfee on Windows Server 2008 (and probably on Windows Vista as well), you can get a lot of messages in your security log, like this one:

ID 5156

Event ID 5156 means that WFP has allowed a connection. Since most connections are allowed, your security log will fill up very fast.

You can disable Object Access auditing, but then you would miss other events which might be of interest. So, instead, let’s just disable Success auditing for the Filtering Platform Connection events only. It is not possible to disable auditing subcategories with a policy or other GUI tool, but I found out that you can enable and disable specific subcategories with a special command-line tool: Auditpol.exe, which is included with Windows Vista and Windows Server 2008. I used the following command:

auditpol /set /subcategory:"Filtering Platform Connection" /success:disable /failure:enable

As you can see this disables Success auditing for the Filtering Platform Connection subcategory.
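You can verify the result, and revert it later if needed, with the matching auditpol commands:

```
rem show the current setting for the subcategory
auditpol /get /subcategory:"Filtering Platform Connection"

rem to re-enable success auditing later
auditpol /set /subcategory:"Filtering Platform Connection" /success:enable
```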

For more info check out this article:

http://msdn.microsoft.com/en-us/library/bb309058(VS.85).aspx

Windows 2008 KMS activation limit workaround

September 11th, 2009 1 comment

Another tip from Elianne van de Kamp, which I of course couldn’t keep to myself. Your Windows 2008 KMS key (the replacement of the Volume License Key/VLK) can be registered a maximum of ten times on six different machines. If you want to extend this, you will have to file a request with your Microsoft representative, providing lots of information:

  • Organization name
  • Agreement number
  • Authorization number
  • Requester name, telephone number, etc.
  • Product
  • Last 5 digits of your KMS key
  • Number of additional activations
  • And last but not least: A good reason why you need extra activations.

The process takes 48 hours to complete, which means you have to wait that long before your extra activations are available. The first step to activate your KMS key is to register it with:

slmgr -ipk xxxxx-xxxxx-xxxxx-xxxxx-xxxxx

It will tell you the key is valid (or not, but you then have another problem). Then you have to activate it with:

slmgr -ato

When the key is out of activations it will respond with “ERROR: 0xc004c008: the key is valid, but cannot be activated.”

Instead of filing a request that takes two days, you can use a quick workaround:

  • Enter the KMS key as the registration key on the KMS server (Control Panel – System – Change product key).
  • Activate the key. You will get a message that the key cannot be registered. Choose activation by phone.
  • Call the MS activation line. Enter the numbers into the automated response system, and you will receive the new key as eight groups of five characters.
  • Enter the numbers and you’re all done; the KMS server will now be activated.

You can check this with:

slmgr -dlv