Archive

Archive for the ‘Networking’ Category

V6 World Congress 2012 – day 3+4

February 13th, 2012 No comments

Day 3+4 of the V6 World 2012 Congress
were also interesting.

In many ways my conclusions from the first two conference days were reconfirmed, but additionally I learned that:

  1. IPv4 is here to stay and it will take many more years before IPv4 is completely history. On the other hand, it remains hard to predict when the IPv4 address space will really be exhausted. See Geoff Huston’s IPv4 reports. Predictions currently range from later this year to 2013 or even later, depending on the demand for IPv4 addresses and how IPv6 adoption actually evolves.
  2. Any NAT technology, like Carrier Grade NAT (CGN), has too many disadvantages to be a viable transition mechanism. As such it is the less preferred option.
  3. Dual stack, where the content provider or hosting party provides access to (web) services via both IPv4 and IPv6, is the preferred transition mechanism.
  4. The sheer size of the IPv6 address space requires letting go of the classical IPv4 thinking about address waste and the use of private IP addressing. These limitations simply no longer exist. Currently the smallest routable IPv6 subnet is a /64, which can hold up to 18,446,744,073,709,551,616 unique IPv6 addresses. Think about it: each /64 subnet is vastly larger than the entire existing IPv4 address space!
  5. If the 100 biggest web sites in the world (the likes of Facebook, Google, Yahoo, Amazon, Microsoft, Akamai, etc.) became accessible via IPv6, the world would adopt IPv6 much faster.
  6. End users need IPv6-enabled customer-premises equipment (CPE) such as mobile devices, DSL modems and routers before content providers and ISPs can really benefit from the transition. Without these IPv6-enabled devices there is simply no demand for IPv6. This chicken-and-egg problem needs to be addressed on both sides: ISPs as well as content providers need to provide IPv6 solutions to address IPv4 exhaustion and the global growth of Internet-enabled devices.
  7. If vendors claim to be IPv6 ready, do not take this for granted. Many implementations have shown (interoperability) limitations when deployed. Inform the vendor of any issues so that this gets improved.
  8. Basic IP address space allocation:
    a. /32 for Schuberg Philis
    b. /48 per customer environment
    c. /64 per smallest subnet (VLAN)
    d. /127 potentially for point-to-point links (on demand, implementation specific, not internet routable)
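The allocation scheme above is easy to explore with Python’s standard ipaddress module. The prefixes below are illustrative ones from the IPv6 documentation range, not our actual allocation:

```python
import ipaddress

# Illustrative prefixes from the IPv6 documentation range (2001:db8::/32),
# not the actual Schuberg Philis allocation.
provider = ipaddress.ip_network("2001:db8::/32")    # per hosting provider
customer = ipaddress.ip_network("2001:db8:1::/48")  # per customer environment
vlan = ipaddress.ip_network("2001:db8:1:1::/64")    # smallest subnet (VLAN)

# A /32 holds 2^(48-32) = 65,536 customer /48s; each /48 holds 65,536 /64s.
print(sum(1 for _ in provider.subnets(new_prefix=48)))  # 65536
print(sum(1 for _ in customer.subnets(new_prefix=64)))  # 65536

# Each /64 holds 2^64 addresses -- 2^32 times the whole IPv4 address space.
print(vlan.num_addresses)           # 18446744073709551616
print(vlan.num_addresses // 2**32)  # 4294967296 IPv4-Internets per /64
```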

The Dual stack mechanism
An interesting way to implement IPv6 is to provide web services in a so-called ‘dual stack’ setup. This means that content is provided both via IPv4 and via IPv6. There are several dual stack scenarios.

Tore Anderson of Redpill Linpro AS discussed the following scenario, an IPv6-centric dual stack setup, which he recommends and considers the most future-proof:

IPv6 centric dual stack set up

I found this an interesting view as it emphasizes having the majority of components within an IT (hosting) environment configured for IPv6, with only a small part remaining IPv4 capable. This scenario leads to a more future-proof setup, focused on eventually phasing out IPv4 altogether.
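As a small illustration of what dual stack means on the server side: on most operating systems a single IPv6 socket can serve IPv4 clients as well, once the IPV6_V6ONLY socket option is switched off. A minimal sketch (whether this option defaults to on or off is OS-dependent):

```python
import socket

def dual_stack_listener():
    """Create (but do not bind) an AF_INET6 socket configured to accept
    both native IPv6 clients and IPv4 clients, the latter appearing as
    IPv4-mapped addresses (::ffff:a.b.c.d). On Windows and some BSDs,
    V6ONLY defaults to on and must be disabled explicitly like this."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    return sock

s = dual_stack_listener()
print(s.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY))  # 0
s.close()
```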

A bogon is a …
Something I also learned about, but had never heard of before this conference, is the so-called bogon.

Its Wikipedia definition is:

“a bogus IP address, and an informal name for an IP packet on the public Internet that claims to be from an area of the IP address space reserved, but has not yet been allocated or delegated by the Internet Assigned Numbers Authority (IANA) or a delegated Regional Internet Registry (RIR).”

I actually found this quite funny, because I have been around in this industry for some time now and was somewhat surprised that I hadn’t heard of this word before.
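Python’s standard ipaddress module can flag the statically reserved and private ranges that make up the permanent part of bogon space. A rough sketch; a real bogon list (such as an operator-maintained feed) also tracks prefixes that are merely unallocated at a given moment:

```python
import ipaddress

def looks_like_bogon(addr: str) -> bool:
    """Rough static check: reserved, private, loopback or link-local
    space should never appear as a source on the public Internet.
    A real bogon feed additionally covers allocated-but-unassigned
    prefixes, which change over time."""
    ip = ipaddress.ip_address(addr)
    return ip.is_reserved or ip.is_private or ip.is_loopback or ip.is_link_local

print(looks_like_bogon("240.0.0.1"))  # True  (reserved "class E" space)
print(looks_like_bogon("10.1.2.3"))   # True  (RFC 1918 private)
print(looks_like_bogon("8.8.8.8"))    # False (allocated public space)
```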

All in all,
This was a very interesting conference where I learned that IPv6 is inevitable and that now is the moment to prepare for it. Mark 6 June 2012 in your agenda; that is the day when IPv6 will be permanently turned on by many internet (content) providers.

Don Lee from Facebook, author of the Cisco Press book “Enhanced IP Services for Cisco Networks”, argued: “It’s our job as IT professionals to make the transition to IPv6 as smooth as possible. The end users should not have to bother with it, any more than they bother with IPv4 nowadays.”

And as Paul Zawacki, Enterprise Architect at Oracle, so eloquently put it: “IPv6 is a giant leap in IT evolution. It might very well be the most challenging moment in your professional career.”

I feel that these remarks are no understatements… Do you?

Categories: Internet, IPV6, Networking Tags:

V6 World Congress 2012 – day 2

February 9th, 2012 No comments

A marathon day
Day 2 of the IPv6 conference was actually pretty good. It was a ‘marathon’ day of over 10 hours of presentations and panel discussions. Unfortunately, during the last ‘talking heads’ sessions the best part of me had already left the building and my concentration dropped. Nonetheless it was a good day, and the welcome drinks and bites at the end of it were rewarding :-)

The opening speech
was given by John Curran, the founder and president of ARIN (the American Internet registry, the equivalent of the European RIPE organization). John was involved in IPng, the early RFCs of what eventually became known as IPv6. How cool is that!?

My colleague Erwin Blekkenhorst (maintainer of IPv6.net) also tweeted a lot of interesting remarks and sound bites. Follow ‘@ipv6dotnet’ to catch those tweets.

During the panel discussions several companies shared their views and experiences on IPv6 implementation and the IPv4-to-IPv6 transition. Better said: co-existence, or ‘dual stack’, providing your services via IPv4 and IPv6 in parallel.

I will not bore you with an exhaustive summary of each presentation (send me a message and I will), but I’d like to condense it into: a) it is interesting and worthwhile to be at this conference, and b) I feel that this is the environment where ‘it’ actually happens: the Internet industry adopting IPv6.

My conclusions
of the second day would be:

  1. Moving from IPv4 to IPv6 is inevitable. Not being part of it basically means ‘missing the boat’ and losing the competitive advantage.
  2. Be prepared before actually implementing IPv6. Have a sound strategy and implementation plan.
  3. Implementing IPv6 is a ‘journey’. Take it step by step, and learn as you go and grow.
  4. Although many (hardware or software) vendors say that they support IPv6, their products do not always interact as you’d expect.
  5. So in addition: try before you die (i.e. perform a PoC to ensure that your design provides what you aim for). Feed the findings back to the hardware/software vendors.
  6. Expect to spend a lot of time on awareness and training. Knowledge of IPv6 is the critical success factor.
  7. From a Schuberg Philis IPv6 Task Force perspective we seem to be aligned with what the industry as a whole is doing; we have been part of the IPv6 community for some time now and are already IPv6 enabled at the connectivity level. IPv6 at the application layer is our next challenge.
  8. I believe it is important that Schuberg Philis, and those of our customers who are able to participate, are part of World IPv6 Launch day on June 6, 2012. Let’s go for it!
    The FUTURE is NOW!

Categories: Internet, IPV6, Networking Tags:

V6 World Congress 2012

February 7th, 2012 No comments

I’m visiting the V6 World Congress 2012 together with my colleague Erwin Blekkenhorst (a long-time IPv6 adept and owner of ipv6.net as well as its corresponding Facebook page). This IPv6 congress is being held February 7-10 in Paris, France.

V6 World Congress 2012, Paris, France, Feb7-10
The central question of this congress is: “Enterprises Migration: How and When?”

Amongst others, Erwin and I are IPv6 task force members within Schuberg Philis, and we are determined to increase IPv6 awareness among our fellow colleagues and our customers. The questions we would like to address are: how will it impact us and our business, what will it mean to our customers, and what are the ways to safely ‘migrate’ from IPv4 to IPv6, or to operate a dual stack setup?

On this blog I’ll be posting our experiences and impressions of this congress on a day-to-day basis.

Day 1 – Technical Tutorial Day – Tue Feb 7th

1 Basic Design Concepts of IPv6 and the differences with IPv4 by Gunter Van de Velde – Cisco Belgium
  This presentation discussed the various characteristics of the IPv6 protocol, also in comparison to IPv4. It was a ‘so-so’ start with information already widely known, but it was a start nonetheless. Gunter’s stop word ‘as such’ became a bit annoying after a while.
2 Innovative IPv6 First Hop Security (FHS) and Technologies Regarding V4 to V6 Translation by Andrew Yourtchenko – Cisco Technical Leader
  Interesting presentation focusing on L2 security, including defining trust relationships between hosts and their nearest router(s) (aka router authorization), securing link operation, RA-Guard, SeND, Address Watch and device tracking. One thing I learned was ‘address glean’, monitoring address allocation and storing bindings (to glean = to gather slowly and with extreme care, bit by bit). It was a dry presentation but with interesting topics. Andrew is a good and passionate speaker, but this subject is really something you need to dive into by studying the slides, reading through the theory and eventually getting your hands dirty, to really understand what the different technologies mean and how you can use them to your advantage.
3 IPv6 and the BGP Routing Infrastructure by Susan Hares – Distinguished Engineer, Huawei Technologies
  Surprisingly interesting presentation, especially due to the many statistics on BGP routing explaining the nature of the evolution and migration from IPv4 to IPv6. A topic I really need to understand better. One thing I learned about was the IPv4 Address Report and its IPv6 equivalent. Susan also referred to Geoff Huston’s work in the IPv6 arena. Another thing I had never heard of was a bogon; its Wikipedia definition is a bogus IP address. Susan is a scientist and clearly an experienced person in the BGP area. She calls herself a BGP geek. How true.
4 Content Providers and ISP projects to enable IPv6 on their site or for their access networks by Jordi Palet Martinez – ConsulIntel
  This was the best presentation of the day from my point of view. It discussed the theory of migration versus coexistence and transition. IPv4 will still be around for the next decades and cannot, by nature, simply be turned off or deprecated. The term ‘migration’ therefore does not really describe the challenge; instead, it is confusing. Jordi discussed the native IPv6, dual stack, tunneling and NAT approaches.

His conclusions were:
1. Dual stack, as much as possible.
2. Tunneling, managed, as much as possible via softwires or 6rd.
3. Tunneling, unmanaged, only if there is no other way, via technologies like Teredo or 6to4.
4. Translation & CGN, like NAT64, DS-Lite, NAT444.

Next, Jordi discussed his experiences in Spain at the Ministry of Industry, Tourism and Trade (MITYC) and at a Spanish publisher. Another interesting topic was his experience with the IPv6 Awareness and Training roadshow in Spain.

His conclusions were:
1. Do not design or implement IPv6 as an IPv4 project.
2. Training and knowledge are essential.
3. Planning is key.
4. An IPv6 implementation might not be as expensive as you might think, as many older network devices and servers already support IPv6 (if necessary after a firmware or OS upgrade).

Categories: Internet, IPV6, Networking Tags:

Securing networks with Cisco ASA

December 20th, 2011 No comments

The Cisco ASA firewall offers protection against denial-of-service attacks, such as SYN floods, TCP excessive connection attacks, etc.
With the Modular Policy Framework functionality, you can configure granular controls for TCP connection limits and timeouts. For example, you can control and limit the maximum number of simultaneous TCP and UDP connections allowed towards a specific host (or subnet), the maximum number of simultaneous embryonic (half-open) connections allowed (to mitigate SYN flood attacks), the per-client maximum number of connections allowed, etc.

STEP 1: Identify the traffic to apply connection limits to, using a class map
ASA(config)# access-list CONNECTIONS-ACL extended permit ip any 10.1.1.1 255.255.255.255
ASA(config)# class-map CONNECTIONS-MAP
ASA(config-cmap)# match access-list CONNECTIONS-ACL

STEP 2: Add a policy map to set the actions to take on the class map traffic
ASA(config)# policy-map CONNECTIONS-POLICY
ASA(config-pmap)# class CONNECTIONS-MAP
! The following sets connection number limits
ASA(config-pmap-c)# set connection {[conn-max n] [embryonic-conn-max n]
[per-client-embryonic-max n] [per-client-max n] [random-sequence-number {enable | disable}]}

The conn-max n argument sets the maximum number of simultaneous TCP and/or UDP connections that are allowed, between 0 and 65535.
The embryonic-conn-max n argument sets the maximum number of simultaneous embryonic connections allowed, between 0 and 65535.
The per-client-embryonic-max n argument sets the maximum number of simultaneous embryonic connections allowed per client, between 0 and 65535.
The per-client-max n argument sets the maximum number of simultaneous connections allowed per client, between 0 and 65535.

! The following sets connection timeouts
ASA(config-pmap-c)# set connection timeout {[embryonic hh:mm:ss] [tcp hh:mm:ss
[reset]] [half-closed hh:mm:ss] [dcd hh:mm:ss [max_retries]]}

STEP 3: Apply the policy on one or more interfaces, or globally
ASA(config)# service-policy CONNECTIONS-POLICY {global | interface interface_name}
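Putting the three steps together, a complete configuration could look like this. The names, the address and the limits are purely illustrative example values; tune them to your own environment:

```
! Illustrative example: cap a web server at 10,000 connections,
! 1,000 half-open connections, and 100 connections per client.
ASA(config)# access-list CONNECTIONS-ACL extended permit ip any host 10.1.1.1
ASA(config)# class-map CONNECTIONS-MAP
ASA(config-cmap)# match access-list CONNECTIONS-ACL
ASA(config)# policy-map CONNECTIONS-POLICY
ASA(config-pmap)# class CONNECTIONS-MAP
ASA(config-pmap-c)# set connection conn-max 10000 embryonic-conn-max 1000 per-client-max 100
ASA(config-pmap-c)# set connection timeout embryonic 0:0:30 half-closed 0:5:0
ASA(config)# service-policy CONNECTIONS-POLICY interface outside
```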

 

 

The IP audit feature provides basic IPS support for the ASA. It supports a basic list of signatures, and you can configure the ASA to perform one or more actions on traffic that matches a signature.

STEP 1: Define an IP audit policy for informational signatures
ASA(config)# ip audit name policy_name info [action [alarm] [drop] [reset]]

STEP 2: Define an IP audit policy for attack signatures
ASA(config)# ip audit name policy_name attack [action [alarm] [drop] [reset]]

Here, alarm generates a system message showing that a packet matched a signature, drop drops the packet, and reset drops the packet and closes the connection. If you do not define an action, the default is to generate an alarm.

STEP 3: Assign the policy to an interface
ASA(config)# ip audit interface interface_name policy_name

STEP 4: Disable individual signatures if needed
ASA(config)# no ip audit signature [signature]

Categories: Cisco Tags:

Page load performance with a Cisco ACE4710

November 16th, 2011 No comments

The ACE has two different ways of treating L7 connections internally, which we call “proxied” and “unproxied”. In essence, in proxied mode the traffic is processed by one of the CPUs (normally to inspect or modify the L7 data), while in unproxied mode the ACE sets up a hardware shortcut (fastpath) that forwards traffic without any further processing.

For an L7 connection, the ACE will proxy it at the beginning and, once all the L7 processing has been done, unproxy the connection to save resources until L7 processing is required again. Before it unproxies, it needs to see the ACK for the last L7 data sent.
In packet captures, we saw that the client was taking approximately 200 ms to send this acknowledgement each time. When a connection consists of many HTTP requests, the proxy/unproxy process can add up to a total delay of several seconds.

Configuring a sorry/backup server farm with, for example, an HTTP redirect to a sorry page will cause the ACE to treat connections to the VIP as L7, and will thus influence the total page load time.

The proxy/unproxy delay can have a big impact when the client takes a long time to send the acknowledgement, so the ACE allows you to change this behavior. It is possible to define a “round-trip time” threshold so that connections from clients with an RTT higher than the threshold are never unproxied.
Setting the threshold to 0 ensures that connections are always kept proxied. To do this, configure a parameter map like the one below (the name is arbitrary) and add it to the policy-map.
parameter-map type connection KEEP_PROXIED
  set tcp wan-optimization rtt 0

Even though this setting will most likely solve the issue, it also has some drawbacks. The main one is that the ACE appliance only supports up to 256K simultaneous L7 connections in proxied state (this includes the connections towards the servers, so effectively 128K client connections); if the number of simultaneous connections reaches that limit, new connections will be dropped. The second, less impactful issue is that the maximum number of connections per second supported also goes down slightly due to the increased processing needed.

Categories: Cisco Tags:

Online DNSSEC verification

November 16th, 2011 No comments
Categories: Technology Tags:

F5 BigIP LTM IPv6 RA

November 2nd, 2011 No comments

In order to have the F5 BigIP LTM announce IPv6 Router Advertisements (RA), you have to log on to the console and create the following config file:

#
# /etc/radvd.conf
#
interface [interface name]
{
    AdvSendAdvert on;
    MinRtrAdvInterval 5;
    MaxRtrAdvInterval 10;
    AdvDefaultPreference low;
    AdvHomeAgentFlag off;
    prefix xxxx:xxxx:xxxx::/yy
    {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr off;
    };
};

You have to use lower-case characters for the interface or VLAN name, otherwise this will not work!

Then stop the service: bigstart stop radvd
And start the service again: bigstart start radvd

Categories: F5, IPV6 Tags:

OpenFlow

October 29th, 2011 No comments

OpenFlow, the exciting new networking technology recently bursting out of academia and into industry, has generated considerable buzz since Interop Las Vegas 2011, which has been called “The Coming Out Party For OpenFlow.”


OpenFlow began at a consortium of universities, led by Stanford and Berkeley, as a way for researchers to use enterprise-grade Ethernet switches as customizable building blocks for academic networking experiments. They wanted their server software to have direct programmatic access to a switch’s forwarding tables, and so they created the OpenFlow protocol. The protocol itself is quite minimal — a 27-page spec that is an extremely low-level, yet powerful, set of primitives for modifying, forwarding, queuing and dropping matched packets. OpenFlow is like an x86 instruction set for the network, upon which layers of software can be built.

In an OpenFlow network, the various control plane functions of an L2 switch — Spanning Tree Protocol, MAC address learning, etc. — are determined by server software rather than switch firmware.
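To make the ‘instruction set’ analogy concrete, here is a toy flow table sketched in Python: each entry pairs a match on header fields with a priority and an action list, and a table-miss entry punts unmatched packets to the controller. This only illustrates the idea; it is not the actual OpenFlow wire protocol:

```python
# Toy model of an OpenFlow-style flow table: the controller installs
# entries; the switch only performs lookup and executes the actions.
flow_table = [
    {"priority": 200, "match": {"ip_dst": "10.0.0.5", "tcp_dst": 80},
     "actions": ["output:3"]},
    {"priority": 100, "match": {"ip_dst": "10.0.0.5"},
     "actions": ["set_vlan:42", "output:2"]},
    {"priority": 0,   "match": {},           # table-miss entry
     "actions": ["controller"]},             # punt to the controller
]

def lookup(packet: dict) -> list:
    """Return the action list of the highest-priority matching entry."""
    matching = [e for e in flow_table
                if all(packet.get(k) == v for k, v in e["match"].items())]
    return max(matching, key=lambda e: e["priority"])["actions"]

print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # ['output:3']
print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 22}))  # ['set_vlan:42', 'output:2']
print(lookup({"ip_dst": "192.0.2.1"}))                # ['controller']
```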

Today, the OpenFlow protocol has moved out of academia and is driven by the Open Networking Foundation, a nonprofit industry organization whose members include many major networking equipment vendors and chip technology providers, and whose board comprises some of the largest network operators in the world, like Google, Microsoft, Yahoo, Facebook, Deutsche Telekom and Verizon.

Most current OpenFlow solutions incorporate a three-layer architecture, where the first layer consists of the all-important OpenFlow-enabled Ethernet switches. Typically, these are physical Ethernet switches that have the OpenFlow feature enabled. We’ve also seen OpenFlow-enabled hypervisor/software switches and OpenFlow-enabled routers. More devices are certainly coming.

There are two layers of server-side software: an OpenFlow Controller and OpenFlow software applications built on top of the Controller.

The Controller is a platform that speaks southbound directly with the switches using the OpenFlow protocol. Northbound, the Controller provides a number of functions for the OpenFlow software applications — these include marshalling the switch resources into a unified view of the network and providing coordination and common libraries to the applications.

At the top layer, the OpenFlow software applications implement the actual control functions for the network, such as switching and routing. The applications are simply software written on top of the unified network view and common libraries provided by the Controller. Thus, those applications can focus on implementing a particular control algorithm and then can leverage the OpenFlow layers below it to instantiate that algorithm in the network.

This three-layer OpenFlow Architecture should feel very familiar to software architects. For example, consider the Web application server architecture: applications sitting on top of a Web application server sitting on top of a database layer. Each of the lower layers presents an abstraction/API upward that simplifies the design of the layers above it.

The big picture is that OpenFlow and the larger movement in the networking industry called “Software-Defined Networking” promise true disruption because they enable rapid innovation — new networking functionality implemented as a combination of software applications and programmable devices, effectively bypassing the multi-year approval/implementation stages of traditional networking protocols. This acceleration is possible because of the layered design of the software/hardware architecture.

Categories: Networking, Technology Tags:

Impact of TCP offload and ‘Receive Side Scaling’ on traffic handling

March 9th, 2010 No comments
 

While doing a performance test in one of our customer environments, we observed the impact of the TCP offload and “Receive Side Scaling” (RSS) settings of the network interface cards of Windows web servers on traffic handling.

Setup:

1. 2x Mercury LoadRunner generators hitting the customer’s public URL

2. Served by 3x Windows 2003 SP2 servers running IIS6

3. Load balanced by a Cisco CSS11503 to the web farm.

 

The CPU performance graph of the web servers with TCP offload and RSS enabled on the internet-facing (FRONT) interface:
image1-with-tcp-offload-enabled

 

Similarly, an older graph shows even more clearly that traffic alternates from one web server to another:

image1-1-with-TCP-offloading-enabled

 

Most interesting, right!?

What makes this traffic alternate, given that the load balancer has been set up to distribute the load evenly across the farm, and each LoadRunner vuser clears its cookies and session cache after each request?

We then stumbled upon this read (knowing that TCP offload to the network card is a classic one, but still):
http://blogs.msdn.com/psssql/archive/2010/02/21/tcp-offloading-again.aspx

And we found that with TCP offload and RSS disabled, the load is spread more evenly across the web farm:

 image2-with-tcp-offload-disabled

I find this pretty cool.

Any comments?

 

CA will not start… What do you mean, cannot download CRL…

January 20th, 2010 3 comments

As part of my work I was installing a Microsoft PKI infrastructure with two tiers: a root CA and an issuing CA.

Since the root CA is in another domain than the issuing CA, it took some fiddling and tweaking with my CDP and AIA extensions, but that is another blog post altogether.

I knew I was in for some fun when the following happened:

  • I installed my Issuing CA and generated the certificate request
  • I issued the request to my Root CA and generated the Issuing CA certificate
  • I tried to install the Issuing CA certificate and got the following error:
Cannot verify certificate chain. Do you wish to ignore the error and continue? The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2168885613)

My first reaction was to call one of the network guys and tell him that I needed HTTP access from the Issuing CA to the CDP location. But while on the phone, I decided to give it a try, and to my surprise I was actually able to manually pull down the CRL.

Intrigued, I decided to check a few things:

  • I could download the CRL from both CDP locations with Internet Explorer
  • I could open the downloaded CRLs
  • I could telnet to port 80 of both webservers
  • I could telnet to port 80, manually issue the GET /crl/CRLname.crl HTTP/1.0 command, and get data back

OK, what is going on here… Let’s open PKI View, which is included in Windows 2008 and Vista and can be downloaded for Windows 2000 and 2003.

It seemed that PKI View was in agreement; it too could not download the CRL from the CDP locations.

PKI view shows "Unable To Download" for both CDP locations

This sent me on a wild goose chase, until I found this tip:

But, as stated, I would use certutil to get the “best” answer on how my configuration is doing.
Certutil -verify -urlfetch “certfile.cer” will check *every* CDP and AIA URL (including OCSP) and tell you how they are all doing *at that specific instance in time*, since it goes to the URLs immediately.
Brian

I exported the Issuing CA certificate from the certificate database of the Root CA and ran the command against it, and this is what I found:

E:\>certutil -verify -urlfetch <certfile>.cer
Issuer:
CN=Root CA
Subject:
CN=Issuing CA
Cert Serial Number: 115d5f6400020000000b
<snip>

—————-  Certificate AIA  —————-
Verified “Certificate (0)” Time: 0
[0.0] http://IIS1.domain1.local/crl/Root-CA.crt

Verified “Certificate (0)” Time: 0
[1.0] http://IIS2.domain1.local/crl/Root-CA.crt

—————-  Certificate CDP  —————-
Wrong Issuer “Base CRL (13)” Time: 0
[0.0] http://IIS1.domain1.local/crl/Root-CA.crl

Wrong Issuer “Base CRL (13)” Time: 0
[1.0] http://IIS2.domain1.local/crl/Root-CA.crl

<snip>
E:\>

So while PKI View and the other error messages I was getting all pointed to the most common cause, it turned out that the CRL did get downloaded, but was not cryptographically related to what the system believes is the Root CA certificate.

Root cause

Inspection of the generated CRLs and the installed Root certificates showed what had caused the problem. In order to test the CDP extensions, I had reissued the Root CA certificate, leaving the Root CA with three active certificates, each with a different key.

This CA has three CA certificates

When validating the Issuing CA certificate, validation would end at the last certificate issued; however, the CA still signed its CRLs with the key pair of the first certificate.
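The failure mode can be captured in a small toy model (plain Python, purely illustrative; real validation of course compares cryptographic key identifiers, not labels): chain validation selects the newest Root CA certificate, while the CRL is still signed with the oldest key:

```python
# Toy model: a CA that has re-issued its certificate holds several key
# pairs, but keeps signing CRLs with its original key. Chain validation
# picks the newest CA certificate, so the CRL no longer matches it.
ca_certs = [  # one entry per (re-)issued Root CA certificate
    {"serial": 1, "key_id": "AKI-1"},
    {"serial": 2, "key_id": "AKI-2"},
    {"serial": 3, "key_id": "AKI-3"},  # newest; used for chain validation
]
crl = {"signed_with_key_id": "AKI-1"}  # CRL still signed with the first key

def crl_matches_validation_cert(ca_certs, crl):
    """True if the CRL is signed with the key of the CA certificate that
    chain validation would select (here: the one issued last)."""
    validation_cert = max(ca_certs, key=lambda c: c["serial"])
    return crl["signed_with_key_id"] == validation_cert["key_id"]

print(crl_matches_validation_cert(ca_certs, crl))  # False -> "Wrong Issuer"
```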

I guess there is nothing left for me to do but reinstall the entire chain.