Archive

Posts Tagged ‘VMWare’

ESXi tcpdump

December 7th, 2010 3 comments

For troubleshooting purposes it can sometimes be useful to run a tcpdump on the ESXi host itself instead of from inside a VM.

After changing the security settings on the dvPortGroup to explicitly allow promiscuous mode, you can connect a vmkernel port to the dvPortGroup and run the capture from the command line:

tcpdump-uw -i <vmk#> -n -w /vmfs/volumes/<your_datastore>/<your_file_name>.cap

The help output for tcpdump-uw is as follows:


~ # tcpdump-uw --help
tcpdump-uw version 4.0.0
libpcap version 1.0.0
Usage: tcpdump-uw [-aAdDefIKlLnNOpqRStuUvxX] [ -B size ] [ -c count ]
                [ -C file_size ] [ -E algo:secret ] [ -F file ] [ -G seconds ]
                [ -i interface ] [ -M secret ] [ -r file ]
                [ -s snaplen ] [ -T type ] [ -w file ] [ -W filecount ]
                [ -y datalinktype ] [ -z command ] [ -Z user ]
                [ expression ]

Control+C to interrupt the capture ;-)
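
If you only need a subset of the traffic, tcpdump-uw accepts the usual capture filter expressions plus the size and count options listed in the help above. A minimal sketch, assuming vmk0 is the vmkernel port you connected to the dvPortGroup, datastore1 is one of your datastores and 10.0.0.50 is the machine you are interested in (all three are placeholder values):

~ # esxcfg-vmknic -l
~ # tcpdump-uw -i vmk0 -n -s 1514 -c 1000 -w /vmfs/volumes/datastore1/web.cap host 10.0.0.50 and port 80

The first command lists the vmkernel NICs so you can double-check which vmk# to capture on. In the second, -s 1514 makes sure full Ethernet frames are captured in case the default snaplen is smaller, -c 1000 stops the capture after 1000 packets, and the trailing expression limits it to HTTP traffic to or from 10.0.0.50. The resulting .cap file can then be copied off the datastore and opened in Wireshark.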

Categories: VMWare, vSphere 4 Tags: , ,

BlackhatEU : Virtual Forensics

April 15th, 2010 No comments

By Christiaan Beek

[Photo from the isfullofcrap Flickr photo stream, Creative Commons License]

What are the challenges when you have to do forensics on a virtual environment?
•    What are the tools available?
•    Are the tools forensically sound?
•    Where is the data?
•    Who owns the data?
•    What forensic techniques do we use?
•    How to acquire data from the cloud?

Citrix is a nightmare for forensic investigators: there is no personal hard disk to investigate, only a personal profile, which does not contain much data.
Read more…

Blackhat talk: Cloudburst – VMware guest-to-host escapes by Kostya Kortchinsky

July 31st, 2009 No comments

Kostya started off by telling everybody: “I’m not a virtualisation expert”.

He then explained how he built up his Cloudburst exploit. He focused on the emulated guest OS devices, because these devices are present in all VMware products, run on the host, can be accessed from the guest, are written in C/C++ and parse fairly complex data.

Read more…

vSphere 4 Lab Manager released

July 14th, 2009 No comments

VMware has released Lab Manager for vSphere 4. http://www.vmware.com/products/labmanager/

VMware vCenter Lab Manager is the ideal solution for IT organizations that want to provide self-service provisioning and management capabilities to internal teams. Policy-based access control reduces the administrative burden on IT, lowers infrastructure management costs and empowers project teams to deliver applications more quickly and with greater agility.

Deliver Higher Service Levels and Lower Infrastructure Costs

Lab Manager offers unique capabilities to simplify management of the internal cloud for dev/test:

• Self Service Portal – Provides on-demand access to a library of virtual machine configurations for end users while reducing time-consuming provisioning tasks for IT by 95%.
• Automated Resource Management – Allows dynamic allocation of resources in a multi-team environment, enforces quotas and access rights, and reclaims unused infrastructure services.
• Enterprise Scalability – Provides long-term return on investment with a scalable architecture for worldwide deployment, best-in-class performance and seamless integrations with in-house and 3rd-party solutions.

Categories: VMWare, vSphere 4 Tags: , ,

ESX Cluster Stretched over two DC’s…

July 2nd, 2009 No comments

While doing some research I found this article on the pros and cons of a stretched ESX cluster across two datacenters.

A stretched cluster is the practice of having ESX member servers in a cluster that are geographically separated. The reason this is generally done is to provide the ability to dynamically move workloads from one datacenter to another. Often, the customer is also considering it for disaster recovery purposes (“I’ll just VMotion in case of a disaster”). Can this be done? ABSOLUTELY – but it should not be considered lightly.

More here: http://virtualgeek.typepad.com/virtual_geek/2008/06/the-case-for-an.html

Categories: VMWare Tags: , , ,

VMware breaks the 50,000 SPECweb2005 barrier using VMware vSphere 4

June 17th, 2009 No comments

Looking forward to seeing whether it delivers on its performance promises – in the meantime, here is some interesting reading. You may want to have a look at this white paper first: What’s New in VMware vSphere™ 4: Performance Enhancements

VMware breaks the 50,000 SPECweb2005 barrier using VMware vSphere 4

VMware has achieved a SPECweb2005 benchmark score of 50,166 using VMware vSphere 4, a 14% improvement over the world record results previously published on VI3. Our latest results further strengthen the position of VMware vSphere as an industry leader in web serving, thanks to a number of performance enhancements and features that are included in this release. In addition to the measured performance gains, some of these enhancements will help simplify administration in customer environments.

The key highlights of the current results include:

1. Highly scalable virtual SMP performance.
2. Over 25% performance improvement for the most I/O intensive SPECweb2005 support component.
3. Highly simplified setup with no device interrupt pinning.

Let me briefly touch upon each of these highlights.

Virtual SMP performance

The improved scheduler in ESX 4.0 enables usage of large symmetric multiprocessor (SMP) virtual machines for web-centric workloads. Our previous world record results published on ESX 3.5 used as many as fifteen uniprocessor (UP) virtual machines. The current results with ESX 4.0 used just four SMP virtual machines. This is made possible by several improvements that went into the CPU scheduler in ESX 4.0.

From a scheduler perspective, SMP virtual machines present additional considerations such as co-scheduling. This is because in the case of an SMP virtual machine, it is important for the ESX scheduler to present the applications and the guest OS running in the virtual machine with the illusion that they are running on a dedicated multiprocessor machine. ESX implements this illusion by co-scheduling the virtual processors of an SMP virtual machine. While the requirement to co-schedule all the virtual processors of a VM was relaxed in previous releases of ESX, the relaxed co-scheduling algorithm has been further refined in ESX 4.0. This means the scheduler has more choices in its ability to schedule the virtual processors of a VM, which leads to higher system utilization and better overall performance in a consolidated environment.

ESX 4.0 has also improved its resource locking mechanism. The locking mechanism in ESX 3.5 was based on the cell lock construct: a cell is a logical grouping of physical CPUs in the system within which all the vCPUs of a VM had to be scheduled. This has been replaced with per-pCPU and per-VM locks. This fine-grained locking reduces contention and improves scalability. All these enhancements enable ESX 4.0 to use SMP VMs and achieve this new level of SPECweb2005 performance.

Very high performance gains for workloads with a large I/O component

I/O intensive applications highlight the performance enhancements of ESX 4.0. These tests show that high-I/O workloads yield the largest gains when upgrading to this release.

In all our tests, we used the SPECweb2005 workload, which measures a system’s ability to act as a web server. It is designed with three workloads to characterize different web usage patterns: Banking (emulates online banking), E-commerce (emulates an e-commerce site) and Support (emulates a vendor support site that provides downloads). The performance score of each of the workloads is measured in terms of the number of simultaneous sessions the system is able to support while meeting the QoS requirements of the workload. The aggregate metric reported by the SPECweb2005 workload normalizes the performance scores obtained on the three workloads.

The following figure compares the scores of the three workloads obtained on ESX 4.0 to the previous results on ESX 3.5. The figure also highlights the percentage improvements obtained on ESX 4.0 over ESX 3.5. We used an HP ProLiant DL585 G5 server with four Quad-Core AMD Opteron processors as the system under test. The benchmark results have been reviewed and approved by the SPEC committee.

[Figure: SPECweb2005 workload scores on ESX 4.0 vs. ESX 3.5]

We used the same HP ProLiant DL585 G5 server and physical test infrastructure in the current as well as the previous benchmark submission on VI3. There were some differences between the two test configurations (for example, ESX 3.5 used UP VMs while SMP VMs were used on ESX 4.0; the ESX 4.0 tests were run on currently available processors that have a slightly higher clock speed). To highlight the performance gains, we will look at the percentage improvements obtained for all three workloads rather than the absolute numbers.

As you can see from the above figure, the biggest percentage gain was seen with the Support workload, which has the largest I/O component. In this test, a 25% gain was seen while ESX drove about 20 Gbps of web traffic. Of the three workloads, the Banking workload has the smallest I/O component, and accordingly had a relatively smaller percentage gain.

Highly simplified setup

ESX 4.0 also simplifies customer environments without sacrificing performance. In our previous ESX 3.5 results, we pinned the device interrupts to make efficient use of hardware caches and improve performance. Binding device interrupts to specific processors is a technique commonly used in SPECweb2005 benchmarking tests to maximize performance. Results published on the http://www.spec.org/osg/web2005 website reveal the complex pinning configurations used by the benchmark publishers in the native environment.
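
For readers who have not seen it, “device interrupt pinning” in a native environment usually means manually binding the NIC’s IRQ to a specific CPU. A rough sketch of what that looks like on a bare-metal Linux host (the interface name and IRQ number are made-up values, purely for illustration):

~ # grep eth0 /proc/interrupts           # find the IRQ used by the NIC – say it turns out to be 59
~ # echo 2 > /proc/irq/59/smp_affinity   # hex bitmask 0x2 binds IRQ 59 to CPU 1

Benchmark publishers typically repeat this for every NIC and queue in the system, which is exactly the kind of hand-tuning that quickly becomes a maintenance burden.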

The highly improved I/O processing model in ESX 4.0 obviates the need to do any manual device interrupt pinning. On ESX, the I/O requests issued by the VM are intercepted by the virtual machine monitor (VMM), which handles them in cooperation with the VMkernel. The improved execution model in ESX 4.0 processes these I/O requests asynchronously, which allows the vCPUs of the VM to execute other tasks.

Furthermore, the scheduler in ESX 4.0 schedules processing of network traffic based on processor cache architecture, which eliminates the need for manual device interrupt pinning. With the new core-offload I/O system and related scheduler improvements, the results with ESX 4.0 compare favorably to ESX 3.5.

Conclusions

These SPECweb2005 results demonstrate that customers can expect substantial performance gains on ESX 4.0 for web-centric workloads. Our past results published on ESX 3.5 showed world record performance in a scale-out (increasing the number of virtual machines) configuration, and our current results on vSphere 4 demonstrate world-class performance while scaling up (increasing the number of vCPUs in a virtual machine). With an improved scheduler that required no fine-tuning for these experiments, VMware vSphere 4 can offer these gains while lowering the cost of administration.

View article…

VMware apologizes for the Hyper-V crashes video

June 15th, 2009 No comments

From: http://www.virtualization.info/2009/06/vmware-apologies-for-hyper-v-crashes.html

When we look at the competition in the IT industry there’s nothing that beats the guerrilla marketing we are experiencing in the virtualization space.

This is perfectly understandable considering that the vendor in control of the hypervisor is able to influence, and in many ways control, all the other companies that provide the other pieces of the computing stack.
For the first time ever the absolute dominance of the OS vendor is threatened by the hypervisor vendor, so the former tries to turn virtualization into a platform feature while the latter tries to impose the technology as absolutely independent.

It’s also true that, compared to ten years ago, the vendors have new tools to spread fear, uncertainty and doubt (FUD) against their competitors: paid bloggers, Twitter, Facebook, YouTube and much more are available to influence prospects and build armies of fanboys who are ready to overreact and defend their beloved products no matter what.

Nowadays it is becoming increasingly common for marketing departments to cross the line.
It’s much less common to see a company publicly apologize for a bad marketing action.

That is the case with VMware, which apologized for distributing a video of Microsoft Hyper-V crashing when its virtual machines were running a certain version of the proprietary VMmark benchmark platform.

The video, which was available here, was produced by the VMware Performance Team and uploaded to YouTube by Scott Drummonds, Technical Marketing Manager at the company.
Even though Drummonds is in the VMware Performance Team, where every aspect of the virtual infrastructure is taken deadly seriously, he didn’t publish any technical information about the test environment.

The lack of details unleashed a number of negative comments, obliging Bruce Herndon, Senior Manager of R&D at VMware, to reveal that VMmark was executed inside Hyper-V virtual machines with unsupported configurations.

At the end of the saga Drummonds had to apologize and Herndon had to admit that:

One of the more interesting emails I received pointed out that it is unreasonable to blame Hyper-V for the collapse of these very large and very busy websites. Hyper-V’s stability issues would bring down individual VMs or small groups when the parent partition blue screened. I think that this is a reasonable observation, so it’s worth including here. I can’t say that Hyper-V was responsible for the MSDN and TechNet crashes. That would be for Microsoft to say, when and if they choose to expose the issue behind the outage.

Of course Microsoft couldn’t be happier to overreact: part 1, part 2, part 3, part 4 and part 5.

Microsoft’s answer to VMware’s Lab Manager?

June 15th, 2009 No comments

From http://www.virtualization.info/2009/06/microsoft-launches-visual-studio-lab.html

The few vendors busy in the virtual lab automation space (which include VMware, Surgient, VMLogix, Skytap and the almost-dead StackSafe) may soon have a big, big problem called Microsoft.

After wasting years not leveraging its huge developer community to spread virtualization into every corner of the world, the company is finally moving on.

Announced in November 2008, the integration between Visual Studio 2010, System Center Virtual Machine Manager (SCVMM) 2008 and Hyper-V 1.0/2.0 for virtual lab automation scenarios is now a reality called Visual Studio 2010 Lab Management.

The product has just entered the beta 1 phase and has the potential to become a huge hit in the .NET world.

[Screenshot: Visual Studio 2010 Lab Management]

more

VMware ESX Timekeeping and Active Directory

June 11th, 2009 No comments

Some nice articles that explain timekeeping on VMware and how to safely virtualize Active Directory on VMware from a timekeeping point of view.

Time synchronisation in Active Directory is particularly important because of Kerberos: if clocks are more than 5 minutes (the default tolerance) out of sync with the Domain Controller, authentication fails. NTP is your friend here.
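
As a rough sketch of what that means in practice (the NTP pool hostnames below are just example values, not a recommendation taken from the articles): point the PDC emulator at a reliable external time source and let the rest of the domain sync from the normal domain hierarchy. On the PDC emulator, from an elevated command prompt:

C:\> w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
C:\> net stop w32time && net start w32time
C:\> w32tm /resync

On the VMware side, make sure each domain controller has only one time-sync mechanism actually driving its clock – either VMware Tools periodic time synchronisation or w32time/NTP, not both – otherwise the clock can be corrected twice and drift back outside the Kerberos tolerance.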