
    Performance and Scalability of

    Microsoft SQL Server on VMware vSphere 4

WHITE PAPER


    VMware white paper

Table of Contents

Introduction 3
Highlights 3
Performance Test Environment 4
Workload Description 4
Hardware Configuration 5
Server Hardware Description 5
Storage Hardware and Configuration 5
Client Hardware 5
Software Configuration 6
Software Versions 6
Storage Layout 6
Benchmark Methodology 7
Single Virtual Machine Scale Up Tests 7
Multiple Virtual Machine Scale Out Tests 7
Experiments with vSphere 4: Features and Enhancements 7
Performance Results 8
Single Virtual Machine Performance Relative to Native 8
Multiple Virtual Machine Performance and Scalability 9
Performance Impact of Individual Features 10
Virtual Machine Monitor 10
Large Pages 11
Performance Impact of vSphere Enhancements 11
Virtual Machine Monitor 11
Storage Stack Enhancements 11
Network Layer 12
Resource Management 12
Conclusion 13
Disclaimers 13
Acknowledgments 13
About the Author 13
References 14


Introduction

VMware vSphere 4 contains numerous performance-related enhancements that make it easy to virtualize a resource-heavy database with minimal impact to performance. The improved resource management capabilities in vSphere facilitate more effective consolidation of multiple SQL Server virtual machines (VMs) on a single host without compromising performance or scalability. Greater consolidation can significantly reduce the cost of physical infrastructure and of licensing SQL Server, even in smaller-scale environments.

This paper describes a detailed performance analysis of Microsoft SQL Server 2008 running on vSphere. The performance test places a significant load on the CPU, memory, storage, and network subsystems. The results demonstrate efficient and highly scalable performance for an enterprise database workload running on a virtual platform.

To demonstrate the performance and scalability of the vSphere platform, the test:

• Measures performance of SQL Server 2008 in an 8 virtual CPU (vCPU), 58GB virtual machine using a high-end OLTP workload derived from TPC-E.¹

• Scales the workload, database, and virtual machine resources from 1 vCPU to 8 vCPUs (scale up tests).

• Consolidates multiple 2 vCPU virtual machines from 1 to 8 virtual machines, effectively overcommitting the physical CPUs (scale out tests).

• Quantifies the performance gains from some of the key new features in vSphere.

The following metrics are used to quantify performance:

• Single virtual machine OLTP throughput relative to native (physical machine) performance in the same configuration.

• Aggregate throughput in a consolidation environment.

Highlights

The results show vSphere can virtualize large SQL Server deployments with performance comparable to that of a physical environment. The experiments show that:

• SQL Server running in a 2 vCPU virtual machine performed at 92 percent of a physical system booted with 2 CPUs.

• An 8 vCPU virtual machine achieved 86 percent of physical machine performance.

The statistics in Table 1 demonstrate the resource-intensive nature of the workload.

    Table 1. Comparison of workload profiles for physical machine and virtual machine in 8 CPU configuration

Metric                                        Physical Machine      Virtual Machine
Throughput in transactions per second*        3557                  3060
Average response time of all transactions**   234 ms                255 ms
Disk I/O throughput (IOPS)                    29 K                  25.5 K
Disk I/O latency                              9 ms                  8 ms
Network packets received                      10 K/sec              8.5 K/sec
Network packets sent                          16 K/sec              8 K/sec
Network bandwidth received                    118 Mb/sec            10 Mb/sec
Network bandwidth sent                        123 Mb/sec            105 Mb/sec

¹ See disclaimer on page 13.

*The workload consists of a mix of 10 transactions. The metric reported is the aggregate of all transactions.

**Average of the response times of all 10 transactions in the workload.


In the consolidation experiments, the workload was run on multiple 2 vCPU SQL Server virtual machines:

• Aggregate throughput is shown to scale linearly until physical CPUs are fully utilized.

• When physical CPUs are overcommitted, vSphere evenly distributes resources to virtual machines, which ensures predictable performance under heavy load.

Performance Test Environment

This section describes the workload's characteristics, the hardware and software configurations, and the testing methodology. The same components were used for both the physical machine and virtual machine experiments.

Workload Description

The workload used in these experiments is modeled on the TPC-E benchmark. This workload will be referred to as the Brokerage workload. The Brokerage workload is a non-comparable implementation of the TPC-E business model.²

The TPC-E benchmark uses a database to model a brokerage firm with customers who generate transactions related to trades, account inquiries, and market research. The brokerage firm in turn interacts with financial markets to execute orders on behalf of the customers and updates relevant account information.

The benchmark is scalable, meaning that the number of customers defined for the brokerage firm can be varied to represent the workloads of different-sized businesses. The benchmark defines the required mix of transactions the benchmark must maintain. The TPC-E metric is given in transactions per second (tps). It specifically refers to the number of Trade-Result transactions the server can sustain over a period of time.

The Brokerage workload consists of ten transactions that have a defined ratio of execution. Of these transaction types, four update the database, and the others are read-only. The I/O load is quite heavy and consists of small access sizes. The disk I/O accesses consist of random reads and writes with a read-to-write ratio of 7:1.

The workload is sized by the number of customers defined for the brokerage firm. To put the storage requirements in context, at the 185,000-customer scale in the 8 CPU experiments, approximately 1.5 terabytes of storage were used for the database.

In terms of its impact on the system, this workload spends considerable execution time in the operating system kernel context, which incurs more virtualization overhead than user-mode code. This workload was also observed to have a large cache-resident working set and is very sensitive to the hardware translation lookaside buffer (TLB). Efficient virtualization of kernel-level activity within the guest operating system code and intelligent scheduling decisions are critical to performance with this workload.

² See disclaimer on page 13.


Hardware Configuration

The experiments were run in the VMware performance laboratory. The test bed consisted of a server running the database software (in both native and virtual environments), a storage backend with sufficient spindles to support the storage bandwidth requirements, and a client machine running the benchmark driver. Figure 1 is a schematic of the test bed configuration:

Figure 1. Test bed configuration: a two-socket, 8-core AMD server connected to 4-way and 8-way Intel clients over 1Gb direct-attach networking, and to an EMC CX3-40 array (180 drives) through a 4Gb/s Fibre Channel switch

    Server Hardware Description

    Table 2. Server hardware

Dell PowerEdge 2970
Dual-socket, quad-core server
2.69GHz AMD Opteron processor 2384
64GB memory

    Storage Hardware and Configuration

    Table 3. Storage hardware

EMC CLARiiON CX3-40 array
180 15K RPM disk drives
LUN configuration for the 8 vCPU virtual machine: 32 data + 1 log LUN in both Native and ESX
LUN configuration for multiple virtual machines: 4 data + 1 log LUN in each VM

    Client Hardware

The single virtual machine tests were run with a single client. An additional client was used for the multi-virtual machine tests because a single client benchmark driver could drive only four virtual machines. Table 4 gives the hardware configuration of the two client systems:

    Table 4. Client hardware

Dell PowerEdge 1950                   HP ProLiant DL580 G5
Single-socket, quad-core server       Dual-socket, quad-core server
2.66GHz Intel Xeon X5355 processor    2.93GHz Intel Xeon X7350 processor
16GB memory                           64GB memory

    Software Configuration



    Software Versions

The operating system software, Microsoft SQL Server 2008, and the benchmark driver programs were identical in both native and virtual environments.

    Table 5. Software Versions

Software                                Version
VMware ESX                              4.0 RTM, build #164009
Operating system (guest and native)     Microsoft Windows Server 2008 Enterprise (Build 6001: Service Pack 1)
Database management software            Microsoft SQL Server 2008 Enterprise Edition 10.0.1600.22

    Storage Layout

Measures were taken to ensure there were no hot spots in the storage layout and that the database was uniformly laid out over all available spindles. Multiple five-disk-wide RAID-0 LUNs were created on the EMC CLARiiON CX3-40. Within Windows, these LUNs were converted to dynamic disks with striped volumes created across them. Each of these volumes was used as a data file for the database.

The same database was used for the experiments comparing native and virtual environments, to ensure that the configuration of the storage subsystem was identical in both cases. This was achieved by importing the storage LUNs as Raw Device Mapping (RDM) devices in ESX. A schematic of the storage layout is in Figure 2.

Figure 2. Storage layout: LUNs presented to ESX as RDMs and combined into striped dynamic disks within Windows Server 2008

    Benchmark Methodology



    Single Virtual Machine Scale Up Tests

The tests include native and virtual experiments at 1, 2, 4, and 8 CPUs. For the native experiments, the bcdedit Windows utility was used to specify the number of processors to be booted. The size of the database, the memory size, and the SQL Server buffer cache were carefully scaled to ensure the workload profile was unchanged, while getting the best performance from the system for each data point. All the scale up experiments ran at 100 percent CPU utilization in native and virtual configurations. For example, at 2 vCPUs, the workload would saturate both virtual CPUs in the guest. Table 6 describes the configuration used for each experiment.

    Table 6. Brokerage workload configuration for scale up tests

Number of CPUs   Database Scale          Number of User   Virtual Machine/   SQL Server Buffer
                 (Number of Customers)   Connections      Native Memory      Cache Size
1                30,000                  65               9GB                7GB
2                60,000                  120              16GB               14GB
4                115,000                 220              32GB               28GB
8                185,000                 440              58GB               53.5GB
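For the native runs, limiting the number of booted processors with bcdedit looks roughly like this on Windows Server 2008 (standard bcdedit syntax; verify against your Windows version before use):

```shell
# Boot Windows with only 2 logical processors (run from an elevated
# command prompt; the change takes effect on the next reboot)
bcdedit /set numproc 2

# Remove the cap and boot with all available processors again
bcdedit /deletevalue numproc
```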

Multiple Virtual Machine Scale Out Tests

The experiment includes identical 2 vCPU virtual machines with the same load applied to each. In order to provide low disk latencies for the high volume of IOPS from each virtual machine, the data disks for individual VMs are configured on exclusive spindles, and two virtual machines shared one log disk. Table 7 describes the configuration used for each virtual machine in the multi-virtual machine experiments.

    Table 7. Brokerage workload configuration for each 2 vCPU vir tual machine

Database Scale   Number of User Connections   Virtual Machine Memory   SQL Server Buffer Cache
20,000           35                           7GB                      5GB

    Experiments with vSphere 4: Features and Enhancements

The 4 vCPU configuration described above in Table 6 was used to obtain the results with ESX features and enhancements.


Performance Results

The sections below describe the performance results of experiments with the Brokerage workload on SQL Server in native and virtual environments. Single and multiple virtual machine results are examined, and then results are detailed that show improvements due to specific ESX 4.0 features.

Single Virtual Machine Performance Relative to Native

How vSphere 4 performs and scales relative to native is shown in Figure 3 below. The results are normalized to the throughput observed in a 1 CPU native configuration:

    Figure 3. Scale-up performance in vSphere 4 compared with native

The graph demonstrates the 1 and 2 vCPU virtual machines performing at 92 percent of native. The 4 and 8 vCPU virtual machines achieve 88 and 86 percent of the non-virtual throughput, respectively. At 1, 2, and 4 vCPUs on the 8 CPU server, ESX is able to effectively offload certain tasks, such as I/O processing, to idle cores. Having idle processors also gives ESX resource management more flexibility in making virtual CPU scheduling decisions. However, even with 8 vCPUs on a fully committed system, vSphere still delivers excellent performance relative to the native system.

The scaling in the graph represents the throughput as all aspects of the system are scaled, such as the number of CPUs, the size of the benchmark database, and SQL Server buffer cache memory. Table 8 shows ESX scaling comparably to the native configuration's ability to scale performance.

    Table 8. Scale up performance

Comparison                   Performance Gain
Native 8 CPU vs. 4 CPU       1.71x
vSphere 8 vCPU vs. 4 vCPU    1.67x



Multiple Virtual Machine Performance and Scalability

These experiments demonstrate that multiple heavy SQL Server virtual machines can be consolidated to achieve scalable aggregate throughput with minimal performance impact to individual virtual machines. Figure 4 shows the total benchmark throughput as eight 2 vCPU SQL Server virtual machines running the Brokerage workload are added to the single 8-way host:

    Figure 4. Consolidation of multiple SQL Server virtual machines

Each 2 vCPU virtual machine consumes about 15 percent of the total physical CPUs, 5GB of memory in the SQL Server buffer cache, and performs about 3600 I/Os per second (IOPS).

As the graph demonstrates, the throughput increases linearly as up to four virtual machines (8 vCPUs) are added. As the physical CPUs were overcommitted by increasing the number of virtual machines from four to six (a factor of 1.5), the aggregate throughput increased by a factor of 1.4.
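The scale-out efficiency implied by those two factors can be checked with a quick calculation (the 1.5x resource factor and 1.4x throughput factor are the numbers reported above):

```shell
# Going from 4 to 6 VMs multiplies offered load by 1.5x while the
# measured aggregate throughput grows by 1.4x; their ratio is the
# scale-out efficiency under CPU overcommitment.
awk 'BEGIN {
    vm_factor  = 6 / 4    # 1.5x more virtual machines
    thr_factor = 1.4      # observed aggregate throughput gain
    printf "scale-out efficiency: %.0f%%\n", 100 * thr_factor / vm_factor
}'
# prints: scale-out efficiency: 93%
```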

Adding eight virtual machines saturates the physical CPUs on this host. ESX 4.0 now schedules 16 vCPUs onto eight physical CPUs, yet the benchmark aggregate throughput increases a further 5 percent, as the ESX scheduler is able to deliver more throughput using the few idle cycles left over in the six virtual machine configuration. Figure 5 shows the ability of ESX to fairly distribute resources in the eight virtual machine configuration:

    Figure 5. Overcommit fairness for 8 virtual machines



Table 9 highlights the resource-intensive nature of the eight virtual machines that were used for the scale out experiments:

    Table 9. Aggregate system metrics for eight SQL Server virtual machines

Aggregate throughput in transactions per second: 2760
Host CPU utilization: 100%
Disk I/O throughput (IOPS): 23K
Network packet rate: 8 K/s receive, 75 K/s send
Network bandwidth: 9 Mb/sec receive, 98 Mb/sec send

Performance Impact of Individual Features

In this section, we quantify the performance gains from key existing ESX features, such as the choice of virtual machine monitor and large page support.

    Virtual Machine Monitor

Modern x86 processors from AMD and Intel have hardware support for CPU instruction set and memory management unit (MMU) virtualization. More details on these technologies can be found in [1], [2], and [3]. ESX 3.5 and vSphere effectively leverage these features to reduce virtualization overhead and increase performance as compared to native for most workloads. Figure 6 details the improvements due to AMD's hardware MMU as supported by VMware. Results are shown as compared to VMware's all-software approach, binary translation (BT), and a mixed mode of AMD-V and vSphere's software MMU.

    Figure 6. Benefits of hardware assistance for CPU and memory virtualization

In these experiments, enabling AMD-V results in an 18 percent improvement in performance compared with BT. Most of the overhead in binary translation of this workload comes from the cost of emulating the privileged, high-resolution timer-related calls used by SQL Server. RVI provides another 4 percent improvement in performance. When running on processors that include support for AMD-V with RVI, ESX chooses RVI as the default monitor type for Windows Server 2008 guests.



A brief description of these enhancements and features is given below:

    PVSCSI: A New Paravirtualized SCSI Adapter and Driver

The PVSCSI driver is installed in the guest operating system as part of VMware Tools and shares I/O request and completion queues with the hypervisor. The tighter coupling enables the hypervisor to poll for I/O requests from the guest and complete requests using an adaptive interrupt coalescing mechanism. This batching of I/O requests and completion interrupts significantly improves the CPU efficiency for handling large volumes of I/Os per second (IOPS).

A 6 percent improvement in throughput with the PVSCSI driver and adapter is observed over the LSI Logic parallel adapter and guest driver.

    I/O Concurrency Improvements

In previous releases of ESX, I/O requests issued by the guest are routed to the VMkernel via the virtual machine monitor (VMM). Once the requests reach the VMkernel, they execute asynchronously. The execution model in ESX 4.0 allows the VMM to asynchronously handle the I/O requests, allowing vCPUs in the guest to execute other tasks immediately after initiating an I/O request. This improved I/O concurrency model is designed around two ring structures per adapter, which are shared by the VMM and the VMkernel.

The workload benefited significantly from this optimization and saw a 9 percent improvement in throughput.

Virtual Interrupt Coalescing

In order to further reduce the CPU cost of I/O in high-IOPS workloads, ESX 4.0 implements virtual interrupt coalescing to allow batching of I/O completions to happen at the guest level. This is similar to the physical interrupt coalescing that is provided in many Fibre Channel host bus adapters.

With this optimization, the disk I/O interrupt rate within a Windows Server 2008 guest was half of that within the native operating system in the 8 vCPU configuration.

    Interrupt Delivery Optimizations

In ESX 3.5, virtual interrupts for Windows guests were always delivered to vCPU 0. vSphere posts interrupts to the vCPU that initiated the I/O, allowing for better scalability in situations where vCPU 0 could become a bottleneck. ESX 4.0 uniformly distributes virtual I/O interrupts to all vCPUs in the guest for the Brokerage workload.

    Network Layer

Network transmit coalescing between the client and server benefited this benchmark by two percent on vSphere. Transmit coalescing has been available for the enhanced vmxnet networking adapter since ESX 3.5, and it was further enhanced in vSphere. Dynamic transmit coalescing was also added to the E1000g adapter.

In these experiments, the parameter Net.vmxnetThroughputWeight, available through the Advanced Software Settings configuration tab in the VI Client, was changed from the default value of 0 to 128, thus favoring transmit throughput over response time.

Setting this parameter could increase network transmit latencies, so its impact must be tested before it is set. No increase in transaction response times was observed for the Brokerage workload due to this setting.
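Advanced options like this can also be changed from the ESX service console with esxcfg-advcfg. The option path below is inferred from the parameter name given in the text, so confirm it on your host before applying:

```shell
# Inspect the current vmxnet transmit coalescing weight (default 0)
esxcfg-advcfg -g /Net/vmxnetThroughputWeight

# Favor transmit throughput over response time, as in these experiments
esxcfg-advcfg -s 128 /Net/vmxnetThroughputWeight
```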

    Resource Management

vSphere includes several enhancements to the CPU scheduler that help significantly with single virtual machine as well as multiple virtual machine performance and scalability for this workload.

    Further Relaxed Co-scheduling & Removal of Cell

Relaxed co-scheduling of vCPUs was allowed in ESX 3.5 but has been further improved in ESX 4.0. In previous versions of ESX, the scheduler would acquire a lock on a group of physical CPUs (a "cell") within which the vCPUs of a virtual machine were to be scheduled. In ESX 4.0 this has been replaced with finer-grained locking, reducing scheduling overheads in cases where frequent scheduling decisions are needed. Both these enhancements greatly help the scalability of multi-vCPU virtual machines.


    Enhanced Topology / Load Aware Scheduling

A goal of the vSphere CPU resource scheduler is to make CPU migration decisions that maintain processor cache affinity while maximizing last level cache capacity.

Cache miss rates can be minimized by always scheduling a vCPU on the same core. However, this can result in delays in scheduling certain tasks as well as lower CPU utilization. By making intelligent migration decisions, the scheduler in ESX 4.0 strikes a good balance between high CPU utilization and low cache miss rates.

The scheduler also takes into account the processor cache architecture. This is especially important in light of the differences between the various processor architectures in the market. Migration algorithms have also been enhanced to take into account the load on the vCPUs and physical CPUs.

Conclusion

The experiments in this paper demonstrate that vSphere is optimized out-of-the-box to handle the CPU, memory, and I/O requirements of most common SQL Server database configurations. The paper also clearly quantified the performance gains from the improvements in vSphere. These improvements can result in better performance, higher consolidation ratios, and lower total cost for each workload.

The results show that performance is not a barrier to configuring large multi-CPU SQL Server instances in virtual machines or to consolidating multiple virtual machines on a single host to achieve impressive aggregate throughput. The small difference in performance observed between equivalent deployments on physical and virtual machines, without employing sophisticated tuning at the vSphere layer, indicates that the benefits of virtualization are easily available to most SQL Server installations.

Disclaimers

All data is based on in-lab results with the RTM release of vSphere 4. Our workload was a fair-use implementation of the TPC-E business model; these results are not TPC-E compliant and are not comparable to official TPC-E results. TPC Benchmark and TPC-E are trademarks of the Transaction Processing Performance Council.

The throughput here is not meant to indicate the absolute performance of Microsoft SQL Server 2008, nor to compare its performance to that of another DBMS. SQL Server was used to place a DBMS workload on ESX, and to observe and optimize the performance of ESX.

The goal of the experiment was to show the relative-to-native performance of ESX and its ability to handle a heavy database workload. It was not meant to measure the absolute performance of the hardware and software components used in the study. The throughput of the workload used does not constitute a TPC benchmark result.

Acknowledgments

The author would like to thank Aravind Pavuluri, Reza Taheri, Priti Mishra, Chethan Kumar, Jeff Buell, Nikhil Bhatia, Seongbeom Kim, Fei Guo, Will Monin, Scott Drummonds, Kaushik Banerjee, and Krishna Raja for their detailed reviews and contributions to sections of this paper.

About the Author

Priya Sethuraman is a Senior Performance Engineer in VMware's Core Performance Group, where she focuses on optimizing the performance of applications in a virtualized environment. She is passionate about performance and the opportunity to make software run efficiently on a large system. She presented her work on Enterprise Application Performance and Scalability on ESX at VMworld Europe 2009.

Priya received her MS in Computer Science from the University of Illinois at Urbana-Champaign. Prior to joining VMware, she had extensive experience in database performance on UNIX platforms.


References

[1] AMD Virtualization (AMD-V) Technology. http://www.amd.com/us-en/0,,3715_15781_15785,00.html. Retrieved April 14, 2009.

[2] Performance Evaluation of AMD RVI Hardware Assist. http://www.vmware.com/pdf/RVI_performance.pdf.

[3] Performance Evaluation of Intel EPT Hardware Assist. http://www.vmware.com/pdf/Perf_ESX_Intel-EPT-eval.pdf.

[4] Large Page Performance. Technical report, VMware, 2008. http://www.vmware.com/resources/techresources/1039.

[5] Irfan Ahmad, Ajay Gulati, and Ali Mashtizadeh. Improving Performance with Interrupt Coalescing for Virtual Machine Disk I/O in VMware ESX Server. Second International Workshop on Virtualization Performance: Analysis, Characterization, and Tools (VPACT '09), held with the IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2009.


VMware, Inc. 3401 Hillview Ave Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com

    Copyright 2009 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual

    property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents.

    VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other

    marks and names mentioned herein may be trademarks of their respective companies. VMW_09Q2_WP_VSPHERE_SQL_P15_R1