NUMA on Hyper-V 2012 R2


My hosts have 48 GB RAM and 2 physical CPUs, giving me 2 NUMA nodes. I have 5 VMs that need to run on each host: 4 with 9100 MB RAM and 1 with 7000 MB. That should fit, and it does. Prior to migrating to Hyper-V these hosts were running XenServer 5.5, which is not NUMA aware (and possibly had lower memory overhead than Windows anyway), and the VMs all powered on with no problems. They were spanning NUMA nodes, but performance was OK so I didn't care.
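For reference, this is roughly how I've been checking what each host actually exposes. Get-VMHostNumaNode is a standard Hyper-V cmdlet, but I'm going from memory on the exact output property names and units, so treat this as a sketch:

    # Show each NUMA node with its total and available memory.
    # I believe the memory figures are reported in MB; verify on your hosts.
    Get-VMHostNumaNode | Select-Object NodeId, MemoryTotal, MemoryAvailable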

Then I moved a couple of hosts to 2012 (not R2) and ran the VMs in Hyper-V. Because Hyper-V is NUMA aware I thought it would be nice to take advantage of that, so I split the 7000 MB VM into 2 smaller ones and assigned 1 of those plus 2 of the larger VMs to each NUMA node. I assigned the VMs to NUMA nodes using WMI: http://rcmtech.wordpress.com/2013/08/12/set-hyper-v-2012-vm-numa-node/

2012 R2 seems to have changed/removed the ability to configure a VM to sit on a particular NUMA node though, so the above doesn't work any more.
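As far as I can tell, the per-VM knobs that remain in R2 shape the NUMA topology the guest sees rather than pinning the VM to a host node. A minimal sketch of those, assuming the parameter names I remember are right ('VM1' is a placeholder; check Get-Help Set-VMProcessor and Set-VMMemory before relying on this):

    # Cap the virtual NUMA topology so each VM can fit inside one physical node.
    # Parameter names are from memory, not verified against R2 documentation.
    Set-VMProcessor -VMName 'VM1' -MaximumCountPerNumaNode 8
    Set-VMMemory    -VMName 'VM1' -MaximumAmountPerNumaNodeBytes 24GB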

So I'm experimenting with the various "allow NUMA spanning" tickboxes on the host and VMs, with mixed success. I've gone back to 5 VMs, but only 4 will power on; the fifth fails with "not enough memory", yet on the other hosts all 5 are running. If I try that VM again later it might power on, or it might not. It seems pretty random. I've done a lot of testing with various combinations of the "allow NUMA spanning" setting on the VMs and it seems to make no difference. Once I was only able to get 3 VMs to power on...!
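For anyone following along, the host-level switch at least is scriptable. My understanding (which I haven't verified against documentation) is that the Hyper-V Virtual Machine Management service has to be restarted before a change here takes effect:

    # Enable NUMA spanning on the host, then bounce VMMS so it applies.
    # Restarting vmms is disruptive, so do it with the VMs shut down.
    Set-VMHost -NumaSpanningEnabled $true
    Restart-Service vmms
    Get-VMHost | Select-Object NumaSpanningEnabled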

What I assume (hope) is that if the host and VMs are all set to allow NUMA spanning, everything should power on, with the final VM ending up spanning across the 2 nodes if the other 4 have been placed entirely within a node each. That doesn't seem to happen consistently.
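The back-of-envelope numbers support that expectation: 48 GB over 2 nodes is roughly 24576 MB per node, two 9100 MB VMs per node take 18200 MB, leaving about 6376 MB per node before host reserve and per-VM overhead, which is less than the 7000 MB the fifth VM needs, so it can only ever start by spanning. A sketch of that arithmetic (the even split, and ignoring overhead, are my assumptions):

    # Placement arithmetic, assuming the host splits its memory evenly per node.
    $perNodeMB  = 48GB / 2 / 1MB          # ~24576 MB per NUMA node
    $twoLargeMB = 2 * 9100                # two 9100 MB VMs placed on each node
    $leftMB     = $perNodeMB - $twoLargeMB
    "$leftMB MB left per node; the 7000 MB VM can only fit by spanning"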

The hosts that do have all 5 VMs running show 2-3 GB of free RAM.

So, can anyone offer any advice? Am I doing something daft? There must be something going on that I'm not aware of or not taking into account. I don't like inconsistent behaviour.

Thanks in advance!

Essentially, 1 NUMA node = 1 memory controller.

In Intel's case that means 1 NUMA node per physical CPU (socket).

Now, in Hyper-V, spanning NUMA nodes is tied to the number of vCPUs assigned to a VM. Note that 1 vCPU = 1 thread on the physical CPU, and a VM's vCPU threads must be processed in parallel through the CPU pipeline.

This is where NUMA node spanning comes into play.

If a VM is assigned more vCPUs than there are cores on 1 CPU, the processing pipeline has to juggle the threads to make sure they all go through simultaneously. That means a performance loss due to the need to bump threads over to the second CPU.
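A quick way to spot a VM that would have to span by this logic is to compare its vCPU count against the logical processors per node. A sketch; 'BigSQL' is a placeholder VM name, and note that logical processors count hyperthreads, so this only approximates the cores comparison above:

    # Logical processors per node = host LPs / number of NUMA nodes.
    $lpPerNode = (Get-VMHost).LogicalProcessorCount / (Get-VMHostNumaNode).Count
    $vCPUs     = (Get-VMProcessor -VMName 'BigSQL').Count
    if ($vCPUs -gt $lpPerNode) {
        "$vCPUs vCPUs vs $lpPerNode LPs per node: this VM will have to span"
    }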

It bears testing on a high-load SQL or Exchange server to verify that assigning more vCPUs than there are physical cores in one socket yields the expected results.


Philip Elder, WSSMB MVP. Blog: http://blog.mpecsinc.ca


