A vSphere 5 implementation consists of a fully automated DRS cluster with the following vNetwork Standard Switch configuration.
1. Switch A: two vmnics, one port group named sFTP-PG
2. Switch B: one vmnic, one port group named Web-PG
3. All vmnics are trunked to the same physical switch
DRS has been configured with an anti-affinity rule for all virtual machines in Web-PG so that no more than half of the virtual machines run on any given host. The virtual machines on Web-PG are experiencing significant network latency. The network team has determined that a performance bottleneck is occurring at the ESXi host level.
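For reference, the layout described above can be inspected from the ESXi Shell. A minimal sketch, assuming the ESXi 5.x esxcli namespaces; the question does not name the vSwitches, so the output would show whatever names the hosts actually use:

```shell
# List the standard vSwitches on the host, including their uplinks
# (Switch A should show two vmnics, Switch B one)
esxcli network vswitch standard list

# List the port groups (sFTP-PG and Web-PG) and the vSwitch each belongs to
esxcli network vswitch standard portgroup list

# Show the physical NICs (vmnics) present on the host and their link state
esxcli network nic list
```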
Which solution will require the least amount of administrative work and least amount of service interruption?
A.
Move a vmnic from Switch A to Switch B on each host.
B.
Migrate Web-PG to Switch A and sFTP-PG to Switch B on each host.
C.
Convert the three Switch B’s to a single vNetwork Distributed Switch.
D.
Use vMotion to redistribute some of the VMs in Web-PG to a different ESXi host.
I think B is the correct answer. First, C does not make sense because we don't have three Switch B's, and if we migrate Web-PG to Switch A it would have two vmnics rather than the one it has on Switch B.
Answer is A.
B. Port group migration between vStandard Switches is not possible.
C. Converting the switches to a vDS is not a solution because the problem is at the ESXi host level; the uplinks remain overloaded.
D. DRS is already in fully automated mode, so vMotion isn't fixing the problem.
With vDS you can spread the traffic on three uplinks.
But I can't explain why C talks about THREE switches; I can only count two.
Anyway, I think the technically correct answer is A, but C could be a commercial response.
I’m sorry for my previous comment.
I think C is wrong: the physical switch is the third switch.
So the only possible response is A.
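If A is the intended answer, note that moving a vmnic between standard switches is a live reconfiguration, not a hardware change. A minimal sketch from the ESXi Shell, assuming ESXi 5.x esxcli, with vSwitch0 standing in for Switch A, vSwitch1 for Switch B, and vmnic1 for the moved NIC (all three names are assumptions, not given in the question):

```shell
# Detach one of Switch A's two uplinks...
esxcli network vswitch standard uplink remove --uplink-name=vmnic1 --vswitch-name=vSwitch0

# ...and attach it to Switch B, giving Web-PG a second uplink
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
```

Because the question states that all vmnics are trunked to the same physical switch, no re-cabling is needed; the NIC teaming policy on the switch that lost the uplink should still be reviewed afterwards.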
I picked D because DRS only works on CPU load; it doesn't work on network load. The anti-affinity rule obviously stopped the best possible resource allocation, so manually vMotioning the VMs away from the host is the least disruptive way of solving the problem.
Correct. DRS ONLY takes into account CPU and memory resources – not network – so everybody who keeps repeating "Oh, DRS is already taking care of it" is wrong.
Also, removing an uplink from the sFTP-PG switch could have an impact on performance for that port group.
my 2 cents:
C: is really saying convert the two vSS's to a single vDS with three uplinks; apart from the improper phrasing, it is a good solution (though it may not be easy to get management approval).
B: I think this is the easiest, with no downtime; swapping the two port groups has the same effect as A.
A: you would need to shut down the hosts to pull/put a NIC; in the real world, what kind of approval and downtime would you need?
Let me know if my thinking is wrong.
Correct. The answer is A.
A is wrong; it would be correct if it were talking about moving an uplink.
How will moving an additional vmnic to an overused switch help?
IMO D is correct, since DRS only takes CPU and memory into account.
A is correct; a vmnic is an uplink.
This diagram helped:
http://blogs.vmware.com/vsphere/files/2013/01/net-diagram.png
It seems like people are doubting between answer A and answer D. The correct answer is D.
Answer A is incorrect because you can shuffle vmnics around as much as you wish, but the total amount of network traffic leaving the host remains the same, and that is where the performance bottleneck is, so you have not solved the issue. As for DRS, it only acts on CPU and memory bottlenecks, so the argument that the VMs would already have been moved is not valid.
Answer D is correct because when you migrate a VM to a different host, it will no longer be affected by the performance bottleneck at the host level of the source host.