Your network contains an Active Directory domain named contoso.com. The domain
contains two member servers named Server1 and Server2. All servers run Windows Server
2012 R2.
Server1 and Server2 have the Failover Clustering feature installed. The servers are
configured as nodes in a failover cluster named Cluster1.
You add two additional nodes to Cluster1.
You need to ensure that Cluster1 stops running if three nodes fail.
What should you configure?

A.
Affinity-None

B.
Affinity-Single

C.
The cluster quorum settings

D.
The failover settings

E.
A file server for general use

F.
The Handling priority

G.
The host priority

H.
Live migration

I.
The possible owner

J.
The preferred owner

K.
Quick migration

L.
The Scale-Out File Server

Explanation:
C) The quorum configuration in a failover cluster determines the number of node failures the cluster can sustain before it stops running. For the four-node Cluster1, configuring the quorum settings (for example, adding a witness as a fifth vote) means the cluster survives two node failures and stops running when a third node fails.
http://technet.microsoft.com/en-us/library/cc731739.aspx
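
As a rough sketch of what configuring the quorum settings looks like in practice, the FailoverClusters PowerShell module on Windows Server 2012 R2 can switch Cluster1 to Node and Disk Majority. The witness disk name below is illustrative, not from the question:

    # Minimal sketch, assuming a disk named "Cluster Disk 1" is available
    # in cluster storage to act as the witness (the name is illustrative).
    Import-Module FailoverClusters

    # Show the current quorum configuration.
    Get-ClusterQuorum -Cluster Cluster1

    # Four nodes + one disk witness = five votes, so the cluster survives
    # two node failures and stops running when a third node fails.
    Set-ClusterQuorum -Cluster Cluster1 -NodeAndDiskMajority "Cluster Disk 1"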



Comments:


leonard

C. The cluster quorum settings

https://technet.microsoft.com/en-us/library/cc731739.aspx

Node and Disk Majority (recommended for clusters with an even number of nodes)

Can sustain failures of half the nodes (rounding up) if the disk witness remains online. For example, a six-node cluster in which the disk witness is online could sustain three node failures.

Can sustain failures of half the nodes (rounding up) minus one if the disk witness goes offline or fails. For example, a six-node cluster with a failed disk witness could sustain two (3-1=2) node failures.
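
To verify which quorum mode and witness a cluster is actually using, something like the following works (a sketch; Get-ClusterQuorum returns the quorum type and witness resource):

    # For a 4-node cluster with a disk witness: 5 votes, majority = 3.
    # Two failures leave 3 votes (cluster stays up); a third failure
    # leaves 2 votes, so the cluster loses quorum and stops.
    Get-ClusterQuorum -Cluster Cluster1 | Format-List QuorumType, QuorumResource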

Joe

C – add a witness… you then have the 4 nodes plus the witness (so 5 votes in total).

If 2 nodes fail, you still have 3 votes (over half); if one more fails, the cluster stops.

Magwif

Answer C.
The question asks for the cluster to stay up until a third node fails; there are only two ways this could happen:
1. Add a fifth vote with a witness.
2. With the current number of nodes, use No Majority: Disk Only; this is obviously not recommended, as it leaves you with a single point of failure.

Apu

You can also configure this with the dynamic quorum option.
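
For reference, dynamic quorum is enabled by default on Windows Server 2012 R2, and its state and per-node votes can be inspected roughly like this (property names as documented for the FailoverClusters module):

    # 1 = dynamic quorum enabled (the default on 2012 R2).
    (Get-Cluster -Name Cluster1).DynamicQuorum

    # NodeWeight is the assigned vote; DynamicWeight is the vote after
    # dynamic quorum has adjusted it for the current cluster state.
    Get-ClusterNode -Cluster Cluster1 | Format-Table Name, NodeWeight, DynamicWeight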