You have two local (non-shared) disks on a cluster node. You put one local metadb replica on one
disk and two local metadb replicas on the other disk.
What will happen if later the disk with the two replicas fails?
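The layout described in the question could be created with Solaris Volume Manager's `metadb` command, roughly as follows (the slice names are hypothetical examples, not from the question):

```shell
# Hypothetical slices on the two local disks; -f forces creation of
# the initial replica set, -c sets how many replicas go on the slice.
metadb -a -f -c 1 c0t0d0s7   # one replica on the first local disk
metadb -a -c 2 c1t0d0s7      # two replicas on the second local disk
metadb -i                    # list all replicas and their status flags
```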

A.
You are guaranteed that the node stays up and running, and can reboot. As long as Solaris
Volume Manager can find one valid copy of the configuration, the node will stay up.

B.
You are guaranteed to stay up and running. However, if you reboot, you may have to manually
delete the broken metadb replicas before being able to join the cluster.

C.
When Solaris Volume Manager discovers you have less than 50% of your local metadbs
remaining, it will fail immediately.

D.
When Solaris Volume Manager discovers you have less than 50% of your local metadbs
remaining, it will prompt you to fix the broken ones and you can stay operational without rebooting.

Explanation:


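As general background on the replica-quorum behavior the options refer to: Solaris Volume Manager keeps a node running only while at least half of its state database replicas are available, and requires a majority (half plus one) to boot into multiuser mode. Damaged replicas can be inspected and removed with `metadb`; a sketch, again with hypothetical slice names:

```shell
# In metadb -i output, flags such as W (write errors) or M (replica
# had a problem) mark damaged replicas.
metadb -i                    # inspect replica status
metadb -d c1t0d0s7           # delete the replicas on the failed disk
metadb -a -c 2 c1t0d0s7      # recreate them after the disk is replaced
```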
