You are going to create live zones on your server. Disk space is critical on this server, so you need
to reduce the amount of disk space required for these zones. Much of the data required for each of
these zones is identical, so you want to eliminate the duplicate copies of data and store only data
that is unique to each zone.
Which two options provide a solution for eliminating the duplicate copies of data that are common
to all of these zones?
A. Create the zones by using sparse root zones.
B. Set the dedup property to on and the dedupratio to at least 1.5 for the zpool. Create a separate ZFS file system for each zone in the zpool.
C. Put all of the zones in the same ZFS file system and set the dedupratio property for the ZFS file system to at least 1.5.
D. Put all of the zones in the same ZFS file system and set the dedup property for the file system to on.
E. Put each zone in a separate ZFS file system within the same zpool. Set the dedup property to on for each ZFS file system.
Explanation:
In Oracle Solaris 11, you can use the deduplication (dedup) property to remove
redundant data from your ZFS file systems. If a file system has the dedup property enabled,
duplicate data blocks are removed synchronously. The result is that only unique data is stored,
and common components are shared between files.
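As a minimal illustration (the pool and file system names below are hypothetical), enabling dedup is a single property change on the file system:

# Enable deduplication on a ZFS file system; only data written
# after this point is checked against the pool's dedup table.
zfs set dedup=on tank/zones

# Confirm the current value of the property.
zfs get dedup tank/zones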
A: Solaris 11 zones are whole-root only (sparse root zones are no longer supported), so A is incorrect.
B: dedupratio is not a settable ZFS property; it is a read-only pool statistic, so B is incorrect.
C: Incorrect for the same reason as B.
D and E are the correct answers.
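A rough sketch of the option E approach, assuming a pool named tank and two zones named zone1 and zone2 (all names are hypothetical):

# Create one ZFS file system per zone and enable dedup on each,
# so blocks common to the zones are stored only once in the pool.
zfs create tank/zones
zfs create -o dedup=on tank/zones/zone1
zfs create -o dedup=on tank/zones/zone2

# Point each zone's zonepath at its own file system
# (zoneadm install and the usual zonepath permissions still apply).
zonecfg -z zone1 "create; set zonepath=/tank/zones/zone1"
zonecfg -z zone2 "create; set zonepath=/tank/zones/zone2"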
man zpool (dedupratio):
The deduplication ratio specified for a pool, expressed as a multiplier. This value is expressed as a single decimal number. For example, a dedupratio value of 1.76 indicates that 1.76 units of data were stored but only 1 unit of disk space was actually consumed. This property can also be referred to by its shortened column name, dedup.
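To see the ratio a pool is currently achieving (tank is again a hypothetical pool name):

# dedupratio is reported per pool and is maintained by ZFS itself.
zpool get dedupratio tank

# The same value appears in the DEDUP column of zpool list.
zpool list tank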
Is it possible to put all of the zones in the same ZFS file system?
You can put them into the same zpool, but not into the same ZFS file system, imho.
Okay, dedupratio is a read-only property.
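A quick way to confirm that (tank is a hypothetical pool name): attempting to set the property is rejected, because only ZFS itself updates it.

# This command fails: dedupratio is a read-only statistic,
# not a tunable property.
zpool set dedupratio=1.5 tank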