Which three statements are true about Oracle Data Pump export and import operations?
A. You can detach from a data pump export job and reattach later.
B. Data pump uses parallel execution server processes to implement parallel import.
C. Data pump import requires the import file to be in a directory owned by the oracle owner.
D. The master table is the last object to be exported by the data pump.
E. You can detach from a data pump import job and reattach later.
Explanation:
B: Data Pump can employ multiple worker processes, running in parallel, to increase job performance.
D: For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set. For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.
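As a rough sketch of that behavior (the schema hr, directory object dp_dir, and job name HR_EXP are invented for illustration), you can watch the master table appear while an export runs; it is an ordinary table in the exporting user's schema, named after the job:

$ expdp hr SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr.dmp JOB_NAME=HR_EXP

-- meanwhile, from another session:
SQL> SELECT owner_name, job_name, operation, state FROM dba_datapump_jobs;
SQL> SELECT COUNT(*) FROM hr.hr_exp;   -- the master table, named after JOB_NAME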
Please don’t blame me if I’m wrong, but I think A, B, E are the correct answers.
I can’t find documentation saying that the master table is ultimately exported together with the data, and from what I’ve read about its contents, that doesn’t seem necessary to me.
Attach and detach are possible for both impdp and expdp.
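A minimal detach/reattach round trip, assuming a directory object dp_dir and a job name HR_EXP of my own choosing:

$ expdp hr SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr.dmp JOB_NAME=HR_EXP
...
^C                          <- Ctrl+C drops into interactive-command mode
Export> EXIT_CLIENT         <- detach; the job keeps running on the server

$ expdp hr ATTACH=HR_EXP    <- reattach to the running job later
Export> STATUS              <- check progress
Export> CONTINUE_CLIENT     <- go back to watching the log output

The same interactive commands work for impdp (the prompt is Import> instead).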
Hi Jeroen,
I agree with you!
But why are these answers so wrong?
I think D is correct.
http://docs.oracle.com/cd/E16655_01/server.121/e17639/dp_overview.htm
Tracking Progress Within a Job
While the data and metadata are being transferred, a master table is used to track the progress within a job. The master table is implemented as a user table within the database. The specific function of the master table for export and import jobs is as follows:
• For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.
• For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.
A, E – TRUE
When a Data Pump export or import session is launched, a Data Pump job is automatically started. This way, we can detach from and reattach to long-running jobs without affecting the job itself (a sketch follows below).
D – TRUE
At the end of an export job, the content of the master table is written to a file in the dump file set.
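A small sketch of that independence (the job name HR_IMP is invented): because the job state lives in the master table on the server rather than in the client, even stopping the job is not fatal:

$ impdp system ATTACH=HR_IMP
Import> STOP_JOB=IMMEDIATE     <- stops the job but keeps the master table

$ impdp system ATTACH=HR_IMP   <- later
Import> START_JOB              <- resumes where it left off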
Yes, ADE.
A,D,E
It’s written, not exported, so D is false.
ADE
In my opinion A, D and E.
B FALSE
http://docs.oracle.com/database/121/SUTIL/dp_import.htm#SUTIL921
PARALLEL=integer
The value you specify for integer specifies the maximum number of processes of active execution operating on behalf of the import job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process, idle workers, and worker processes acting as parallel execution coordinators in parallel I/O operations do not count toward this total.
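For what it’s worth, a sketch of PARALLEL in use (names invented); it is usually combined with the %U substitution variable so each worker gets its own dump file:

$ expdp hr SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr%U.dmp PARALLEL=4
$ impdp hr DIRECTORY=dp_dir DUMPFILE=hr%U.dmp PARALLEL=4

Note the quoted docs call these worker processes and parallel I/O server processes, not the parallel execution server processes that option B mentions.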
ADE
Why is answer C false?
The user performing the import only needs read access to the directory, so it doesn’t have to be owned by oracle. I don’t know why B is false. Does anyone know?
B – false
Data Pump can employ multiple worker processes, running in parallel, to increase job performance.
It uses worker processes, not parallel execution server processes.
ADE
Agree, A, D, E.
E is not right.
http://docs.oracle.com/cd/B28359_01/backup.111/b28273/rcmsynta023.htm
Dropped
Adds the datafile to the control file, but marks it as offline and does not flash it back. You can then restore and recover the datafile to the same time or SCN.
This thread is about Data Pump, not Flashback. That quote was probably posted in the wrong place.
I believe all should be true. The directory on the OS should be owned by oracle.
Took the exam today.
The question changed a bit: select two, not three, and the options changed as well.
Correct answers: A, E.
ADE
Not B
By default impdp doesn’t use parallelism. You have to explicitly specify the PARALLEL=n clause.
ADE