You want to capture column group usage and gather extended statistics for better cardinality
estimates for the CUSTOMERS table in the SH schema.
Examine the following steps:
1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual statement.
2. Execute the DBMS_STATS.SEED_COL_USAGE(null, 'SH', 500) procedure.
3. Execute the required queries on the CUSTOMERS table.
4. Issue the SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual statement.
Identify the correct sequence of steps.
A. 3, 2, 1, 4
B. 2, 3, 4, 1
C. 4, 1, 3, 2
D. 3, 2, 4, 1
Explanation:
Step 1 (option 2): Seed column usage.
Oracle must observe a representative workload in order to determine the appropriate column groups. Using the procedure DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it should observe the workload.
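For illustration, the monitoring window could be opened as follows (a minimal sketch using the exact call from option 2 of the question; the third argument is the observation window in seconds):
BEGIN
  -- Record column usage information for the next 500 seconds
  DBMS_STATS.SEED_COL_USAGE(NULL, 'SH', 500);
END;
/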
Step 2 (option 3): Run the workload.
You do not need to execute every query in your workload during this window; you can simply run EXPLAIN PLAN for some of your longer-running queries to ensure that column group information is recorded for them.
Step 3 (option 4): Review the recorded column usage.
Calling DBMS_STATS.REPORT_COL_USAGE shows which columns were used together in filters and GROUP BY expressions, so you can verify the candidate column groups before creating them.
Step 4 (option 1): Create the column groups.
At this point you can have Oracle automatically create the column groups for each table based on the usage information captured during the monitoring window. You simply call the DBMS_STATS.CREATE_EXTENDED_STATS function for each table. This function requires just two arguments: the schema name and the table name. From then on, statistics will be maintained for each column group whenever statistics are gathered on the table.
Note:
* DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the
SQL operations the database has processed for a given object.
* The Oracle SQL optimizer has historically been unaware of the implied relationships between data columns within the same table. While the optimizer has traditionally analyzed the distribution of values within a column, it does not collect value-based relationships between columns.
* Creating extended statistics. Here are the steps to create extended statistics for related table columns with dbms_stats.create_extended_stats:
1 - The first step is to create column histograms for the related columns.
2 - Next, we run dbms_stats.create_extended_stats to relate the columns together.
Unlike a traditional procedure that is invoked via an execute ("exec") statement, Oracle extended statistics are created via a SELECT statement, as in the sketch below.
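A minimal sketch of that SELECT-based invocation (the returned value is a system-generated extension name, so the exact output will vary; the three-argument form naming a specific column group is shown as an alternative):
-- Let Oracle create the column groups it detected during monitoring
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual;
-- Alternatively, name a specific column group explicitly
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS',
       '(CUST_STATE_PROVINCE, COUNTRY_ID)') FROM dual;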
Answer: B
To detect column groups:
Connect SQL*Plus to the database as user sh, and then create the customers_test table and gather statistics for it:
CONNECT SH/SH
DROP TABLE customers_test;
CREATE TABLE customers_test AS SELECT * FROM customers;
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'customers_test');
Enable workload monitoring.
In a different SQL*Plus session, connect as SYS and run the following PL/SQL program to enable monitoring for 300 seconds:
BEGIN
DBMS_STATS.SEED_COL_USAGE(null,null,300);
END;
/
As user sh, run explain plans for two queries in the workload.
The following examples show the explain plans for two queries on the customers_test table:
EXPLAIN PLAN FOR
SELECT *
FROM customers_test
WHERE cust_city = 'Los Angeles'
AND cust_state_province = 'CA'
AND country_id = 52790;
SELECT PLAN_TABLE_OUTPUT
FROM TABLE(DBMS_XPLAN.DISPLAY('plan_table', null, 'basic rows'));
EXPLAIN PLAN FOR
SELECT country_id, cust_state_province, count(cust_city)
FROM customers_test
GROUP BY country_id, cust_state_province;
SELECT PLAN_TABLE_OUTPUT
FROM TABLE(DBMS_XPLAN.DISPLAY('plan_table', null, 'basic rows'));
Sample output appears below:
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
Plan hash value: 4115398853

----------------------------------------------------
| Id  | Operation         | Name           | Rows  |
----------------------------------------------------
|   0 | SELECT STATEMENT  |                |     1 |
|   1 |  TABLE ACCESS FULL| CUSTOMERS_TEST |     1 |
----------------------------------------------------

8 rows selected.

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
Plan hash value: 3050654408

-----------------------------------------------------
| Id  | Operation          | Name           | Rows  |
-----------------------------------------------------
|   0 | SELECT STATEMENT   |                |  1949 |
|   1 |  HASH GROUP BY     |                |  1949 |
|   2 |   TABLE ACCESS FULL| CUSTOMERS_TEST | 55500 |
-----------------------------------------------------

9 rows selected.
The first plan shows a cardinality of 1 row for a query that returns 932 rows. The second plan shows a cardinality of 1949 rows for a query that returns 145 rows.
Optionally, review the column usage information recorded for the table.
Call the DBMS_STATS.REPORT_COL_USAGE function to generate a report:
SET LONG 100000
SET LINES 120
SET PAGES 0
SELECT DBMS_STATS.REPORT_COL_USAGE(user, 'customers_test')
FROM DUAL;
The report appears below:
LEGEND:
…….
EQ : Used in single table EQuality predicate
RANGE : Used in single table RANGE predicate
LIKE : Used in single table LIKE predicate
NULL : Used in single table is (not) NULL predicate
EQ_JOIN : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER : Used in single table FILTER predicate
JOIN : Used in JOIN predicate
GROUP_BY : Used in GROUP BY expression
…………………………………………………………………….
###############################################################################
COLUMN USAGE REPORT FOR SH.CUSTOMERS_TEST
…………………………………..
1. COUNTRY_ID : EQ
2. CUST_CITY : EQ
3. CUST_STATE_PROVINCE : EQ
4. (CUST_CITY, CUST_STATE_PROVINCE,
COUNTRY_ID) : FILTER
5. (CUST_STATE_PROVINCE, COUNTRY_ID) : GROUP_BY
###############################################################################
In the preceding report, the first three columns were used in equality predicates in the first monitored query:
…
WHERE cust_city = 'Los Angeles'
AND cust_state_province = 'CA'
AND country_id = 52790;
All three columns appeared in the same WHERE clause, so the report shows them as a group filter. In the second query, two columns appeared in the GROUP BY clause, so the report labels them as GROUP_BY. The sets of columns listed under FILTER and GROUP_BY in the report are candidates for column groups.
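From here the workflow would typically continue by creating the detected column groups and regathering statistics, along these lines (a sketch reusing the same DBMS_STATS calls shown above; the returned extension names are system-generated and will vary):
-- Create the column groups detected during the monitoring window
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(user, 'customers_test') FROM DUAL;
-- Regather statistics so the new column groups receive statistics
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'customers_test');
Re-running the two EXPLAIN PLAN statements after this step should produce cardinality estimates much closer to the actual row counts.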