Friday, October 30, 2009

Compute Oracle Index Statistics

We often forget to gather index statistics after creating an index. The optimizer will not use the index effectively if the index does not have statistics.

Prior to Oracle9i, we had to issue two commands: one to create the index and one to gather its statistics.

SQL> create index idx on test(object_name);

Index created.

SQL> exec dbms_stats.gather_index_stats(null, 'IDX');

PL/SQL procedure successfully completed.

SQL>

Starting in Oracle9i, the CREATE INDEX statement supports a COMPUTE STATISTICS clause, so we can create the index and gather its statistics in one command.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

SQL> create table test as select * from user_objects;

Table created.

SQL> create index idx on test(object_name);

Index created.

SQL> select table_name, num_rows, last_analyzed
2 from user_tables
3 where table_name ='TEST'
4 /

TABLE_NAME NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------
TEST

SQL> drop index idx;

Index dropped.

SQL> create index idx on test(object_name) compute statistics;

Index created.

SQL> select table_name, num_rows, last_analyzed
2 from user_tables
3 where table_name ='TEST'
4 /

TABLE_NAME NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------
TEST 184 31-OCT-09

SQL>

In Oracle10g, we do not need the COMPUTE STATISTICS clause at all: Oracle gathers index statistics automatically while creating the index.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> create index idx on test(object_name);

Index created.

SQL> select num_rows, last_analyzed from user_ind_statistics where index_name ='IDX';

NUM_ROWS LAST_ANAL
---------- ---------
50484 30-OCT-09

SQL>

Statistics Lock in Oracle10g

Oracle10g introduced a useful feature that lets us lock table statistics. When statistics on a table are locked, all statistics that depend on the table, including table statistics, column statistics, histograms, and statistics on all dependent indexes, are considered locked. No one can gather statistics on a table while it is locked, but of course we can overwrite the statistics with the FORCE option. How is this useful for a DBA's day-to-day activities? Under what circumstances is this feature useful?

We can use this feature in the following circumstances...

1. There are tables on which you want to gather statistics manually. You can stop the regular schedule from gathering statistics on them by locking their statistics.

2. In some cases, queries work fine with the old statistics. You can avoid gathering new statistics in this situation.

3. Sometimes tables are very large and automatic statistics gathering might fail silently. In this scenario, we might need to lock the table statistics and collect the statistics separately. Refer to these links: Post1, Post2, Post3

4. Sometimes gathering statistics and creating histograms takes a very long time on a large table, and we can skip such a table while collecting statistics at the schema or DB level.

5. If for some reason we want to use specific parameters to gather statistics on a particular table, we can lock its statistics and gather them separately at a different time.

How do we lock the statistics?

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> execute dbms_stats.lock_table_stats('SCOTT','EMP');

PL/SQL procedure successfully completed.

SQL> execute dbms_stats.lock_schema_stats('SCOTT');

PL/SQL procedure successfully completed.

SQL>
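The lock can be released with the matching unlock procedures, and the STATTYPE_LOCKED column of dba_tab_statistics shows whether a table's statistics are currently locked. A quick sketch:

```sql
SQL> execute dbms_stats.unlock_table_stats('SCOTT','EMP');

PL/SQL procedure successfully completed.

SQL> execute dbms_stats.unlock_schema_stats('SCOTT');

PL/SQL procedure successfully completed.

SQL> select table_name, stattype_locked
  2  from dba_tab_statistics
  3  where owner = 'SCOTT' and table_name = 'EMP';
```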

Index Creation on Locked Tables : In Oracle10g, when you create an index, statistics are generated automatically (please refer to my other post for gathering statistics while creating an index). When the table statistics are locked, statistics will not be generated while creating the index. We need to use the FORCE option of dbms_stats to gather the index statistics on locked objects.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> create table test as select * from dba_objects;

Table created.

SQL> exec dbms_stats.lock_table_stats(null, 'TEST');

PL/SQL procedure successfully completed.

SQL> create index idx on test(object_name);

Index created.

SQL> select num_rows, last_analyzed from user_ind_statistics where index_name ='IDX';

NUM_ROWS LAST_ANAL
---------- ---------

SQL> drop index idx;

Index dropped.

SQL> create index idx on test(object_name) compute statistics;
create index idx on test(object_name) compute statistics
*
ERROR at line 1:
ORA-38029: object statistics are locked

SQL> create index idx on test(object_name);

Index created.

SQL> exec dbms_stats.gather_index_stats(null, 'IDX');
BEGIN dbms_stats.gather_index_stats(null, 'IDX'); END;

*
ERROR at line 1:
ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "SYS.DBMS_STATS", line 10640
ORA-06512: at "SYS.DBMS_STATS", line 10664
ORA-06512: at line 1

SQL> exec dbms_stats.gather_index_stats(null, 'IDX',force=>true);

PL/SQL procedure successfully completed.

SQL> select num_rows, last_analyzed from user_ind_statistics where index_name ='IDX';

NUM_ROWS LAST_ANAL
---------- ---------
50484 30-OCT-09

SQL> alter index idx rebuild compute statistics;
alter index idx rebuild compute statistics
*
ERROR at line 1:
ORA-38029: object statistics are locked

SQL> alter index idx rebuild;

Index altered.

SQL> exec dbms_stats.gather_index_stats(null, 'IDX',force=>true);

PL/SQL procedure successfully completed.

SQL>

Monday, October 26, 2009

PLSQL_OPTIMIZE_LEVEL

I came to Austin for an interview, and I am on my way to New Jersey. I am hanging around the Austin airport with another three hours until my flight departs. I thought I would write about one of the new Oracle10g optimization parameters, PLSQL_OPTIMIZE_LEVEL.

This parameter determines the optimization level used to compile PL/SQL code. The higher the setting, the more effort Oracle spends compiling the code. Among other things, the compiler can eliminate dead code and move code out of a loop when it does the same thing on every iteration. The parameter has three valid values: 0, 1, and 2. The default value is 2.

Let us discuss each value of this parameter. Please note that Oracle has not provided detailed examples for each value, so I cannot demonstrate exactly what Oracle does at each level. We can, however, see the performance improvement at each level.

PLSQL_OPTIMIZE_LEVEL = 0 The value 0 works somewhat like pre-10g releases. The Oracle documentation says it works better than 9i. Let me write a procedure and run it in Oracle10g with value 0.

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> alter session set plsql_optimize_level =0;

Session altered.

SQL> set serveroutput on
SQL> create or replace procedure test as
2 a integer;
3 b integer;
4 c integer;
5 d integer;
6 v_time integer;
7 begin
8 v_time := Dbms_Utility.GET_CPU_TIME();
9 for j in 1..10000000 loop
10 a:= 100;
11 b:= null;
12 c:= nvl(b,1)+a;
13 end loop;
14 Dbms_Output.Put_Line(Dbms_Utility.GET_CPU_TIME()-v_time);
15 end;
16 /

Procedure created.

SQL> execute test;
770

PL/SQL procedure successfully completed.

The above procedure runs in 7.7 seconds (770 hundredths of a second; GET_CPU_TIME reports time in hundredths) in Oracle10g with plsql_optimize_level=0.

PLSQL_OPTIMIZE_LEVEL = 1 It eliminates unnecessary computations and exceptions. Since Oracle has not given examples for each value, I guess it removes the statement b := NULL in the TEST procedure. It does not make any sense to assign a NULL value on each iteration of the loop in the TEST procedure.

SQL> alter procedure test compile plsql_optimize_level = 1;

Procedure altered.

SQL> execute test;
502

PL/SQL procedure successfully completed.

The above procedure executes in 5.02 seconds (502 hundredths of a second), which is better than plsql_optimize_level=0.

PLSQL_OPTIMIZE_LEVEL = 2 It can move code relatively far from its original location. I guess it moves the assignment statements out of the loop, since they assign the same values on each iteration, which is not meaningful work. Be aware that Oracle can sometimes take a long time to compile a procedure at level 2, since Oracle rewrites the code during the compilation stage rather than the execution stage.

SQL> alter procedure test compile plsql_optimize_level = 2;

Procedure altered.

SQL> execute test;
301

PL/SQL procedure successfully completed.

SQL>

The above procedure runs in 3.01 seconds (301 hundredths of a second), and the performance is far better than with value 1.
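As a side note, we can check which optimization level each stored unit was last compiled with by querying the user_plsql_object_settings dictionary view (available in 10g):

```sql
SQL> select name, type, plsql_optimize_level
  2  from user_plsql_object_settings
  3  where name = 'TEST';
```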

Monday, October 19, 2009

How to use histogram in Oracle

I would like to write about Oracle histograms today. A histogram is a very nice feature that helps the cost-based optimizer make the right decision.

What is a histogram? Histograms are a CBO feature that helps the optimizer determine how data is skewed (distributed) within a column. It is good to create a histogram on a column that appears in WHERE clauses and is highly skewed. A histogram helps the optimizer decide whether to use an index or a full-table scan, and helps it determine the fastest table join order.

What are the advantages of histograms? Histograms are useful in two places.

1. Histograms help the Oracle optimizer choose the right access method for a table.

2. They also help the optimizer decide the correct table join order. When we join multiple tables, histograms help minimize the intermediate result sets, and smaller intermediate result sets improve performance.

Types of Histograms: Oracle uses two types of histograms for column statistics: height-balanced histograms and frequency histograms.

1. Height-balanced histograms : The column values are divided into bands so that each band contains approximately the same number of rows. For instance, if we have 10 distinct values in the column and only five buckets, Oracle creates a height-balanced histogram and spreads the values evenly through the buckets. A height-balanced histogram is created when there are more distinct values than buckets, and its statistics show a range of rows across the buckets.

2. Frequency histograms : Each value of the column corresponds to a single bucket of the histogram; this is also called a value-based histogram. Each bucket contains the number of occurrences of that single value. Frequency histograms are automatically created instead of height-balanced histograms when the number of distinct values is less than or equal to the number of histogram buckets specified.

Method_opt Parameter: This parameter controls histogram creation while collecting statistics. The default is FOR ALL COLUMNS SIZE AUTO in Oracle10g, but in Oracle9i the default is FOR ALL COLUMNS SIZE 1, which turns off histogram collection.

FOR ALL [INDEXED | HIDDEN] COLUMNS [size_clause]
FOR COLUMNS [size_clause] column|attribute [size_clause] [, column|attribute [size_clause]...]

size_clause is defined as size_clause := SIZE {integer | REPEAT | AUTO | SKEWONLY}

integer : Number of histogram buckets. Must be in the range [1,254]

REPEAT : Collects histograms only on the columns that already have histograms.

AUTO : Oracle determines which columns to collect histograms on, based on the data distribution and the workload of the columns. The table sys.col_usage$ stores information about column usage, and dbms_stats uses this information to determine whether a histogram is required for a column.

SKEWONLY : Oracle determines the columns to collect histograms based on the data distribution of the columns.

Let me demonstrate how the optimizer works with and without a histogram in the two scenarios below. We take the emp table for this demonstration. The table has around 3.6 million records. The emp_status column is highly skewed: it has two distinct values (Y, N). We have a bitmap index on the emp_status column.

Scenario 1 Let us gather statistics without any histogram and see which execution path the optimizer uses. Without a histogram, Oracle assumes the data is evenly distributed, so the optimizer expects around 1.8 million records with emp_status 'Y' and around another 1.8 million records with emp_status 'N'.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> select count(*),emp_status from scott.emp
2 group by emp_status;

COUNT(*) E
---------- -
1 N
3670016 Y

SQL> execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCOTT', TABNAME => 'EMP',ESTIMATE_PERCENT =>
10, METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',CASCADE => TRUE);

PL/SQL procedure successfully completed.

SQL> select ename from scott.emp where emp_status='Y';

3670016 rows selected.

--------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 1832K |   15M |  5374   (5)| 00:01:05 |
|*  1 |  TABLE ACCESS FULL | EMP  | 1832K |   15M |  5374   (5)| 00:01:05 |
--------------------------------------------------------------------------

SQL> select ename from scott.emp where emp_status='N';

--------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 1832K |   15M |  5374   (5)| 00:01:05 |
|*  1 |  TABLE ACCESS FULL | EMP  | 1832K |   15M |  5374   (5)| 00:01:05 |
--------------------------------------------------------------------------

Conclusion: The optimizer uses a full table scan for the query that returns 3670016 records, and it also uses a full table scan for the query that returns only one record. The latter is obviously incorrect. This problem is resolved by collecting a histogram. Let us see in the next scenario.

Scenario 2 : Let us gather statistics with a histogram and see which execution path the optimizer uses. FOR COLUMNS SIZE 2 EMP_STATUS creates two buckets for the emp_status column. If we are not sure of the number of distinct values in the column, we can use the AUTO option to collect the histogram. With this histogram, the optimizer knows that the emp_status column is highly skewed: one bucket has around 3.6 million records with emp_status 'Y' and the other bucket has only one record with emp_status 'N'. Now, depending on the query, the optimizer decides whether to use the index or a full table scan.

SQL> execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCOTT', TABNAME => 'EMP',ESTIMATE_PERCENT =>
10, METHOD_OPT => 'FOR COLUMNS SIZE 2 EMP_STATUS',CASCADE => TRUE);

PL/SQL procedure successfully completed.

SQL> select ename from scott.emp where emp_status='Y';

3670016 rows selected.

--------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 3681K |   31M |  5375   (5)| 00:01:05 |
|*  1 |  TABLE ACCESS FULL | EMP  | 3681K |   31M |  5375   (5)| 00:01:05 |
--------------------------------------------------------------------------

SQL> select ename from scott.emp where emp_status='N';

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |    1 |     9 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID  | EMP     |    1 |     9 |     1   (0)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS |         |      |       |            |          |
|*  3 |    BITMAP INDEX SINGLE VALUE  | IDX_EMP |      |       |            |          |
---------------------------------------------------------------------------------------

Conclusion: The optimizer uses a full table scan for the query that returns 3670016 records, and at the same time it uses an index scan for the query that returns one record. In this scenario, the optimizer chose the right execution plan based on the query's WHERE clause.

Data dictionary views for histograms:
user_histograms
user_part_histograms
user_subpart_histograms
user_tab_histograms
user_tab_col_statistics
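For example, to see the buckets of the histogram we just created on emp_status, and how many buckets each column has, we can query two of the views above (the output naturally depends on your data):

```sql
SQL> select column_name, endpoint_number, endpoint_value
  2  from user_tab_histograms
  3  where table_name = 'EMP' and column_name = 'EMP_STATUS';

SQL> select column_name, num_distinct, num_buckets
  2  from user_tab_col_statistics
  3  where table_name = 'EMP';
```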

Thursday, October 8, 2009

Transferring statistics between database

In general, a development DB usually has only a portion of the data compared to the production database. In such a scenario, when we fix a production issue, we obviously make the change in the dev DB, test the code, and then move it to the prod DB. While testing the code in the dev DB, if we want to compare execution plans between dev and prod, we can copy the prod DB statistics into the dev DB and forecast the optimizer behaviour on the development server.

DBMS_STATS has the ability to transfer statistics between servers, allowing consistent execution plans between servers with varying amounts of data. This article was tested in Oracle10g. Here are the steps to transfer the statistics.

Source database : orcl
Source schema : sales
Target database : ordev
Target schema : sales


Now our goal is to copy the statistics from sales@orcl to sales@ordev.

Let us follow the steps below to copy the statistics from the source (production) to the target (development). I am running all the steps as the SYSTEM user.

Step 1. First create a statistics table in the source database. The statistics table is created in the SYSTEM schema.

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> connect system/password@orcl
Connected.
SQL> EXEC DBMS_STATS.create_stat_table('SYSTEM','STATS_TABLE');

PL/SQL procedure successfully completed.

SQL>

Step 2. Export the sales schema statistics into the statistics table.

SQL> EXEC DBMS_STATS.export_schema_stats('SALES','STATS_TABLE',NULL,'SYSTEM');

PL/SQL procedure successfully completed.

SQL>

Step 3. Export STATS_TABLE using the expdp or exp utility and move the dump file to the target (ordev) server.

Step 4. Import the dump file into the target database using the impdp or imp utility. Here I imported the dump file into the SYSTEM schema on the target server.
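As a sketch, steps 3 and 4 with the classic exp/imp utilities might look like the following from the OS prompt (the file name and passwords are placeholders; adapt accordingly for expdp/impdp):

```
exp system/password@orcl tables=STATS_TABLE file=stats_table.dmp
imp system/password@ordev file=stats_table.dmp full=y
```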

Step 5. Import the statistics into the application schema (sales@ordev). Please remember: in the previous step, we imported the stats_table contents into the SYSTEM schema using impdp. In this step, we import the statistics into the relevant data dictionary tables using the dbms_stats package.

SQL> EXEC DBMS_STATS.import_schema_stats('SALES','STATS_TABLE',NULL,'SYSTEM');

PL/SQL procedure successfully completed.

SQL>

Step 6. Drop the stats_table on the target server.

SQL> EXEC DBMS_STATS.drop_stat_table('SYSTEM','STATS_TABLE');

PL/SQL procedure successfully completed.

SQL>

Note : We can follow steps 1 and 2 to back up the statistics before we gather new statistics. It is always good to back up the statistics before overwriting them. If we see any performance problem with the new statistics, we can import the old statistics. This option is very useful for transferring statistics from one DB to another DB.

In Oracle10g, the database automatically saves the statistics from the last 31 days by default. We can restore past statistics within the database at any time. This option is useful for restoring statistics in the same database. Please refer to this post: Restoring statistics
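A minimal sketch of such a restore with dbms_stats (the timestamp here is just a placeholder; get_stats_history_availability shows how far back we can go):

```sql
SQL> select dbms_stats.get_stats_history_availability from dual;

SQL> exec dbms_stats.restore_table_stats('SCOTT', 'EMP', systimestamp - interval '1' day);
```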

Wednesday, October 7, 2009

Refreshing Stale Statistics

The Oracle optimizer uses statistics information to choose the right path and execute queries efficiently. It is important to maintain recent statistics to run reports efficiently. Oracle highly recommends using DBMS_STATS to gather statistics. Why does Oracle recommend DBMS_STATS? Click here to answer that question. This article is based on Oracle10g.

The DBMS_STATS package has a wonderful feature capable of analyzing stale statistics. I am going to discuss collecting stale statistics with the dbms_stats package.

In general, gathering statistics consumes a lot of resources and CPU time. Once we have gathered statistics on a table, we do not need to collect them on the same table again until a reasonable amount of data has changed. Let us say we have a schema called sales. This schema has a lot of tables, and many tables have a huge number of records. We schedule a job to analyze the entire schema every day at 2 AM. In day-to-day DML activity, some of the tables have no changes or very minimal changes. In this scenario, we do not need to analyze the tables that have very minimal or no changes, but the scheduler still starts analyzing all the tables in the schema at 2 AM every day. This process unnecessarily consumes extra resources and degrades server performance.

How do we stop analyzing tables when there is no DML activity, or very minimal DML activity? Yes, we can: Oracle introduced a feature in the DBMS_STATS package that collects statistics at the schema or database level only when the statistics are stale or out of date.

What are stale statistics? Oracle records an approximate count of the number of rows that have been inserted, updated, and deleted in a table. The information is recorded in the user_tab_modifications view. When that count reaches a threshold percentage of the number of rows in the table, the statistics are considered stale. Table monitoring must be enabled for the DML changes to be recorded in the user_tab_modifications view. In Oracle10g, Oracle automatically enables table monitoring and records the DML changes in user_tab_modifications. Prior to Oracle10g, we needed to enable table monitoring manually.

How do we enable table monitoring? In Oracle10g, table monitoring is on by default when the STATISTICS_LEVEL parameter is TYPICAL. Prior to Oracle10g, we needed to enable table monitoring manually with the command below. Since Oracle10g, this command no longer has any effect.

ALTER TABLE table_name {MONITORING | NOMONITORING}

What is the threshold percentage? Oracle determines the threshold automatically. Oracle does not officially document the threshold, so the threshold, and the entire algorithm, is subject to change over time.
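We can look at the tracked DML counts ourselves. Note that the monitoring data is flushed to the dictionary only periodically, so the sketch below forces a flush first:

```sql
SQL> exec dbms_stats.flush_database_monitoring_info;

SQL> select table_name, inserts, updates, deletes
  2  from user_tab_modifications;
```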

Let me give an example how to analyze stale statistics :

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> BEGIN
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname => 'SALES',
4 estimate_percent => 20,
5 block_sample => TRUE,
6 method_opt => 'FOR ALL COLUMNS SIZE 10',
7 options => 'GATHER AUTO',
8 cascade => TRUE);
9 END;
10 /

PL/SQL procedure successfully completed.

SQL>

Note : It is very important to use GATHER AUTO or GATHER STALE to analyze stale statistics. Table monitoring is also mandatory. Since Oracle10g, table monitoring is enabled by default, so we do not need to worry about it.

This feature is very useful for large and complex databases where refreshing statistics for all objects can cause a heavy drain on server resources.

Thursday, October 1, 2009

Analyze Versus DBMS_STATS

The cost-based optimizer is the preferred optimizer for Oracle. In order to make good use of the CBO, you need accurate statistics. Prior to Oracle8i, we used the ANALYZE command to gather statistics.

The DBMS_STATS package was introduced in Oracle8i. Since Oracle8i, Oracle has highly recommended using DBMS_STATS instead of the ANALYZE command. This article was written against Oracle10g. I am going to address the topics below in this thread....

1. Why does Oracle recommend the DBMS_STATS package?
2. What are the advantages of DBMS_STATS compared to ANALYZE?
3. How do we use the DBMS_STATS package to analyze a table?
4. What are the new DBMS_STATS features in each version?

Why has Oracle recommended DBMS_STATS since Oracle8i?

1. Statistics gathering can be done in parallel. This option is not available with the ANALYZE command.

2. It can collect stale statistics. I discussed collecting stale statistics in another topic; please refer to stale statistics to know more about it.

3. DBMS_STATS is a PL/SQL package, so it is easy to call programmatically; ANALYZE is not.

4. It can collect statistics for external tables; ANALYZE cannot.

5. DBMS_STATS can collect system statistics; ANALYZE cannot.

6. ANALYZE does not always produce accurate statistics; DBMS_STATS does.

7. We cannot use the ANALYZE command to gather statistics at the partition or subpartition level, but we can use DBMS_STATS to analyze any specific partition or subpartition. This is especially useful for partitioned tables: we do not need to re-analyze the historical data whenever we refresh the current partition.

8. We can transfer statistics from one DB to another DB when they were collected through DBMS_STATS. This cannot be done when we use the ANALYZE command to collect the statistics. Please refer to statistics transfer to know more about transferring statistics.

What would force you to use the ANALYZE command in any Oracle version? ANALYZE can collect statistics like CHAIN_CNT, AVG_SPACE, and EMPTY_BLOCKS, which DBMS_STATS does not collect. We might need to use ANALYZE if we want to see chained rows, average free space, or empty blocks.
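For example, those columns and the chained-row list can be populated like this (a sketch; the chained_rows table is created beforehand by the utlchain.sql script shipped with the database):

```sql
SQL> analyze table emp compute statistics;

SQL> select chain_cnt, avg_space, empty_blocks
  2  from user_tables where table_name = 'EMP';

SQL> analyze table emp list chained rows into chained_rows;
```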

Several parameters exist for collecting statistics at the table, schema, database, and system levels. I do not want to explain all the parameters, which are already covered in the Oracle documentation, but I would like to explain some of them.

estimate_percent: The percentage of rows or blocks to sample. The valid range is 0.000001 to 100. When we pass NULL for this parameter, the statistics are computed; compute is the same as a 100% sample. For instance, if we pass 20, it samples roughly 20% of the rows or 20% of the blocks, depending on the BLOCK_SAMPLE parameter. This parameter is used when analyzing at the table, index, schema, and database levels.

block_sample: This determines whether to use random block sampling instead of random row sampling. Block sampling can be slightly less accurate in the case where rows have roughly the same lifecycle and, thus, values are spread non-uniformly throughout the table. If you want to dive deeper into this, David Aldridge has a nice article on block sampling. This parameter is used when analyzing at the table, schema, and database levels.

method_opt: This parameter controls histograms. It determines which columns should have histograms and how many buckets are created for the table's columns. This parameter is used when analyzing at the table, schema, and database levels. Please refer to histogram to know more about histograms in Oracle.

granularity: This parameter is useful when you want to gather statistics on a specific partition or subpartition of a table. The valid values are ALL, AUTO, GLOBAL, GLOBAL AND PARTITION, PARTITION, and SUBPARTITION. This parameter is used when analyzing at the table, index, schema, and database levels, and only if the table or index is partitioned.

no_invalidate: Does not invalidate the dependent cursors or currently parsed SQL statements if set to TRUE. The procedure invalidates the dependent cursors immediately if set to FALSE. Use DBMS_STATS.AUTO_INVALIDATE to have Oracle decide when to invalidate dependent cursors. This parameter is used when analyzing or deleting statistics at the table, index, schema, and database levels.

degree: The degree of parallelism. It has three valid values.

NULL : Oracle takes the value specified in the DEGREE clause of the CREATE TABLE or ALTER TABLE statement.

DBMS_STATS.DEFAULT_DEGREE : Oracle takes the value based on the number of CPUs and initialization parameters.

DBMS_STATS.AUTO_DEGREE : Oracle determines the value automatically. It is either 1 or the default degree, according to the size of the object.

options: This parameter is used only when analyzing at the schema or DB level. The valid values for this parameter are GATHER, GATHER AUTO, GATHER STALE, GATHER EMPTY, LIST AUTO, LIST STALE, and LIST EMPTY. Let me explain these values briefly, since they are important for gathering statistics at the schema level.

GATHER - Gathers statistics for all the objects in the schema or database.

GATHER AUTO - Gathers statistics when the statistics are stale or missing. It does both GATHER STALE and GATHER EMPTY.

GATHER STALE - Gathers statistics only when they are stale; does not gather for objects with no statistics.

GATHER EMPTY - Gathers statistics only for objects that have no statistics.

LIST AUTO: Returns a list of objects to be processed with GATHER AUTO.

LIST STALE: Returns a list of stale objects, as determined by looking at user_tab_modifications.

LIST EMPTY: Returns list of objects which currently have no statistics.
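The LIST values do not gather anything; they return the object list through the objlist OUT parameter. A sketch (ObjectTab is the collection type declared in the DBMS_STATS package):

```sql
SQL> declare
  2    l_list dbms_stats.objecttab;
  3  begin
  4    dbms_stats.gather_schema_stats(ownname => 'SALES',
  5                                   options => 'LIST STALE',
  6                                   objlist => l_list);
  7    for i in 1 .. l_list.count loop
  8      dbms_output.put_line(l_list(i).objname);
  9    end loop;
 10  end;
 11  /
```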

gathering_mode : This parameter is used only for gathering system statistics. The valid modes are NOWORKLOAD, INTERVAL, START, and STOP. The default is NOWORKLOAD. START and STOP are used to start and stop gathering the system statistics.

Example for collecting statistics on table:

DBMS_STATS.GATHER_TABLE_STATS(
OWNNAME => 'TANDEB',
TABNAME => 'CUSTOMER',
PARTNAME => 'PART092009',
GRANULARITY => 'PARTITION',
ESTIMATE_PERCENT => 10,
METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',
CASCADE => TRUE,
NO_INVALIDATE => TRUE);

Example for collecting statistics on Schema:

DBMS_STATS.GATHER_SCHEMA_STATS(
OWNNAME => 'SCOMPPRD',
ESTIMATE_PERCENT => 10,
METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',
OPTIONS => 'GATHER',
CASCADE => TRUE,
NO_INVALIDATE => TRUE);


Example for collecting system statistics:

DBMS_STATS.GATHER_SYSTEM_STATS(
GATHERING_MODE => 'INTERVAL',
INTERVAL => 10);

Example for collecting database statistics:

DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT => 10,
METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',
CASCADE => TRUE,
NO_INVALIDATE => TRUE,
GATHER_SYS => FALSE);


New feature in Oracle9i:

1. Introduced the ability to gather system statistics, such as I/O and CPU utilization.

2. It can direct the database to select the appropriate sample size to generate accurate statistics. A new value for the ESTIMATE_PERCENT parameter, DBMS_STATS.AUTO_SAMPLE_SIZE, lets Oracle decide the sample size necessary to ensure the generation of accurate statistics.

3. Oracle9i introduced new values for the size clause in the METHOD_OPT parameter that automate the decisions about which columns need histograms, while letting administrators control the factors affecting those decisions. Besides specifying a numeric value for the size clause, administrators have the new options AUTO, SKEWONLY, and REPEAT.

4. Oracle9i introduced a feature to enable or disable table monitoring at the schema or DB level in one command.

DBMS_STATS.alter_schema_tab_monitoring('MYSCHEMA', TRUE);
DBMS_STATS.alter_schema_tab_monitoring('MYSCHEMA', FALSE);

DBMS_STATS.alter_database_tab_monitoring(TRUE);
DBMS_STATS.alter_database_tab_monitoring(FALSE);

New feature in Oracle10g:

1. Oracle10g enables table monitoring automatically. Table monitoring is required to collect stale statistics; we no longer need to enable monitoring explicitly. This feature is disabled when STATISTICS_LEVEL is BASIC and enabled when STATISTICS_LEVEL is TYPICAL. The ALTER TABLE [NO]MONITORING clauses, as well as the alter_schema_tab_monitoring and alter_database_tab_monitoring procedures of the dbms_stats package, are obsolete in Oracle10g. They still run without any error, but they have no effect.

2. Oracle10g introduced two new values for the granularity parameter: AUTO and GLOBAL AND PARTITION. This parameter applies to analyzing partitioned tables.

AUTO : Oracle collects statistics at the global, partition, and subpartition levels only if the subpartitioning method is LIST. If the subpartitioning method is not LIST, it collects only global and partition-level statistics.

GLOBAL AND PARTITION : Oracle gathers the global and partition-level statistics. No subpartition-level statistics are gathered.

3. Oracle10g introduced the new value DBMS_STATS.AUTO_DEGREE for the degree parameter. When you specify AUTO_DEGREE, Oracle determines the degree of parallelism automatically. It is either 1 (serial execution) or DEFAULT_DEGREE (the system default value based on the number of CPUs and initialization parameters), according to the size of the object.

4. Oracle10g has the ability to restore previous statistics. Oracle saves the last 31 days of statistics by default. We can recover a previous day's statistics in case the optimizer behaves differently with the current statistics. Please refer to my other post, restoring statistics

5. We can lock table statistics. This is helpful if you want to avoid gathering statistics during the maintenance window. Please refer to my other post, Locking statistics

6. Oracle10g has an automatic statistics gathering feature. Oracle gathers statistics for the entire database every day during the maintenance window. Please refer to my other post, automatic statistics gathering

7. Statistics are collected automatically when we create an index. In Oracle9i, we needed to use the COMPUTE STATISTICS clause to collect statistics while creating an index. Please refer to my other post, compute index statistics

What is the impact of analyzing tables during peak hours?

1. Oracle consumes more resources when we gather statistics. This slows down the overall performance of the server.

2. When statistics are updated for a database object, Oracle invalidates any currently parsed SQL statements that access the object. The next time such a statement executes, it is re-parsed and the optimizer automatically chooses a new execution plan based on the new statistics. This can degrade server performance during peak hours, but we can control it with the NO_INVALIDATE parameter. This parameter has three values (TRUE, FALSE, DBMS_STATS.AUTO_INVALIDATE). TRUE does not invalidate already-parsed SQL statements; FALSE invalidates them immediately; AUTO_INVALIDATE lets Oracle decide when to invalidate them.

Here is the oracle help to know more about dbms_stats procedure. Oracle Help