Monday, August 25, 2008

Expdp / Impdp

Data Pump is a new feature in Oracle 10g that provides fast parallel data loading and unloading. With direct path and parallel execution, Data Pump is several times faster than the traditional exp/imp. Traditional exp/imp runs on the client side, whereas expdp/impdp runs on the server side, so we have much more control over Data Pump jobs than over traditional exp/imp. Compared to exp/imp, Data Pump startup time is longer, because it has to set up the jobs, queues, and master table. Also, at the end of an export operation the master table data is written to the dump file set, and at the beginning of an import job the master table is located and loaded into the schema of the user.

The following processes are involved in a Data Pump operation:

Client Process : This process is initiated by the client utility and makes calls to the Data Pump API. Once the Data Pump job is initiated, this process is not necessary for the progress of the job.

Shadow Process : When a client logs into the database, a foreground process is created. It services the client's Data Pump API requests, creates the master table, and creates the Advanced Queuing queues used for communication. Once the client process ends, the shadow process also goes away.

Master Control Process : The MCP controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the job into various metadata and data load or unload tasks and hands them over to the worker processes.

Worker Process : The MCP creates worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP.

Advantages of Data Pump

1. We can perform exports in parallel. Data Pump can also write to multiple files on different disks. (Specify PARALLEL=2 and two directory names with the file specification DUMPFILE=ddir1:file1.dmp,ddir2:file2.dmp)

2. Ability to attach to and detach from a job, and to monitor the job's progress remotely.

3. More options to filter metadata objects, e.g. EXCLUDE and INCLUDE.

4. The ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the job.

5. Data can be exported from a remote database by using a database link.

6. An explicit DB version can be specified, so only supported object types are exported.

7. During impdp, we can change the target data file names, schema, and tablespace, e.g. REMAP_SCHEMA, REMAP_DATAFILE, REMAP_TABLESPACE.

8. Option to filter data rows during impdp. In traditional exp/imp, we have this filter option only in exp; here we have the filter option in both impdp and expdp.

9. Data can be imported from one DB to another without writing to a dump file, using the NETWORK_LINK parameter.

10. Data access methods are decided automatically. In traditional exp/imp, we specify a value for the DIRECT parameter. Here, Data Pump decides; where direct path cannot be used, conventional path is used.

11. Job status can be queried directly from the data dictionary (for example, dba_datapump_jobs, dba_datapump_sessions, etc.)
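As a quick sketch of the last point, job state can be checked straight from the dictionary (the queries below assume the 10g column names of these views):

```sql
-- Sketch: check Data Pump job state from the data dictionary.
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;

-- Sessions currently attached to Data Pump jobs
SELECT owner_name, job_name, saddr
FROM   dba_datapump_sessions;
```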

Exp & Expdp common parameters: The following parameters exist in both the traditional exp utility and expdp.

FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
FULL
HELP
PARFILE
QUERY
TABLES
TABLESPACES
TRANSPORT_TABLESPACES (the exp value is Y/N; the expdp value is the name of the tablespace)

Comparing exp & expdp parameters: The following are equivalent parameters between exp and expdp (exp parameter => corresponding expdp parameter).

FEEDBACK => STATUS
FILE => DUMPFILE
LOG => LOGFILE
OWNER => SCHEMAS
TTS_FULL_CHECK => TRANSPORT_FULL_CHECK

New parameters in the expdp utility

ATTACH Attaches the client session to an existing data pump job

CONTENT Specifies what to export (ALL, DATA_ONLY, METADATA_ONLY)

DIRECTORY Location to write the dump file and log file.

ESTIMATE Specifies the method (BLOCKS or STATISTICS) used to estimate how much disk space each table in the export job will consume.

ESTIMATE_ONLY Estimates the space requirements, but does not perform the export

EXCLUDE List of objects to be excluded

INCLUDE List of objects to be included

JOB_NAME Name of the export job

KEEP_MASTER Specify Y to keep the master table after the export instead of dropping it

NETWORK_LINK Specify a dblink to export from a remote database

NOLOGFILE Specify Y if you do not want to create a log file

PARALLEL Specify the maximum number of threads for the export job

VERSION DB objects that are incompatible with the specified version will not be exported.

ENCRYPTION_PASSWORD If a table column is encrypted and no password is specified, the column data is written as clear text in the dump file set. Any string can be supplied as the password for this parameter.

COMPRESSION Specifies whether to compress metadata before writing it to the dump file set. The valid values are METADATA_ONLY (the default) and NONE; use NONE to disable compression during the expdp.

SAMPLE - Allows you to specify a percentage of data to be sampled and unloaded from the source database. The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
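As a sketch, several of these new parameters can be combined in a single parfile (the file and job names here are illustrative, not taken from the scenarios below):

```
userid=system/password@orcl
directory=dumplocation
dumpfile=scott_meta_%U.dmp
logfile=scott_meta.log
schemas=scott
content=metadata_only
compression=metadata_only
parallel=2
job_name=scott_meta_job
```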

Imp & Impdp common parameters: The following parameters exist in both the traditional imp utility and impdp.

FULL
HELP
PARFILE
QUERY
SKIP_UNUSABLE_INDEXES
TABLES
TABLESPACES

Comparing imp & impdp parameters: The following are equivalent parameters between imp and impdp (imp parameter => corresponding impdp parameter).

DATAFILES => TRANSPORT_DATAFILES
DESTROY => REUSE_DATAFILES
FEEDBACK => STATUS
FILE => DUMPFILE
FROMUSER => SCHEMAS, REMAP_SCHEMA
IGNORE => TABLE_EXISTS_ACTION (SKIP, APPEND, TRUNCATE, REPLACE)
INDEXFILE, SHOW => SQLFILE
LOG => LOGFILE
TOUSER => REMAP_SCHEMA

New parameters in the impdp utility

FLASHBACK_SCN Performs an import operation that is consistent with the SCN specified from the source database. Valid only when the NETWORK_LINK parameter is used.

FLASHBACK_TIME Similar to FLASHBACK_SCN, but Oracle finds the SCN closest to the time specified.

NETWORK_LINK Performs the import directly from a source database using the database link name specified in the parameter. The dump file is not created on the server when we use this parameter. To get a consistent import from the source database, we can use the FLASHBACK_SCN or FLASHBACK_TIME parameters; these two parameters are only valid with NETWORK_LINK.

REMAP_DATAFILE Changes the name of the source DB data file to a different name in the target.

REMAP_SCHEMA Loads objects into a different target schema.

REMAP_TABLESPACE Changes the name of the source tablespace to a different name in the target.

TRANSFORM We can specify that the storage clause should not be generated in the DDL for import. This is useful if the storage characteristics of the source and target database are different. The valid values are SEGMENT_ATTRIBUTES, STORAGE. STORAGE removes the storage clause from the CREATE statement DDL, whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespace, logging, and storage attributes.

TRANSFORM = name:boolean_value[:object_type], where boolean_value is Y or N.

For instance, TRANSFORM=storage:N:table

ENCRYPTION_PASSWORD It is required on an import operation if an encryption password was specified on the export operation.

CONTENT, INCLUDE, and EXCLUDE are the same as in the expdp utility.
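For example, FLASHBACK_TIME can be combined with NETWORK_LINK for a consistent network-mode import. A sketch (the dblink name orcl and the timestamp are illustrative):

```
userid=system/password@ordb
network_link=orcl
schemas=scott
directory=dumplocation
logfile=net_consistent_imp.log
flashback_time="to_timestamp('25-08-2008 09:00:00','dd-mm-yyyy hh24:mi:ss')"
```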

Prerequisite for expdp/impdp:

Set up the dump location in the database.

system@orcl> create directory dumplocation
2 as 'c:/dumplocation';

Directory created.

system@orcl> grant read,write on directory dumplocation to scott;

Grant succeeded.

system@orcl>

Let us experiment with the expdp & impdp utilities in different scenarios. We have two databases, orcl and ordb. All the below scenarios were tested on Oracle 10g R2.


Scenario 1 Export the whole orcl database.

Export Parfile content:

userid=system/password@orcl
dumpfile=expfulldp.dmp
logfile=expfulldp.log
full=y
directory=dumplocation

Scenario 2 Export the scott schema from orcl and import it into the ordb database. During the import, exclude some objects (sequence, view, package, cluster, table). Load the objects that came from the RES tablespace into the USERS tablespace in the target database.

Export Parfile content:

userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
schemas=scott

Import parfile content:

userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('LOAD_EXT')"

Scenario 3 Export the part_emp table from the scott schema at the orcl instance and import it into the ordb instance.

Expdp parfile content:

userid=system/password@orcl
logfile=tableexpdb.log
directory=dumplocation
tables=scott.part_emp
dumpfile=tableexpdb.dmp

Impdp parfile content:

userid=system/password@ordb
dumpfile=tableexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=REPLACE

Scenario 4 Export only specific partitions of the part_emp table from the scott schema at orcl and import them into the ordb database.

Expdp parfile content:

userid=system/password@orcl
dumpfile=partexpdb.dmp
logfile=partexpdb.log
directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20

Impdp parfile content: If we want to overwrite the exported data in the target database, we first need to delete the part_emp rows for deptno in (10,20).

scott@ordb> delete part_emp where deptno=10;

786432 rows deleted.

scott@ordb> delete part_emp where deptno=20;

1310720 rows deleted.

scott@ordb> commit;

Commit complete.

userid=system/password@ordb
dumpfile=partexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=append

Scenario 5 Export only the tables in the scott schema at orcl and import them into the ordb database.

Expdp parfile content:

userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
include=table
schemas=scott

Impdp parfile content:

userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace

Scenario 6 Export only the rows belonging to departments 10 and 20 in the emp and dept tables from the orcl database. Import the dump file into the ordb database. While importing, load only deptno 10 into the target database.

Expdp parfile content:

userid=system/password@orcl
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
content=data_only
schemas=scott
include=table:"in('EMP','DEPT')"
query="where deptno in(10,20)"

Impdp parfile content:

userid=system/password@ordb
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=scott
query="where deptno = 10"
table_exists_action=APPEND

Scenario 7 Export the scott schema from the orcl database and split the dump file into 50MB pieces. Import the dump files into the ordb database.

Expdp parfile content:

userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
filesize=50M
schemas=scott
include=table

As per the above expdp parfile, schemaexp_split_01.dmp is created first. Once that file reaches 50MB, the next file, schemaexp_split_02.dmp, is created. If, say, the total dump size is 500MB, ten dump files are created, each 50MB in size.

Impdp parfile content:

userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant

Scenario 8 Export the scott schema from the orcl database and split the dump file into four files. Import the dump files into the ordb database.

Expdp parfile content:

userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
parallel=4
schemas=scott
include=table

As per the above parfile content, four files are created initially: schemaexp_split_01.dmp, schemaexp_split_02.dmp, schemaexp_split_03.dmp, schemaexp_split_04.dmp. Notice that every occurrence of the substitution variable is incremented each time. Since there is no FILESIZE parameter, no more files will be created.

Impdp parfile content:

userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant

Scenario 9 Export the scott schema from the orcl database and split the dump file into three files. The dump files will be stored in three different locations. This method is especially useful if you do not have enough space in one file system to hold the complete expdp output. After the export succeeds, import the dump files into the ordb database.

Expdp parfile content:

userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M
schemas=scott
include=table

As per the above expdp parfile content, the dump files are placed in three different locations. If, say, the entire expdp dump size is 1500MB, then 30 dump files are created (each 50MB in size), with 10 files in each file system.

Impdp parfile content:

userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace

Scenario 10 We are on the orcl database server. Export the ordb data and place the dump file on the orcl database server. After the expdp succeeds, import the dump file into the orcl database. When we use network_link, the expdp user and the source database schema user should have identical privileges. If the privileges are not identical, we get the below error.

C:\impexpdp>expdp parfile=networkexp1.par

Export: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 12:06:40

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user

Expdp parfile content:

userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
network_link=ordb

As per the above parfile, the expdp utility exports the ordb database data and places the dump file on the orcl server, since we are running expdp on the orcl server. This is basically exporting data from a remote database.

Impdp parfile content:

userid=system/password@orcl
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace

Scenario 11 Export the scott schema in orcl and import it into ordb, but do not write a dump file on the server. The expdp and impdp should complete without writing a dump file on the server. Here we do not need to export the data at all; we can import it without creating a dump file.

Here we run impdp on the ordb server; it contacts the orcl DB, extracts the data, and imports it into the ordb database. If we do not have enough space in the file system to hold the dump file, we can use this option to load the data.

Impdp parfile content:

userid=scott/tiger@ordb
network_link=orcl
logfile=networkimp2.log
directory=dumplocation
table_exists_action=replace

Scenario 12 Export the scott schema from orcl and import the dump file into the training schema in the ordb database.

Expdp parfile content:

userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table

Impdp parfile content:

userid=system/password@ordb
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
remap_schema=scott:training

Scenario 13 Export a table from the orcl database and import it into ordb. When we export the data, export only 20 percent of the table data. We use the SAMPLE parameter to accomplish this task.

The SAMPLE parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The sample_percent indicates the probability that a block of rows will be selected as part of the sample. It does not mean that the database will retrieve exactly that amount of rows from the table. The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.

If no table is specified, then the sample_percent value applies to the entire export job. The SAMPLE parameter is not valid for network exports.

Expdp parfile content:

userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
tables=scott.part_emp
SAMPLE=20

As per the above expdp parfile, only 20 percent of the data in the part_emp table is exported.

Impdp parfile content:

userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace

Managing Data Pump jobs

The Data Pump clients expdp and impdp provide an interactive command interface. Since each expdp and impdp operation has a job name, you can attach to that job from any computer and monitor it or make adjustments to it.

Here are the data pump interactive commands.

ADD_FILE Adds another file or a file set to the DUMPFILE set.

CONTINUE_CLIENT Changes mode from interactive client to logging mode

EXIT_CLIENT Leaves the client session and discontinues logging, but leaves the current job running.

KILL_JOB Detaches all currently attached client sessions and terminates the job

PARALLEL Increases or decreases the number of threads

START_JOB Starts (or resumes) a job that is not currently running. The SKIP_CURRENT option can skip the most recent failed DDL statement that caused the job to stop.

STOP_JOB Stops the current job; the job can be restarted later

STATUS Displays detailed status of the job; the refresh interval can be specified in seconds. The detailed status is displayed on the output screen but not written to the log file.

Scenario 14 Let us start a job, stop it in the middle, and then resume it. After some time, let us kill the job, and check the job status after each activity.

We can find which jobs are currently running in the database by using the below query.

SQL> select state,job_name from dba_datapump_jobs;

STATE JOB_NAME
------------------------------ ------------------------------
EXECUTING SYS_IMPORT_FULL_01

SQL>

C:\impexpdp>impdp parfile=schemaimp1.par

Import: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 14:06:51

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE

Import> stop_job
Are you sure you wish to stop this job ([yes]/no): yes

C:\impexpdp>

When we want to stop the job, we need to press Control-C to return to the Import> prompt. Once the prompt (Import>) is returned, we can stop the job as above by using the stop_job command.

After the job is stopped, here is the job status.

SQL> select state,job_name from dba_datapump_jobs;

STATE JOB_NAME
------------------------------ ------------------------------
NOT RUNNING SYS_IMPORT_FULL_01

SQL>

Now we attach to the job again..... Attaching to the job does not restart it.

C:\impexpdp>impdp system/password@ordb attach=SYS_IMPORT_FULL_01

Import: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 14:17:11

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options

Job: SYS_IMPORT_FULL_01
Owner: SYSTEM
Operation: IMPORT
Creator Privs: FALSE
GUID: 54AD9D6CF9B54FC4823B1AF09C2DC723
Start Time: Sunday, 17 May, 2009 14:17:12
Mode: FULL
Instance: ordb
Max Parallelism: 1
EXPORT Job Parameters:
CLIENT_COMMAND parfile=schemaexp1.par
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND parfile=schemaimp1.par
TABLE_EXISTS_ACTION REPLACE
State: IDLING
Bytes Processed: 1,086,333,016
Percent Done: 44
Current Parallelism: 1
Job Error Count: 0
Dump File: c:/impexpdp\networkexp1.dmp

Worker 1 Status:
State: UNDEFINED
Import>

After attaching the job, here is the job status.

SQL> select state,job_name from dba_datapump_jobs;

STATE JOB_NAME
------------------------------ ------------------------------
IDLING SYS_IMPORT_FULL_01

SQL>

Attaching to the job does not resume it. Now we resume the job.....

Import> continue_client
Job SYS_IMPORT_FULL_01 has been reopened at Sunday, 17 May, 2009 14:17
Restarting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par

SQL> select state,job_name from dba_datapump_jobs;

STATE JOB_NAME
------------------------------ ------------------------------
EXECUTING SYS_IMPORT_FULL_01

SQL>

Now we kill the same job.... Before we kill it, we need to press Control-C to return to the Import> prompt.

Import> kill_job
Are you sure you wish to stop this job ([yes]/no): yes

C:\impexpdp>

Now the job has disappeared from the database.

SQL> select state,job_name from dba_datapump_jobs;

no rows selected

SQL>

Wednesday, August 6, 2008

Local Index Versus Global Index on Partition Table

Interviewers often ask these questions on partitioned indexes...

1. What is local index?
2. What is global index?
3. When would you force to create Global index on Partition table?
4. When would you recommend to create global index versus local index?

To answer these questions....

1. What is local index?

Local partitioned indexes are easier to manage, and each partition of a local index is associated with the corresponding table partition. They also offer greater availability and are common in DSS environments. When we take any action (MERGE, SPLIT, EXCHANGE, etc.) on a table partition, it impacts only that partition; the other partitions remain available. We cannot explicitly add a local index partition for a new table partition; the local index partition is added implicitly when we add the new partition to the table. Likewise, we cannot drop the local index on a specific partition; it is dropped automatically when we drop the partition from the underlying table. Local indexes can be unique when the partition key is part of the composite index key. Unique local indexes are useful in OLTP environments. We can create bitmap indexes on partitioned tables, with the restriction that the bitmap indexes must be local to the partitioned table; they cannot be global indexes.

SQL> CREATE TABLE employees
2 (employee_id NUMBER(4) NOT NULL,
3 last_name VARCHAR2(10),
4 department_id NUMBER(2))
5 PARTITION BY RANGE (department_id)
6 (PARTITION employees_part1 VALUES LESS THAN (10) TABLESPACE ODS_STAGE_DATA,
7 PARTITION employees_part2 VALUES LESS THAN (20) TABLESPACE ODS_STAGE_DATA,
8 PARTITION employees_part3 VALUES LESS THAN (30) TABLESPACE ODS_STAGE_DATA);

Table created.

SQL> declare
2 v_no number :=1;
3 begin
4 delete employees;
5 for i in 1..10 loop
6 insert into employees values(v_no,'name...',v_no);
7 v_no := v_no+1;
8 end loop;
9 end;
10 /

PL/SQL procedure successfully completed.

SQL>
SQL> create index idx_local on employees(last_name) local;

Index created.

SQL>

2. What is global index?

Global indexes are common in OLTP environments and offer efficient access to individual records. There are two types of global index: global non-partitioned indexes and global partitioned indexes. A global non-partitioned index behaves just like an index on a non-partitioned table.

The partition key of a global partitioned index is independent of the table's partition key. The highest partition of a global index must have a partition bound all of whose values are MAXVALUE. If you want to add a new partition, you always need to split the MAX partition. If a global index partition is empty, you can explicitly drop it by issuing the ALTER INDEX DROP PARTITION statement. If a global index partition contains data, dropping the partition causes the next highest partition to be marked unusable. You cannot drop the highest partition in a global index.
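For instance, adding a new range above the existing ones is done by splitting the MAXVALUE partition. A sketch (the partition names and the split value 30 are illustrative):

```sql
-- Split the MAXVALUE partition of a global partitioned index to
-- introduce a new range partition above the existing ones.
ALTER INDEX employees_global_part_idx
  SPLIT PARTITION p5 AT (30)
  INTO (PARTITION p6, PARTITION p5);
```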

Example of Global Non-partitioned index.

SQL> CREATE INDEX employees_global_idx ON employees(employee_id);

Index created.

SQL>

Example of Global Partitioned index .

SQL> CREATE INDEX employees_global_part_idx ON employees(employee_id)
2 GLOBAL PARTITION BY RANGE(employee_id)
3 (PARTITION p1 VALUES LESS THAN(3),
4 PARTITION p2 VALUES LESS THAN(6),
5 PARTITION p3 VALUES LESS THAN(9));
PARTITION p3 VALUES LESS THAN(9))
*
ERROR at line 5:
ORA-14021: MAXVALUE must be specified for all columns

SQL> CREATE INDEX employees_global_part_idx ON employees(employee_id)
2 GLOBAL PARTITION BY RANGE(employee_id)
3 (PARTITION p1 VALUES LESS THAN(3),
4 PARTITION p2 VALUES LESS THAN(6),
5 PARTITION p3 VALUES LESS THAN(11),
6 PARTITION p4 VALUES LESS THAN(20),
7 PARTITION p5 VALUES LESS THAN(MAXVALUE));

Index created.

Now partition p4 is empty. Let us drop the empty partition and see the status.

SQL> select count(*) from employees where
2 employee_id between 12 and 20;

COUNT(*)
----------
0
SQL> ALTER index employees_global_part_idx drop partition p4;

Index altered.

SQL> SELECT partition_name,status from user_ind_partitions where
2 index_name='EMPLOYEES_GLOBAL_PART_IDX';

PARTITION_NAME STATUS
------------------------------ --------
P1 USABLE
P2 USABLE
P3 USABLE
P5 USABLE

Now we will drop partition P3 and see the status. When we drop this partition, it should invalidate the next highest partition; here, P5 is the next highest partition.

SQL> alter index employees_global_part_idx drop partition p3;

Index altered.

SQL> SELECT partition_name,status from user_ind_partitions where
2 index_name='EMPLOYEES_GLOBAL_PART_IDX';

PARTITION_NAME STATUS
------------------------------ --------
P1 USABLE
P2 USABLE
P5 UNUSABLE

SQL> alter index employees_global_part_idx rebuild;
alter index employees_global_part_idx rebuild
*
ERROR at line 1:
ORA-14086: a partitioned index may not be rebuilt as a whole

SQL> alter index employees_global_part_idx rebuild partition p5;

Index altered.

SQL> SELECT partition_name,status from user_ind_partitions where
2 index_name='EMPLOYEES_GLOBAL_PART_IDX';

PARTITION_NAME STATUS
------------------------------ --------
P1 USABLE
P2 USABLE
P5 USABLE

A partitioned table's global indexes can be maintained during partition DDL by using the UPDATE GLOBAL INDEXES clause. The index remains available online during the maintenance, and we do not need to rebuild it afterwards.

For example,

SQL> alter table employees drop partition employees_part3
2 update global indexes;

Table altered.

SQL> SELECT partition_name,status from user_ind_partitions where
2 index_name='EMPLOYEES_GLOBAL_PART_IDX';

PARTITION_NAME STATUS
------------------------------ --------
P1 USABLE
P2 USABLE
P5 USABLE

SQL>

3. When would you force to create Global index on Partition table?

When you create a primary key or a unique index on a partitioned table whose key does not include the partition key, you are forced to create a global index. Enforcing uniqueness is the most common reason for global indexes. If you try to create such a unique index as a local index, you get the below error.

ORA-14039: partitioning columns must form a subset of key columns of a UNIQUE index

So, we can create a local unique index when we add the partition key as part of the composite key in the index. Let me demonstrate this.

In the below example, the unique index creation fails since the partition key is not part of the index's composite key.

SQL> create unique index idx_emp_id on employees(employee_id) local;
create unique index idx_emp_id on employees(employee_id) local
*
ERROR at line 1:
ORA-14039: partitioning columns must form a subset of key columns of a UNIQUE index

In the below example, creating a global unique index is allowed.

SQL> create unique index idx_emp_id on employees(employee_id);

Index created.

SQL> drop index idx_emp_id;

Index dropped.

In the below case, the unique index is successfully created, since the partition key department_id is part of the composite key.

SQL> create unique index idx_emp_id on employees(employee_id,department_id) local;

Index created.

SQL>

4. When would you recommend to create global index versus local index?

We can use a global index when queries return a SMALL number of rows from a potentially LARGE number of partitions.
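For example, a point lookup on employee_id returns at most a few rows, but with only a local index it would have to probe the index in every partition of the employees table created above, since the table is partitioned by department_id. A global index on employee_id resolves it with a single probe. A sketch:

```sql
-- With only a local index on employee_id, this lookup must probe every
-- partition of employees; a global index on employee_id needs one probe.
SELECT last_name
FROM   employees
WHERE  employee_id = 7;
```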

Partitions

Partitioned tables, available since 8.0.5, have been incrementally enhanced in every version of Oracle including the current version. The below article was written against Oracle 10g.

Advantages of Partitions
1. We can divide data into smaller, more manageable pieces.
2. Recovery time can be reduced.
3. The impact is minimal when data is corrupted.
4. Logical import/export can be done at the partition level.
5. Access time is faster - partition pruning, partition-wise joins.
6. Ease of administration

Before we partition a table, we need to think about two important factors. One is what the partition key of the table will be. The other is what type of partition we need to implement.

Here are the types of partitions.

1. Range Partition (Introduced in 8.0.5)
2. Hash Partition (Introduced in 8i)
3. Range - Hash Partition (Introduced in 8i)
4. List Partition (Introduced in 9i)
5. Range - List Partition (Introduced in 9i)
6. Range - Range Partition (Introduced in 11g)
7. List - Range Partition (Introduced in 11g)
8. List - List Partition (Introduced in 11g)
9. List - Hash Partition (Introduced in 11g)

Oracle 11g also introduced REF Partitions, Interval Partitions, Virtual columns based Partitions.

Range Partitions
The data is distributed based on a range of values of the partitioning key. For example, if we choose a date column as the partitioning key, the partition “JAN-2007” will contain all the rows that have the partition key values between 01-JAN-2007 and 31-JAN-2007 (assuming the range of the partition is from first of the month to the last date in the month). Range partition is often used with dates.

Here is the sample code to range partition in a table.
create table trans (
trans_id number,
trans_dt date,
product_code number,
store_id number,
trans_amount number(12,2),
status varchar2(10))
partition by range (trans_dt) (
partition q105 values less than (to_date('04/01/2005','mm/dd/yyyy')) tablespace y05q1,
partition q205 values less than (to_date('07/01/2005','mm/dd/yyyy')) tablespace y05q2,
partition q305 values less than (to_date('10/01/2005','mm/dd/yyyy')) tablespace y05q3,
partition q405 values less than (to_date('01/01/2006','mm/dd/yyyy')) tablespace y05q4,
partition pmax values less than (maxvalue) tablespace users );
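With this table, a predicate on the partition key trans_dt lets the optimizer prune partitions. A sketch: only partition q105 needs to be scanned for the query below.

```sql
-- The range predicate on the partition key allows partition pruning:
-- only q105 (Q1 2005) is scanned, not the whole trans table.
SELECT SUM(trans_amount)
FROM   trans
WHERE  trans_dt >= to_date('01/01/2005','mm/dd/yyyy')
  AND  trans_dt <  to_date('04/01/2005','mm/dd/yyyy');
```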

Hash Partitions
Hash partitioning maps data to partitions based on a hashing algorithm that Oracle applies to the partitioning key that you identify. The hashing algorithm evenly distributes rows among partitions, giving partitions of approximately the same size. Hash partitioning is useful when there is no obvious range key, or when range partitioning would cause an uneven distribution of data. The number of partitions should ideally be a power of 2 (2, 4, 8, 16...). This is typically used where ranges aren't appropriate, e.g. customer number or product ID.

Here is the sample code for Hash Partitions.

CREATE TABLE invoices(invoice_no NUMBER NOT NULL,
invoice_date DATE NOT NULL,
comments VARCHAR2(500))
PARTITION BY HASH (invoice_no)
PARTITIONS 4
STORE IN (users, users, users, users);

or we can specify individually.

CREATE TABLE invoices(invoice_no NUMBER NOT NULL,
invoice_date DATE NOT NULL,
comments VARCHAR2(500))
PARTITION BY HASH (invoice_no)
(PARTITION invoices_q1 TABLESPACE users,
PARTITION invoices_q2 TABLESPACE users,
PARTITION invoices_q3 TABLESPACE users,
PARTITION invoices_q4 TABLESPACE users);

Range - Hash Partitions
It is a combination of range and hash partitioning. Basically, the data is first divided using range partitioning, and then each range partition is further subdivided into hash subpartitions using hash key values. All subpartitions, together, represent a logical subset of the data.

The following statement creates a range-hash partitioned table. In this example, three range partitions are created, each containing eight subpartitions. Because the subpartitions are not named, system generated names are assigned, but the STORE IN clause distributes them across the 4 specified tablespaces (ts1, ...,ts4).

CREATE TABLE equipment(
equipno NUMBER,
equipname VARCHAR(32),
price NUMBER)
PARTITION BY RANGE (equipno) SUBPARTITION BY HASH(equipname)
SUBPARTITIONS 8 STORE IN (ts1,ts2,ts3,ts4)
(PARTITION p1 VALUES LESS THAN (1000),
PARTITION p2 VALUES LESS THAN (2000),
PARTITION p3 VALUES LESS THAN (MAXVALUE));

List Partitions
The data distribution is defined by a list of values of the partitioning key. This is useful for discrete values, e.g. regions or states. List partitioning enables you to control explicitly how rows map to partitions: you specify a list of discrete values for the partitioning column in the description of each partition.

create table sales (
product_id number,
trans_amt number,
sales_dt date,
state_code varchar2(2))
partition by list (state_code)
(partition ct values ('CT'),
partition ca values ('CA'),
partition def values (default));

Range-List Partition
This is a combination of range and list partitioning: first the data is divided using range partitioning, and then each range partition is further subdivided into list subpartitions using list key values. Unlike composite range-hash partitioning, each subpartition individually represents a logical subset of the data.

Like the composite range-hash partitioning method, the composite range-list partitioning method provides for partitioning based on a two level hierarchy. The first level of partitioning is based on a range of values, as for range partitioning; the second level is based on discrete values, as for list partitioning. This form of composite partitioning is well suited for historical data, but lets you further group the rows of data based on unordered or unrelated column values.

CREATE TABLE quarterly_regional_sales
(deptno number,
item_no varchar2(20),
txn_date date,
txn_amount number,
state varchar2(2))
TABLESPACE ts4
PARTITION BY RANGE (txn_date)
SUBPARTITION BY LIST (state)
(PARTITION q1_1999 VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY'))
(SUBPARTITION q1_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q1_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q1_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q1_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q1_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q1_1999_southcentral VALUES ('OK', 'TX') ),
PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('1-JUL-1999','DD-MON-YYYY'))
(SUBPARTITION q2_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q2_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q2_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q2_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q2_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q2_1999_southcentral VALUES ('OK', 'TX')),
PARTITION q3_1999 VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY'))
(SUBPARTITION q3_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q3_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q3_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q3_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q3_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q3_1999_southcentral VALUES ('OK', 'TX')),
PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))
(SUBPARTITION q4_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q4_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q4_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q4_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q4_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q4_1999_southcentral VALUES ('OK', 'TX')));

Oracle10g new features in Partitions
1. Global indexes can be hash partitioned in Oracle 10g.
In previous releases of the Oracle Database, only range-partitioned global indexes were supported. Oracle Database 10g introduces the hash partitioning method for global indexes.
Hash-partitioned global indexes can improve performance when a small number of leaf blocks in the index have high contention in multiuser OLTP environments, i.e. when many processes access the same areas of the index.

CREATE INDEX hgidx ON tab (c1,c2,c3)
GLOBAL PARTITION BY HASH (c1,c2)
(PARTITION p1 TABLESPACE tbs_1,
PARTITION p2 TABLESPACE tbs_2,
PARTITION p3 TABLESPACE tbs_3,
PARTITION p4 TABLESPACE tbs_4);

2. Dependent objects are no longer invalidated when dropping a partition
In the previous release (Oracle9i), dependent objects became invalid when a partition was dropped. This is taken care of in Oracle 10g.

SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for 32-bit Windows: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production

SQL> CREATE TABLE partitioned_table
2 ( x INT
3 , y INT )
4 PARTITION BY LIST (y)
5 ( PARTITION p1 VALUES (10)
6 , PARTITION p2 VALUES (20)
7 , PARTITION p3 VALUES (30) );

Table created.

SQL> CREATE PROCEDURE dependant_procedure AS
2 v_cnt PLS_INTEGER;
3 BEGIN
4 SELECT COUNT(*) INTO v_cnt
5 FROM partitioned_table PARTITION (p1);
6 END dependant_procedure;
7 /

Procedure created.

SQL> CREATE VIEW dependant_view
2 AS
3 SELECT *
4 FROM partitioned_table PARTITION (p1);

View created.

SQL> SELECT substr(object_name,1,20),status
2 FROM user_objects
3 WHERE object_name LIKE 'DEPENDANT%';

SUBSTR(OBJECT_NAME,1 STATUS
-------------------- -------
DEPENDANT_PROCEDURE VALID
DEPENDANT_VIEW VALID

SQL> ALTER TABLE partitioned_table DROP PARTITION p1;

Table altered.

SQL> SELECT substr(object_name,1,20),status
2 FROM user_objects
3 WHERE object_name LIKE 'DEPENDANT%';

SUBSTR(OBJECT_NAME,1 STATUS
-------------------- -------
DEPENDANT_PROCEDURE VALID
DEPENDANT_VIEW VALID

The dependent objects remain valid when we drop the partition. Let us do the same in an Oracle9i database and check the dependent object status.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for IBM/AIX RISC System/6000: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production

SQL> CREATE TABLE partitioned_table
2 ( x INT
3 , y INT )
4 PARTITION BY LIST (y)
5 ( PARTITION p1 VALUES (10)
6 , PARTITION p2 VALUES (20)
7 , PARTITION p3 VALUES (30) );

Table created.
SQL> CREATE PROCEDURE dependant_procedure AS
2 v_cnt PLS_INTEGER;
3 BEGIN
4 SELECT COUNT(*) INTO v_cnt
5 FROM partitioned_table PARTITION (p1);
6 END dependant_procedure;
7 /

Procedure created.

SQL>
SQL> CREATE VIEW dependant_view
2 AS
3 SELECT *
4 FROM partitioned_table PARTITION (p1);

View created.

SQL> SELECT substr(status,1,20)
2 FROM user_objects
3* WHERE object_name LIKE 'DEPENDANT%'
SQL> /

SUBSTR(
-------
VALID
VALID

SQL> ALTER TABLE partitioned_table DROP PARTITION p1;

Table altered.

SQL> SELECT substr(status,1,20)
2 FROM user_objects
3 WHERE object_name LIKE 'DEPENDANT%'
4 /

SUBSTR(
-------
INVALID
INVALID

The dependent objects become INVALID in Oracle9i when we drop the partition.

3. Redefining a Partition Online
In previous releases (prior to 10g), if we wanted to move a partitioned table to a different tablespace, Oracle moved the entire table, resulting in a massive amount of undo and redo generation. In Oracle 10g R2, you can move a single partition at a time without any downtime. So, if the table has four partitions, we repeat the same steps four times. Moving partition by partition generates far less undo/redo at any one time.


In Oracle 10gR2, online redefinition can be done on a single partition. Suppose you have a table TRANS that contains a history of transactions. This table is partitioned on TRANS_DATE, with each quarter as a partition. During the normal course of business, the most recent partitions are updated frequently. After a quarter is complete, there may not be much activity on that partition, and it can be moved to a different location. However, the move itself requires a lock on the table, denying access to the partition. In Oracle 10gR2, we can move a specific partition to a different tablespace with no impact on its availability.


Steps to move a partition to a different tablespace:
1. Create the table with partition.
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> CREATE TABLE accounts
2 ( invoice_no NUMBER,
3 sale_year INT NOT NULL )
4 PARTITION BY RANGE (sale_year)
5 ( PARTITION p1 VALUES LESS THAN (1999),
6 PARTITION p2 VALUES LESS THAN (2000),
7 PARTITION p3 VALUES LESS THAN (2001),
8 PARTITION p4 VALUES LESS THAN (2002));

Table created.
SQL> create index global_idx_accounts
2 on accounts(sale_year);

Index created.

SQL> select partition_name, tablespace_name from user_tab_partitions;

PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
P1 USERS
P2 USERS
P3 USERS
P4 USERS

2. Create an interim table with the same structure as the partitioned table.
SQL> create table accounts_int
2 tablespace sysaux
3 as
4 select * from accounts
5 where 1=2
6 /

Table created.

SQL>

3. Confirm that we can redefine the partition


SQL> begin
2 dbms_redefinition.can_redef_table(
3 uname => 'SCOTT',
4 tname => 'ACCOUNTS',
5 options_flag => dbms_redefinition.cons_use_rowid,
6 part_name => 'P1');
7 end;
8 /

PL/SQL procedure successfully completed.

4. Start the redefinition
SQL> alter table accounts add primary key(invoice_no);

Table altered.

SQL> begin
2 dbms_redefinition.start_redef_table (
3 uname => 'SCOTT',
4 orig_table => 'ACCOUNTS',
5 int_table => 'ACCOUNTS_INT',
6 part_name => 'P1'
7 );
8 end;
9 /

PL/SQL procedure successfully completed.
5. Sync the redefinition
SQL> begin
2 dbms_redefinition.sync_interim_table (
3 uname => 'SCOTT',
4 orig_table => 'ACCOUNTS',
5 int_table => 'ACCOUNTS_INT',
6 part_name => 'P1');
7 end;
8 /

PL/SQL procedure successfully completed.
6. Finish the redefinition
SQL> begin
2 dbms_redefinition.finish_redef_table (
3 uname => 'SCOTT',
4 orig_table => 'ACCOUNTS',
5 int_table => 'ACCOUNTS_INT',
6 part_name => 'P1'
7 );
8 end;
9 /
PL/SQL procedure successfully completed.
7. Check that the partition has moved to the different tablespace.
SQL> select partition_name, tablespace_name
2 from user_tab_partitions
3 /

PARTITION_NAME TABLESPACE_NAME
------------------------------ -----------------------------
P1 SYSAUX
P2 USERS
P3 USERS
P4 USERS

SQL> select status from user_indexes where table_name='ACCOUNTS';

STATUS
--------
UNUSABLE
UNUSABLE

Note: If there is any global index on the table, they will be marked as UNUSABLE and must be rebuilt.


4. Drop a Table in Chunks
Prior to 10g, dropping a huge table at peak time took a lot of CPU and other resources, slowing down database performance. This is taken care of in Oracle 10g: in Oracle Database 10g Release 2, when you drop a partitioned table, partitions are dropped one by one. Because each partition is dropped individually, fewer resources are required than when the table is dropped as a whole.


To demonstrate this new behavior, you can trace the session with a 10046 trace:

alter session set events '10046 trace name context forever, level 12';

Then drop the table. If you examine the trace file, you'll see how a partitioned-table drop looks:

delete from tabpart$ where bo# = :1
delete from partobj$ where obj#=:1
delete from partcol$ where obj#=:1
delete from subpartcol$ where obj#=:1
Note that the partitions are deleted in serial fashion. This approach minimizes resource utilization during the drop process and enhances performance.

5. Partition-Change Tracking: No Need for MV Logs
In Oracle 10g, the materialized view log can be avoided when the master table is partitioned. The ability to fast-refresh an MV even when the base tables do not have MV logs is a powerful and useful feature, allowing you to do fast refreshes of MVs on partitioned tables without the added performance overhead of maintaining logs.
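As a rough sketch (the MV name is hypothetical, and refresh capability should be verified with DBMS_MVIEW.EXPLAIN_MVIEW), a fast-refreshable MV can be built on the list-partitioned sales table created earlier without any MV log, because the partition key state_code appears in the MV and partition-change tracking can then be used:

CREATE MATERIALIZED VIEW sales_by_state
REFRESH FAST ON DEMAND
AS
SELECT state_code, COUNT(*) cnt, SUM(trans_amt) total_amt
FROM sales
GROUP BY state_code;

After a partition maintenance operation on sales (for example, dropping partition ca), the MV can still be fast-refreshed using PCT rather than a complete refresh.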

6. Quick Partition Split for Partitioned, Index-Organized Tables
In Oracle 10g R2, a quick split partition is introduced for IOTs. Say we have partitions up to Dec 2008. When we enter data for Jan 2009, the rows are stored in the MAX partition. Later, when we split off a Jan 2009 partition, Oracle previously created a new partition and moved the data from the MAX partition to the Jan 2009 partition, which requires more resources. To avoid this, Oracle introduced the quick split feature for IOTs: in Oracle 10gR2, Oracle renames the MAX partition to the Jan 2009 partition and creates a new, empty MAX partition, so it avoids moving the data from one partition to another.
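A minimal sketch of the scenario (the table and partition names here are hypothetical):

CREATE TABLE orders_iot (
order_date DATE,
order_id NUMBER,
amount NUMBER,
CONSTRAINT orders_iot_pk PRIMARY KEY (order_date, order_id))
ORGANIZATION INDEX
PARTITION BY RANGE (order_date)
(PARTITION p_dec2008 VALUES LESS THAN (TO_DATE('01/01/2009','MM/DD/YYYY')),
PARTITION pmax VALUES LESS THAN (MAXVALUE));

-- January 2009 rows land in PMAX. Splitting at the February boundary
-- leaves one side of the split empty, so it qualifies as a quick split:
-- PMAX is renamed to P_JAN2009 and a new empty PMAX is created.
ALTER TABLE orders_iot
SPLIT PARTITION pmax AT (TO_DATE('02/01/2009','MM/DD/YYYY'))
INTO (PARTITION p_jan2009, PARTITION pmax);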

Managing Oracle Partitions

1. Add Partition
2. Split Partition
3. Drop Partition
4. Exchange Partition
5. Merge Partition
6. Move Partition
7. Rename Partition
8. Truncate Partition
9. Coalesce Partition

Adding Partition

This section describes how to add new partitions to a partitioned table

Adding Partition on Range Partitioned table

Use the ALTER TABLE ... ADD PARTITION statement to add a new partition to the "high" end (the point after the last existing partition). To add a partition at the beginning or in the middle of a table, use the SPLIT PARTITION clause.

SQL> CREATE TABLE employees
2 (employee_id NUMBER(4) NOT NULL,
3 last_name VARCHAR2(10),
4 department_id NUMBER(2))
5 PARTITION BY RANGE (department_id)
6 (PARTITION employees_part1 VALUES LESS THAN (10) TABLESPACE tab1,
7 PARTITION employees_part2 VALUES LESS THAN (20) TABLESPACE tab2,
8 PARTITION employees_part3 VALUES LESS THAN (30) TABLESPACE tab3);

Table created.

SQL> CREATE INDEX employees_local_idx ON employees (employee_id) LOCAL;

Index created.

SQL> CREATE INDEX employees_global_idx ON employees(last_name);

Index created.

SQL> insert into employees values(1,'Mike',10);

1 row created.

SQL> insert into employees values(2,'Scott',20);

1 row created.

SQL> insert into employees values(3,'Rob',30);

1 row created.

SQL>

SQL> alter table employees
2 add partition employees_part4 values less than (41)
3 tablespace ods_stage_data
4 /

Table altered.
SQL> insert into employees values(50,'Mike',40);

1 row created.

SQL>
After adding the new partition, what happens to the local and global indexes? Let us check.


SQL> select index_name,status from user_indexes where
2 table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX VALID
EMPLOYEES_LOCAL_IDX N/A

SQL> select index_name, partition_name, status from user_ind_partitions;

INDEX_NAME PARTITION_NAME STATUS
------------------------------ ------------------------------ --------
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART1 USABLE
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART2 USABLE
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART3 USABLE
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART4 USABLE
SQL>

Local and global indexes associated with the range-partitioned table remain usable when we add new partitions.

Adding Partition on Hash partition table

When you add a partition to a hash-partitioned table, the database populates the new partition with rows rehashed from an existing partition as determined by the hash function.

SQL> CREATE TABLE invoices
2 (invoice_no NUMBER NOT NULL,
3 invoice_date DATE NOT NULL,
4 comments VARCHAR2(500))
5 PARTITION BY HASH (invoice_no)
6 PARTITIONS 4
7 STORE IN (ods_stage_data,ods_stage_data,ods_stage_data,ods_stage_data);

Table created.

SQL>
SQL> CREATE INDEX invoices_local_idx ON invoices(invoice_no) LOCAL;

Index created.

SQL>
SQL> CREATE INDEX invoices_global_idx ON invoices(invoice_date);

Index created.

SQL>
SQL> declare
2 v_no number := 1;
3 begin
4 for i in 1..10000 loop
5 insert into invoices values(v_no,sysdate+v_no,'xyz');
6 v_no := v_no +1;
7 end loop;
8 end;
9 /

PL/SQL procedure successfully completed.

SQL>
SQL> alter table invoices add partition;

Table altered.

SQL> select index_name ,status
2 from user_indexes where table_name='INVOICES';

INDEX_NAME STATUS
------------------------------ --------
INVOICES_GLOBAL_IDX UNUSABLE
INVOICES_LOCAL_IDX N/A

SQL>
SQL> select partition_name, status from user_ind_partitions
2 where index_name='INVOICES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
SYS_P156 UNUSABLE
SYS_P152 UNUSABLE
SYS_P153 USABLE
SYS_P154 USABLE
SYS_P155 USABLE

SQL> ALTER INDEX INVOICES_GLOBAL_IDX rebuild;

Index altered.

SQL> alter table invoices modify partition SYS_P156
2 rebuild unusable local indexes;

Table altered.

SQL> alter table invoices modify partition SYS_P152
2 rebuild unusable local indexes;

Table altered.

SQL> select index_name ,status
2 from user_indexes where table_name='INVOICES';

INDEX_NAME STATUS
------------------------------ --------
INVOICES_GLOBAL_IDX VALID
INVOICES_LOCAL_IDX N/A

SQL> select partition_name, status from user_ind_partitions
2 where index_name='INVOICES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
SYS_P156 USABLE
SYS_P152 USABLE
SYS_P153 USABLE
SYS_P154 USABLE
SYS_P155 USABLE

SQL> alter table invoices
2 add partition MAX VALUES LESS THAN (MAXVALUE);
add partition MAX VALUES LESS THAN (MAXVALUE)
*
ERROR at line 2:
ORA-14261: partition bound may not be specified when adding this Hash partition

SQL> ALTER TABLE invoices
2 add partition dummypart;

Table altered.

SQL> select partition_name from user_tab_partitions
2 where table_name='INVOICES';

PARTITION_NAME
------------------------------
SYS_P156
SYS_P157
DUMMYPART
SYS_P152
SYS_P153
SYS_P154
SYS_P155

7 rows selected.

The local indexes for the new partition, and for the existing partition from which rows were redistributed, are marked UNUSABLE and must be rebuilt. All global indexes, or all partitions of partitioned global indexes, are marked UNUSABLE and must be rebuilt.

Adding Partition on List Partitioned table

We can add a new partition with a new list of values. Note that if the table already has a DEFAULT partition, new partitions cannot be added; the DEFAULT partition must be split instead.

SQL> create table sales
2 (deptno number,
3 deptname varchar2(20),
4 quarterly_sales number(10, 2),
5 state varchar2(2))
6 PARTITION BY LIST (state)
7 (PARTITION q1_northwest VALUES ('OR', 'WA'),
8 PARTITION q1_southwest VALUES ('AZ', 'UT', 'NM'),
9 PARTITION q1_northeast VALUES ('NY', 'VM', 'NJ'),
10 PARTITION q1_southeast VALUES ('FL', 'GA'),
11 PARTITION q1_northcentral VALUES ('SD', 'WI'),
12 PARTITION q1_southcentral VALUES ('OK', 'TX'))
13 /

Table created.

SQL> create index sales_local_idx on sales(deptno) local;

Index created.

SQL> create index sales_global_idx on sales(state);

Index created.

SQL> alter table sales
2 add partition MAX VALUES LESS THAN(MAXVALUE);
add partition MAX VALUES LESS THAN(MAXVALUE)
*
ERROR at line 2:
ORA-14310: VALUES LESS THAN or AT clause cannot be used with List partitioned
tables
SQL> alter table sales
2 ADD PARTITION q1_nonmainland VALUES ('HI', 'PR')
3 ;

Table altered.

SQL> select index_name ,status
2 from user_indexes where table_name='SALES';

INDEX_NAME STATUS
------------------------------ --------
SALES_GLOBAL_IDX VALID
SALES_LOCAL_IDX N/A

SQL>
SQL> select partition_name, status from user_ind_partitions
2 where index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
Q1_NORTHWEST USABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_SOUTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

7 rows selected.

Local and global indexes associated with the list-partitioned table remain usable.

We can also add new partitions and subpartitions to range-hash and range-list partitioned tables.

Splitting Partition

The SPLIT PARTITION clause of the ALTER TABLE or ALTER INDEX statement is used to redistribute the contents of a partition into two new partitions. Consider doing this when a partition becomes too large and causes backup, recovery, or maintenance operations to take a long time to complete. You can also use the SPLIT PARTITION clause to redistribute the I/O load.

Splitting partition on Range partition table

You split a range partition using the ALTER TABLE ... SPLIT PARTITION statement. You specify a value of the partitioning key column within the range of the partition at which to split the partition. The first of the resulting two new partitions includes all rows in the original partition whose partitioning key column values map lower than the specified value. The second partition contains all rows whose partitioning key column values map greater than or equal to the specified value.

Let us take the Employees table. Now we add MAX partition on this table.

SQL> alter table employees add partition MAX VALUES LESS THAN (MAXVALUE);

Table altered.

SQL> insert into employees values(100,'Hendry',50);

1 row created.

Now we will try to add a new partition. In this scenario, Oracle will not let us add one, because the MAX partition already covers the high end of the range. The only option is to split the MAX partition.


SQL> alter table employees
2 add partition employees_part5 values less than (51)
3 tablespace ods_stage_data
4 /
add partition employees_part5 values less than (51)
*
ERROR at line 2:
ORA-14074: partition bound must collate higher than that of the last partition

SQL>

Now we need to split the MAX partition.

SQL> Alter table employees
2 Split partition MAX at (51)
3 INTO
4 (PARTITION employees_part6 tablespace ods_stage_data,
5 PARTITION MAX)
6 ;

Table altered.
SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX VALID
EMPLOYEES_LOCAL_IDX N/A

SQL> select index_name, partition_name, status from user_ind_partitions;

INDEX_NAME                     PARTITION_NAME                 STATUS
------------------------------ ------------------------------ --------
EMPLOYEES_LOCAL_IDX            EMPLOYEES_PART1                USABLE
EMPLOYEES_LOCAL_IDX            EMPLOYEES_PART2                USABLE
EMPLOYEES_LOCAL_IDX            EMPLOYEES_PART3                USABLE
EMPLOYEES_LOCAL_IDX            EMPLOYEES_PART4                USABLE
EMPLOYEES_LOCAL_IDX            EMPLOYEES_PART5                USABLE
EMPLOYEES_LOCAL_IDX            MAX                            USABLE

6 rows selected.

Note that the indexes remain usable because all of the MAX partition's data belongs to the new partition employees_part6. Oracle simply renames the MAX partition to the new partition and creates a new, empty MAX partition.

SQL> insert into employees values(101,'Mani',51);

1 row created.

SQL> insert into employees values(102,'Mani',61);

1 row created.

SQL> insert into employees values(102,'Mani',71);

1 row created.

SQL> insert into employees values(102,'Mani',72);

1 row created.

SQL> Alter table employees
2 Split partition MAX at (61)
3 INTO
4 (PARTITION employees_part6,
5 PARTITION MAX)
6 ;

Table altered.

SQL>
SQL> select status from user_indexes where table_name='EMPLOYEES';

STATUS
--------
UNUSABLE
N/A

SQL> select index_name, partition_name, status from user_ind_partitions;

INDEX_NAME PARTITION_NAME STATUS
------------------------------ ------------------------------ --------
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART1 USABLE
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART2 USABLE
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART3 USABLE
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART6 UNUSABLE
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART4 USABLE
EMPLOYEES_LOCAL_IDX MAX UNUSABLE
EMPLOYEES_LOCAL_IDX EMPLOYEES_PART5 USABLE

7 rows selected.

Now the global index is UNUSABLE, and the local index is UNUSABLE for the partitions EMPLOYEES_PART6 and MAX. Because we split only the MAX partition, the local indexes of the other partitions are intact.
SQL> insert into employees values(101,'xx',60);
insert into employees values(101,'xx',60)
*
ERROR at line 1:
ORA-01502: index 'SCREPORT.EMPLOYEES_LOCAL_IDX' or partition of such index is
in unusable state

SQL> ALTER TABLE employees MODIFY PARTITION EMPLOYEES_PART6
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL>
SQL> ALTER TABLE employees MODIFY PARTITION MAX
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL> alter index employees_global_idx rebuild;

Index altered.
SQL> insert into employees values(101,'xx',60);

1 row created.

In the example below, the global index is rebuilt automatically because we use the UPDATE GLOBAL INDEXES clause. But we still need to rebuild the local index partitions manually.

SQL> Alter table employees
2 Split partition MAX at (71)
3 INTO
4 (PARTITION employees_part7,
5 PARTITION MAX)
6 UPDATE GLOBAL INDEXES
7 ;

Table altered.

SQL> select status from user_indexes where table_name='EMPLOYEES';

STATUS
--------
VALID
N/A

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='EMPLOYEES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
EMPLOYEES_PART1 USABLE
EMPLOYEES_PART2 USABLE
EMPLOYEES_PART3 USABLE
EMPLOYEES_PART6 USABLE
EMPLOYEES_PART4 USABLE
EMPLOYEES_PART7 UNUSABLE
MAX UNUSABLE
EMPLOYEES_PART5 USABLE

8 rows selected.

SQL> ALTER TABLE employees MODIFY PARTITION EMPLOYEES_PART7
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL> ALTER TABLE employees MODIFY PARTITION MAX
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

Splitting partition on Hash partition table

This option is not available for hash partitions.

Splitting partition on List partition table

You split a list partition using the ALTER TABLE ... SPLIT PARTITION statement. The SPLIT PARTITION clause lets you specify a list of literal values that define a partition into which rows with corresponding partitioning key values are inserted. The remaining rows of the original partition are inserted into a second partition whose value list contains the remaining values from the original partition.

SQL> ALTER TABLE sales
2 SPLIT PARTITION q1_northwest VALUES ('OR')
3 INTO
4 ( PARTITION q1_northwest_1
5 TABLESPACE ods_stage_data,
6 PARTITION q1_northwest_2
7 TABLESPACE ods_stage_data)
8 /

Table altered.

SQL>

Originally the q1_northwest partition had two values (OR, WA). After splitting, it becomes two partitions:

1. q1_northwest_1, which contains OR
2. q1_northwest_2, which contains WA

SQL> select status from user_indexes where table_name='SALES';

STATUS
--------
UNUSABLE
N/A

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
Q1_NORTHWEST_1 UNUSABLE
Q1_NORTHWEST_2 UNUSABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_SOUTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

8 rows selected.


SQL> alter index sales_global_idx rebuild;

Index altered.

SQL> alter table sales MODIFY PARTITION Q1_NORTHWEST_1
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL> alter table sales MODIFY PARTITION Q1_NORTHWEST_2
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
Q1_NORTHWEST_1 USABLE
Q1_NORTHWEST_2 USABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_SOUTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

8 rows selected.

SQL> select index_name from user_indexes where table_name='SALES';

INDEX_NAME
------------------------------
SALES_GLOBAL_IDX
SALES_LOCAL_IDX

SQL>

The global index and the associated local index partitions are unusable and need to be rebuilt after splitting the partition.

We can also split range-hash and range-list partitions.

Dropping partitions

You can drop partitions from range, list, or composite range-list partitioned tables. For hash-partitioned tables, or hash subpartitions of range-hash partitioned tables, you must perform a coalesce operation instead.

Dropping partition on Range partition table

Dropping a partition makes the global index UNUSABLE, but the local index is intact.

SQL> alter table employees drop partition EMPLOYEES_PART7;

Table altered.

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='EMPLOYEES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
EMPLOYEES_PART1 USABLE
EMPLOYEES_PART2 USABLE
EMPLOYEES_PART3 USABLE
EMPLOYEES_PART6 USABLE
EMPLOYEES_PART4 USABLE
MAX USABLE
EMPLOYEES_PART5 USABLE

7 rows selected.

SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX UNUSABLE
EMPLOYEES_LOCAL_IDX N/A

SQL> alter index employees_global_idx rebuild;

Index altered.

SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX VALID
EMPLOYEES_LOCAL_IDX N/A

SQL>

Dropping Partition on Hash partitioned table

We cannot drop a partition of a hash-partitioned table.
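Instead, as noted above, a hash partition is removed with a coalesce, which drops one partition and rehashes its rows into the remaining partitions. A sketch using the invoices table from earlier (the affected local index partitions and any global indexes are marked UNUSABLE and must be rebuilt afterwards):

ALTER TABLE invoices COALESCE PARTITION;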

Dropping Partition on List partitioned table

SQL> alter table sales drop partition Q1_NORTHWEST_1;

Table altered.

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
Q1_NORTHWEST_2 USABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_SOUTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

7 rows selected.

SQL>
SQL> select index_name,status from user_indexes where table_name='SALES';

INDEX_NAME STATUS
------------------------------ --------
SALES_GLOBAL_IDX UNUSABLE
SALES_LOCAL_IDX N/A

SQL>
SQL> alter index sales_global_idx rebuild;

Index altered.

SQL>
SQL> select index_name,status from user_indexes where table_name='SALES';

INDEX_NAME STATUS
------------------------------ --------
SALES_GLOBAL_IDX VALID
SALES_LOCAL_IDX N/A

Dropping a partition invalidates the global index, but the local indexes are intact.

We can also drop individual partitions of range-list and range-hash partitioned tables.

Exchange Partitions

We now switch the segments associated with the source table and the partition in the destination table using the ALTER TABLE ... EXCHANGE PARTITION statement.

Exchanging Partition on Range partition table
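The transcript below exchanges partition employees_part6 with a staging table tempemp whose creation is not shown; it must match the column structure of employees exactly, for example:

CREATE TABLE tempemp (
employee_id NUMBER(4) NOT NULL,
last_name VARCHAR2(10),
department_id NUMBER(2));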

SQL> declare
2 v_no number:=52;
3 begin
4 for i in 1..10 loop
5 insert into tempemp values(110,'yy',v_no);
6 v_no := v_no+1;
7 exit when v_no=60;
8 end loop;
9 end;
10 /

PL/SQL procedure successfully completed.

SQL> Alter table employees exchange PARTITION employees_part6 WITH TABLE tempemp;

Table altered.

SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX UNUSABLE
EMPLOYEES_LOCAL_IDX N/A

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='EMPLOYEES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
EMPLOYEES_PART1 USABLE
EMPLOYEES_PART2 USABLE
EMPLOYEES_PART3 USABLE
EMPLOYEES_PART6 UNUSABLE
EMPLOYEES_PART4 USABLE
MAX USABLE
EMPLOYEES_PART5 USABLE

7 rows selected.

SQL> alter index employees_global_idx rebuild;

Index altered.

SQL> ALTER TABLE employees MODIFY PARTITION EMPLOYEES_PART6
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX VALID
EMPLOYEES_LOCAL_IDX N/A

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='EMPLOYEES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
EMPLOYEES_PART1 USABLE
EMPLOYEES_PART2 USABLE
EMPLOYEES_PART3 USABLE
EMPLOYEES_PART6 USABLE
EMPLOYEES_PART4 USABLE
MAX USABLE
EMPLOYEES_PART5 USABLE

7 rows selected.

SQL>

Exchanging data with Hash Partition table
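The staging table exchinv used below is assumed to have been created with the same structure as invoices:

CREATE TABLE exchinv (
invoice_no NUMBER NOT NULL,
invoice_date DATE NOT NULL,
comments VARCHAR2(500));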

SQL> alter table invoices exchange partition sys_p152 with table exchinv;

Table altered.

SQL> select partition_name, status from user_ind_partitions
2 where index_name='INVOICES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
SYS_P156 USABLE
SYS_P152 UNUSABLE
SYS_P157 USABLE
DUMMYPART USABLE
SYS_P153 USABLE
SYS_P154 USABLE
SYS_P155 USABLE

7 rows selected.

SQL> select index_name ,status
2 from user_indexes where table_name='INVOICES';

INDEX_NAME STATUS
------------------------------ --------
INVOICES_GLOBAL_IDX UNUSABLE
INVOICES_LOCAL_IDX N/A

SQL> alter table invoices modify partition SYS_P152
2 rebuild unusable local indexes;

Table altered.

SQL> alter index INVOICES_GLOBAL_IDX rebuild;

Index altered.

Exchanging data with List Partition table

SQL> alter table sales exchange partition Q1_NORTHWEST_2 with table exchsales;

Table altered.

SQL> select partition_name, status from user_ind_partitions
2 where index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
Q1_NORTHWEST_2 UNUSABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_SOUTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

7 rows selected.

SQL>
SQL> select index_name ,status
2 from user_indexes where table_name='SALES';

INDEX_NAME STATUS
------------------------------ --------
SALES_GLOBAL_IDX UNUSABLE
SALES_LOCAL_IDX N/A

SQL> select partition_name, status from user_ind_partitions
2 where index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
Q1_NORTHWEST_2 UNUSABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_SOUTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

7 rows selected.

SQL> alter table sales modify partition Q1_NORTHWEST_2
2 rebuild unusable local indexes;

Table altered.

SQL>
SQL> alter index SALES_GLOBAL_IDX rebuild;

Index altered.

SQL> select partition_name, status from user_ind_partitions
2 where index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
Q1_NORTHWEST_2 USABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_SOUTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

7 rows selected.

SQL> select index_name ,status
2 from user_indexes where table_name='SALES';

INDEX_NAME STATUS
------------------------------ --------
SALES_GLOBAL_IDX VALID
SALES_LOCAL_IDX N/A

The exchange makes the global index and the associated local index partition UNUSABLE, so they must be rebuilt. We can also exchange partitions and subpartitions of range-hash and range-list composite-partitioned tables.
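If the standalone table is prepared for it, the rebuild step can be avoided: INCLUDING INDEXES swaps a matching local index on the standalone table along with the data, and UPDATE GLOBAL INDEXES maintains the global indexes during the exchange (at the cost of a slower operation). A sketch, assuming tempemp has an index that matches EMPLOYEES_LOCAL_IDX:

SQL> Alter table employees exchange PARTITION employees_part6 WITH TABLE tempemp
  2  including indexes
  3  update global indexes;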

Merge Partitions

We can use the ALTER TABLE ... MERGE PARTITION statement to merge the contents of two partitions into one partition. The two original partitions are dropped, as are any corresponding local indexes. We cannot use this statement for a hash-partitioned table or for hash subpartitions of a range-hash partitioned table.

Merging Partition on Range partition table

SQL> Alter table employees
2 Merge partitions employees_part1, employees_part6
3 Into partition employees_part16
4 /
Merge partitions employees_part1, employees_part6
*
ERROR at line 2:
ORA-14274: partitions being merged are not adjacent

SQL> Alter table employees
2 Merge partitions employees_part4, employees_part5
3 Into partition employees_part15
4 tablespace ods_stage_data;

Table altered.

SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX UNUSABLE
EMPLOYEES_LOCAL_IDX N/A

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='EMPLOYEES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
EMPLOYEES_PART1 USABLE
EMPLOYEES_PART2 USABLE
EMPLOYEES_PART3 USABLE
EMPLOYEES_PART6 USABLE
EMPLOYEES_PART15 UNUSABLE
MAX USABLE

6 rows selected.

SQL>
SQL> alter index employees_global_idx rebuild;

Index altered.

SQL>
SQL> ALTER TABLE employees MODIFY PARTITION EMPLOYEES_PART15
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX VALID
EMPLOYEES_LOCAL_IDX N/A

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='EMPLOYEES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
EMPLOYEES_PART1 USABLE
EMPLOYEES_PART2 USABLE
EMPLOYEES_PART3 USABLE
EMPLOYEES_PART6 USABLE
EMPLOYEES_PART15 USABLE
MAX USABLE

6 rows selected.

SQL>

Merging Hash Partitions - We cannot merge two hash partitions. To reduce the number of hash partitions, use COALESCE PARTITION instead.

Merging List Partitions

SQL> Alter table sales
2 Merge partitions Q1_NORTHWEST_2, Q1_SOUTHCENTRAL
3 Into partition newpart
4 /

Table altered.

SQL> select index_name,status from user_indexes where table_name='SALES';

INDEX_NAME STATUS
------------------------------ --------
SALES_GLOBAL_IDX UNUSABLE
SALES_LOCAL_IDX N/A

SQL>
SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
NEWPART UNUSABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

6 rows selected.

SQL> alter index sales_global_idx rebuild;

Index altered.

SQL> ALTER TABLE sales MODIFY PARTITION newpart
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL> select index_name,status from user_indexes where table_name='SALES';

INDEX_NAME STATUS
------------------------------ --------
SALES_GLOBAL_IDX VALID
SALES_LOCAL_IDX N/A

SQL>
SQL>
SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
NEWPART USABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

6 rows selected.

SQL>

For a range-partitioned table, we can merge only adjacent partitions, but in a list-partitioned table we can merge any two partitions. The global index and the local index partition for the merged partition must be rebuilt after the merge.

Moving Partitions

We can move partitions and subpartitions from one location to another. Typical reasons to move a partition are to:
1. Re-cluster data and reduce fragmentation
2. Move a partition to another tablespace
3. Modify create-time attributes
4. Store the data in compressed format using table compression

We can move a range, hash, or list partition using the ALTER TABLE ... MOVE PARTITION statement.

SQL> alter table sales
2 move partition newpart
3 tablespace ods_stage_data
4 update global indexes;

Table altered.

SQL> select index_name,status from user_indexes where table_name='SALES';

INDEX_NAME STATUS
------------------------------ --------
SALES_GLOBAL_IDX VALID
SALES_LOCAL_IDX N/A

SQL> ALTER TABLE sales MODIFY PARTITION newpart
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='SALES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
NEWPART USABLE
Q1_SOUTHWEST USABLE
Q1_NORTHEAST USABLE
Q1_SOUTHEAST USABLE
Q1_NORTHCENTRAL USABLE
Q1_NONMAINLAND USABLE

6 rows selected.

SQL>

Renaming Partitions

It is possible to rename partitions and subpartitions of both tables and indexes. One reason for renaming a partition might be to assign a meaningful name, as opposed to a default system name that was assigned to the partition in another maintenance operation.

SQL> alter table employees
2 rename partition employees_part1 to employees_part111;

Table altered.

SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX VALID
EMPLOYEES_LOCAL_IDX N/A

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='EMPLOYEES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
EMPLOYEES_PART1 USABLE
EMPLOYEES_PART2 USABLE
EMPLOYEES_PART3 USABLE
EMPLOYEES_PART6 USABLE
EMPLOYEES_PART15 USABLE
MAX USABLE

6 rows selected.
SQL> select partition_name from user_tab_partitions
2 where table_name='EMPLOYEES';

PARTITION_NAME
------------------------------
EMPLOYEES_PART2
EMPLOYEES_PART3
EMPLOYEES_PART111
EMPLOYEES_PART15
MAX
EMPLOYEES_PART6

6 rows selected.

SQL>

Renaming a table partition does not impact the indexes; they remain usable after the rename. Note in the output above that the local index partition is still named EMPLOYEES_PART1 - index partitions must be renamed separately, with ALTER INDEX ... RENAME PARTITION.

Truncating Partitions

We can use the ALTER TABLE ... TRUNCATE PARTITION statement to remove all rows from a table partition. Truncating a partition is similar to dropping a partition, except that the partition is emptied of its data, but not physically dropped.

You cannot truncate an index partition. However, if local indexes are defined for the table, the ALTER TABLE ... TRUNCATE PARTITION statement truncates the matching partition in each local index. Unless you specify UPDATE INDEXES (cannot be specified for index-organized tables), any global indexes are marked UNUSABLE and must be rebuilt.

SQL> alter table employees
2 truncate partition employees_part111;

Table truncated.

SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX UNUSABLE
EMPLOYEES_LOCAL_IDX N/A

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='EMPLOYEES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
EMPLOYEES_PART1 USABLE
EMPLOYEES_PART2 USABLE
EMPLOYEES_PART3 USABLE
EMPLOYEES_PART6 USABLE
EMPLOYEES_PART15 USABLE
MAX USABLE

6 rows selected.

SQL> alter index employees_global_idx rebuild;

Index altered.

SQL> select index_name,status from user_indexes where table_name='EMPLOYEES';

INDEX_NAME STATUS
------------------------------ --------
EMPLOYEES_GLOBAL_IDX VALID
EMPLOYEES_LOCAL_IDX N/A

SQL>
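As noted above, specifying UPDATE INDEXES keeps the global index usable, so the rebuild step can be skipped (the truncate itself takes longer because the index is maintained during the operation). A sketch:

SQL> alter table employees
  2  truncate partition employees_part111
  3  update indexes;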

Coalescing Partitions

Coalescing partitions is a way of reducing the number of partitions in a hash-partitioned table, or the number of subpartitions in a composite-partitioned table. When a hash partition is coalesced, its contents are redistributed into one or more remaining partitions determined by the hash function. The specific partition that is coalesced is selected by Oracle, and is dropped after its contents have been redistributed.

The ALTER TABLE ... COALESCE PARTITION statement is used to coalesce a partition in a hash-partitioned table. The following statement reduces the number of partitions in a table by one by coalescing a partition:

SQL> alter table invoices
2 COALESCE PARTITION;

Table altered.

SQL> select partition_name, status from user_ind_partitions
2 WHERE index_name='INVOICES_LOCAL_IDX';

PARTITION_NAME STATUS
------------------------------ --------
SYS_P156 USABLE
SYS_P152 USABLE
SYS_P157 USABLE
SYS_P153 USABLE
SYS_P154 UNUSABLE
SYS_P155 USABLE

6 rows selected.

SQL> select index_name,status from user_indexes where table_name='INVOICES';

INDEX_NAME STATUS
------------------------------ --------
INVOICES_GLOBAL_IDX UNUSABLE
INVOICES_LOCAL_IDX N/A

SQL> alter index INVOICES_GLOBAL_IDX rebuild;

Index altered.

SQL> alter table invoices MODIFY PARTITION SYS_P154
2 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

SQL>

Modifying List Partition

List partitioning allows you the option of adding and dropping literal values from the defining value list.

Adding values for List Partitions.

Use the MODIFY PARTITION ... ADD VALUES clause of the ALTER TABLE statement to extend the value list of an existing partition. Literal values being added must not have been included in any other partition value list. The partition value list for any corresponding local index partition is correspondingly extended, and any global index, or global or local index partitions, remain usable.

The following statement adds a new state code ('KS') to an existing partition value list.

SQL> ALTER TABLE sales
2 MODIFY PARTITION q1_southeast
3 ADD VALUES ('KS');

Table altered.

This change does not make any index UNUSABLE, so there is no need to rebuild the indexes.

Dropping values for List Partitions.

Use the MODIFY PARTITION ... DROP VALUES clause of the ALTER TABLE statement to remove literal values from the value list of an existing partition. The statement is always executed with validation, meaning that it checks to see if any rows exist in the partition that correspond to the set of values being dropped. If any such rows are found then the database returns an error message and the operation fails. When necessary, use a DELETE statement to delete corresponding rows from the table before attempting to drop values.

SQL> ALTER TABLE sales
2 MODIFY PARTITION q1_southwest
3 DROP VALUES ('AZ');
ALTER TABLE sales
*
ERROR at line 1:
ORA-14518: partition contains rows corresponding to values being dropped

SQL> delete sales where state='AZ';

1 row deleted.
SQL> ALTER TABLE sales
2 MODIFY PARTITION q1_southwest
3 DROP VALUES ('AZ');

Table altered.
SQL>

Oracle11g New features in Partitions

Interval Partitioning: Introduced in 11g, interval partitioning is an extension to range partitioning that automates the creation of equi-sized range partitions. Partitions are created as metadata and only the start partition is made persistent; additional segments are allocated as the data arrives, and the new partitions and their local indexes are created automatically.
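A minimal sketch of an interval-partitioned table (the table and column names here are hypothetical, not from the examples above). Oracle creates a new monthly partition automatically the first time a row arrives for a month beyond the start partition:

SQL> create table order_hist
  2  ( order_id   number,
  3    order_date date )
  4  partition by range (order_date)
  5  interval (numtoyminterval(1,'MONTH'))
  6  ( partition p_start values less than (to_date('01-JAN-2008','DD-MON-YYYY')) );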

REF Partitioning: This partitioning scheme was introduced on the assumption that related tables benefit from the same partitioning strategy. Through the PARTITION BY REFERENCE clause, the detail table inherits the partitioning strategy of the master table via their PK-FK relationship, without the partitioning key having to be stored in the detail table.
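A minimal sketch of reference partitioning (hypothetical tables). The foreign key column must be NOT NULL, and order_items inherits the range partitions of orders without storing order_date itself:

SQL> create table orders
  2  ( order_id   number primary key,
  3    order_date date )
  4  partition by range (order_date)
  5  ( partition p2008 values less than (to_date('01-JAN-2009','DD-MON-YYYY')) );

SQL> create table order_items
  2  ( item_id  number,
  3    order_id number not null,
  4    constraint fk_order foreign key (order_id) references orders )
  5  partition by reference (fk_order);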

Virtual Column Based Partitioning: In previous versions of Oracle, a table could be partitioned only on a column that physically existed in the table. The Oracle 11g virtual columns feature removes this restriction and allows the partitioning key to be defined by an expression over one or more columns of the table. Virtual columns are stored as metadata only.
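A minimal sketch of virtual column based partitioning (hypothetical table). The partitioning key acc_branch is never stored; it is computed from acc_no:

SQL> create table accounts
  2  ( acc_no     number,
  3    acc_name   varchar2(50),
  4    acc_branch number generated always as (trunc(acc_no/10000)) virtual )
  5  partition by list (acc_branch)
  6  ( partition p_br1 values (1),
  7    partition p_br2 values (2) );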

The following data dictionary views are used to view partition information (USER_ and ALL_ variants also exist):
1. dba_tab_partitions
2. dba_part_tables
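For example, to see the partitioning strategy and the individual partitions of a table (substitute your own owner and table name for 'SCOTT' and 'EMPLOYEES'):

SQL> select partitioning_type, partition_count
  2  from dba_part_tables
  3  where owner='SCOTT' and table_name='EMPLOYEES';

SQL> select partition_name, high_value, tablespace_name
  2  from dba_tab_partitions
  3  where table_owner='SCOTT' and table_name='EMPLOYEES';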