Thursday 21 June 2012

Linux Top Command Usage

top
The top command is probably the most useful one for an Oracle DBA managing a database on Linux. Say the system is slow and you want to find out who is gobbling up all the CPU and/or memory. To display the top processes, you use the command top.

Note that unlike other commands, top does not produce output and exit; it continuously refreshes the screen to display new information. So, if you just issue top and leave the screen up, the most current information is always shown. To stop and exit to the shell, press Control-C.

It's probably needless to say that the top utility comes in very handy for analyzing the performance of database servers. Here is a partial top output:

[top output not reproduced]
Let's analyze the output carefully. The first thing you should notice is the "idle" column under CPU states; it's 0.0%, meaning the CPU is completely occupied doing something. The question is, doing what? Move your attention to the column "system", just slightly left; it shows 5.6%, so the system itself is not doing much. Go even further left to the column marked "user", which shows 1.0%. Since user processes include Oracle, Oracle is not consuming the CPU cycles either. So, what's eating up all the CPU?

The answer lies in the same line, just to the right under the column "iowait", which indicates 91.2%. This explains it all: the CPU is waiting for IO 91.2% of the time.
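The same CPU-state line can be picked apart programmatically. Here is a minimal Python sketch; the line format below is an assumption based on the figures discussed above, and real top output varies between versions:

```python
import re

def parse_cpu_states(line):
    """Parse a top CPU-states line such as
    'CPU states: 1.0% user, 5.6% system, 0.0% nice, 91.2% iowait, 0.0% idle'
    into a dict mapping state name to percentage."""
    return {name: float(pct)
            for pct, name in re.findall(r"([\d.]+)%\s+(\w+)", line)}

states = parse_cpu_states(
    "CPU states: 1.0% user, 5.6% system, 0.0% nice, 91.2% iowait, 0.0% idle")

# A high iowait with near-zero idle is the signature of an IO-bound system.
print(states["iowait"])  # 91.2
```

A one-off check like this is handy when grepping batch-mode output (top -b -n 1) across many servers.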

So why so much IO wait? The answer lies in the display. Note the PID of the highest consuming process: 16143. You can use the following query to determine what the process is doing:

select s.sid, s.username, s.program
from v$session s, v$process p
where p.spid = 16143
and p.addr = s.paddr
/
       SID USERNAME PROGRAM
------------------- -----------------------------
       159 SYS      rman@prolin2 (TNS V1-V3)

So the rman process is responsible for the IO waits. This information helps you determine the next course of action.

RMAN Incremental Backup and Restore

Incremental backup and restore in Oracle 10g

Here I am presenting simple steps to demonstrate how to take incremental level backups and perform restore and recovery using RMAN. One can learn RMAN incremental backups easily by going through this post. I used Oracle 10g Express Edition.

The difference between differential and cumulative incremental backups will become clear by going through this simple demo.

The scenarios are kept simple.

Demonstrating DIFFERENTIAL INCREMENTAL BACKUP

1. Database is in NOARCHIVELOG mode.
2. Not using recovery catalog.
3. The RMAN configuration settings I used are as follows. This is the output of
RMAN> SHOW ALL;

using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/usr/lib/oracle/xe/backup/cf_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 400 M FORMAT '/usr/lib/oracle/xe/backup/df_%U';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/dbs/snapcf_XE.f'; # default

4. Since the database is in NOARCHIVELOG mode, it is necessary to shut it down cleanly using the immediate, normal, or transactional options.

RMAN> shutdown

5. To take a level 0 incremental backup, mount the database. This step is required even for non-incremental backups.

RMAN> startup mount

6. Issue
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'FULL_INC';
This takes a full backup of the database and also includes the control file as well as the spfile.

7. Issue
SQL> select * from v$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- ----------------
FIRST_CHANGE# FIRST_TIM
------------- ---------
1 1 0 52428800 1 YES UNUSED
0

2 1 1 52428800 1 NO CURRENT
1077758 08-MAR-11


SQL> select checkpoint_change# from v$database;

CHECKPOINT_CHANGE#
------------------
1084366


8. RMAN> ALTER DATABASE OPEN;

9. Now, perform some DML operations. For example,

SQL> CREATE TABLE T1(C1 NUMBER);
SQL> INSERT INTO T1 VALUES(10);
SQL> /
SQL> /
SQL> COMMIT;

10. Again, shut down the database to take the incremental level 1 backup.
RMAN> SHUTDOWN
RMAN> STARTUP MOUNT
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INC_1';
This backs up only the changes made since the last incremental backup of level 1 or lower.

11. Open the database again to perform some insertions.
RMAN> ALTER DATABASE OPEN;

12. SQL> INSERT INTO T1 VALUES(1);
SQL> /
SQL> /
SQL> /
SQL> COMMIT;

13. Shut down the database again to take the incremental level 2 backup.
RMAN> SHUTDOWN
RMAN> STARTUP MOUNT
RMAN> BACKUP INCREMENTAL LEVEL 2 DATABASE TAG 'INC_1';

14. Open the database again to make some insertions.
RMAN> ALTER DATABASE OPEN;
SQL> INSERT INTO T1 VALUES(9);
SQL> /
SQL> /
SQL> COMMIT;

15. SQL> select * from v$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- ----------------
FIRST_CHANGE# FIRST_TIM
------------- ---------
1 1 0 52428800 1 YES UNUSED
0

2 1 1 52428800 1 NO CURRENT
1077758 08-MAR-11


SQL> select checkpoint_change# from v$database;


CHECKPOINT_CHANGE#
------------------
1084663

Here, all the changes since the last backup are still stored in the redo logs.

16. Now, delete the control file (here simulated by moving it aside).
$ mv oradata/XE/controlfile cf.bak

17. RMAN> SHUTDOWN ABORT;

18. RMAN> STARTUP
The database will not open; it will only reach the NOMOUNT state, since the control file has been lost.

19. Now the following steps are needed to recover the database.

20. RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/usr/lib/oracle/xe/backup/cf_%F';
If the control file autobackup is located in the default location, we can skip this step.

21. RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;

22. RMAN> ALTER DATABASE MOUNT;

23. RMAN> RESTORE DATABASE;
This restores the database using the level 0 backup.

24. RMAN> RECOVER DATABASE;
Watch the output carefully; you can recognise the various backups being applied by looking for the tags you gave the backup sets. RMAN applies the incrementals one by one: first the level 1 incremental, then the level 2. It then searches for the appropriate log sequence and applies it if found. If no log switch has taken place since the last incremental backup, we get all the data back without any loss and the database is restored up to the current point in time. In our case no log switch has taken place, and all the changes since the last backup still exist in the redo logs. The NOREDO option is needed when a log switch has taken place (so the required redo is no longer available); it is not needed if no log switch has occurred.
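The application order described above can be sketched as a small Python model. The tags and timestamps here are illustrative, not taken from the demo: restore the level 0 backup, then apply the later incrementals oldest first, then the redo logs.

```python
# Sketch of the order RMAN applies backups during RECOVER DATABASE
# (illustrative model; tags and times are made up).

def recovery_plan(backups):
    """Restore from the most recent level 0 backup, then apply the
    incrementals taken after it, oldest first, then the redo logs."""
    backups = sorted(backups, key=lambda b: b["time"])
    level0 = max((b for b in backups if b["level"] == 0),
                 key=lambda b: b["time"])
    incrementals = [b for b in backups
                    if b["level"] > 0 and b["time"] > level0["time"]]
    return (["restore " + level0["tag"]]
            + ["apply " + b["tag"] for b in incrementals]
            + ["apply redo logs"])

plan = recovery_plan([
    {"tag": "FULL_INC", "level": 0, "time": 1},
    {"tag": "INC_1",    "level": 1, "time": 2},
    {"tag": "INC_2",    "level": 2, "time": 3},
])
print(plan)
# ['restore FULL_INC', 'apply INC_1', 'apply INC_2', 'apply redo logs']
```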

25. RMAN> ALTER DATABASE OPEN RESETLOGS;

26. Now, view the table T1 and you will find it restored up to the latest point in time, since all the redo has been applied.

27. Always take a full backup of the database after opening it in RESETLOGS mode.

Demonstrating CUMULATIVE INCREMENTAL BACKUP

Here we will use the same scenario as above, i.e. no recovery catalog and NOARCHIVELOG mode.
We will start here by taking incremental level 0 backup.

1. RMAN> SHUTDOWN
2. RMAN> STARTUP MOUNT
3. RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'FULL_INC';
4. RMAN> ALTER DATABASE OPEN;

5. SQL> SELECT CHECKPOINT_CHANGE# FROM V$DATABASE;
6. Make some insertion.
SQL> INSERT INTO T1 VALUES(2);
SQL> /
SQL> /
SQL> COMMIT;

7. Again shut down and mount the database to take an incremental level 1 backup, which copies only the blocks changed since the last incremental level 0 backup.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INC_1';

8. RMAN> ALTER DATABASE OPEN;

9. SQL> INSERT INTO T1 VALUES(3);
SQL> /
SQL> /
SQL> COMMIT;

10. Again shut down the database, this time to take a cumulative incremental level 1 backup. A cumulative level n backup copies all the changes made since the most recent level n-1 or lower backup; here it will back up all the changes since the incremental level 0 backup.
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE TAG 'CUM_1';

11. RMAN> ALTER DATABASE OPEN;

12. Perform some DML again.
SQL> INSERT INTO T1 VALUES(9);
SQL>/
SQL> /
SQL> COMMIT;

13. Now, we will shut down the database, this time to take a differential incremental level 1 backup.
RMAN> SHUTDOWN
RMAN> STARTUP MOUNT
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INC_1_1';

14. RMAN> ALTER DATABASE OPEN;

15. SQL> INSERT INTO T1 VALUES(0);
SQL> /
SQL> COMMIT;

16. Delete the control file.
$ mv oradata/XE/controlfile cf.bak

17. RMAN> SHUTDOWN ABORT

18. RMAN> STARTUP
The database will not open; it will only reach the NOMOUNT state.

19. RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/usr/lib/oracle/XE/backup/cf_%F';

20. RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;

21. RMAN> ALTER DATABASE MOUNT;

22. RMAN> RESTORE DATABASE;
This restores the database from the last incremental level 0 backup.

23. RMAN> RECOVER DATABASE;
Here the NOREDO option is not necessary, as it is assumed that the redo logs have not been switched and complete recovery is possible. Check this by looking at the contents of v$log and comparing against the checkpoint_change# noted in step 5. If all the changes since that value exist in the redo logs, then the NOREDO option is not needed; otherwise it is required.

During recovery, the cumulative incremental level 1 backup (tag 'CUM_1') is applied first; the earlier differential level 1 backup (tag 'INC_1') is skipped, because the cumulative backup already contains all the changes since the level 0 backup. Next, the level 1 backup taken after the cumulative one (tag 'INC_1_1') is applied, and finally the changes recorded in the online redo logs.
Thus only two incremental backups need to be applied here, whereas in the first scenario every incremental backup was applied. We can therefore say that using cumulative incremental backups in your incremental strategy provides faster recovery, since fewer incremental backups have to be applied.

24. RMAN> ALTER DATABASE OPEN RESETLOGS;

25. Take a whole-database backup after opening the database in RESETLOGS mode. It is good practice to do so.

26. Now check your table. It should contain all the changes we made, just as in the first scenario.

Wednesday 20 June 2012

Auditing In Oracle

There are two distinct types of auditing:
Standard: auditing based on statement, privilege, and object level.
Fine-grained: auditing at a finer granularity, based on content, e.g. value > 10,000.
Standard auditing has three levels:
Statement: audits all actions of a given statement type on any type of object.
Privilege: audits actions on system-level privileges.
Object: audits specific actions, like select, update, insert, or delete, on a particular object.
For all three levels of auditing you can choose to audit by access (one audit record every time you access) or by session (only one record per statement type during the session); you can also audit only successful access (WHENEVER SUCCESSFUL) or only failed access (WHENEVER NOT SUCCESSFUL).
The AUDIT_TRAIL parameter can take the following values:
DB: Enables database auditing and directs all audit records to the database audit trail, except for records that are always written to the operating system audit trail.
DB, EXTENDED: As per the DB value, but also populates the SQLBIND and SQLTEXT CLOB columns.
OS: Enables database auditing and directs all audit records to an operating system file.
NONE: Disables auditing (this value is the default).
Note: You must bounce the instance to activate the audit parameter.

Setting Audit Trail:

SQL> alter system set audit_trail = db scope=spfile;
SQL> alter system set audit_file_dest = 'c:\oracle\auditing';
SQL> alter system set audit_trail = os scope=spfile;
Note: if audit_file_dest is not set, then the default location is $ORACLE_HOME/rdbms/audit/
SQL> alter system set audit_sys_operations = true scope=spfile;
Note: this will audit all SYS operations regardless of whether audit_trail is set.
SQL> alter system set audit_trail = none scope=spfile;
Setup Audit Trail through Initialization Parameter
  1. Set "audit_trail = true" in the init.ora file.
  2. Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

BY SESSION/BY ACCESS
BY SESSION causes Oracle to write a single record for all SQL statements of the same type issued in the same session. BY ACCESS causes Oracle to write one record for each access.
WHENEVER SUCCESSFUL/WHENEVER NOT SUCCESSFUL
WHENEVER SUCCESSFUL chooses auditing only for statements that succeed. WHENEVER NOT SUCCESSFUL chooses auditing only for statements that fail or result in errors.
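The difference in audit-record volume between the two modes can be illustrated with a tiny Python sketch. This is a toy model, not how Oracle actually stores records: six statements of three types in one session.

```python
# Toy illustration of BY SESSION vs BY ACCESS record volume.

statements = ["SELECT", "SELECT", "INSERT", "SELECT", "DELETE", "INSERT"]

# BY ACCESS: one audit record per audited statement.
by_access_records = len(statements)

# BY SESSION: one audit record per distinct statement type in the session.
by_session_records = len(set(statements))

print(by_access_records, by_session_records)  # 6 3
```

BY ACCESS gives a fuller trail at the cost of a faster-growing SYS.AUD$.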
Auditing Example:
AUDIT SESSION BY scott, lori;
AUDIT DELETE ANY TABLE BY ACCESS WHENEVER NOT SUCCESSFUL;
AUDIT DELETE ANY TABLE;
AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY ACCESS WHENEVER NOT SUCCESSFUL;
AUDIT DELETE ON scott.emp;
AUDIT SELECT, INSERT, DELETE ON jward.dept BY ACCESS WHENEVER SUCCESSFUL;
AUDIT SELECT ON DEFAULT WHENEVER NOT SUCCESSFUL;
AUDIT SELECT TABLE BY appserve ON BEHALF OF jackson;
AUDIT ALTER, INDEX, RENAME ON DEFAULT BY SESSION;
AUDIT ALTER USER;
AUDIT LOCK TABLE BY ACCESS WHENEVER SUCCESSFUL;
AUDIT DELETE ON scott.emp BY ACCESS WHENEVER SUCCESSFUL;
AUDIT DELETE TABLE, UPDATE TABLE BY hr BY ACCESS;
AUDIT ALL BY hr BY ACCESS;
AUDIT EXECUTE PROCEDURE BY hr BY ACCESS;
Disabling Audit
NOAUDIT TABLE;
NOAUDIT ALL PRIVILEGES;
Turn Off All Auditing
NOAUDIT ALL;
NOAUDIT ALL PRIVILEGES;
NOAUDIT ALL ON DEFAULT;
Purge Auditing
DELETE FROM sys.aud$;
TRUNCATE TABLE sys.aud$;
DELETE FROM sys.aud$ WHERE obj$name = 'EMP';
View Audit Trail
The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:
DBA_AUDIT_EXISTS, DBA_AUDIT_OBJECT, DBA_AUDIT_SESSION, DBA_AUDIT_STATEMENT, DBA_AUDIT_TRAIL, DBA_OBJ_AUDIT_OPTS, DBA_PRIV_AUDIT_OPTS, DBA_STMT_AUDIT_OPTS

Auditing Via Trigger

It is possible to audit the system by using triggers; there are a number of system-level events that can fire them, such as database startup, logon, logoff, DDL, and server error.

Example Trigger Auditing:
create or replace trigger audit_insert
after insert on vallep.employees
for each row
begin
  insert into employees_table_audit values (user, sysdate);
end;
/

create or replace trigger logon_audit_trig
after logon on database
begin
  insert into logon_audit
  values (user, sys_context('userenv', 'sessionid'), sysdate, null,
          sys_context('userenv', 'host'));
end;
/

Fine-Grain Auditing

Fine-grained auditing (FGA) allows you to audit users accessing data that meets certain criteria. As with standard auditing, you can audit select, insert, update, and delete operations. You use the dbms_fga package to add, remove, enable, and disable FGA auditing policies, and it is virtually impossible to bypass these policies. One nice feature is that you can attach handlers (like triggers) to the policies, which can execute procedures that could email or page you. There are many options that can be applied to the dbms_fga package, so it is best to look up the Oracle documentation, but here are some simple examples.

Privilege: grant execute on dbms_fga to vallep;

Creating a policy:
dbms_fga.add_policy (
  object_schema   => 'vallep',
  object_name     => 'employees',
  policy_name     => 'compensation_aud',
  audit_columns   => 'salary,commission_pct',
  enable          => false,
  statement_types => 'select');

Creating a policy with a handler:
dbms_fga.add_policy (
  object_schema   => 'vallep',
  object_name     => 'employees',
  policy_name     => 'compensation_aud',
  audit_columns   => 'salary,commission_pct',
  enable          => false,
  statement_types => 'select',
  handler_schema  => 'vallep',
  handler_module  => 'log_id');

create procedure vallep.log_id (schema1 varchar2, table1 varchar2, policy1 varchar2) as
begin
  util_alert_pager(schema1, table1, policy1); /* send an alert via a pager */
end;

Removing auditing:
dbms_fga.drop_policy (object_schema => 'vallep',
object_name => 'employees', policy_name => 'compensation_aud');
Enabling auditing:
dbms_fga.enable_policy (object_schema => 'vallep',
object_name => 'employees',
policy_name => 'compensation_aud');

Disabling auditing:

dbms_fga.disable_policy (object_schema => 'vallep', object_name => 'employees', policy_name => 'compensation_aud');

Useful Views:

DBA_AUDIT_POLICIES, DBA_FGA_AUDIT_TRAIL, DBA_COMMON_AUDIT_TRAIL

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
Security
Only DBAs should have maintenance access to the audit trail. If SELECT access is required by an application, it can be granted to the specific users that need it, or alternatively a dedicated user may be created for this. Auditing modifications of the data in the audit trail itself can be achieved as follows:
AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;



init parameters
Until Oracle 10g, auditing was disabled by default, but could be enabled by setting the AUDIT_TRAIL static parameter in the init.ora file.


From Oracle 11g, auditing is enabled for some system level privileges.

SQL> show parameter audit
NAME                   TYPE        VALUE
---------------------- ----------- --------------
audit_file_dest        string      ?/rdbms/audit
audit_sys_operations   boolean     FALSE
audit_trail            string      NONE
transaction_auditing   boolean     TRUE

AUDIT_TRAIL can have the following values.
AUDIT_TRAIL = { NONE | FALSE | OS | DB | TRUE | DB_EXTENDED | XML | XML_EXTENDED }

The following list provides a description of each value:
  • NONE or FALSE -> Auditing is disabled. Default until Oracle 10g.
  • DB or TRUE -> Auditing is enabled, with all audit records stored in the database audit trail (AUD$). Default from Oracle 11g.
  • DB_EXTENDED –> Same as DB, but the SQL_BIND and SQL_TEXT columns are also populated.
  • XML-> Auditing is enabled, with all audit records stored as XML format OS files.
  • XML_EXTENDED –> Same as XML, but the SQL_BIND and SQL_TEXT columns are also populated.
  • OS -> Auditing is enabled, with all audit records directed to the operating system's file specified by AUDIT_FILE_DEST.
Note: In Oracle 10g Release 1, DB_EXTENDED was used in place of "DB,EXTENDED". The XML options were introduced in Oracle 10g Release 2.

The AUDIT_FILE_DEST parameter specifies the OS directory used for the audit trail when the OS, XML and XML_EXTENDED options are used. It is also the location for all mandatory auditing specified by the AUDIT_SYS_OPERATIONS parameter.

The AUDIT_SYS_OPERATIONS static parameter enables or disables the auditing of operations issued by users connecting with SYSDBA or SYSOPER privileges, including the SYS user. All audit records are written to the OS audit trail.

Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS (no need to run this, if you ran catalog.sql at the time of database creation).

Start Auditing
Syntax of audit command:
audit {statement_option|privilege_option} [by user]
      [by {session|access}] [whenever {successful|not successful}]

Only the statement_option or privilege_option part is mandatory. The other clauses are optional; using them allows auditing to be more specific.
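As a quick illustration of how these clauses compose, here is a hypothetical Python helper that assembles an AUDIT statement from the optional parts. It is purely a string-building sketch for this post, not an Oracle API:

```python
def audit_stmt(option, by_user=None, per=None, whenever=None):
    """Assemble an AUDIT statement from the optional clauses described
    above (illustrative string building only)."""
    parts = ["AUDIT", option.upper()]
    if by_user:
        parts += ["BY", by_user]          # [by user]
    if per:
        parts += ["BY", per.upper()]      # [by {session|access}]
    if whenever:
        parts += ["WHENEVER", whenever.upper()]  # [whenever ...]
    return " ".join(parts) + ";"

print(audit_stmt("table", by_user="scott"))
# AUDIT TABLE BY scott;
print(audit_stmt("delete any table", per="access", whenever="not successful"))
# AUDIT DELETE ANY TABLE BY ACCESS WHENEVER NOT SUCCESSFUL;
```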

There are three levels that can be audited:

Statement level
Auditing will be done at statement level.
Statements that can be audited are found in STMT_AUDIT_OPTION_MAP.
SQL> audit table by scott;

Audit records can be found in DBA_STMT_AUDIT_OPTS.
SQL> select * from DBA_STMT_AUDIT_OPTS;

Object level
Auditing will be done at object level.
These objects can be audited: tables, views, sequences, packages, stored procedures and stored functions.
SQL> audit insert, update, delete on scott.emp by hr;

Audit records can be found in DBA_OBJ_AUDIT_OPTS.
SQL> select * from DBA_OBJ_AUDIT_OPTS;

Privilege level
Auditing will be done at privilege level.
All system privileges that are found in SYSTEM_PRIVILEGE_MAP can be audited.
SQL> audit create tablespace, alter tablespace by all;

Specify ALL PRIVILEGES to audit all system privileges.

Audit records can be found in DBA_PRIV_AUDIT_OPTS.
SQL> select * from DBA_PRIV_AUDIT_OPTS;

Audit options
BY SESSION
Specify BY SESSION if you want Oracle to write a single record for all SQL statements of the same type issued and operations of the same type executed on the same schema objects in the same session.

Oracle database can write to an operating system audit file but cannot read it to detect whether an entry has already been written for a particular operation. Therefore, if you are using an operating system file for the audit trail (that is, the AUDIT_TRAIL initialization parameter is set to OS), then the database may write multiple records to the audit trail file even if you specify BY SESSION.
SQL> audit create, alter, drop on currency by xe by session;
SQL> audit alter materialized view by session;

BY ACCESS 
Specify BY ACCESS if you want Oracle database to write one record for each audited statement and operation.

If you specify statement options or system privileges that audit data definition language (DDL) statements, then the database automatically audits by access regardless of whether you specify the BY SESSION clause or BY ACCESS clause.

For statement options and system privileges that audit SQL statements other than DDL, you can specify either BY SESSION or BY ACCESS. BY SESSION is the default.
SQL> audit update on health by access;
SQL> audit alter sequence by tester by access;

WHENEVER [NOT] SUCCESSFUL 
Specify WHENEVER SUCCESSFUL to audit only SQL statements and operations that succeed.
Specify WHENEVER NOT SUCCESSFUL to audit only SQL statements and operations that fail or result in errors.

If you omit this clause, then Oracle Database performs the audit regardless of success or failure.
SQL> audit insert, update, delete on hr.emp by hr by session whenever not successful;
SQL> audit materialized view by pingme by access whenever successful;

Examples
Auditing for every SQL statement related to roles (create, alter, drop or set a role).
SQL> AUDIT ROLE;

Auditing for every statement that reads files from database directory
SQL> AUDIT READ ON DIRECTORY ext_dir;

Auditing for every statement that performs any operation on the sequence
SQL> AUDIT ALL ON hr.emp_seq;

View Audit Trail
The audit trail is stored in the base table SYS.AUD$.
Its contents can be viewed in the following views:
· DBA_AUDIT_TRAIL
· DBA_OBJ_AUDIT_OPTS
· DBA_PRIV_AUDIT_OPTS
· DBA_STMT_AUDIT_OPTS
· DBA_AUDIT_EXISTS
· DBA_AUDIT_OBJECT
· DBA_AUDIT_SESSION
· DBA_AUDIT_STATEMENT
· AUDIT_ACTIONS
· DBA_AUDIT_POLICIES
· DBA_AUDIT_POLICY_COLUMNS
· DBA_COMMON_AUDIT_TRAIL
· DBA_FGA_AUDIT_TRAIL (FGA_LOG$)
· DBA_REPAUDIT_ATTRIBUTE
· DBA_REPAUDIT_COLUMN

The audit trail contains lots of data, but the following are most likely to be of interest:
Username - Oracle Username.
Terminal - Machine that the user performed the action from.
Timestamp - When the action occurred.
Object Owner - The owner of the object that was interacted with.
Object Name - name of the object that was interacted with.
Action Name - The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
Fine-Grained Auditing (FGA), introduced in Oracle9i, allowed recording of row-level accesses along with SCN numbers to reconstruct the old data, but it worked for select statements only, not for DML such as update, insert, and delete.
From Oracle 10g, FGA supports DML statements in addition to selects.

Several fields have been added to both the standard and fine-grained audit trails:
  • EXTENDED_TIMESTAMP - A more precise value than the existing TIMESTAMP column.
  • PROXY_SESSIONID - Proxy session serial number when an enterprise user is logging in via the proxy method.
  • GLOBAL_UID - Global Universal Identifier for an enterprise user.
  • INSTANCE_NUMBER - The INSTANCE_NUMBER value from the actioning instance.
  • OS_PROCESS - Operating system process id for the oracle process.
  • TRANSACTIONID - Transaction identifier for the audited transaction. This column can be used to join to the XID column on the FLASHBACK_TRANSACTION_QUERY view.
  • SCN - System change number of the query. This column can be used in flashback queries.
  • SQL_BIND - The values of any bind variables if any.
  • SQL_TEXT - The SQL statement that initiated the audit action.
    The SQL_BIND and SQL_TEXT columns are only populated when the AUDIT_TRAIL=DB_EXTENDED or AUDIT_TRAIL=XML_EXTENDED initialization parameter is set.
Maintenance
The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Only users who have been granted specific access to SYS.AUD$ can select from, alter, or delete from it. This is usually just the user SYS or any user who has been granted the necessary permissions. There are two specific roles that allow access to SYS.AUD$ for select and delete: DELETE_CATALOG_ROLE and SELECT_CATALOG_ROLE. These roles should not be granted to general users.

Auditing modifications of the data in the audit trail itself can be achieved as follows
SQL> AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

To delete all audit records from the audit trail:
SQL> DELETE FROM sys.aud$;

From Oracle 11g R2, we can change the audit tables' (SYS.AUD$ and SYS.FGA_LOG$) tablespace, and we can periodically delete audit trail records using the DBMS_AUDIT_MGMT package.
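The idea behind a retention-based purge can be sketched in Python. This is a toy model: the record timestamps and the last-archive cutoff are made up, and real purging is done by DBMS_AUDIT_MGMT inside the database.

```python
from datetime import datetime, timedelta

# Toy sketch of a retention-based purge: records older than the
# last-archive timestamp are deleted, newer ones are kept.

now = datetime(2012, 6, 20)
audit_records = [
    {"action": "LOGON",  "time": now - timedelta(days=120)},
    {"action": "DELETE", "time": now - timedelta(days=40)},
    {"action": "SELECT", "time": now - timedelta(days=5)},
]

# Cutoff, analogous to setting a last-archive timestamp before the purge.
last_archive_time = now - timedelta(days=90)

kept = [r for r in audit_records if r["time"] >= last_archive_time]
purged = len(audit_records) - len(kept)
print(purged, "purged,", len(kept), "kept")  # 1 purged, 2 kept
```

The point is that a purge is driven by an explicit, auditable cutoff rather than an ad hoc DELETE.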

Disabling Auditing
The NOAUDIT statement turns off the various audit options of Oracle. Use it to reset statement, privilege and object audit options. A NOAUDIT statement that sets statement and privilege audit options can include the BY USER option to specify a list of users to limit the scope of the statement and privilege audit options.

SQL> NOAUDIT;
SQL> NOAUDIT session;
SQL> NOAUDIT session BY scott, hr;
SQL> NOAUDIT DELETE ON emp;
SQL> NOAUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE;
SQL> NOAUDIT ALL;
SQL> NOAUDIT ALL PRIVILEGES;
SQL> NOAUDIT ALL ON DEFAULT;

Oracle DBA Day-to-Day Activities

How to Increase the size of a tablespace
Answer: There are two ways to increase the size of a tablespace.

1. Add an extra data file to the tablespace.

alter tablespace users add datafile '/u01/oradata/orcl/users02.dbf' size 25m;

2. Resize a data file that currently supports the tablespace. Some versions of Oracle do not handle this well and may bring the instance down, so do not use this unless you are really sure.

alter database datafile '/u01/oradata/orcl/users01.dbf' resize 50M;

Tuesday 19 June 2012

Oracle 10 New Features

There are many features in Oracle 10g that got introduced for the first time in the RDBMS. These features not only made the DBA's life easy but also gave DBAs time to work on important tasks instead of spending time analysing the database memory, storage, file system, etc. I will go through each new feature in detail over the coming days, but to begin with I will give a brief description of all the latest features.

Oracle Database 10g has a long list of impressive architecture enhancements over its previous versions. Oracle has made this database very sophisticated and powerful, and has automated many of the traditional and mundane administrative functions. Oracle has tried to automate as much database functionality as possible and made it more scalable. Several changes have been made to improve memory structures, resource management, storage handling, SQL tuning, data movement, recovery speed, and globalization. This chapter will go over the most notable revisions to the database architecture for releases 1 and 2 of Oracle Database 10g; any significant improvements in Release 2 are specifically mentioned in this and all chapters.

SYSAUX Tablespace
The SYSAUX tablespace is an auxiliary tablespace that provides storage for all non-SYS-related tables and indexes that would previously have been placed in the SYSTEM tablespace. SYSAUX is required for all Oracle Database 10g installations and should be created along with new installs or upgrades. Many database components use SYSAUX as their default tablespace to store data. The SYSAUX tablespace also reduces the number of tablespaces created by default in the seed database and user-defined databases.

Rename Tablespace Option
Oracle Database 10g has implemented the provision to rename tablespaces. To accomplish this in older versions, you had to create a new tablespace, copy the contents from the old tablespace to the new tablespace, and drop the old tablespace. The Rename Tablespace feature enables simplified processing of tablespace migrations within a database and ease of transporting a tablespace between two databases.

Automatic Storage Management
Automatic Storage Management (ASM) is a new database feature for efficient management of storage with round-the-clock availability. It helps prevent the DBA from managing thousands of database files across multiple database instances by using disk groups. These disk groups are comprised of disks and resident files on the disks. ASM does not eliminate any existing database functionalities with file systems or raw devices, or Oracle Managed Files (OMFs). ASM also supports RAC configurations.

Temporary Tablespace Group
A temporary tablespace is used by the database for storing temporary data, which is not accounted for in recovery operations. A temporary tablespace group (TTG) is a group of temporary tablespaces. A TTG contains at least one temporary tablespace, and its name cannot be the same as that of any tablespace. Multiple temporary tablespaces can be specified at the database level and used in different sessions at the same time.

In Oracle Database 10g, each database user will have a permanent tablespace for storing permanent data and a temporary tablespace for storing temporary data. In previous versions of Oracle, if a user was created without specifying a default tablespace, SYSTEM tablespace would have become the default tablespace. For Oracle Database 10g, a default permanent tablespace can be defined to be used for all new users without a specific permanent tablespace.

BigFile Tablespace
Gone are those days of tablespaces in the range of a few megabytes. These days, database tables hold a lot of data and are always hungry for storage. To address this craving, Oracle has come up with the Bigfile tablespace concept. A BigFile tablespace (BFT) is a tablespace containing a single, very large data file. With the new addressing scheme in 10g, four billion blocks are permitted in a single data file and file sizes can be from 8TB to 128TB, depending on the block size. To differentiate a regular tablespace from a BFT, a regular tablespace is called a small file tablespace. Oracle Database 10g can be a mixture of small file and BigFile tablespaces.

BFTs are supported only for locally managed tablespaces with automatic segment space management, and for locally managed undo and temporary tablespaces. When BFTs are used with Oracle Managed Files, data files become completely transparent to the DBA and no reference is needed for them. BFT makes a tablespace logically equivalent to the data files (allowing tablespace operations) of earlier releases.

Prior to Oracle Database 10g, K and M were used to specify data file sizes. Because the newer version introduces larger file sizes up to 128TB using BFTs, the sizes can be specified using G and T for gigabytes and terabytes, respectively. For using BFT, the underlying operating system should support Large Files. In other words the file system should have Large File Support (LFS).
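The quoted size range follows directly from the addressing arithmetic: roughly four billion (2^32) blocks in a single data file, multiplied by the database block size. A quick Python check, using the standard 2KB/8KB/32KB block sizes:

```python
# Maximum bigfile tablespace size: ~2^32 blocks in one data file
# times the database block size.

def max_bigfile_bytes(block_size):
    return 2**32 * block_size

TB = 2**40
for bs in (2048, 8192, 32768):
    print(bs, "byte blocks ->", max_bigfile_bytes(bs) // TB, "TB")
# 2KB blocks give 8TB, 8KB blocks give 32TB, 32KB blocks give 128TB
```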

Cross-Platform Transportable Tablespaces
In Oracle 8i database, the transportable tablespace feature enabled a tablespace to be moved across different Oracle databases using the same operating system. Oracle Database 10g has significantly improved this functionality to permit the movement of data across different platforms. This will help transportable tablespaces to move data from one environment to another on selected heterogeneous platforms (operating systems). Using cross-platform transportable tablespaces, a database can be migrated from one platform to another by rebuilding the database catalog and transporting the user tablespaces. By default, the converted files are placed in the flash recovery area (also new to Oracle Database 10g), which is discussed later in this chapter. A list of fully supported platforms can be found in v$transportable_platform.

Performance Management Using AWR
Automatic Workload Repository (AWR) is the most important of the new Oracle Database 10g manageability infrastructure components. AWR provides the background services to collect, maintain, and utilize statistics for problem detection and self-tuning. It collects system-performance data at frequent intervals (every 60 minutes by default) and stores it as historical system workload information for analysis. These metrics are held in memory for performance reasons and are regularly written from memory to disk by a new background process called Memory Monitor (MMON). The data is later used to analyze performance problems that occurred in a given time period and to do trend analysis. Oracle does all this without any DBA intervention. Automatic Database Diagnostic Monitor (ADDM), discussed in the next section, analyzes the information collected by the AWR for database-performance problems.
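To see what AWR has collected, you can query the workload repository directly. This is a sketch using the standard DBA_HIST_SNAPSHOT view:

SQL> SELECT snap_id, begin_interval_time, end_interval_time
  2  FROM dba_hist_snapshot
  3  ORDER BY snap_id;

A report covering the interval between any two snapshots can then be generated with the $ORACLE_HOME/rdbms/admin/awrrpt.sql script.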

Automatic Database Diagnostic Monitor (ADDM)
Automatic Database Diagnostic Monitor (ADDM) is the best resource for database tuning. Introduced in 10g, ADDM provides proactive and reactive monitoring in place of the tedious manual tuning process of earlier Oracle versions. Proactive monitoring is done by ADDM and server-generated alerts; reactive monitoring is done by the DBA, who tunes manually through Oracle Enterprise Manager or SQL scripts.

Statistical information captured from the SGA is stored inside the workload repository in the form of snapshots taken every 60 minutes, and these detailed snapshots (similar to STATSPACK snapshots) are then written to disk. The MMON process runs ADDM automatically on every database instance after each snapshot so that problems are found proactively.

DROP DATABASE Command
Oracle Database 10g has introduced a means to drop the entire database with a single command: DROP DATABASE. The DROP DATABASE command deletes all data files, online redo log files, control files, and the server parameter file (spfile). The archived logs and backups, however, have to be deleted manually.
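As a sketch, the command requires the database to be mounted (not open) with RESTRICTED SESSION enabled:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT RESTRICT;
SQL> DROP DATABASE;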

Data Pump Utilities
Data Pump is the new high-speed infrastructure for data and metadata movement in Oracle Database 10g. The Data Pump commands are similar to the traditional export and import commands, but they are separate products. Data Pump provides a substantial performance improvement over the original export and import utilities, along with faster data load and unload capability for existing tables. Using Data Pump, platform-independent flat files can be moved between multiple servers, and the new network mode can transfer data over database links.

Export and import operations in Data Pump can detach from a long-running job and reattach to it later without affecting the job. You can also remap data during export and import: the names of data files, schemas, and tablespaces from the source can be altered to different names on the target system. Data Pump also supports fine-grained object selection using the EXCLUDE, INCLUDE, and CONTENT parameters.

Data Pump Export (expdp) is the utility for unloading data and metadata from the source database into a set of operating system files (dump file sets). Data Pump Import (impdp) loads the data and metadata stored in these export dump file sets into a target database.

In Oracle Database 10g Release 2, a default DATA_PUMP_DIR directory object and additional DBMS_DATAPUMP API calls have been added, along with provisions for compression of metadata in dump files, and the capability to control the dump file size with the FILESIZE parameter.
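A minimal expdp/impdp pair might look like this; the credentials, directory object, and file names are hypothetical:

$ expdp scott/tiger DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott_exp.log SCHEMAS=scott
$ impdp system/manager DIRECTORY=dp_dir DUMPFILE=scott.dmp REMAP_SCHEMA=scott:hr

The REMAP_SCHEMA parameter shown here is what performs the schema renaming mentioned above.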

This section has reviewed the significant new features introduced in Oracle Database 10g. Next, I will touch on the new processes introduced in Oracle Database 10g to support these features.

Wednesday 6 June 2012

Hot Backup - User Managed

1) Verify that the database is in ARCHIVELOG mode.

SQL> SELECT LOG_MODE FROM V$DATABASE;
LOG_MODE
------------
ARCHIVELOG

If you see ARCHIVELOG, you can proceed further. To take a backup while the database is in NOARCHIVELOG mode, follow my other post on this blog.
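If the database is in NOARCHIVELOG mode and you want to switch it to ARCHIVELOG mode, the usual sequence is (a sketch):

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;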

2) Determine the files that you need to back up.

Whenever you decide to back up your database, take a backup of the data files, online redo log files, control files, and spfile.

To determine which files you need to back up, issue the following query.

SQL>SELECT NAME "File Need Backup" FROM V$DATAFILE
UNION ALL
SELECT MEMBER FROM V$LOGFILE
UNION ALL
SELECT NAME FROM V$CONTROLFILE
UNION ALL
SELECT VALUE FROM V$PARAMETER WHERE NAME='spfile';


File Need Backup
--------------------------------------------------------------------------------
/oradata2/data1/dbase/system01.dbf
/oradata2/data1/dbase/undotbs01.dbf
/oradata2/data1/dbase/sysaux01.dbf
/oradata2/data1/dbase/users01.dbf
/oradata2/data.dbf
/oradata2/data1/data02.dbf
/oradata2/6.dbf
/oradata2/DBASE/datafile/o1_mf_permanen_42l31vg0_.dbf
/oradata2/data_test.dbf
/oradata2/data1/dbase/redo03.log
/oradata2/data1/dbase/redo02.log
/oradata2/data1/dbase/redo01.log
/oracle/app/oracle/product/10.2.0/db_1/dbs/cntrldupbase.dbf
/oracle/app/oracle/product/10.2.0/db_1/dbs/spfiledupbase.ora
13 rows selected.

So, after running the above query, I can say I need to back up 13 files.

3) Place the tablespaces in backup mode, except for offline and read-only tablespaces. In the case of offline and read-only tablespaces, you do not have to place the tablespace in backup mode, because the database is not permitting changes to their datafiles.

You can check the status, tablespace name, and associated data file names with the following query:

SELECT t.STATUS, t.TABLESPACE_NAME "Tablespace", f.FILE_NAME "Datafile"
FROM DBA_TABLESPACES t, DBA_DATA_FILES f
WHERE t.TABLESPACE_NAME = f.TABLESPACE_NAME
ORDER BY t.TABLESPACE_NAME;



You can easily generate a script that places the online tablespaces in backup mode with the following query.

SQL>SELECT 'ALTER TABLESPACE ' ||TABLESPACE_NAME ||' BEGIN BACKUP;' "Script" FROM DBA_TABLESPACES WHERE STATUS NOT IN ('READ ONLY','OFFLINE') AND CONTENTS <> 'TEMPORARY';

Script
-------------------------------------------------------------
ALTER TABLESPACE SYSTEM BEGIN BACKUP;
ALTER TABLESPACE UNDOTBS1 BEGIN BACKUP;
ALTER TABLESPACE SYSAUX BEGIN BACKUP;
ALTER TABLESPACE USERS BEGIN BACKUP;

4 rows selected.

(The CONTENTS <> 'TEMPORARY' condition excludes temporary tablespaces, because BEGIN BACKUP is not valid for them and their tempfiles do not need to be backed up.)

Alternatively, you can issue

SQL>ALTER DATABASE BEGIN BACKUP;


4) Copy the datafiles to the backup location.

After placing a tablespace in backup mode, take a backup/copy of its associated datafiles. Here, too, you can generate a script to copy the datafiles to another location.

An online tablespace must first be placed in backup mode. You can check whether backup mode is currently active with the following query:

SQL>SELECT t.name AS "TB_NAME", d.file# as "DF#", d.name AS "DF_NAME", b.status
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';

SQL> SELECT 'host scp '|| FILE_NAME || ' &backup_location ' "Backup Command" FROM DBA_DATA_FILES;
Enter value for backup_location: /backup

old 1: SELECT 'host scp '|| FILE_NAME || ' &backup_location ' "Backup Command" FROM DBA_DATA_FILES
new 1: SELECT 'host scp '|| FILE_NAME || ' /backup ' "Backup Command" FROM DBA_DATA_FILES

Backup Command
------------------------------------------------------------------------------------------
host scp /oradata2/data1/dbase/system01.dbf /backup
host scp /oradata2/data1/dbase/undotbs01.dbf /backup
host scp /oradata2/data1/dbase/sysaux01.dbf /backup
host scp /oradata2/data1/dbase/users01.dbf /backup
host scp /oradata2/data.dbf /backup
host scp /oradata2/data1/data02.dbf /backup
host scp /oradata2/6.dbf /backup
host scp /oradata2/DBASE/datafile/o1_mf_permanen_42l31vg0_.dbf /backup
host scp /oradata2/data_test.dbf /backup

9 rows selected.

You can also back up the network files (for example tnsnames.ora, listener.ora, and sqlnet.ora). Do a recursive search for *.ora files starting in your Oracle home directory and under it.
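A hedged example of that recursive search, assuming $ORACLE_HOME is set in the environment:

```shell
# List all Oracle configuration files (*.ora) under the Oracle home,
# e.g. tnsnames.ora, listener.ora, sqlnet.ora.
find "$ORACLE_HOME" -name '*.ora' -print
```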

To generate the copy script only for those tablespaces that are currently in backup mode, issue:

SQL>SELECT 'host scp '|| d.name ||' &backup_location' FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';


Run the script that you generated.

On Windows or other operating systems, you can use a graphical file browser or the appropriate copy command.

5) When the copy is finished, take the tablespaces out of backup mode. You can do this serially (place one tablespace in backup mode, copy its datafiles, and take it out of backup mode before moving to the next) or in parallel (place all tablespaces in backup mode, copy the datafiles of all tablespaces, and then take them all out of backup mode).

Here, too, you can generate a script:

SQL>SELECT 'ALTER TABLESPACE ' ||t.name ||' END BACKUP;' "End Backup Script"
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';

End Backup Script
----------------------------------------------------------
ALTER TABLESPACE SYSTEM END BACKUP;

If you placed the whole database in backup mode, issue

SQL>ALTER DATABASE END BACKUP;

Tuesday 5 June 2012

Setting RMAN Backup Configuration Parameters


This example changes the default backup destination from disk to tape and then back to the default.

RMAN> show default device type;
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
old RMAN configuration parameters:
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
new RMAN configuration parameters:
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE DEFAULT DEVICE TYPE CLEAR;
old RMAN configuration parameters:
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
RMAN configuration parameters are successfully reset to default value

RMAN> show default device type;
RMAN configuration parameters are:
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default

RMAN> show backup optimization;
RMAN configuration parameters are:
CONFIGURE BACKUP OPTIMIZATION OFF; # default

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
new RMAN configuration parameters:
CONFIGURE BACKUP OPTIMIZATION ON;
new RMAN configuration parameters are successfully stored

RMAN> show backup optimization;
RMAN configuration parameters are:
CONFIGURE BACKUP OPTIMIZATION ON;

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP OFF;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored

By default, the format of the autobackup file for all configured devices is the substitution variable %F. This format translates into c-IIIIIIIIII-YYYYMMDD-QQ, where:

IIIIIIIIII stands for the DBID.

YYYYMMDD is a time stamp of the day the backup is generated.

QQ is the hex sequence that starts with 00 and has a maximum of FF.
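To make the %F pattern concrete, here is a small Python sketch (my own illustration, not an Oracle utility) that builds an autobackup name from a DBID, a date, and a sequence number:

```python
import datetime

def autobackup_name(dbid, day, seq):
    # %F translates to c-IIIIIIIIII-YYYYMMDD-QQ:
    #   IIIIIIIIII = the DBID
    #   YYYYMMDD   = the day the backup is generated
    #   QQ         = a hex sequence from 00 up to FF
    return "c-%d-%s-%02x" % (dbid, day.strftime("%Y%m%d"), seq)

print(autobackup_name(1229390655, datetime.date(2012, 1, 1), 0))
# c-1229390655-20120101-00
```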

Monday 4 June 2012

Does STARTUP MOUNT Verify Datafiles?

There seems to be a misunderstanding that a MOUNT actually verifies datafiles.
A STARTUP MOUNT (or STARTUP NOMOUNT followed by ALTER DATABASE MOUNT) does *NOT* read the datafiles and/or verify them. It does read the controlfile(s).

Here's a simple test :
I have a tablespace with a datafile that is "ONLINE".
SQL> create tablespace X_TBS datafile '/tmp/X_TBS.dbf' size 50M;

Tablespace created.

SQL> create table hemant.X_TBS (col_1) tablespace X_TBS
  2  as select rownum from dual connect by level < 100;

Table created.

SQL> 

SQL> select file#, status, name 
  2  from v$datafile
  3  where name like '%X_TBS%';

     FILE# STATUS  NAME
---------- ------- ----------------------------------------
        14 ONLINE  /tmp/X_TBS.dbf

SQL> select file#, status, name
  2  from v$datafile_header
  3  where name like '%X_TBS%';

     FILE# STATUS  NAME
---------- ------- ----------------------------------------
        14 ONLINE  /tmp/X_TBS.dbf

SQL> select owner, segment_name, bytes/1024
  2  from dba_segments
  3  where tablespace_name = 'X_TBS';

OWNER
------------------------------
SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
HEMANT
X_TBS
        64


SQL> 
I now shutdown the database instance and remove the datafile :
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> !ls /tmp/X_TBS.dbf
/tmp/X_TBS.dbf

SQL> !rm /tmp/X_TBS.dbf

SQL> !ls /tmp/X_TBS.dbf
ls: /tmp/X_TBS.dbf: No such file or directory

SQL> 
Does the STARTUP MOUNT succeed ?
SQL> startup mount
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1337720 bytes
Variable Size             213911176 bytes
Database Buffers          314572800 bytes
Redo Buffers                5840896 bytes
Database mounted.
SQL>
SQL> shutdown
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1337720 bytes
Variable Size             213911176 bytes
Database Buffers          314572800 bytes
Redo Buffers                5840896 bytes
SQL> alter database mount;

Database altered.

SQL> 
Can the file still be listed in V$DATAFILE? Yes. However, V$DATAFILE_HEADER no longer shows the name! The query on V$DATAFILE_HEADER does cause Oracle to "look" for the file, but it does NOT cause a failure. It simply finds it "missing".
SQL> select file#, status, name
  2  from v$datafile
  3  where file#=14;

     FILE# STATUS  NAME
---------- ------- ----------------------------------------
        14 ONLINE  /tmp/X_TBS.dbf

SQL> select file#, status, name
  2  from v$datafile_header
  3  where file#=14;

     FILE# STATUS  NAME
---------- ------- ----------------------------------------
        14 ONLINE

SQL> 
When does Oracle attempt to access the datafile ?
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 14 - see DBWR trace file
ORA-01110: data file 14: '/tmp/X_TBS.dbf'


SQL> 
Even as the OPEN failed with an ORA-01157, the datafile is present in the controlfile :
SQL> select file#, status, name
  2  from v$datafile
  3  where file#=14;

     FILE# STATUS  NAME
---------- ------- ----------------------------------------
        14 ONLINE  /tmp/X_TBS.dbf

SQL> select file#, status, name
  2  from v$datafile_header
  3  where file#=14;

     FILE# STATUS  NAME
---------- ------- ----------------------------------------
        14 ONLINE

SQL> 
I hope that I have convinced you that a MOUNT does NOT verify the datafiles. If you are still not convinced, read the Backup and Recovery documentation on how to do a FULL DATABASE RESTORE and RECOVER -- where you restore the controlfile and mount the database before you even restore the datafiles. How would the MOUNT succeed with the controlfile alone?

Which Datafiles are Not Restored -- Using V$DATAFILE and V$DATAFILE_HEADER

Building on my previous blog post, "STARTUP MOUNT verify datafiles?", here is how I can use the V$DATAFILE view (populated from the controlfile) and the V$DATAFILE_HEADER view (populated from the datafile headers) to identify datafiles that are or are not restored. This is a case where I have to do a FULL restore.

I first restore the controlfile :
RMAN> startup nomount;

connected to target database (not started)
Oracle instance started

Total System Global Area     313860096 bytes

Fixed Size                     1336232 bytes
Variable Size                281021528 bytes
Database Buffers              25165824 bytes
Redo Buffers                   6336512 bytes

RMAN> restore controlfile from autobackup;

Starting restore at 01-JAN-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=192 device type=DISK

recovery area destination: /home/oracle/app/oracle/flash_recovery_area
database name (or database unique name) used for search: ORCL
channel ORA_DISK_1: AUTOBACKUP /home/oracle/app/oracle/flash_recovery_area/ORCL/autobackup/2012_01_01/o1_mf_n_771420416_7hznn10m_.bkp found in the recovery area
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120101
channel ORA_DISK_1: restoring control file from AUTOBACKUP /home/oracle/app/oracle/flash_recovery_area/ORCL/autobackup/2012_01_01/o1_mf_n_771420416_7hznn10m_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/home/oracle/app/oracle/oradata/orcl/control01.ctl
output file name=/home/oracle/app/oracle/flash_recovery_area/orcl/control02.ctl
Finished restore at 01-JAN-12

RMAN> 
RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> exit


Recovery Manager complete.
I then query the V$DATAFILE and V$DATAFILE_HEADER views. Remember: there are NO datafiles present on this server; I only have the controlfile. I find (see the output below) that V$DATAFILE shows all the files expected to be present, but V$DATAFILE_HEADER does not find any files.
[oracle@linux64 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jan 1 11:34:25 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set pages600       
SQL> select file#, name from v$datafile order by 1;

     FILE#
----------
NAME
--------------------------------------------------------------------------------
         1
/home/oracle/app/oracle/oradata/orcl/system01.dbf

         2
/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf

         3
/home/oracle/app/oracle/oradata/orcl/undotbs01.dbf

         4
/home/oracle/app/oracle/oradata/orcl/users01.dbf

         5
/home/oracle/app/oracle/oradata/orcl/example01.dbf

         6
/home/oracle/app/oracle/oradata/orcl/FLOW_1046101119510758.dbf

         7
/home/oracle/app/oracle/oradata/orcl/FLOW_1146416395631714.dbf

         8
/home/oracle/app/oracle/oradata/orcl/FLOW_1170420963682633.dbf

         9
/home/oracle/app/oracle/oradata/orcl/FLOW_1194425963955800.dbf

        10
/home/oracle/app/oracle/oradata/orcl/FLOW_1218408858999342.dbf

        11
/home/oracle/app/oracle/oradata/orcl/FLOW_1242310449730067.dbf

        12
/home/oracle/app/oracle/oradata/orcl/FLOW_1266412439758696.dbf

        13
/home/oracle/app/oracle/oradata/orcl/APEX_1295922881855015.dbf

        14
/oradata/add_tbs.dbf


14 rows selected.

SQL> 
SQL> select file#, name from v$datafile_header order by 1;

     FILE#
----------
NAME
--------------------------------------------------------------------------------
         1


         2


         3


         4


         5


         6


         7


         8


         9


        10


        11


        12


        13


        14



14 rows selected.

SQL> 
SQL> select file#, nvl(name,'File Not Found') from v$datafile_header order by 1;

     FILE#
----------
NVL(NAME,'FILENOTFOUND')
--------------------------------------------------------------------------------
         1
File Not Found

         2
File Not Found

         3
File Not Found

         4
File Not Found

         5
File Not Found

         6
File Not Found

         7
File Not Found

         8
File Not Found

         9
File Not Found

        10
File Not Found

        11
File Not Found

        12
File Not Found

        13
File Not Found

        14
File Not Found


14 rows selected.

SQL> 
SQL> exit
Thus, all 14 entries are present in V$DATAFILE_HEADER but with a NULL NAME.

I then begin a RESTORE :
[oracle@linux64 ~]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sun Jan 1 11:36:07 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> 
RMAN> restore database;

Starting restore at 01-JAN-12
Starting implicit crosscheck backup at 01-JAN-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=192 device type=DISK
Crosschecked 12 objects
Finished implicit crosscheck backup at 01-JAN-12

Starting implicit crosscheck copy at 01-JAN-12
using channel ORA_DISK_1
Finished implicit crosscheck copy at 01-JAN-12

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /home/oracle/app/oracle/flash_recovery_area/ORCL/autobackup/2012_01_01/o1_mf_n_771420416_7hznn10m_.bkp

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /oradata/add_tbs.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp
channel ORA_DISK_1: ORA-19870: error while restoring backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp
ORA-19504: failed to create file "/oradata/add_tbs.dbf"
ORA-27037: unable to obtain file status
Linux Error: 13: Permission denied
Additional information: 6

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: restoring datafile 00007 to /home/oracle/app/oracle/oradata/orcl/FLOW_1146416395631714.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznm84k_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznm84k_.bkp tag=TAG20120101T112516
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /home/oracle/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: restoring datafile 00006 to /home/oracle/app/oracle/oradata/orcl/FLOW_1046101119510758.dbf
channel ORA_DISK_1: restoring datafile 00008 to /home/oracle/app/oracle/oradata/orcl/FLOW_1170420963682633.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznmq8q_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznmq8q_.bkp tag=TAG20120101T112516
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00009 to /home/oracle/app/oracle/oradata/orcl/FLOW_1194425963955800.dbf
channel ORA_DISK_1: restoring datafile 00010 to /home/oracle/app/oracle/oradata/orcl/FLOW_1218408858999342.dbf
channel ORA_DISK_1: restoring datafile 00011 to /home/oracle/app/oracle/oradata/orcl/FLOW_1242310449730067.dbf
channel ORA_DISK_1: restoring datafile 00012 to /home/oracle/app/oracle/oradata/orcl/FLOW_1266412439758696.dbf
channel ORA_DISK_1: restoring datafile 00013 to /home/oracle/app/oracle/oradata/orcl/APEX_1295922881855015.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznmydw_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznmydw_.bkp tag=TAG20120101T112516
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
failover to previous backup

creating datafile file number=14 name=/oradata/add_tbs.dbf
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 01/01/2012 11:36:45
ORA-01119: error in creating database file '/oradata/add_tbs.dbf'
ORA-27037: unable to obtain file status
Linux Error: 13: Permission denied
Additional information: 6
RMAN-06956: create datafile failed; retry after removing /oradata/add_tbs.dbf from OS

RMAN> 
RMAN> restore database;

Starting restore at 01-JAN-12
using channel ORA_DISK_1

skipping datafile 4; already restored to file /home/oracle/app/oracle/oradata/orcl/users01.dbf
skipping datafile 7; already restored to file /home/oracle/app/oracle/oradata/orcl/FLOW_1146416395631714.dbf
skipping datafile 5; already restored to file /home/oracle/app/oracle/oradata/orcl/example01.dbf
skipping datafile 6; already restored to file /home/oracle/app/oracle/oradata/orcl/FLOW_1046101119510758.dbf
skipping datafile 8; already restored to file /home/oracle/app/oracle/oradata/orcl/FLOW_1170420963682633.dbf
skipping datafile 9; already restored to file /home/oracle/app/oracle/oradata/orcl/FLOW_1194425963955800.dbf
skipping datafile 10; already restored to file /home/oracle/app/oracle/oradata/orcl/FLOW_1218408858999342.dbf
skipping datafile 11; already restored to file /home/oracle/app/oracle/oradata/orcl/FLOW_1242310449730067.dbf
skipping datafile 12; already restored to file /home/oracle/app/oracle/oradata/orcl/FLOW_1266412439758696.dbf
skipping datafile 13; already restored to file /home/oracle/app/oracle/oradata/orcl/APEX_1295922881855015.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /oradata/add_tbs.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp
channel ORA_DISK_1: ORA-19870: error while restoring backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp
ORA-19504: failed to create file "/oradata/add_tbs.dbf"
ORA-27037: unable to obtain file status
Linux Error: 13: Permission denied
Additional information: 6

failover to previous backup

creating datafile file number=14 name=/oradata/add_tbs.dbf
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 01/01/2012 11:37:42
ORA-01119: error in creating database file '/oradata/add_tbs.dbf'
ORA-27037: unable to obtain file status
Linux Error: 13: Permission denied
Additional information: 6
RMAN-06956: create datafile failed; retry after removing /oradata/add_tbs.dbf from OS

RMAN> 
RMAN> exit
I get an OS permissions error for file #14 ('/oradata/add_tbs.dbf') because oracle does not have write permissions on this directory path on this server! Unfortunately, RMAN isn't intelligent enough to skip this one file and restore the other files from the same backup piece, so none of the 4 files in that backup piece is restored.
RMAN does skip over datafiles 4 to 13 that have already been restored and does not attempt to restore these files at the second RESTORE execution.
Before attempting to restore these 4 files, I also use the V$DATAFILE and V$DATAFILE_HEADER views to identify them :
[oracle@linux64 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jan 1 11:42:06 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set pages600
SQL> select file#, name from v$datafile
  2  minus
  3  select file#, name from v$datafile_header
  4  order by 1;

     FILE#
----------
NAME
--------------------------------------------------------------------------------
         1
/home/oracle/app/oracle/oradata/orcl/system01.dbf

         2
/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf

         3
/home/oracle/app/oracle/oradata/orcl/undotbs01.dbf

        14
/oradata/add_tbs.dbf


SQL> 
SQL> exit
I then RESTORE the 4 files individually :
[oracle@linux64 ~]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sun Jan 1 11:42:52 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore datafile 1;

Starting restore at 01-JAN-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=192 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp tag=TAG20120101T112516
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:15
Finished restore at 01-JAN-12

RMAN> restore datafile 2;

Starting restore at 01-JAN-12
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp tag=TAG20120101T112516
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:15
Finished restore at 01-JAN-12

RMAN> restore datafile 3;

Starting restore at 01-JAN-12
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00003 to /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp tag=TAG20120101T112516
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 01-JAN-12

RMAN> run
2> {set newname for datafile 14 to '/home/oracle/app/oracle/oradata/orcl/add_tbs.dbf';
3> restore datafile 14;
4> }

executing command: SET NEWNAME

Starting restore at 01-JAN-12
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00014 to /home/oracle/app/oracle/oradata/orcl/add_tbs.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_01_01/o1_mf_nnndf_TAG20120101T112516_7hznjx24_.bkp tag=TAG20120101T112516
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 01-JAN-12

RMAN> 
However, since I used SET NEWNAME for datafile #14, I must SWITCH it before RECOVER DATABASE will succeed:
RMAN> recover database; 

Starting recover at 01-JAN-12
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 01/01/2012 11:48:44
RMAN-06094: datafile 14 must be restored

RMAN> switch datafile 14 to copy;

datafile 14 switched to datafile copy "/home/oracle/app/oracle/oradata/orcl/add_tbs.dbf"

RMAN> recover database;

Starting recover at 01-JAN-12
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 2 is already on disk as file /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2012_01_01/o1_mf_1_2_7hznmzh7_.arc
archived log for thread 1 with sequence 3 is already on disk as file /home/oracle/app/oracle/oradata/orcl/redo03.log
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2012_01_01/o1_mf_1_2_7hznmzh7_.arc thread=1 sequence=2
archived log file name=/home/oracle/app/oracle/oradata/orcl/redo03.log thread=1 sequence=3
media recovery complete, elapsed time: 00:00:00
Finished recover at 01-JAN-12

RMAN> 
RMAN> exit
I now verify the datafiles again, before actually OPENing the database:
[oracle@linux64 ~]$ sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jan 1 11:51:23 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select file#, name from v$datafile
  2  minus
  3  select file#, name from v$datafile_header
  4  order by 1;

no rows selected

SQL> select name from v$datafile where file#=14;

NAME
--------------------------------------------------------------------------------
/home/oracle/app/oracle/oradata/orcl/add_tbs.dbf

SQL> alter database open resetlogs;

Database altered.

SQL>
Now the difference between V$DATAFILE and V$DATAFILE_HEADER should be clear: V$DATAFILE_HEADER reads the header of each datafile on disk, so it reflects the file's actual state, while V$DATAFILE reads only from the controlfile.
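Because it reads the physical file headers, V$DATAFILE_HEADER also exposes the ERROR, RECOVER, and FUZZY columns, which make it easy to spot files that are missing or need media recovery. A quick check along those lines (the exact output will of course depend on your database):

SQL> select file#, status, error, recover, fuzzy, checkpoint_change#
  2  from v$datafile_header
  3  order by file#;

A non-null ERROR (for example, FILE NOT FOUND) or RECOVER = YES for any row tells you which datafiles still need to be restored or recovered before the database can be opened.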