Oracle Database Auditing

There are two distinct types of auditing:

Standard: auditing at the statement, privilege, and object level.

Fine-grained: auditing at a finer granularity, based on content (e.g. a column value > 10,000).

Standard auditing offers three levels of auditing:

Statement: audits actions of a given statement type on any type of object.

Privilege: audits the use of system-level privileges.

Object: audits specific actions such as SELECT, INSERT, UPDATE, or DELETE on a given object.

For all three levels of auditing you can choose to audit BY ACCESS (a record is written every time the audited statement is issued) or BY SESSION (a single record per statement type per session), and you can restrict auditing to successful statements (WHENEVER SUCCESSFUL) or to failed ones (WHENEVER NOT SUCCESSFUL).

The AUDIT_TRAIL initialization parameter controls where audit records are written:

DB: Enables database auditing and directs all audit records to the database audit trail, except for records that are always written to the operating system audit trail.

DB, EXTENDED: as per the DB value, but also populates the SQLBIND and SQLTEXT CLOB columns.

OS: Enables database auditing and directs all audit records to an operating system file

NONE: Disables auditing (This value is the default.)

Note: You must bounce (restart) the instance for a changed audit parameter to take effect.
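For example, after changing a static audit parameter in the spfile, a typical restart from SQL*Plus connected as SYSDBA looks like this:

SQL> shutdown immediate
SQL> startup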

Setting Audit Trail:

SQL>alter system set audit_trail = db scope=spfile;

SQL> alter system set audit_file_dest = 'c:\oracle\auditing';

SQL>alter system set audit_trail = os scope=spfile;

Note: if AUDIT_FILE_DEST is not set, the default location is $ORACLE_HOME/rdbms/audit/.

SQL>alter system set audit_sys_operations = true scope = spfile;

Note: this audits all SYS operations regardless of whether AUDIT_TRAIL is set.

SQL>alter system set audit_trail = none scope=spfile;

Setup Audit Trail through Initialization Parameter

  1. Set "audit_trail = true" in the init.ora file.
  2. Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.
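Once the instance is back up, a quick check of the current settings and of the statement-level audit options currently in force might look like this (output varies per database):

SQL> show parameter audit_trail
SQL> show parameter audit_sys_operations
SQL> select audit_option, success, failure from dba_stmt_audit_opts;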

Audit Options

BY SESSION/BY ACCESS

BY SESSION causes Oracle to write a single record for all SQL statements of the same type issued in the same session. BY ACCESS causes Oracle to write one record for each access.

WHENEVER SUCCESSFUL/WHENEVER NOT SUCCESSFUL

WHENEVER SUCCESSFUL chooses auditing only for statements that succeed. WHENEVER NOT SUCCESSFUL chooses auditing only for statements that fail or result in errors.

Auditing Example:

AUDIT SESSION BY scott, lori;
AUDIT DELETE ANY TABLE BY ACCESS WHENEVER NOT SUCCESSFUL;
AUDIT DELETE ANY TABLE;
AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY ACCESS WHENEVER NOT SUCCESSFUL;
AUDIT DELETE ON scott.emp;
AUDIT SELECT, INSERT, DELETE ON jward.dept BY ACCESS WHENEVER SUCCESSFUL;
AUDIT SELECT ON DEFAULT WHENEVER NOT SUCCESSFUL;
AUDIT SELECT TABLE BY appserve ON BEHALF OF jackson;
AUDIT ALTER, INDEX, RENAME ON DEFAULT BY SESSION;
AUDIT ALTER USER;
AUDIT LOCK TABLE BY ACCESS WHENEVER SUCCESSFUL;
AUDIT DELETE ON scott.emp BY ACCESS WHENEVER SUCCESSFUL;
AUDIT DELETE TABLE, UPDATE TABLE BY hr BY ACCESS;
AUDIT ALL BY hr BY ACCESS;
AUDIT EXECUTE PROCEDURE BY hr BY ACCESS;

Disabling Audit

NOAUDIT TABLE;

NOAUDIT ALL PRIVILEGES;

Turn Off All Auditing

NOAUDIT ALL;

NOAUDIT ALL PRIVILEGES;

NOAUDIT ALL ON DEFAULT;

Purge Auditing

DELETE FROM sys.aud$;

TRUNCATE TABLE sys.aud$;

DELETE FROM sys.aud$ WHERE obj$name = 'EMP';

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

DBA_AUDIT_EXISTS, DBA_AUDIT_OBJECT, DBA_AUDIT_SESSION, DBA_AUDIT_STATEMENT, DBA_AUDIT_TRAIL, DBA_OBJ_AUDIT_OPTS, DBA_PRIV_AUDIT_OPTS, DBA_STMT_AUDIT_OPTS
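For example, a simple query against DBA_AUDIT_TRAIL to review recent activity for one user might look like this (the username is illustrative):

SQL> select username, obj_name, action_name, returncode, timestamp
     from dba_audit_trail
     where username = 'SCOTT'
     order by timestamp desc;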

Auditing Via Trigger

It is also possible to audit the system using triggers; a number of system-level events can fire triggers, such as database startup, logon, logoff, DDL, and server errors.

Example Trigger Auditing:
create or replace trigger audit_insert
after insert on vallep.employees
for each row
begin
  insert into employees_table_audit values (user, sysdate);
end;
/

create or replace trigger logon_audit_trig
after logon on database
begin
  insert into logon_audit
  values (user, sys_context('userenv', 'sessionid'), sysdate, null, sys_context('userenv', 'host'));
end;
/
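The triggers above assume the audit tables already exist; a minimal sketch of the supporting tables (names and columns are illustrative only) could be:

create table employees_table_audit (username varchar2(30), change_date date);

create table logon_audit (username varchar2(30), session_id number, logon_time date, logoff_time date, host varchar2(64));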

Fine-Grain Auditing

Fine-grained auditing (FGA) allows you to audit users accessing data that meets certain criteria. As with standard auditing, you can audit SELECT, INSERT, UPDATE, and DELETE operations. You use the DBMS_FGA package to add, remove, enable, and disable FGA policies; it is virtually impossible to bypass these policies. One nice feature is that you can attach handlers (like a trigger) to a policy, which can execute procedures that could, for example, email or page you. There are many options that can be applied to the DBMS_FGA package, so it is best to consult the Oracle documentation, but here are some simple examples.

Privilege: grant execute on dbms_fga to vallep;

Creating a policy:
begin
  dbms_fga.add_policy (
    object_schema   => 'vallep',
    object_name     => 'employees',
    policy_name     => 'compensation_aud',
    audit_columns   => 'salary,commission_pct',
    enable          => false,
    statement_types => 'select');
end;
/

Creating a policy with a handler:
begin
  dbms_fga.add_policy (
    object_schema   => 'vallep',
    object_name     => 'employees',
    policy_name     => 'compensation_aud',
    audit_columns   => 'salary,commission_pct',
    enable          => false,
    statement_types => 'select',
    handler_schema  => 'vallep',
    handler_module  => 'log_id');
end;
/

create procedure vallep.log_id (schema1 varchar2, table1 varchar2, policy1 varchar2) as
begin
  util_alert_pager(schema1, table1, policy1);  /* send an alert via a pager */
end;
/

Removing auditing:
begin
  dbms_fga.drop_policy (
    object_schema => 'vallep',
    object_name   => 'employees',
    policy_name   => 'compensation_aud');
end;
/
Enabling auditing:
begin
  dbms_fga.enable_policy (
    object_schema => 'vallep',
    object_name   => 'employees',
    policy_name   => 'compensation_aud');
end;
/

Disabling auditing:

begin
  dbms_fga.disable_policy (
    object_schema => 'vallep',
    object_name   => 'employees',
    policy_name   => 'compensation_aud');
end;
/

Useful Views:

DBA_AUDIT_POLICIES, DBA_FGA_AUDIT_TRAIL, DBA_COMMON_AUDIT_TRAIL
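For example, to see what a policy has captured you might query DBA_FGA_AUDIT_TRAIL (the policy name matches the example above):

SQL> select db_user, object_name, policy_name, sql_text, timestamp
     from dba_fga_audit_trail
     where policy_name = 'COMPENSATION_AUD'
     order by timestamp desc;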

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table from growing to an unacceptable size.
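A simple housekeeping sketch: copy the rows you want to keep into an archive table of your own, then trim SYS.AUD$ (the archive table name and tablespace are illustrative):

SQL> create table sys.aud_archive tablespace users as select * from sys.aud$;
SQL> delete from sys.aud$;
SQL> commit;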

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by an application, it can be granted to the relevant users, or alternatively a specific user may be created for this purpose. Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;


User Privileges and Roles in Oracle

A user privilege is a right to execute a particular type of SQL statement, or a right to access another user’s object. The types of privileges are defined by Oracle.

Roles, on the other hand, are created by users (usually administrators) and are used to group together privileges or other roles.

Restricting System Privileges

Ensure that the O7_DICTIONARY_ACCESSIBILITY initialization parameter is set to FALSE.

When this parameter is not set to FALSE, the ANY privileges apply to the data dictionary, and a malicious user with an ANY privilege could access or alter data dictionary tables. The default for O7_DICTIONARY_ACCESSIBILITY is FALSE.
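To verify the setting and, if necessary, correct it (the parameter is static, so a restart is required after the change):

SQL> show parameter O7_DICTIONARY_ACCESSIBILITY
SQL> alter system set O7_DICTIONARY_ACCESSIBILITY = false scope = spfile;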

Creating a Role

A role can be specified to be authorized by:

– The database using a password

CREATE ROLE SHAHID_ROLE1 IDENTIFIED BY AHMED;

– An application using a specified package

CREATE ROLE admin_role IDENTIFIED USING hr.admin;

– Externally by the operating system, network, or other external source

ALTER ROLE clerk IDENTIFIED EXTERNALLY;

– Globally by an enterprise directory service

CREATE ROLE supervisor IDENTIFIED GLOBALLY;

Dropping Roles

DROP ROLE clerk;

Granting System Privileges and Roles

GRANT CREATE SESSION, accts_pay TO SHAHID1;
GRANT new_dba TO SHAHID1 WITH ADMIN OPTION;

In the two grants above, the first grants the CREATE SESSION system privilege together with the accts_pay role to user SHAHID1, while the second grants the new_dba role to SHAHID1 WITH ADMIN OPTION, meaning SHAHID1 can not only use all the privileges implicit in the new_dba role but can also grant, revoke, or drop new_dba itself.

The SET ROLE Statement

During the session, the user or an application can use the SET ROLE statement any number of times to change the roles currently enabled for the session. You can retrieve the roles that are currently active for a session by querying the SESSION_ROLES view:

SQL> connect hrms/hrms@orcl3;

SQL> select * from session_roles;

ROLE

——————————

EXECUTE_CATALOG_ROLE

DELETE_CATALOG_ROLE

EXP_FULL_DATABASE

IMP_FULL_DATABASE

You can change the roles active in the current session with the SET ROLE statement.

SQL> select * from session_roles;

ROLE

——————————

SHAHID_ROLE1

Enable specific set of roles in the current session (SHAHID_ROLE1 is created)

SQL> set role SHAHID_ROLE1, EXECUTE_CATALOG_ROLE;

Role set.

SQL> select * from session_roles;

ROLE

——————————

EXECUTE_CATALOG_ROLE

SHAHID_ROLE1

Enable all roles in the current session

SQL> set role all;

Role set.

You can disable all roles with the following statement:

SET ROLE NONE;

Note: A user can enable as many roles as specified by the initialization parameter MAX_ENABLED_ROLES.

Using OS Role Management

OS_ROLES = TRUE

If OS_ROLES is set to TRUE, the operating system completely manages the grants and revokes of roles to users. Any previous grants of roles to users using GRANT statements do not apply; however, they are still listed in the data dictionary. Only the role grants made at the operating system level to users apply. Users can still grant privileges to roles and users.

Note: If you choose to have the operating system manage roles, by default users cannot connect to the database through the shared server. This restriction is the default because a remote user could impersonate another operating system user over a non-secure connection.

SELECT * FROM DBA_SYS_PRIVS;                -- list all system privilege grants
SELECT * FROM DBA_ROLE_PRIVS;               -- list all role grants
SELECT TABLE_NAME, PRIVILEGE, GRANTABLE FROM DBA_TAB_PRIVS WHERE GRANTEE = 'JWARD';   -- list object privileges granted to a user
SELECT GRANTEE, TABLE_NAME, COLUMN_NAME, PRIVILEGE FROM DBA_COL_PRIVS;   -- list all column-specific privileges that have been granted
SELECT * FROM SESSION_ROLES;                -- list all roles currently enabled
SELECT * FROM SESSION_PRIVS;                -- list all system privileges currently available
SELECT * FROM DBA_ROLES;                    -- list all roles in the database
SELECT GRANTED_ROLE, ADMIN_OPTION FROM ROLE_ROLE_PRIVS WHERE ROLE = 'SYSTEM_ADMIN';   -- display information about the privilege domain of a role

Oracle Database User Management

CREATE AND DROP USER:

create user shaan identified by moon

default tablespace rtbs

temporary tablespace temp

quota 30m on rtbs;

The above command creates a user SHAAN with password moon. The tablespace in which SHAAN will store his data is RTBS, the tablespace used for temporary segments is TEMP, and the amount of space SHAAN can use in the RTBS tablespace is 30M.
SQL> DROP USER SHAAN CASCADE;
ALTERING/UNLOCKING ACCOUNT:

SQL> alter user shaan identified by moon account unlock;
The above command unlocks the SHAAN account and sets its password to moon.
SQL> select username, account_status, default_tablespace,

temporary_tablespace, profile from dba_users

where username = 'HRMS';
USERNAME ACCOUNT_STATUS DEFAULT_TABLESPACE TEMPORARY_TABLESPACE PROFILE

—— ————– —————— ——————– ———

HRMS      OPEN           MUJ_HRMS_DBF      TEMP                 DEFAULT

The above query shows the account information related to “HRMS”.
ALTERING TABLESPACE QUOTA:
SQL> select * from dba_ts_quotas where username = 'HRMS';

TABLESPACE_NAME USERNAME BYTES MAX_BYTES BLOCKS MAX_BLOCKS

————— ——– —– ——— —— ———-

MUJ_HRMS_DBF    HRMS     198901760 -1    24280  -1

SQL> alter user shaan quota 40m on RTBS;
GRANTING AND REVOKING PRIVILEGES:
SQL> GRANT create table to SHAAN;
SQL> GRANT create session to SHAAN;
SQL> GRANT create any table, create tablespace to SHAAN;
SQL> REVOKE create any table from SHAAN;
SQL> REVOKE create tablespace from SHAAN;
SQL> GRANT select, insert, update, delete on HRMS.PAY_PAYMENT_MASTER to HRMS;
SQL> REVOKE update, delete on HRMS.PAY_PAYMENT_MASTER from HRMS;
ROLES:
SQL>create role MY_ROLE;

SQL> GRANT create any table, alter any table, drop any table, select any table, update any table, delete any table to MY_ROLE;

SQL> GRANT create any index, alter any index, drop any index to MY_ROLE;
SQL> GRANT alter session, restricted session to MY_ROLE;

SQL> GRANT create tablespace, alter tablespace, drop tablespace, unlimited tablespace to MY_ROLE;
SQL> GRANT select, insert, update, delete on HRMS.PAY_PAYMENT_MASTER to MY_ROLE;

SQL> GRANT MY_ROLE to SHAAN;
SQL> select * from dba_sys_privs where grantee = 'SHAAN';

GRANTEE                        PRIVILEGE            ADM

———————-         ——————– —

SHAAN                          CREATE TABLE         NO

SHAAN                          CREATE SESSION       NO

SQL> select * from dba_sys_privs where grantee = 'MY_ROLE';

GRANTEE                        PRIVILEGE                   ADM

———————-         ————————— —

MY_ROLE                        DELETE ANY TABLE            NO

MY_ROLE                        CREATE ANY TABLE            NO

MY_ROLE                        DROP TABLESPACE             NO

MY_ROLE                        ALTER TABLESPACE            NO

MY_ROLE                        ALTER ANY INDEX             NO

MY_ROLE                        DROP ANY TABLE              NO

MY_ROLE                        DROP ANY INDEX              NO

MY_ROLE                        UPDATE ANY TABLE            NO

MY_ROLE                        ALTER SESSION               NO

MY_ROLE                        SELECT ANY TABLE            NO

MY_ROLE                        RESTRICTED SESSION          NO

MY_ROLE                        CREATE ANY INDEX            NO

MY_ROLE                        ALTER ANY TABLE             NO

MY_ROLE                        UNLIMITED TABLESPACE        NO

MY_ROLE                        CREATE TABLESPACE           NO

SQL> select grantee, owner, table_name, grantor, privilege from dba_tab_privs

where grantee = 'SHAAN';

GRANTEE  OWNER  TABLE_NAME   GRANTOR    PRIVILEGE

——– —— ———— ———  ———

HRMS     HRMS  PAY_PAYMENT_MASTER HRMS  SELECT

HRMS     HRMS  PAY_PAYMENT_MASTER HRMS  INSERT

SQL> select grantee, owner, table_name, grantor, privilege from dba_tab_privs

where grantee = 'MY_ROLE';

GRANTEE  OWNER  TABLE_NAME   GRANTOR    PRIVILEGE

——– —— ———— ———  ———

MY_ROLE  HRMS  PAY_PAYMENT_MASTER HRMS  UPDATE

MY_ROLE  HRMS  PAY_PAYMENT_MASTER HRMS  SELECT

MY_ROLE  HRMS  PAY_PAYMENT_MASTER HRMS  INSERT

MY_ROLE  HRMS  PAY_PAYMENT_MASTER HRMS  DELETE

SQL> select * from dba_roles where role = 'MY_ROLE';

ROLE                  PASSWORD AUTHENTICAT

——————— ——– ———–

MY_ROLE               NO       NONE

SQL> select * from dba_role_privs where grantee = 'SHAAN';

GRANTEE                        GRANTED_ROLE    ADM DEF

————————       ————— — —

SHAAN                          MY_ROLE         NO  YES

SQL> select * from role_sys_privs where role = 'MY_ROLE';

ROLE                  PRIVILEGE                       ADM

——————— ——————————- —

MY_ROLE               DROP TABLESPACE                 NO

MY_ROLE               CREATE ANY TABLE                NO

MY_ROLE               DELETE ANY TABLE                NO

MY_ROLE               ALTER TABLESPACE                NO

MY_ROLE               DROP ANY TABLE                  NO

MY_ROLE               ALTER ANY INDEX                 NO

MY_ROLE               UPDATE ANY TABLE                NO

MY_ROLE               DROP ANY INDEX                  NO

MY_ROLE               ALTER SESSION                   NO

MY_ROLE               RESTRICTED SESSION              NO

MY_ROLE               SELECT ANY TABLE                NO

MY_ROLE               CREATE TABLESPACE               NO

MY_ROLE               UNLIMITED TABLESPACE            NO

MY_ROLE               ALTER ANY TABLE                 NO

MY_ROLE               CREATE ANY INDEX                NO

SQL> select * from role_tab_privs where role = 'MY_ROLE';

ROLE     OWNER   TABLE_NAME   COLUMN_NAME   PRIVILEGE   GRA

——– ——- ———— ————- ———– —

MY_ROLE  HRMS   PAY_PAYMENT_MASTER          DELETE       NO

MY_ROLE  HRMS   PAY_PAYMENT_MASTER          UPDATE       NO

MY_ROLE  HRMS   PAY_PAYMENT_MASTER          SELECT       NO

MY_ROLE  HRMS   PAY_PAYMENT_MASTER          INSERT       NO

SQL> revoke MY_ROLE from SHAAN;
PROFILES:
SQL> create profile DEVELOPER limit

failed_login_attempts 3

password_lock_time unlimited

password_life_time 30

password_reuse_time 30

password_grace_time 5

idle_time 30;
SQL> alter user SHAAN profile developer;
SQL> select * from dba_profiles where profile = 'DEVELOPER';

PROFILE     RESOURCE_NAME                    RESOURCE LIMIT

———– ————–                   ——– —–

DEVELOPER   COMPOSITE_LIMIT                  KERNEL   DEFAULT

DEVELOPER   SESSIONS_PER_USER                KERNEL   DEFAULT

DEVELOPER   CPU_PER_SESSION                  KERNEL   DEFAULT

DEVELOPER   CPU_PER_CALL                     KERNEL   DEFAULT

DEVELOPER   LOGICAL_READS_PER_SESSION        KERNEL   DEFAULT

DEVELOPER   LOGICAL_READS_PER_CALL           KERNEL   DEFAULT

DEVELOPER   IDLE_TIME                        KERNEL   30

DEVELOPER   CONNECT_TIME                     KERNEL   DEFAULT

DEVELOPER   PRIVATE_SGA                      KERNEL   DEFAULT

DEVELOPER   FAILED_LOGIN_ATTEMPTS            PASSWORD 3

DEVELOPER   PASSWORD_LIFE_TIME               PASSWORD 30

DEVELOPER   PASSWORD_REUSE_TIME              PASSWORD 30

DEVELOPER   PASSWORD_REUSE_MAX               PASSWORD DEFAULT

DEVELOPER   PASSWORD_VERIFY_FUNCTION         PASSWORD DEFAULT

DEVELOPER   PASSWORD_LOCK_TIME               PASSWORD UNLIMITED

DEVELOPER   PASSWORD_GRACE_TIME              PASSWORD 5

SQL> drop profile developer cascade;

DB File Sequential Read Wait/ DB File Scattered Read

An Oracle session logs the db file sequential read wait event when it has to wait for a single-block I/O read request to complete. Oracle issues single-block I/O read requests when reading from indexes, rollback segments, sort segments, control files, datafile headers and tables (when tables are accessed via rowids). A sequential read is usually a single-block read, although it is possible to see sequential reads for more than one block (see P3). The actual object being waited on can be determined from the P1, P2, and P3 values in V$SESSION_WAIT.

Select * from   v$session_event

where  event = 'db file sequential read'

order by time_waited;

Select segment_name, partition_name, segment_type, tablespace_name

from   dba_extents a, v$session_wait b

where  b.p2 between a.block_id and (a.block_id + a.blocks - 1)

and    a.file_id  = b.p1

and    b.event    = 'db file sequential read';

Select a.sid, a.serial#, a.username, a.osuser, b.sql_text

from   v$session a, v$sqltext b

where  a.sql_hash_value = b.hash_value

and    a.sql_address    = b.address

and    a.sid in (select sid from v$session_wait where event = 'db file sequential read')

order by a.sid, b.hash_value, b.piece;

Note: where P1 = file#, P2 = block#, P3 = number of blocks (should be 1).

Generally the whole database will show some waits on this event, usually from I/O for index scans. But if the time waited (in seconds) is greater than 0, you should tune index I/O.

To reduce this wait event, consider the following points:

  1. Tuning SQL statements to reduce unnecessary I/O requests is the only guaranteed way to reduce "db file sequential read" wait time.
  2. Distribute the indexes across different file systems to reduce contention for I/O.
  3. Tune the physical devices; spread the data over different disks to reduce I/O.
  4. Use faster disks so that the remaining I/O requests are serviced more quickly.
  5. Increasing db_block_buffers (a larger buffer cache) can sometimes help (see the sketch below).
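If you do decide to grow the buffer cache (last point above), one possible form is shown below; the size is purely illustrative, and db_cache_size acts only as a minimum when automatic SGA management is in use:

SQL> alter system set db_cache_size = 2G scope = both;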

DB File Scattered Read:

This generally indicates waits related to full table scans. As full table scans are pulled into memory, they rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number here indicates that your table may have missing or suppressed indexes. Although it may be more efficient in your situation to perform a full table scan than an index scan, check to ensure that full table scans are necessary when you see these waits. Try to cache small tables to avoid reading them in over and over again, since a full table scan is put at the cold end of the LRU (Least Recently Used) list.
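For example, a small, frequently scanned lookup table can be kept at the hot end of the LRU list with the CACHE attribute (the table name is illustrative):

SQL> alter table hrms.pay_currency_lookup cache;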

The DBA should be concerned with the average I/O time and with sessions that spend a lot of time on this event. The average multi-block I/O should not exceed 1/100 second.

SELECT a.average_wait “SEQ READ”, b.average_wait “SCAT READ”

FROM sys.v_$system_event a, sys.v_$system_event b

WHERE a.event = 'db file sequential read' AND b.event = 'db file scattered read';

select * from v$system_event where event = 'db file sequential read';

Select * from   v$session_event

where  event = 'db file scattered read'

order by time_waited;

Select a.sid, b.name, a.value

from   v$sesstat a, v$statname b

where  a.statistic# = b.statistic#

and    a.value <> 0

and    b.name = 'table scan blocks gotten'

order by 3,1;

If the average I/O wait time for the db file scattered read event is acceptable, but the event indicates waits in a certain session, then this is an application issue.

In this case the DBA needs to determine which objects are being read the most from the P1 and P2 values, check the relevant SQL statement and its explain plan, and perform SQL tuning. The aim is to reduce both the logical and physical I/O calls.

If an application that has been running fine suddenly starts showing waits on the db file scattered read event, this could be an index issue. One or more indexes may have been dropped or become unusable.

To determine which indexes have been dropped, the DBA can compare the development, test, and production databases. The ALTER TABLE MOVE command marks all indexes associated with the table as unusable. Certain partitioning operations can also cause indexes to be marked unusable. This includes adding a new partition or coalescing partitions in a hash-partitioned table, dropping a partition from a partitioned table or global partitioned index, modifying partition attributes, and merging, moving, splitting or truncating table partitions. A direct load operation that fails also leaves indexes in an unusable state. This can easily be fixed by rebuilding the indexes.
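A quick sketch for spotting and rebuilding unusable indexes (the index name is illustrative; partitioned indexes are reported in DBA_IND_PARTITIONS instead):

SQL> select owner, index_name, status from dba_indexes where status = 'UNUSABLE';
SQL> alter index hrms.pay_payment_master_idx rebuild;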

Oracle 11g Data Pump Enhancements

A number of new data pump features are included in Oracle 11g, including the deprecation of the EXP utility, compression of dump file sets, improvement in encryption, and data remapping. Other new features include table renaming, data pump and partitioned table operations, overwrite dump files, the data_options parameter and the transportable parameter. Some of them are discussed below.

Compression of Dump File Sets

Oracle 11g now supports compression of Dump file sets.

expdp HRMS/*** DIRECTORY=datapump_dir DUMPFILE=ORCL3_COMP.dmp COMPRESSION=ALL

There are four options available for compression: ALL, METADATA_ONLY, DATA_ONLY, and NONE.

Data Pump Encryption Enhancements

Oracle 11g now provides the ability to encrypt Data Pump dump files. Encryption is part of a separate Oracle license (Advanced Security Option) that you must purchase (again, check with Oracle if and when you need this to make sure about current license requirements).
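Assuming the Advanced Security Option is licensed, a password-based encrypted export might look like this (the password and algorithm values are illustrative):

expdp HRMS/*** DIRECTORY=datapump_dir DUMPFILE=ORCL3_ENC.dmp ENCRYPTION=ALL ENCRYPTION_PASSWORD=MySecret ENCRYPTION_ALGORITHM=AES256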

Data Pump Data Remapping (Obfuscation)

It is not uncommon to have a requirement that sensitive production data cannot be contained in a non-production environment. It is also a very common requirement that you need test environments to be as production like as possible. Oracle Data Pump offers help in this regard by providing the ability to obfuscate (or remap if you prefer) data during an export or import operation. Remapping of data in Data Pump is supported with the use of the new remap_data parameter.

Remapping can occur during either the export or the import of the dump file. It probably is a better idea to do the remapping during the export for security reasons. That way you can be less concerned if the export file ends up being misused, since the dump file will not contain sensitive data.
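A sketch of remapping during export: the remapping function must be a packaged function you create yourself that accepts and returns the column's datatype (the package, function, and column below are illustrative):

expdp HRMS/*** DIRECTORY=datapump_dir DUMPFILE=ORCL3_MASKED.dmp TABLES=hrms.pay_payment_master REMAP_DATA=hrms.pay_payment_master.salary:hrms.mask_pkg.mask_number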

Data Pump Rename Table

In Oracle 11g you are now able to rename a table during the import process, using the remap_table parameter.

impdp HRMS/**** DIRECTORY=datapump_dir DUMPFILE=orcl3.dmp tables=hrms.pay_payment_master remap_table=hrms.pay_payment_master:copy_pay_payment_master

Data Pump and Partitioned Tables

Now in 11g, during an Oracle Data Pump import, you can have some control over the partitioning of tables by using the partition_options parameter:

  • Departition – Partitions will be created as individual tables rather than partitions of a partitioned table.
  • Merge – Causes all partitions to be merged into one, unpartitioned table.
  • None – Same as per the export.

impdp hrms/**** DIRECTORY=datapump_dir DUMPFILE=orcl3.dmp tables=hrms.pay_payment_master partition_options=merge;

Overwrite Dump Files

When using expdp in Oracle 11g you can now overwrite any dump file that might already exist by using the new reuse_dumpfiles parameter:

expdp hrms/**** DIRECTORY=datapump_dir DUMPFILE=orcl3.dmp tables=hrms.pay_payment_master reuse_dumpfiles=Y

Data Pump Data_Options Parameter

Previously with impdp, if we were loading records into a table and the import file contained even a single duplicate record, the entire load would fail. Oracle 11g solves this problem with a new data_options value for Data Pump Import: skip_constraint_errors. When this is used, Data Pump Import ignores the rows that generate a constraint error (while still not loading them, of course) and continues to load the rows that do not generate an error.

C:>impdp hrms/**** dumpfile=new_hrms.dmp directory=datapump_dir tables=pay_payment_master  table_exists_action=append data_options=skip_constraint_errors;

Restriction: You must use the external tables access method with Data Pump when loading the data.

The Transportable Parameter

When the transportable parameter is used with impdp or expdp, only the metadata associated with specific tables, partitions, or sub-partitions is extracted, rather than all metadata. You can then proceed to transport the associated data files as you normally would.

You then copy the data files and the dump file set to the destination and plug them into the database. Now you would use impdp as part of this process to import the metadata into the database:

impdp hrms/**** DIRECTORY=datapump_dir DUMPFILE=orcl3.dmp tables=hrms.pay_payment_master remap_schema=hrms:hrms1;

How to Load Excel sheet into Oracle Table

To the best of my knowledge there are four different options available to load an Excel sheet into an Oracle table. The comparison below covers all four ways to load Excel data into Oracle tables.

1. Load data from CSV files with SQL*Loader
   Precondition: a CSV/text file converted from Excel is required.
   Mapping type: SQL*Loader mapping, plus a PL/SQL mapping if more transformation is needed.
   Staging table and extra mapping required? Yes, when there is more than one source file or the mapping is complex.
   Mapping operators supported: Low. Only one flat file source is allowed within each mapping, and only a few operators are applicable.

2. Load data with external tables
   Precondition: a CSV/text file converted from Excel is required.
   Mapping type: PL/SQL mapping.
   Staging table and extra mapping required? No.
   Mapping operators supported: High. It can offer the full range of Warehouse Builder data transformation capabilities.

3. Load data through an ODBC gateway (or UTL_FILE to read the file)
   Precondition: an ODBC gateway for Excel is set up.
   Mapping type: PL/SQL mapping.
   Staging table and extra mapping required? No.
   Mapping operators supported: Middle. More than one file operator can be involved within one mapping.

4. Load data with a code template (heterogeneous services)
   Mapping type: CT mapping.
   Mapping operators supported: CT mappings support only a subset of the transformations available in PL/SQL mappings.

For More details reading follow this link: http://docs.oracle.com/cd/E11882_01/owb.112/e10935/sap_km_mappings.htm#WBETL07002

Method1: load data from CSV files by SQL*Loader

Step1: Save the file in .csv format from excel software

Simply open the file in Microsoft Excel, press "Save As", set the file extension to .csv, and save the file.

Step2: Create a SQL*Loader control (.ctl) file.

Step3: Now run the SQL*Loader (sqlldr) command to load the CSV data into the Oracle table; a sketch of steps 2 and 3 follows.
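A minimal sketch of steps 2 and 3, assuming the sheet was saved as products.csv and a matching PRODUCTS table already exists (file, table, and column names are illustrative):

-- products.ctl
LOAD DATA
INFILE 'products.csv'
APPEND INTO TABLE products
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(product_no, description, price)

-- run from the operating system prompt
sqlldr userid=scott/tiger control=products.ctl log=products.log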

An alternative to this method uses a temporary staging table:

Step1: Convert the Excel sheet into a .CSV file. You can use the "Save As" approach above, or any third-party tool.

Step2: Use SQL*Loader to load this file into a temporary table.

Step3: Write a stored procedure to move the data from the temporary table into the permanent table.

Method2: load data with external tables

Step1: Create a .CSV file of the data in the Excel sheet, with fields separated by ','.

Step2: Create a directory. For example:

SQL> CREATE DIRECTORY PRODUCT_DIR AS 'D:\shahid\products';

Step3: Create an external table in Oracle which points to the .CSV file

SQL>create table products_ext

(product_no number,description varchar2(100),

price varchar2(20)

)

organization EXTERNAL

(type oracle_loader

default directory PRODUCT_DIR

access parameters (records delimited by newline

badfile 'products.bad'

logfile 'products.log'

fields terminated by ','

)

location ('products.csv')

)

reject limit unlimited

/

Step 4: Once this table is created you can query it like any other table, or load the data directly into a target table (here a hypothetical PRODUCTS table with matching columns):

SELECT * FROM products_ext;

INSERT INTO products (product_no, description, price)
SELECT product_no, description, price FROM products_ext;

Method3: load data through ODBC gateway

Step1: Setup ODBC gateway for the Excel file.

Step2: Now create an ODBC module corresponding to the ODBC gateway and use the import wizard to import the structure of the Excel sheet.

Step3: When it is executed, Oracle reads the data in Excel through the ODBC gateway into the target table.

An Alternative way to this method:

Step1: First convert the Excel file from .XLS format into .CSV or text format so that the OWB flat file module can access it.

Step2: Now use the import option to import this CSV file.

Step3: Now create SQL*Loader mappings to load the data from the CSV file into the staging tables.

Step4: Finally, a PL/SQL mapping is used to transfer the data from the staging table to the Oracle table (the final destination).

Method4: load data by code template or Oracle Heterogeneous services

In 11gR2, Code Template mapping (CT mapping) was introduced for heterogeneous data extraction, transformation, and loading using code templates. This enables OWB to extract and load data (with or without transformation) between different database platforms easily. Each CT mapping contains one or more execution units, and each unit is independent and associated with one code template. It uses the technology defined in the code template to move data, processes data movement unit by unit, and then loads it into the final target. Because of the independence of each unit, CT mapping allows users to handle data from different sources using different technologies.

Step1: Create a CSV file from the Excel file. You can use any method; here, simply open the file in MS Excel and 'Save As' with the extension .CSV.

Step2: Assign code template to an execution unit

For that, open the CT mapping, select the "Execution view" tab, and then select the "Execution unit" box. Now select the appropriate code template from the drop-down list (based on your data and the functionality of the template).

Step3: Browse or reference the CSV file or flat file through the Import Metadata wizard.

Step4: Finally, before executing, check the proper path for the CSV file (the default physical file location) and check your result.

CT mapping allows flexible technology for data movement, and the performance of large data movements depends on the code template applied. There may be some restrictions. For details please visit the given link: http://docs.oracle.com/cd/E11882_01/owb.112/e10935/sap_km_mappings.htm
Note: If you are using TOAD, you can use the TOAD Import function from the menu to import an external file directly into an Oracle table, and similarly export table data into an Excel file.

Database Link in Oracle?

A DB link is a pointer that defines a one-way communication path from one Oracle database server to another. One-way means that if you define a link from database A to database B, then database A can access information in database B, but B cannot use the same link to access information in A.

A DB link allows a local user to access data on a remote database as if it were local data. For this to work, each database in the distributed environment must have a unique global name.
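You can check (and if necessary change) the local database's global name as follows (the new name is illustrative):

SQL> select * from global_name;
SQL> alter database rename global_name to orcl3.example.com;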

Database links are private, public, or global. If a link is private, only the user who created it has access; if it is public, all database users have access; and if an Oracle network uses a directory server, the directory server automatically creates and manages global database links (as net service names) for every Oracle database in the network.

The great advantage of a database link is that it allows a local user to access the remote database without having to be a user on the remote database.

Creating database link:

To create a private database link, you must have the CREATE DATABASE LINK system privilege. To create a public database link, you must have the CREATE PUBLIC DATABASE LINK system privilege. Also, you must have the CREATE SESSION system privilege on the remote Oracle database.

Syntax:

CREATE [SHARED] [PUBLIC] DATABASE LINK link_name [CONNECT TO user IDENTIFIED BY password] [AUTHENTICATED BY user IDENTIFIED BY password] [USING 'connect_string'];

Restriction: You cannot create a database link in another user’s schema, and you cannot qualify dblink with the name of a schema.

Example:

CREATE PUBLIC DATABASE LINK HRMS CONNECT TO HRMS IDENTIFIED BY HRMS USING 'test';
SELECT * FROM PAY_PAYMENT_MASTER@HRMS;

Note: Oracle Database uses the global database name to name schema objects globally, using the scheme schema.schema_object@global_database_name. If GLOBAL_NAMES is set to FALSE, then you can use any name for the link.

Features of Different DB link:

Private database link: This link is more secure than a public or global link, because only the owner of the private link, or subprograms within the same schema, can use the link to access the remote database.

Public database link: When many users require an access path to a remote Oracle Database, you can create a single public database link for all users in a database.

Global database link: When an Oracle network uses a directory server, an administrator can conveniently manage global database links for all databases in the system. Database link management is centralized and simple.

REMOTE_OS_AUTHENT Value

SQL> SHOW PARAMETER REMOTE_OS_AUTHENT;

NAME                       TYPE        VALUE

————————– ———   ———–

remote_os_authent          boolean     FALSE

If it is true then an externally-authenticated user can connect to the remote database using a connected user database link and if it is false then an externally-authenticated user cannot connect to the remote database using a connected user database link unless a secure protocol or a network authentication service supported by the Oracle Advanced Security option is used.

Drop Database link:

DROP [PUBLIC] DATABASE LINK link_name;

ALTER SESSION CLOSE DATABASE LINK link_name;

Note: You cannot drop a database link in another user’s schema and you must specify PUBLIC to drop a PUBLIC database link.

Related Views:

DBA_DB_LINKS: All database links in the database

ALL_DB_LINKS: Database links accessible to the user

USER_DB_LINKS: Database links owned by the user
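For example, to list the links defined in the database:

SQL> select owner, db_link, username, host, created from dba_db_links;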

Database Link Restrictions:

You cannot perform the following operations using database links:
  • Grant Privileges on remote objects.
  • Analyze remote objects
  • Define or enforce referential integrity
  • Grant roles to users in a remote database
  • Obtain non-default roles on a remote database
  • Execute hash query joins that use shared server connections
  • Use a current user link without authentication through SSL, password, or NT native authentication

The behavior of a database link also depends upon the category of user involved in creating the DB link.

Example: Private Connected User:

CREATE DATABASE LINK sales.us.americas.acme_auto.com USING 'sales_us';

Example: Private Current User:

CREATE DATABASE LINK foo CONNECT TO CURRENT_USER USING 'am_sls';

Example: Private fixed User:

CREATE DATABASE LINK sales.us.americas.acme_auto.com CONNECT TO scott IDENTIFIED BY tiger USING 'sales_us';

Example: Public fixed User:

CREATE PUBLIC DATABASE LINK sales CONNECT TO scott IDENTIFIED BY tiger USING 'rev';

Example: Shared public fixed User:

CREATE SHARED PUBLIC DATABASE LINK sales.us.americas.acme_auto.com CONNECT TO scott IDENTIFIED BY tiger AUTHENTICATED BY anupam IDENTIFIED BY bhide USING 'sales';