Wednesday, 17 July 2013

Oracle Locking and Concurrency

In Oracle you will learn that:

  • Transactions are what databases are all about; they are good
  • You should defer committing as long as you have to. You should not commit quickly to avoid 'stressing the system'; it does not stress the system to have long or large transactions. The rule is: commit when you must, and not before. Transactions should only be as small or as large as your business logic dictates.
  • You should hold locks on data as long as you need to. They are tools for you to use, not things to be avoided. Locks are not a scarce resource.
  • There is no overhead involved with row-level locking in Oracle, none
  • You should never escalate a lock (for example, use a table lock instead of a row lock) because it would be 'better on the system'. In Oracle it won't be better for the system - it will save no resources.
  • Concurrency and consistency can be achieved simultaneously. You can get it fast and correct, every time.

Oracle uses a different locking method than most other databases. Oracle's locking policy consists of the following:

  • Oracle locks data at the row level on modification only. There is no lock escalation to a block or table level, ever.
  • Oracle never locks data just to read it. There are no locks placed on rows of data by simple reads.
  • A writer of data does not block a reader of data.
  • A writer of data is blocked only when another writer of data has already locked the row it was going after.

Pessimistic Locking

When a user queries some data and picks a row to change, a statement like the one below is used:

select empno, ename, sal
from emp
where empno = :empno
and ename = :ename
and sal = :sal
for update nowait

what the "for update nowait" statement does is to lock the row against updates by other sessions. This is why this approach is called pessimistic locking. We lock the row before we attempt to update it because we doubt that the row will remain unchanged otherwise. We'll get three outcomes from this statement:

  • if the underlying data has not changed, we get our row back, and this row will be locked from updates by others (but not from reads)
  • if another user is in the process of modifying that row, we will get an ORA-00054 'resource busy' error immediately (that is what the NOWAIT does); we cannot proceed and must wait for the other user to finish with it
  • if, in the time between selecting the data and indicating our intention to update, someone has already changed the row, then we will get zero rows back: the data is stale. The application needs to re-query the data and lock it before allowing the end user to modify it, in order to avoid a lost update scenario
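Once the row comes back locked, the update can proceed safely. A minimal sketch of the follow-on statement (assuming the bind variables hold the newly keyed values):

    update emp
    set sal = :new_sal
    where empno = :empno;

    commit;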

Optimistic Locking

Optimistic locking is to keep the old and new values in the application and, upon updating the data, use an UPDATE like the one below:

update table
set column1 = :new_column1, column2 = :new_column2, ....
where column1 = :old_column1
and column2 = :old_column2
...

We are optimistically hoping that the data has not changed. If we are lucky, the row is updated; if not, we update zero rows, and now we have two options: get the user to re-key the data, or try to merge the two versions (which takes a lot of code).
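A common variant, sketched below, avoids comparing every column by carrying a single version column; the last_mod timestamp column here is an assumption for illustration, not part of the original example:

    update emp
    set ename = :new_ename,
        sal = :new_sal,
        last_mod = systimestamp       -- bump the version on every update
    where empno = :empno
    and last_mod = :old_last_mod;     -- zero rows updated means someone else changed it first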

So, the best method in Oracle would be to use pessimistic locking, as the user can have confidence that the data they are modifying on the screen is currently owned by them - in other words, the row is checked out and nobody else can modify it. You may be thinking: what if the user walks away while the row is locked? In this scenario it would be better to get the application to release the lock, or to use resource profiles in the database to time out idle sessions. Remember that even if a row is locked you can still read it; a row is never blocked for reading in Oracle.

Blocked Inserts

The only time an INSERT will block is when you have a table with a primary key or unique constraint placed on it and two sessions simultaneously attempt to insert a row with the same value. This is most easily avoided by using Oracle sequences to generate primary keys, as they are a highly concurrent method of generating unique keys in a multi-user environment.
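A minimal sketch of the sequence approach (the sequence and table names are illustrative):

    create sequence emp_seq;

    -- each session draws its own unique key, so concurrent inserts never collide
    insert into emp (empno, ename) values (emp_seq.nextval, 'SMITH');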

Blocked Updates and Deletes

To avoid update and delete blocking, use either of the two locking methods described above: pessimistic or optimistic.

Deadlocks

Deadlocks occur when two sessions each hold a resource that the other wants. Oracle records all deadlocks in a trace file. The number one cause of deadlocks is un-indexed foreign keys:

  • if you update the parent table's primary key, the child table will be locked in the absence of an index
  • if you delete a row from the parent table, the entire child table will be locked, again in the absence of an index
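The fix is simply to index the foreign key column; a sketch, assuming the classic EMP/DEPT demo schema where emp.deptno references dept.deptno:

    create index emp_deptno_idx on emp (deptno);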

Lock Escalation

In other RDBMSs, when a user locks some number of rows (100 is typical, but this varies), the lock is escalated to a table lock; Oracle, however, never escalates a lock, NEVER. Oracle does practice lock conversion (also called lock promotion - the terms are synonymous).

If a user selects a row using FOR UPDATE, two locks are placed: an exclusive lock on the row, and a ROW SHARE lock on the table itself. This prevents other users from placing an exclusive lock on the table, thus preventing them from altering the table structure.

Type of Locks

There are a number of different types of locks, as listed below:

  • DML Locks - DML (data manipulation language) covers, in general, SELECT, INSERT, UPDATE and DELETE. DML locks are locks on a specific row of data, or a lock at the table level which locks every row in the table.
  • DDL locks - DDL (data definition language), in general CREATE, ALTER and so on. DDL locks protect the definition of the structure of objects.
  • Internal locks and latches - These are locks that Oracle uses to protect its internal data structure.
  • Distributed Locks - These are used by OPS (Oracle Parallel Server) to ensure that different nodes are consistent with each other.
  • Deadlocks - Occur when two sessions block each other while each waits for a resource that the other session is holding.
  • PCM Locks - PCM (Parallel Cache Management) locks protect one or more cached data blocks in the buffer cache across multiple instances; also used in OPS.

DML Locks

There are two main types of DML locks: TX (Transaction) and TM (DML Enqueue). A TX lock is acquired when a transaction initiates its first change and is held until the transaction performs a COMMIT or ROLLBACK. It is used as a queuing mechanism so that other sessions can wait for the transaction to complete. A TM lock is used to ensure that the structure of the table is not altered while you are modifying its contents.

The complete set of DML table locks are:

Row Share - permits concurrent access to the table but prohibits others from locking the table for exclusive access
Row Exclusive - same as Row Share but also prohibits locking in Share mode
Share - permits concurrent queries but prohibits updates to the table
Share Row Exclusive - prevents others from locking in Share mode or updating rows anywhere in the table
Exclusive - permits queries against the table but prohibits any other DML

The locks taken when issuing DML are:

RS (table) plus row locks - select ... for update; lock table ... in row share mode
RX (table) plus row locks - any insert, update or delete

Below are queries against the lock-related views that can be used to identify locks, transaction IDs, and so on.

Useful SQL
Identify locks and transaction IDs

select username,
v$lock.sid,
trunc(id1/power(2,16)) rbs,
bitand(id1, to_number('ffff', 'xxxx'))+0 slot,
id2 seq,
lmode,
request
from v$lock, v$session
where v$lock.type = 'TX'
and v$lock.sid = v$session.sid
and v$session.username = USER;
Identify who is blocking whom

select (select username from v$session where sid=a.sid) blocker,
a.sid,
' is blocking ',
(select username from v$session where sid=b.sid) blockee,
b.sid
from v$lock a, v$lock b
where a.block = 1
and b.request > 0
and a.id1 = b.id1
and a.id2 = b.id2;
Using NOWAIT

select * from employee for update nowait;
select * from employee for update wait 10;

Note: the first statement returns an error immediately if the row is locked; the second waits up to 10 seconds for the lock to be released before aborting.

Useful Views
V$TRANSACTION lists the active transactions in the system

V$SESSION lists session information for each current session.
V$LOCK lists the locks currently held by the Oracle Database and outstanding requests for a lock or latch.
V$LOCKED_OBJECT lists all locks acquired by every transaction on the system. It shows which sessions are holding DML locks (that is, TM-type enqueues) on what objects and in what mode.
DBA_LOCK lists all locks or latches held in the database, and all outstanding requests for a lock or latch
DBA_BLOCKERS displays a session if it is not waiting for a locked object but is holding a lock on an object for which another session is waiting
DBA_DDL_LOCKS lists all DDL locks held in the database and all outstanding requests for a DDL lock
DBA_DML_LOCKS lists all DML locks held in the database and all outstanding requests for a DML lock.

DDL Locks

DDL locks are automatically placed against objects during a DDL operation to protect them from changes by other sessions.

There are three types of DDL locks

  • Exclusive DDL Locks - these prevent other sessions from gaining a DDL lock or TM lock themselves. You can query the table but not modify it. An exclusive lock will normally lock the object until the statement has finished. However, in some instances you can use the ONLINE option, which takes only a low-level lock; this still blocks other DDL operations but allows DML to occur normally.
  • Share DDL Locks - these protect the structure of the referenced object against modification by other sessions, but allow modification of the data. Shared DDL locks allow you to modify the contents of a table but not its structure.
  • Breakable Parse Locks - these allow an object, such as a query plan cached in the shared pool, to register its reliance on some other objects. If you perform DDL against one of those objects, Oracle will review the list of objects that have registered their dependence and invalidate them. Hence these locks are breakable: they do not prevent the DDL from occurring. A breakable parse lock is taken when a session parses a statement, against every object referenced by that statement. These locks are taken in order to allow the parsed, cached statement to be invalidated (flushed) from the shared pool if a referenced object is dropped or altered in some way. Use the SQL below to identify any parse locks on views, procedures, grants, etc.

Identify Locks

select * from dba_ddl_locks;

Latches and Internal Locks (Enqueues)

Latches are locks that are held for a short period of time, for example the time it takes to modify an in-memory data structure. They are used to protect certain memory structures, such as the database block buffer cache or the library cache in the shared pool. They are lightweight, low-level serialization mechanisms that protect the in-memory data structures of the SGA. They do not support queuing and do not protect database objects such as tables or data files.

Enqueues are a more sophisticated serialization device, used for example when updating rows in a database table. The requestor queues up and waits for the resource to become available; hence enqueues are not as fast as latches.

It is possible to use manual locking using the FOR UPDATE statement or LOCK TABLE statement, or you can create your own locks by using the DBMS_LOCK package.
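A minimal sketch of a user-defined lock with the DBMS_LOCK package (the lock name MY_APP_LOCK is an arbitrary example):

    declare
      l_handle varchar2(128);
      l_status integer;
    begin
      -- map a lock name to a lock handle (created on first use)
      dbms_lock.allocate_unique('MY_APP_LOCK', l_handle);
      -- request the lock in exclusive mode, waiting up to 10 seconds
      l_status := dbms_lock.request(l_handle, dbms_lock.x_mode,
                                    timeout => 10, release_on_commit => true);
      if l_status = 0 then
        null;  -- we hold the lock; do the protected work, then commit to release it
      end if;
    end;
    /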

Deadlocks

Deadlocks occur when two sessions block each other while each waits for a resource that the other session is holding.

Multi-versioning

Oracle operates a multi-version read-consistent concurrency model. Oracle provides:

  • Read-consistent queries: Queries that produce consistent results with respect to a point in time by using rollback segments.
  • Non-Blocking queries: Queries are never blocked by writers of data, as they would be in other databases.

When Oracle reads a table, it uses the rollback (undo) data for any rows that have changed since the read started. This allows a point-in-time read of a table, and it also means Oracle does not need to lock a table while reading it, even a large one.
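A sketch of this non-blocking behaviour, assuming the classic EMP demo table and two concurrent sessions:

    -- session 1: change a row but do not commit; the row is now locked
    update emp set sal = 2000 where empno = 7369;

    -- session 2: this query does not block; it returns the pre-update value,
    -- reconstructed from the rollback (undo) data
    select sal from emp where empno = 7369;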

Transaction and Row Locks

I am now going to describe in detail how a lock works. You don't need this detail, but it's good to understand what is going on under the covers. Row-level locks protect selected rows in a data block during a transaction; a transaction acquires an enqueue and an exclusive lock for each individual row modified by one of the following:

  • Insert
  • Delete
  • Update
  • Select with for update

These locks are stored within the data block, and each lock refers to the transaction enqueue; because they are stored in the block they have a database-wide view. The lock is held until either a commit or a rollback is executed; SMON also acquires it in exclusive mode when recovering (undoing) a transaction. Transaction locks are used as a queuing mechanism for processes awaiting the release of an object locked by a transaction in process.

Every data block (except for temporary and rollback segments) has a number of predefined transaction slots. Undo segments have a different type of transaction slot called a transaction table. The transaction slots are otherwise known as interested transaction lists (ITLs) and are controlled by the INITRANS parameter (default is 2 for tables and 3 for indexes). A transaction slot uses 24 bytes of free space in the block, and the maximum number of transaction slots is controlled by the MAXTRANS parameter; however, only up to 50% of the block can be used for transaction slots.
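A minimal sketch of setting these parameters (the table name is illustrative):

    -- reserve 4 ITL slots per block, allowing growth up to 16
    create table hot_table (x number) initrans 4 maxtrans 16;

    -- existing tables can be altered; new blocks pick up the new setting
    alter table hot_table initrans 8;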

An ITL slot is required for every transaction; it contains the transaction ID (XID), which is a pointer to an entry in the transaction table of a rollback segment. Other sessions can still read the data, but processes wanting to change it must wait until the lock is released (commit or rollback). The ITL entry contains the XID, the undo byte address (UBA), flags indicating the transaction status (Flag), a lock count (Lck) showing the number of rows locked by this transaction within the block, and the SCN at which the transaction last updated the block. Basically, the XID identifies the undo information for that transaction.

You can use the view X$KTUXE to obtain information on the number of rows that are affected; the view V$TRANSACTION can be used to get more details on a transaction.
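For example, a quick look at the undo that active transactions are consuming:

    select xidusn, xidslot, xidsqn, used_ublk, used_urec
    from v$transaction;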

When the transaction completes, Oracle performs the bare minimum to commit it: this involves updating the flag in the transaction table, and the block is not revisited. This is known as a fast commit. During this time the ITL in the data block is still pointing to the transaction table of the corresponding rollback segment. If another transaction wants to use the block (presume the change has not been committed), it sees that the ITL points to a rollback segment, makes a copy of the block in memory, gets the UBA from the ITL, reads the data from the undo and uses it to roll back the change described by the undo. If the transaction is committed the rows are no longer locked, but the lock byte in the row header is not cleared until the next time a DML action is performed on the block. The block cleanout is thus delayed by some discrete time interval because of the fast commit; this is called delayed block cleanout. The cleanout operation closes the open ITLs and generates redo, as a block cleanout may involve updating the block with a new SCN; this is why you see redo generation for some select statements.


Oracle Transactions

A transaction is a logical piece of work consisting of one or more SQL statements. A transaction is started whenever data is read or written, and it is ended by a COMMIT or ROLLBACK. DDL statements always perform a commit first; this is called an implicit commit, because the user did not issue the commit.

Oracle uses transaction locking and multiversion concurrency control (using undo records) to ensure serializability of transactions; this prevents user conflicts while ensuring database consistency.

Transaction Properties

Database transactions should exhibit attributes described by the ACID properties:

  • Atomicity - a transaction either happens completely, or none of it happens
  • Consistency - a transaction takes the database from one consistent state to the next
  • Isolation - the effects of a transaction may not be visible to other transactions until the transaction has committed
  • Durability - once the transaction is committed it is permanent; all changes are written to the redo log first, then to the data files

Transaction Concurrent Control

Oracle uses locking to ensure data consistency, but the locking is done in the least restrictive fashion, with the goal of maintaining the maximum amount of concurrency.

Concurrency problems can be any of the following:

Dirty Reads - occur when a transaction reads data that has been updated by an ongoing transaction but not yet committed permanently to the database; it is possible that the updating transaction may be rolled back.
Phantom Reads - caused by the appearance of new data between two database operations in a transaction.
Lost Updates - occur when two transactions read and then update the same data; the second update silently overwrites the first, so that update is lost.
Non-Repeatable Reads - when a transaction finds that data it has read previously has been modified by some other transaction, you have a non-repeatable (fuzzy) read. Basically, you read data at one time and it is different when you read it again.

To overcome the above problems you could serialize all transactions, making sure that data is consistent; however, this does not scale well. Oracle instead provides serializability via isolation levels and the management of undo data.

Isolation Levels

The main isolation levels are the following:

Serializable - the transaction locks all the tables it is accessing to prevent other transactions from updating data until it either rolls back or commits
Repeatable Read - a transaction that reads data twice from a table at different points in time will find the same values each time; both dirty reads and non-repeatable reads are avoided with this level of isolation
Read Uncommitted - allows a transaction to read another transaction's intermediate (uncommitted) values before it commits
Read Committed - guarantees that any data read was committed at the time it was read (no dirty reads)
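Of these, Oracle implements READ COMMITTED (the default) and SERIALIZABLE, plus a READ ONLY mode; they can be set per transaction or per session:

    set transaction isolation level serializable;

    alter session set isolation_level = serializable;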

Oracle uses locks and a multiversion concurrency control system. It uses row-level locking (it never uses lock escalation); it will automatically place the lock for you and store the lock information in the data block, and locks are held until the transaction is either committed or rolled back. Multiversion concurrency control is a timestamp-based approach to reading the original data: Oracle writes the original data to an undo record in the undo tablespace, and queries then have a consistent view of the data, which provides read consistency - they only see data from a single point in time. For more information see Oracle locking.

Oracle Locks

There are a number of different locks in Oracle and tables that can obtain information regarding locks.

DML Locks - Oracle uses row-level locks to protect a row while it is being changed; the lock will never block a reader of the same row. A table lock is also placed, but this just ensures that no DDL is run against the table.
DDL Locks - when changing table attributes, Oracle places an exclusive lock on the table to prevent any modifications to the rows. A DDL lock is also taken during DML transactions to make sure the table structure is not changed while data is being changed or inserted.
Latches - latches protect the memory structures within the SGA; they control the processes that access those memory areas.
Internal Locks - used by Oracle to protect access to structures such as data files, tablespaces and rollback segments.
Distributed Locks - specialized locking mechanisms used in distributed systems.
Blocking Locks - occur when a lock placed on an object by one user prevents other users from accessing the same object.
Deadlocks - occur when two sessions block each other while each waits for a resource that the other session is holding. Oracle always steps in to resolve the issue by rolling back one of the statements (ORA-00060); check the trace files and alert.log for deadlocks.
Useful Views
DBA_LOCK lists all locks or latches held in the database, and all outstanding requests for a lock or latch
DBA_WAITERS shows all the sessions that are waiting for a lock
DBA_BLOCKERS displays a session if it is not waiting for a locked object but is holding a lock on an object for which another session is waiting
V$LOCK This view lists the locks currently held by the Oracle Database and outstanding requests for a lock or latch
V$SESSION lists session information for each current session

See Oracle Locking for more information.

Integrity Constraints and Transaction

See constraints for more information about deferrable and immediate constraints.

Autonomous Transactions

You can create a transaction within a transaction that will commit or roll back independently of its parent transaction. Autonomous transactions can be used in the following:

  • top-level anonymous blocks
  • local (a procedure in a procedure), stand-alone or packaged functions and procedures
  • methods of object types
  • database triggers
Example

create or replace procedure autonomous_example
as
   pragma autonomous_transaction;
begin
   insert into test_table values ('Autonomous insert');
   commit;
end;
/

The pragma directive tells Oracle that this is a new autonomous transaction, independent of its parent.

A trigger cannot contain a COMMIT or ROLLBACK statement; however, by using autonomous transactions you can overcome this limitation. It is considered bad practice, but it is possible.

Trigger with a commit

create table tab1 (col1 number);
create table log (timestamp date, operation varchar2(2000));

create trigger test_trig
after insert on tab1
begin
insert into log values (SYSDATE, 'Insert and commit via trigger');
commit;
end;
/

SQL> insert into tab1 values (1);
insert into tab1 values (1)
*
ERROR at line 1:
ORA-04092: cannot COMMIT in a trigger
ORA-06512: at "TEST01.TEST_TRIG", line 3
ORA-04088: error during execution of trigger 'TEST01.TEST_TRIG'

Overcome commit problem

create or replace trigger test_trig
  after insert on tab1
  declare
  pragma autonomous_transaction;
begin
  insert into log values (SYSDATE, 'Insert and commit using autonomous transaction');
  commit; -- only allowed in autonomous triggers
end;
/

Undo Data

Undo data provides read consistency. There are two ways to control undo: manual or automatic. See undo data for more details.

Oracle Transaction

Simple Oracle transaction

  1. User requests a connection to oracle
  2. A new dedicated server process is started for the user
  3. User executes a statement to insert data in to a table
  4. Oracle checks the user's privileges; it first checks the dictionary cache (cache hit) for the information and, if not found, retrieves it from disk.
  5. Oracle checks to see if the SQL statement has been parsed before (library cache); if it has, this is called a soft parse, otherwise the code has to be compiled - a hard parse.
  6. Oracle creates a private SQL area in the user session's PGA.
  7. Oracle checks to see if the data is in the buffer cache, otherwise it performs a read from the data file.
  8. Oracle will then apply row-level locks where needed to prevent others changing the row (select statements are still allowed on the row)
  9. Oracle then writes the change vectors to the redo log buffer
  10. Oracle then modifies the row in the data buffer cache
  11. The user commits the transaction making it permanent, the row-level locks are released
  12. The log writer process immediately writes out the changed data in the redo log buffers to the online redo log files, in other words the change is now recoverable.
  13. Oracle informs the user process that the transaction was completed successfully
  14. It may be some time before the change in the buffer cache is written out to the data files.

Note: if the user's transaction was an update, then the before-update image of the row would have been written to the undo segments; this would be used if the user rolls back the change, or if another user runs a select on that data before the new update is committed.


Oracle Enterprise Manager (OEM)

Oracle Enterprise Manager is Oracle's GUI-based database management tool. It is possible to manage the Oracle environment using SQL and PL/SQL alone, but OEM gives you a console-based framework to help with almost all of your day-to-day management activities, including tasks such as backup, recovery, export, import and data loading; it also makes it easier to use the Oracle advisors.

There are two versions of OEM - a single-instance version called Database Control, and a system-wide version that lets you manage all your systems, called Grid Control. Many DBAs do not like using GUIs, but it makes sense to use the GUI for the easier day-to-day administration tasks like adding users, resizing tablespaces, etc. I think that knowing both SQL and PL/SQL and the GUI will make you a better DBA in the long run.

The benefits of using the OEM

Out-of-the-box Management Installing and configuring OEM (both Database and Grid Control) is a breeze; if you use the DBCA to create the database it will automatically be installed for you, and configuring OEM is a trivial task.
Web-based Management OEM can be configured to use either HTTP or HTTPS, so security can be tightened if required. Being web-based, all you need is a web browser to access OEM.
Real-Time Monitoring Normally you have to write scripts or use a 3rd-party tool to perform real-time monitoring, which costs additional money or time; OEM performs real-time monitoring right from the off. The new hang analysis feature in 10.2 helps during a database hang, when SQL scripts can't get information back to you quickly: by attaching directly to the SGA it bypasses SQL and helps you out when normal SQL is unusable.
Complete Environment Monitoring

A poorly performing database could be the result of a problem anywhere in the application stack (web server, app server, etc.). OEM can monitor the performance of all the components in the stack, not just the Oracle database. It can monitor:

  • The entire Platform
  • End user experience
  • Systems and web applications availability
  • The extended network components
  • Business transactions
  • Historical data analysis and change tracking
Application Performance Monitoring

Oracle uses alert and notification systems to monitor the database; these alerts can identify problems quickly - a pro-active approach - and thresholds can be easily adjusted within OEM. OEM:

  • Monitors the performance and availability of the system
  • Indicates outages and bottlenecks throughout the application system
  • Analyzes the root cause of performance with drill-downs
  • Minimizes application downtime through the use of efficient problem-resolution tools.
Scaleable Management Capability OEM is highly scalable; the only requirement is that any new server has the management agent installed and started.
Consolidated Database Management The OEM home page contains everything you need for a quick overview of your environment, which means that any problems can be identified very quickly; a few more clicks and you have advanced performance charts available.
Integrated System Management OEM uses Smart Plug-Ins (SPIs) to integrate with system-wide monitoring tools such as HP OpenView.

OEM Database Control

You can automatically install OEM when installing Oracle via the DBCA; just select the 'use database control for database management' option.

Manually configure OEM

unix: export ORACLE_SID=D01
windows: set ORACLE_SID=D01

emca -config dbcontrol db

Start dbconsole emctl start dbconsole
Stop dbconsole emctl stop dbconsole
dbconsole status emctl status dbconsole
Start management agent emctl start agent
Stop management agent emctl stop agent
Management agent status emctl status agent

Main OEM Page

For the layout of the main OEM page, please refer to the Oracle web site, which covers how to navigate it and what is available in OEM.

OEM Grid Control

OEM Grid Control can manage a number of systems across the environment, which means easier management. Grid Control has a number of components:

  • Grid Control - the main OEM web-based interface for monitoring the enterprise
  • Management Agent - monitors the host and sends information to the OEM management Service
  • Management Service - This is the middle tier (OMS) and stores the data collected in a repository, which is used by the Grid Control

Note that Grid Control is a separately licensed product, which you can obtain from Oracle.

If possible it is best to install Grid Control on a server of its own; the number of servers you wish to monitor will determine the performance and disk space required for that server.

There are a number of commands that can be used to control the Oracle Management Services (OMS)

Status # emctl status oms
Start # emctl start oms
Stop # emctl stop oms

To install Grid Control follow below

Grid Control Installation
  1. Make sure that the Oracle CD is in the CD Drive and run the following

    ./runInstaller
  2. At the Welcome screen click Next
  3. Accept the default Operating System group Name, click Next
  4. You now need to run a script called orainstRoot.sh as user root in another terminal session.

    # cd $ORACLE_BASE/oraInventory
    # ./orainstRoot.sh
  5. Now click continue back in the Installer window
  6. Now choose the directory for the OEM files and click Next
  7. If using a new server select the option that creates a new Oracle database, click Next
  8. It will now perform prerequisite check, click Next
  9. Choose a password for SYSMAN and click Next
  10. Select passwords for SYS, SYSTEM and DBSNMP, click Next
  11. You can alter the Metalink and Proxy information if so desired, click Next
  12. Choose the SID and the Global database name for the database, click next
  13. Specify the location for all the database files, click Next
  14. Review the components and then click Next
  15. You will now watch while Grid Control is installed, you will be asked to run root.sh as user root
  16. A window will be displayed showing the various configuration assistants, click OK
  17. Yet again, run the orainstRoot.sh script as user root, then click Next
  18. This screen is important as it details all port numbers and links to the web page (make a note of these)
  19. Click Exit to end the installation

To install the OEM Management agent only

Management agent Installation

  1-7. Use the preceding first seven steps
  8. In the Select a Product window, choose the additional management agent installation type and click Next
  9. Specify the hostname and click Next
  10. Click Next after reading the security page
  11. Review the summary and click Next to start the install
  12. Click Exit to finish the install
  13. In Grid Control click on the Targets tab; you will see all the discovered hosts

Saturday, 13 July 2013

Oracle Memory Architecture

Oracle uses three kinds of memory structures

SGA (System Global Area) - a large area of memory that all the Oracle background processes access.

PGA (Process Global Area) - memory that is private to a single process or thread and is not accessible by any other process or thread.

UGA (User Global Area) - memory that is associated with your session; it can be found in the PGA or the SGA depending on whether you are connected to the database via shared server:

Shared Server - the UGA will be in the SGA
Dedicated Server - the UGA will be in the PGA

SGA

There are five memory structures that make up the System Global Area (SGA). The SGA will store many internal data structures that all processes need access to, cache data from disk, cache redo data before writing to disk, hold parsed SQL plans and so on.

Shared Pool

The shared pool consists of the following areas:

Library cache includes the shared SQL area, private SQL areas, PL/SQL procedures and packages, and control structures such as locks and library cache handles. Oracle code is first parsed, then executed, and this parsed code is stored in the library cache. Oracle first checks the library cache to see if there is an already parsed, ready-to-execute form of the statement in there; if there is, this reduces CPU time considerably and is called a soft parse. If Oracle has to parse the statement, this is called a hard parse. If there is not enough room in the cache, Oracle removes older parsed code; obviously it is better to keep as much parsed code in the library cache as possible. Keep an eye on library cache misses, which are an indication that a lot of hard parsing is going on.

Dictionary cache is a collection of database tables and views containing information about the database, its structures, privileges and users. When statements are issued, Oracle will check permissions, access rights, etc., and will obtain this information from its dictionary cache; if the information is not in the cache, it has to be read in from disk and placed into the cache. The more information held in the cache, the less Oracle has to access the slow disks.

The SHARED_POOL_SIZE parameter is used to determine the size of the shared pool; there is no way to adjust the caches independently, you can only adjust the total shared pool size.

The shared pool uses an LRU (least recently used) list to decide what is held in the cache; see buffer cache for more details on the LRU.

You can clear down the shared pool area by using the following command

    alter system flush shared_pool;

Buffer cache

This area holds copies of data blocks read from the datafiles. The buffer cache contains two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain modified data not yet written to disk.

The LRU list holds the following:

  • free buffers - hold no useful data and can be reused
  • pinned buffers - actively being used by user sessions
  • dirty buffers - contain data that has been read from disk and modified but hasn't yet been written to disk

It's the database writer's job to make sure that there are enough free buffers available to user sessions; if not, it will write out dirty buffers to disk to free up the cache.

There are 3 buffer pools:

  • Default buffer pool, which holds everything not assigned to the keep or recycle buffer pools - DB_CACHE_SIZE
  • Keep buffer pool, which keeps the data in memory (the goal is to keep warm/hot blocks in the pool for as long as possible) - DB_KEEP_CACHE_SIZE
  • Recycle buffer pool, which removes data immediately from the cache after use (the goal here is to age out blocks as soon as they are no longer needed) - DB_RECYCLE_CACHE_SIZE

The standard block size is determined by the DB_BLOCK_SIZE parameter, and DB_CACHE_SIZE sizes the cache for standard-size blocks; if tablespaces are created with a different block size then you must also configure a cache to match that block size.

DB_2K_CACHE_SIZE (used with tablespace block size of 2k)
DB_4K_CACHE_SIZE (used with tablespace block size of 4k)
DB_8K_CACHE_SIZE (used with tablespace block size of 8k)
DB_16K_CACHE_SIZE (used with tablespace block size of 16k)
DB_32K_CACHE_SIZE (used with tablespace block size of 32k)
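For example (a sketch; the tablespace name and datafile path are illustrative):

    -- size the 16k cache first, then create the non-standard tablespace
    alter system set db_16k_cache_size = 64m;

    create tablespace ts_16k
    datafile '/u01/oradata/D01/ts_16k_01.dbf' size 100m
    blocksize 16k;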

The buffer cache hit ratio is used to determine if the buffer cache is sized correctly; the higher the value, the more reads are being satisfied from the cache.

     hit rate = (1 - (physical reads / logical reads)) * 100
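A sketch of computing this from v$sysstat (statistic names can vary slightly between versions):

    select (1 - (phy.value / (db.value + con.value))) * 100 as hit_ratio
    from v$sysstat phy, v$sysstat db, v$sysstat con
    where phy.name = 'physical reads'
    and db.name = 'db block gets'
    and con.name = 'consistent gets';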

You can clear down the buffer pool area by using the following command

    alter system flush buffer_cache;

Redo buffer

The redo buffer is where data that needs to be written to the online redo logs is cached temporarily before being written to disk; this area is normally less than a couple of megabytes in size. These entries contain the information necessary to reconstruct/redo changes made by the INSERT, UPDATE, DELETE, CREATE, ALTER and DROP commands.

The contents of this buffer are flushed:

  • Every three seconds
  • Whenever someone commits a transaction
  • When it gets one-third full or contains 1MB of cached redo log data
  • When LGWR is asked to switch logs

Use the LOG_BUFFER parameter to adjust it, but be careful about making it too large: it will reduce your I/O, but commits will take longer.

Large Pool

This is an optional memory area that provides large allocations of memory for:

  • Shared Server - to allocate the UGA region in the SGA
  • Parallel execution of statements - to allow for the allocation of inter-process message buffers, used to coordinate the parallel query servers.
  • Backup - for RMAN disk I/O buffers
The large pool is basically a non-cached version of the shared pool.

Use the LARGE_POOL_SIZE parameter to adjust it.

Java Pool

Used to execute java code within the database.

Use the JAVA_POOL_SIZE parameter to adjust it (default is 20MB).

Streams Pool

Streams are used to enable data sharing between databases or application environments.

Use the STREAMS_POOL_SIZE parameter to adjust it.

Fixed SGA

The fixed SGA contains a set of variables that point to the other components of the SGA, and variables that hold the values of various parameters. The area is a kind of bootstrap section of the SGA - something that Oracle uses to find the other bits and pieces of the SGA.

For more information regarding setting up the SGA click here.

PGA and UGA

The PGA (Process Global Area) is a specific piece of memory that is associated with a single process or thread; it is not accessible by any other process or thread. Note that each of Oracle's background processes has a PGA area. The UGA (User Global Area) is your state information; this area of memory is accessed by your current session. Depending on the connection type, the UGA can be located in the SGA, which is accessible by any one of the shared server processes; because a dedicated connection does not use shared servers, in that case the memory is located in the PGA:

  • Shared server - UGA will be part of the SGA
  • Dedicated server - UGA will be the PGA

Memory Area                                                   Dedicated Server   Shared Server
Nature of session memory                                      Private            Shared
Location of the persistent area                               PGA                SGA
Location of part of the runtime area for SELECT statements    PGA                PGA
Location of the runtime area for DML/DDL statements           PGA                PGA

Oracle creates a PGA area for each user's session. This area holds data and control information, and is used exclusively by that session. The user's cursors and sort operations are all stored in the PGA. The PGA is split into two areas:

Session information (runtime area) - in an instance running with a shared server, the PGA requires additional memory for the user's session, such as private SQL areas and other information.

Stack space (private SQL area) - the memory allocated to hold a session's variables, arrays, and other information relating to the session.

Automatic PGA Management

To reduce response times, sorts should be performed in the PGA cache area (optimal mode operation); otherwise the sort spills to disk (single-pass / multi-pass operation), which reduces performance, so there is a direct relationship between the size of the PGA and query performance. You can manually tune the parameters below to increase performance:

  • sort_area_size - total memory that will be used to sort information before swapping to disk
  • sort_area_retained_size - memory that is used to retain data after a sort
  • hash_area_size - memory that would be used to store hash tables
  • bitmap_merge_area_size - memory Oracle uses to merge bitmaps retrieved from a range scan of the index.

Starting with Oracle 9i there is a new way to manage the above settings: let Oracle manage the PGA area automatically. By setting the following parameters, Oracle will automatically adjust the PGA area based on user demand.

  • workarea_size_policy - you can set this option to manual or auto (default)
  • pga_aggregate_target - controls how much memory to allocate to the PGA in total

Oracle will try to keep the PGA under the target value, but if you exceed this value Oracle will perform multi-pass operations (disk operations).
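A minimal sketch of switching to automatic PGA management (the target value is illustrative):

    alter system set workarea_size_policy = auto;
    alter system set pga_aggregate_target = 512m;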

System Parameters
workarea_size_policy manual or auto (default)
pga_aggregate_target total amount of memory allocated to the PGA
 
PGA/UGA amount used

select a.name, to_char(b.value, '999,999,999') value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and a.name like '%ga memory%';

Display if using memory or disk sorts

set autotrace traceonly statistics;
set autotrace off;
Display background process PGA memory usage select program, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process;

Oracle Processes (oracle background processes)

Oracle server processes perform all the database operations, such as inserting and deleting data. The Oracle processes, working with the SGA (Oracle memory structures), manage the database.

There are two types of Oracle process

  • User process - Responsible for running the application that connects to the database
  • Oracle server process - Perform oracle tasks that manage the database.

There are a number of server processes that could be running. Windows will only have one process, called Oracle; this process will have one thread for each of the processes below.

Process Monitor
PMON

Responsible for cleaning up after abnormally terminated connections.
Responsible for monitoring other server processes and restarting them if necessary
Registers the instance with the listener dynamically (dynamic service registration).
Restarts failed server processes and dispatcher processes

System Monitor
SMON

Temporary space cleanup
Crash recovery upon restart
Coalescing free space
Recovering transactions active against unavailable files
Instance recovery of a failed node in OPS (Oracle Parallel Server)
Cleans up OBJ$ (a low-level data dictionary table)
Shrinks rollback segments
Offlines rollback segments

Other processes call the SMON process when required.

Distributed database recovery
RECO
Recovers transactions that are left in a prepared state because of a crash or loss of connection during a two-phase commit.
Checkpoint process
CKPT

The checkpoint process is charged with instructing the database block buffer writers to write the database buffer cache to disk; it then updates the data file headers and control file to indicate when the checkpoint was performed. There is a relationship between checkpoints and recovery time: the more checkpointing, the less recovery time is needed when a crash occurs.

The CKPT process does not do the checkpoint itself but assists with the checkpointing process by updating the file headers of the data files.

A checkpointing process involves the following:

  • Flushing the redo log buffers to the redo log files
  • Writing a checkpoint record to the redo log file
  • Flushing the database buffer cache to the data files
  • Updating the data file headers and control files after the checkpoint completes
Database block writer
DBWn

Responsible for writing dirty blocks to disk when free space within the database buffer cache is low; it writes the dirty blocks from the buffer cache out to disk. The cache uses the LRU (Least Recently Used) algorithm, which retains data in memory based on how long it has been since someone asked for that data. The database buffer cache is flushed to disk when:

  • when the database issues a checkpoint
  • when a server process can't find a clean reusable buffer after checking a threshold number of buffers
  • every 3 seconds
  • a user's process has searched too long for a free buffer when reading a buffer into the buffer cache
  • the instance is shut down
  • a tablespace is put in backup mode or taken offline
  • a segment is dropped

If you have multiple CPUs then it is advisable to run multiple database writers. Use the DB_WRITER_PROCESSES parameter to increase the number of database writers; the instance has to be restarted for the change to take effect.
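For example (DB_WRITER_PROCESSES is static, so the change goes to the spfile and takes effect at the next restart):

    alter system set db_writer_processes = 4 scope = spfile;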

Log writer
LGWR

Responsible for flushing to disk the contents of the redo log buffer located in the SGA. Both committed and uncommitted changes are written to the redo log buffer. The redo log buffer is flushed to disk before the data blocks are written to disk. The redo log buffer is flushed to disk

  • every 3 seconds
  • whenever a user commits a transaction
  • when the redo log buffer is a third full or contains 1 Mb of buffered data
  • before the DBWn process writes when a checkpoint occurs
Archive process
ARCn

Used when the database is in archivelog mode; it copies the online redo log file to another location when LGWR fills it, and these archived log files can be used to perform media recovery. There can be a maximum of ten archive processes running, ARC0-ARC9. The LOG_ARCHIVE_MAX_PROCESSES parameter determines how many archive processes will be started (default is 1).

Manageability Monitor
MMON
Collects statistics to help the database manage itself. The MMON process collects the AWR (automatic workload repository) snapshot information which is used by the ADDM (automatic database diagnostic monitor), also MMON issues alerts when database thresholds are exceeded.
Manageability Monitor Light
MMNL
The process flushes ASH information to disk when the buffer is full, it also captures session history and database metrics.
Memory Manager
MMAN
Uses the metrics collected to determine the ideal distribution of memory within Oracle. It constantly monitors the database and adjusts the memory allocations according to workload.
Job Queue Coordination
CJQ0
Used to schedule and run user jobs. It spawns job queue slave processes (J000-J999) which actually run the job.
Job Queue Process
J000-J999
These processes are what actually run the scheduled jobs requested by CJQ0.
File Mapping Monitor
FMON
Maps files to intermediate storage layers and physical devices; the mapping information is managed through the DBMS_STORAGE_MAP package. Generally the 3rd-party LVM (logical volume manager) supplier will provide a driver to map against.
Recovery Writer
RVWR
This process is started when you implement flashback logging; it logs the before image (taken from the flashback buffers) of an Oracle block before it is changed, writing it to the flashback log files.
Change Tracking Writer
CTWR
This process tracks any data blocks that have changed, which RMAN can then use to speed up backups, as it no longer needs to read the entire data file to see what has changed.
Queue Monitor Coordinator
QMNC
Spawns and coordinates queue slave processes.
Block server process
BSP
Used in OPS; keeps each server's SGA in the cluster consistent with the others.
Lock monitor process
LMON
Used in OPS and monitors all instances in a cluster to detect a failure of an instance.
Lock manager daemon
LMD
Used in OPS and controls the global locks and global resources for the block buffer cache in a clustered environment.
Lock process
LCKn
Used in OPS and is the same as the LMD daemon but handles requests for all global resources other than database block buffers
Dispatcher process
Dnnn
Dispatcher processes that are used when using a shared server environment
Shared Server process
Snnn
Shared Server processes that are used when using a shared server environment
Oracle process spawner
PSP0
Process spawner has the job of creating and managing other Oracle processes.
Oracle shadow process
SHAD
Oracle's shadow process, could not find much on this process
Streams Advanced Queuing process
q000 - q???
I believe this is something to do with Oracle Streams Advanced Queuing

There are a number of useful commands and views you can use to get information regarding the running processes.

Useful SQL
Display all processes select name, description from v$bgprocess;
Display process memory usage select program, pid, spid, username, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process;
Display user processes

select
  substr(s.username,1,18) username,
  substr(s.program,1,20) program,
  decode(s.command,
    0,'No Command',
    1,'Create Table',
    2,'Insert',
    3,'Select',
    6,'Update',
    7,'Delete',
    9,'Create Index',
    15,'Alter Table',
    21,'Create View',
    23,'Validate Index',
    35,'Alter Database',
    39,'Create Tablespace',
    41,'Drop Tablespace',
    40,'Alter Tablespace',
    53,'Drop User',
    62,'Analyze Table',
    63,'Analyze Index',
        s.command||': Other') command
from
  v$session s,
  v$process p,
  v$transaction t,
  v$rollstat r,
  v$rollname n
where s.paddr = p.addr
and s.taddr = t.addr (+)
and t.xidusn = r.usn (+)
and r.usn = n.usn (+)
order by 1
;
Useful Views
V$BGPROCESS displays information about the background processes
V$PROCESS contains information about the currently active processes

Oracle FlashBack Architecture

There are a number of flashback levels

row level - flashback query, flashback versions query, flashback transaction query
table level - flashback table, flashback drop
database level - flashback database

Oracle 10g has several error-correction techniques that use undo data; however, they are only available if you use automatic undo management (AUM):

  • Flashback query - retrieves data from a past point in time
  • Flashback versions query - shows you different versions of data rows, plus start and end times of a particular transaction that created that row
  • Flashback transaction query - lets you retrieve historical data for a given transaction and the SQL code to undo the transaction.
  • Flashback table - recovers a table to its state at a past point in time, without having to perform a point in time recovery.

There are two other flashback technologies that do not use the undo data, they use flashback logs and recyclebin instead.

  • flashback database - restore the whole database back to a point in time.
  • flashback drop - allows you to reverse the effects of a drop table statement, without resorting to a point-in-time recovery

DBMS_FLASHBACK, flashback table, flashback transaction query, flashback version query and SELECT ... AS OF ... statements all use the undo segments. Flashback database uses the flashback logs, and flashback drop uses the recycle bin.

When using flashback, if any operation violates a constraint the flashback operation will be rolled back; you can disable constraints, but it's probably not a good idea. If a table has a foreign key, it is a good idea to flash back both tables together. Flashback technology requires you to lock the whole table; if it cannot, it will fail immediately.

RMAN can only do flashback database and no other flashback technology.

Flashback Query

Using flashback query involves using a SELECT statement with an AS OF clause; you can select data from a past point in time. If you get an ORA-08180 error it means that the data is no longer available in the undo segments.

Privilege grant flashback on table_test to pvalle;
grant flashback on any table to pvalle;
Flashback query (time)

select * from employees as of timestamp
to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')
where last_name = 'valle';

select employee_id, name from hr.employee as of timestamp (systimestamp - interval '15' minute);

Flashback query (SCN)

select * from employees as of scn 4542;

Note: a timestamp maps to an SCN with roughly 3 seconds of granularity, so flashing back by timestamp may be off by up to 3 seconds; if you need to be dead accurate, use an SCN instead.

Take the whole session back in time

grant execute on dbms_flashback to test01;

execute dbms_flashback.enable_at_time(to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS'));

execute dbms_flashback.disable;

Reinserting

insert into employees
select * from employees as of timestamp
to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')
where last_name = 'valle';

insert into employees select * from employees as of scn 4542;

List the flashback entries select * from flashback_transaction_query;
Obtain a time from an SCN select scn_to_timestamp(1408640) as ts from dual;
Obtain SCN from time point select timestamp_to_scn(to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')) as scn from dual;
What is in the undo tablespace

select begin_time, end_time, tuned_undoretention from v$undostat;

Note: the time is in seconds

Flashback Version Query

Flashback version query provides you with all the versions of a row between two points in time or SCNs; this is useful if you want to audit a table, finding out what happened to a row. However, there are some points to remember:

  • You can only retrieve committed rows
  • The query will retrieve all deleted rows as well as current rows
  • The query will retrieve any rows that were deleted and reinserted later on
  • The query result is in table format and contains a row for each version of a row during the time or SCN interval you specify

The limitations of flashback version query are:

  • You can only query actual tables not views
  • you cannot apply the VERSIONS clause across DDL operations
  • the query will ignore physical row changes, for example during a segment shrink operation
  • you cannot use it against external or temporary tables

The most useful columns to obtain are below:

  • VERSIONS_STARTTIME - start timestamp of version
  • VERSIONS_STARTSCN - start SCN of version
  • VERSIONS_ENDTIME - end timestamp of version
  • VERSIONS_ENDSCN - end SCN of version
  • VERSIONS_XID - transaction ID of version
  • VERSIONS_OPERATION - DML operation of version
Flashback version query (time)

select versions_xid as xid, versions_startscn as start_scn, versions_endscn as end_scn,   versions_operation as operation, empname
from employees
  versions between timestamp minvalue and maxvalue
order by versions_startscn;

Flashback version query (SCN) select versions_xid as xid, versions_startscn as start_scn, versions_endscn as end_scn,   versions_operation as operation, empname
from employees
  versions between scn minvalue and maxvalue
where emp_id = 863;
Obtain a time from an SCN select scn_to_timestamp(1408640) as ts from dual;
Obtain SCN from time point select timestamp_to_scn(to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')) as scn from dual;

Flashback Transaction Query

Identifies which transaction or transactions were responsible for a certain change in a table's data during a specified time period; basically it queries the FLASHBACK_TRANSACTION_QUERY view. It provides the SQL code that will undo the change, and flashback transaction query can use an index path to retrieve the data instead of reading an entire redo log file.

Flashback transaction considerations:

  • Turn on minimal supplemental logging if your operations involve chained rows and special storage structures, such as clustered tables
  • When querying an IOT, an update is shown as a delete/insert operation.
  • If the query involves a dropped table or a dropped user, it returns object numbers and user IDs instead of the object names and usernames.

Consider setting the retention guarantee option for the undo tablespace, this will ensure that the unexpired data in the undo segments is preserved.

Flashback transaction query will contain the following columns

  • start_scn and start_timestamp - identify when the transaction started
  • commit_scn and commit_timestamp - tell you when the transaction was committed
  • xid_row_id and undo_change# - identify the row, transaction and change numbers
  • operation - tells you what sort of operation occurred insert, delete or update.
  • logon_user, table_name and table_owner - username, table name and schema name
  • undo_sql - the exact SQL code to undo the change

If you have chained rows or use clustered tables, then Oracle recommends that you turn on supplemental logging at the database level.

Privilege grant select any transaction to pvalle;
Supplemental logging alter database add supplemental log data;
Display undo segments select operation, undo_sql, table_name from flashback_transaction_query;
Flashback transaction query

select operation, undo_sql, table_name
from flashback_transaction_query
where start_timestamp >= to_timestamp ('04-12-2007 05:00:00', 'DD-MM-YYYY HH:MI:SS')
and commit_timestamp <= to_timestamp ('04-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')
and table_owner='VALLEP';

Note: this will give you the SQL to reverse the change that was applied to the database; just cut and paste to use it.

Obtain a time from an SCN select scn_to_timestamp(1408640) as ts from dual;
Obtain SCN from time point select timestamp_to_scn(to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')) as scn from dual;

You use flashback version query to obtain the XID, and then use the XID in the flashback transaction query statement.

flashback version and flashback transaction query

select versions_xid as xid, versions_startscn as start_scn, versions_endscn as end_scn, versions_operation as operation,
empname from employees
versions between scn minvalue and maxvalue
as of scn 7920
where emp_id = 222;

XID                         start_scn   end_scn   operation  empname    salary
---------------------------------------------------------------------------------
0003002F00038BA9            2266                  I           Paul      20000

Now use the XID in the flashback transaction query to obtain the SQL to undo the change.

select xid, start_scn, commit_scn, operation, logon_user, undo_sql
from flashback_transaction_query
where xid=hextoraw('0003002F00038BA9');

XID                   start_scn    commit_scn   operation   user       undo_sql
---------------------------------------------------------------------------------------
0003002F00038BA9      195243       195244       delete      vallep     insert into HR.EMP
                                                                       ('EMPNO', 'EMPNAME', 'SALARY')
                                                                       values ('222', 'Paul', '20000');

Flashback Table

There are two distinct table-related flashback features in Oracle: flashback table, which relies on undo segments, and flashback drop, which relies on the recyclebin, not the undo segments.

Flashback table lets you recover a table to a previous point in time. You don't have to take the tablespace offline during the recovery; however, Oracle acquires exclusive DML locks on the table or tables that you are recovering, although the table continues to be online.

When using flashback table, Oracle does not preserve the ROWIDs when it restores the rows in the changed data blocks of the tables, since it uses DML operations to perform its work. You must have enabled row movement on the tables that you are going to flash back; only flashback table requires you to enable row movement.

If the data is no longer in the undo segments then you cannot recover the table by using flashback table; however, you can use other means to recover the table.
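Before attempting a flashback table, a quick check of how far back undo is likely to reach (the undo_retention target is in seconds and is not a guarantee unless retention guarantee is set):

select value as undo_retention_secs from v$parameter where name = 'undo_retention';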

Restrictions on flashback table recovery

  • You cannot use flashback table on SYS objects
  • You cannot flashback a table that has had preceding DDL operations on it, such as table structure changes or dropped columns
  • The flashback operation must entirely succeed or it will fail; if flashing back multiple tables, all tables must be flashed back or none
  • Any constraint violations will abort the flashback operation
  • You cannot flashback a table that has had any shrink or storage changes (pctfree, initrans and maxtrans)
Privilege grant flashback on table_test to pvalle;
grant flashback any table to pvalle;
Enable row movement alter table test enable row movement;
Display row movement select table_name, row_movement from user_tables;
Flashback table (SCN) flashback table test to scn 4587309871;
Flashback table (time)

flashback table employees to timestamp to_timestamp
('04-12-2007 05:00:00', 'DD-MM-YYYY HH:MI:SS');

flashback table employees to timestamp (systimestamp - interval '15' minute);

flashback table employees to timestamp (sysdate - 1);

flashback table employees, depts to timestamp (sysdate - 1);

Enable triggers

flashback table employees to timestamp (sysdate -1) enable triggers;

Note: Oracle disables triggers by default when flashing back a table

Flashback Drop

Flashback drop lets you reinstate a previously dropped table exactly as it was before the drop. Below is a list of what is kept where when a table is dropped:

  • Recyclebin: tables and indexes
  • Data dictionary: unique keys, primary key, not-null constraints, triggers and grants
  • Not recovered: foreign key constraints

If two tables exist in the recyclebin with the same name, the newest one will be restored unless you state which one you want to restore. If you restore a table it is removed from the recyclebin.
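A sketch of restoring a specific same-named copy (the BIN$ name below is illustrative; query your recyclebin for the real one):

select object_name, original_name, droptime from user_recyclebin where original_name = 'TEST';
flashback table "BIN$abCdEfGhIjKlMnOp==$0" to before drop rename to test_old;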

Recover flashback table <table_name> to before drop;
Recover and rename

flashback table <table_name> to before drop rename to <new name>;

Note: use if the table name already exists

Recover same name table flashback table "BIN$hfkjdshfkhs" to before drop;
List recycle bin select object_name, original_name, type from user_recyclebin;
select object_name, original_name, type from dba_recyclebin;
select * from recyclebin; (shows only current user info)
show recyclebin;
drop table completely

drop table <table_name> purge;

Note: table will not be in recyclebin

drop table from r/bin

purge table <table_name>;

Note: table will be gone from recyclebin

truncate table

truncate table <table_name>;

Note: table will not be in the recyclebin

drop user

drop user test cascade;

Note: will not store anything in the recyclebin

purge recyclebins purge recyclebin; (purge user recyclebin)
purge dba_recyclebin;(purges all recyclebins)
purge tablespace test user test03; (purge test03 from tablespace test)
List dropped table contents

select * from "BIN$NwM/FEjuSUORrgxHUPR3WA==$0";

Note: the double quotes are required

Naming Convention BIN$globalUID$version

Space pressure on a tablespace will cause Oracle to purge the recyclebins of the users within that tablespace, in FIFO order (oldest dropped objects first). When a tablespace has the auto extend feature turned on, Oracle will clear down the recyclebin first, then auto extend.
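To see which objects would be purged first under space pressure (SPACE is in database blocks; ordering by DROPTIME shows the FIFO order):

select original_name, droptime, space from user_recyclebin order by droptime;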

Limitations on flashback drop:

  • Recyclebin is only available to non-system, locally managed tablespaces.
  • There is no guaranteed timeframe for how long an object will be stored in the recyclebin
  • DML and DDL cannot be used on objects in the recyclebin
  • Must use the recyclebin name to query the table
  • All dependent objects are retrieved when you perform a flashback drop.
  • Virtual private database (VPD) and FGA policies defined on tables are not protected for security reasons
  • Partitioned index-organised tables are not protected by the recycle bin.
  • Referential constraints are not protected by the recycle bin. They must be re-created after the table has been restored.

Flashback Database

The database can be taken back in time by reversing all work done sequentially. The database must be opened with resetlogs, as if an incomplete recovery has happened. This is ideal if you have a logical corruption (a wrong transaction, etc.) and require the database to be rewound to before the corruption occurred. If you have a media or physical problem, a normal recovery is required.

Flashback database is not enabled by default. When it is enabled, a background process (RVWR, the recovery writer) copies modified blocks to the flashback buffer, which is then flushed to disk as flashback logs. Remember that flashback logging is not a log of changes but a log of complete block images. Not every changed block is logged, as this would be too much for the database to cope with; only as many blocks are copied as can be done without impacting performance. Flashback database constructs a version of the data files that is just before the time you want. The data files will probably be in an inconsistent state, as different blocks will be at different SCNs; to complete the flashback process, Oracle then uses the redo logs to recover all the blocks to the exact time requested, synchronizing all the data files to the same SCN. Archivelog mode must be enabled to use flashback database. An important point to remember is that flashback can only reverse changes; it can never redo them.

The advantage in using flashback database is speed and convenience with which you can take the database back in time.

You can use RMAN, SQL or Enterprise Manager to flashback a database. If the flash recovery area does not have enough room, the database will continue to function but flashback operations may fail. It is not possible to flashback a single tablespace; you must flashback the whole database. If performance is being affected by flashback data collection, turn flashback data collection off for some tablespaces.

You cannot use flashback database to undo a data file that was resized to a smaller size. When using 'backup recovery area' and 'backup recovery files', controlfiles, redo logs, permanent files and flashback logs will not be backed up.

Enable

select log_mode from v$database; (must use archivelog mode)
alter system set db_recovery_file_dest='c:/flash_recovery_area';
alter system set db_recovery_file_dest_size=8G;
alter system set db_flashback_retention_target=4320; (3 days, time in minutes)
shutdown immediate;
startup mount;
alter database flashback on; (RVWR process will start)
alter database open;

Monitoring

alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';
select flashback_on from v$database; (check its enabled)
select retention_target, estimated_flashback_size, flashback_size from v$flashback_database_log;
select oldest_flashback_scn, oldest_flashback_time from v$flashback_database_log;
select end_time, flashback_data, db_data, redo_data from v$flashback_database_stat; (flashback growth)

Flashback Buffer select * from v$sgastat where name like 'flashback%';
Flashback database example

startup mount;
flashback database to timestamp to_timestamp('15-02-07 10:00:00', 'dd-mm-yy hh24:mi:ss');
alter database open read only; (optional: check the data is correct; if it is, open the database, if not continue)
shutdown abort;
startup mount;
flashback database to timestamp to_timestamp('15-02-07 10:02:00', 'dd-mm-yy hh24:mi:ss');
alter database open read only; (optional: check the data is correct; if it is, open the database, if not continue)

When happy:

alter database open resetlogs;

Flashback using RMAN

flashback database to time = to_date('15-02-07 10:00:00', 'dd-mm-yy hh24:mi:ss');
flashback database to scn = 2765665;
flashback database to sequence=2123 thread=1;

Tablespace Flashback Configuration
Turn flashback on (tablespace) alter tablespace <tablespace_name> flashback on; (must be in mount mode)
Turn flashback off (tablespace) alter tablespace <tablespace_name> flashback off; (this can be done in open or mount mode)
Display tablespaces with flashback select name, flashback_on from v$tablespace;
Flashback space usage select * from v$flash_recovery_area_usage;

Note: if one or more tablespaces are not generating flashback data, then before carrying out a flashback operation the files making up those tablespaces must be taken offline. Offline files are ignored by recover and flashback. Remember that you must recover these files to the same point as the flashback, otherwise the database will not open.
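A sketch of handling such a file (the datafile number and SCN are illustrative):

alter database datafile 7 offline;
flashback database to scn 2765665;
-- recover the offline file to the same point before opening the database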

Flash Recovery Area

The alert log and DBA_OUTSTANDING_ALERTS will hold status information regarding the flash recovery area. You can back it up with the RMAN commands 'backup recovery area' or 'backup recovery files' (see below). Controlfiles and redo logs are permanently stored in the flash recovery area.
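A possible check for outstanding space alerts (the object_type filter value is an assumption):

select reason, suggested_action from dba_outstanding_alerts where object_type = 'RECOVERY AREA';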

Monitoring

select * from v$recovery_file_dest;

Note: this details space_used, space_limit, space_reclaimable, # of files

What is using the flash recovery area select * from v$flash_recovery_area_usage;
Backing up flashback area

rman> backup recovery area;
Note: includes backup sets, datafile copies, archived logs and controlfile autobackups; excludes flashback logs, the current controlfile and online redo logs

rman> backup recovery files;
Note: includes all files whether or not they are in the flash recovery area

Using Restore Points

There is a new enhancement to recovery techniques: restore points. A restore point is an alias for an SCN, which eliminates the need to research and record SCNs or timestamps for flashback database and flashback table operations; all you need to do is refer to the restore point when recovering.

An ordinary restore point does not guarantee that the data will remain in the flashback logs for a flashback database operation to succeed under all circumstances. By creating a guaranteed restore point you can be assured that all the necessary data will be preserved; remember that you will need enough space in the flash recovery area to hold that data. A guaranteed restore point does not depend on flashback logs, so you can create one with flashback logging turned off; guaranteed restore points use a logging mechanism that is similar to flashback logs but separate from them, so if you are using guaranteed restore points it is better to turn off flashback logging, otherwise you may end up filling the flash recovery area.

One note: you can only flash back to the restore point itself; you cannot flash back to an arbitrary point in time using a restore point. For that you must use backups and archived logs to do a point-in-time recovery.
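An end-to-end sketch, assuming a guaranteed restore point named before_batch is created ahead of a risky batch run:

create restore point before_batch guarantee flashback database;
-- ... the batch runs and goes wrong ...
shutdown immediate;
startup mount;
flashback database to restore point before_batch;
alter database open resetlogs;
drop restore point before_batch;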

Not Guaranteed
Create create restore point large_update;
Remove drop restore point large_update;
Guaranteed
Create guaranteed restore point create restore point test_guarantee guarantee flashback database;
Remove guaranteed restore point drop restore point test_guarantee;
Other Operations
Using a restore point flashback database to restore point large_update;
flashback table test to restore point table_update;
Displaying restore points select name, scn, storage_size, time, guarantee_flashback_database from v$restore_point;
Flashback database running

select flashback_on from v$database;

Note: you should get a reply of 'RESTORE POINT ONLY' if you are using guaranteed restore points with flashback logging turned off