Sunday, 19 April 2015

Oracle Resumable Space

If you were running a long batch job and the tablespace ran out of space, this would cause an error; you would then have to increase the amount of space in the tablespace and rerun your job, which could take quite a bit of time.
Oracle's resumable space allocation will suspend a running job that has run into problems due to lack of space and will automatically continue it once the space issue has been fixed.
You can make all operations in a session run in resumable space allocation mode by using an ALTER SESSION command. The following database operations are resumable:
  • Queries - they can always be resumed after the temporary tablespace has run out of space.
  • DML Operations - insert, delete and update can all be resumed
  • DDL Operations - index operations involving creating, rebuilding and altering are resumable as are create table as select operations
  • import and export operations - SQL*Loader jobs are resumable, but you must use the resumable parameter in the SQL*Loader job (see the sketch below).
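
As an illustration (the schema, control file and resumable name below are made up), a SQL*Loader job can be made resumable from the command line:

    sqlldr userid=scott/tiger control=emp.ctl log=emp.log \
           resumable=true resumable_name='emp_load' resumable_timeout=7200
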
Operations can be resumed after the following types of errors:
  • out of space - a typical error is ORA-01653
  • maximum extents errors - a typical error is ORA-01628
  • user space quota errors - a typical error is ORA-01536
Grant the resumable privilege:
    grant resumable to vallep;
    grant execute on dbms_resumable to vallep;
Who has the privilege:
    select grantee, privilege from dba_sys_privs where privilege='RESUMABLE';
Who has set resumable mode:
    select user_id, session_id, status, timeout, name from dba_resumable;
Who is in resumable mode waiting for space:
    select user_id, session_id, name, timeout, start_time, suspend_time from dba_resumable;
    select username, event from v$session where event like '%sus%';
Resumable space across the entire database (init parameter; default is 0, time is in seconds):
    resumable_timeout=7200
Resumable space in a session:
    alter session enable resumable;
Resumable space in a session with a timeout (time is in seconds):
    alter session enable resumable timeout 18000;
    execute dbms_resumable.set_timeout(18000);
Resumable space in a session with a name:
    alter session enable resumable name 'pauls_resumable';
Display the current resumable timeout:
    select dbms_resumable.get_timeout() from dual;
Disable resumable space mode:
    alter session disable resumable;
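
The DBMS_RESUMABLE package can also be combined with an AFTER SUSPEND trigger, which fires whenever a statement is suspended. A minimal sketch (the trigger name and the two-hour timeout are illustrative):

    create or replace trigger resumable_default_timeout
    after suspend on database
    begin
       -- give the DBA two hours to add space before the suspended statement errors out
       dbms_resumable.set_timeout(7200);
    end;
    /
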
Useful Views
DBA_SYS_PRIVS describes system privileges granted to users and roles.
DBA_RESUMABLE lists all resumable statements executed in the system.
V$SESSION lists session information for each current session


Wednesday, 17 July 2013

Oracle Locking and Concurrency

In Oracle you will learn that:

  • Transactions are what databases are all about; they are good
  • You should defer committing for as long as you have to. You should not commit early to avoid 'stressing the system'; it does not stress the system to have long or large transactions. The rule is: commit when you must, and not before. Transactions should be as small or as large as your business logic dictates
  • You should hold locks on data for as long as you need to. Locks are tools for you to use, not things to be avoided. Locks are not a scarce resource.
  • There is no overhead involved with row-level locking in Oracle, none
  • You should never escalate a lock (for example, use a table lock instead of a row lock) because it would be 'better on the system'. In Oracle it won't be better for the system - it will save no resources
  • Concurrency and consistency can be achieved. You can get it fast and correct, every time.

Oracle uses a different locking method than most other databases. Oracle's locking policy consists of the following:

  • Oracle locks data at the row level on modification only. There is no lock escalation to a block or table level, ever.
  • Oracle never locks data just to read it. There are no locks placed on rows of data by simple reads.
  • A writer of data does not block a reader of data.
  • A writer of data is blocked only when another writer of data has already locked the row it was going after.

Pessimistic Locking

When a user queries some data and picks a row to change, the statement below is used:

Pessimistic locking:
    select empno, ename, sal
    from emp
    where empno = :empno
    and ename = :ename
    and sal = :sal
    for update nowait;

what the "for update nowait" statement does is to lock the row against updates by other sessions. This is why this approach is called pessimistic locking. We lock the row before we attempt to update it because we doubt that the row will remain unchanged otherwise. We'll get three outcomes from this statement:

  • if the underlying data has not changed, we get our row back and this row will be locked from updates by others (but not reads).
  • if another user is in the process of modifying that row, we will get an ORA-00054 resource busy error. We are blocked and must wait for the other user to finish with it.
  • if, in the time between selecting the data and indicating our intention to update it, someone has already changed the row, then we will get zero rows back: the data is stale. The application needs to re-query the data and lock it before allowing the end user to modify it, in order to avoid a lost update.
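
As a sketch of handling these outcomes (the emp table, employee number and 10% raise are purely illustrative), the ORA-00054 error can be trapped with a named exception:

    declare
       resource_busy exception;
       pragma exception_init(resource_busy, -54);   -- map ORA-00054 to a PL/SQL exception
       l_sal emp.sal%type;
    begin
       select sal into l_sal
         from emp
        where empno = 7369
          for update nowait;                        -- lock the row or fail immediately
       update emp set sal = l_sal * 1.1 where empno = 7369;
    exception
       when resource_busy then
          dbms_output.put_line('Row is locked by another session - try again later');
       when no_data_found then
          dbms_output.put_line('Row has changed or been deleted - re-query before updating');
    end;
    /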

Optimistic Locking

Optimistic locking is to keep the old and new values in the application and, upon updating the data, use an update like the one below:

Optimistic locking:
    update table
    set column1 = :new_column1, column2 = :new_column2, ....
    where column1 = :old_column1
    and column2 = :old_column2
    ...

We are optimistically hoping that the data has not changed. If we are lucky the row is updated; if not, we update zero rows, and we then have two options: get the user to re-key the data, or try to merge the old and new values (which takes a lot of code).
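
A minimal sketch of the optimistic approach against the emp table (the columns and bind variables are illustrative); checking SQL%ROWCOUNT tells the application whether the update found an unchanged row:

    update emp
       set sal   = :new_sal,
           ename = :new_ename
     where empno = :empno
       and sal   = :old_sal        -- old values read earlier by the application
       and ename = :old_ename;
    -- if SQL%ROWCOUNT = 0 the row was changed (or deleted) by another session,
    -- so the application must re-query the data and re-apply the change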

So, the best method in Oracle would be to use pessimistic locking, as the user can have confidence that the data they are modifying on the screen is currently owned by them - in other words, the row is checked out and nobody else can modify it. You may be thinking: what if the user walks away while the row is locked? In that scenario it would be better to get the application to release the lock, or to use resource profiles in the database to time out idle sessions. Remember that even if a row is locked you can still read it; a row is never blocked for reading in Oracle.

Blocked Inserts

The only time an INSERT will block is when you have a table with a primary key or unique constraint on it and two sessions simultaneously attempt to insert a row with the same value. This is most easily avoided by using Oracle sequences to generate primary keys, as they are a highly concurrent method of generating unique keys in a multi-user environment.

Blocked Updates and Deletes

To avoid update and delete blocking, use one of the two locking methods described above: pessimistic or optimistic locking.

Deadlocks

Deadlocks occur when two sessions each hold a resource that the other wants. Oracle records all deadlocks in a trace file. The number one cause of deadlocks is unindexed foreign keys (a query for locating them is sketched after the list below):

  • if you update the parent table's primary key, the child table will be locked in the absence of an index
  • if you delete a parent row, the entire child table will be locked, again in the absence of an index.
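
A rough sketch of a query to find candidate unindexed foreign keys in your own schema (it only checks that each foreign key column appears in some index at the same position, so treat the results as a starting point rather than a definitive list):

    select c.table_name, c.constraint_name
      from user_constraints c
     where c.constraint_type = 'R'                   -- referential (foreign key) constraints
       and exists (
            select null
              from user_cons_columns cc
             where cc.constraint_name = c.constraint_name
               and not exists (
                    select null
                      from user_ind_columns ic
                     where ic.table_name      = cc.table_name
                       and ic.column_name     = cc.column_name
                       and ic.column_position = cc.position ) )
     order by c.table_name;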

Lock Escalation

In other RDBMSs, when a user locks some number of rows (around 100, though this may vary) the lock is escalated to a table lock. Oracle never escalates a lock, ever. Oracle does, however, practice lock conversion (also called lock promotion - the terms are synonymous).

If a user selects a row using FOR UPDATE, two locks are placed: an exclusive lock on the row, and a ROW SHARE lock on the table itself. The table lock prevents other users from placing an exclusive lock on the table, and so prevents them from altering the table structure.

Type of Locks

There are a number of different types of locks, as listed below:

  • DML Locks - DML (data manipulation language), in general SELECT, INSERT, UPDATE and DELETE. DML locks will be locks on a specific row of data, or a lock at the table level, which locks every row in the table.
  • DDL locks - DDL (data definition language), in general CREATE, ALTER and so on. DDL locks protect the definition of the structure of objects.
  • Internal locks and latches - These are locks that Oracle uses to protect its internal data structures.
  • Distributed Locks - These are used by OPS to ensure that different nodes are consistent with each other.
  • Deadlocks - Occurs when two sessions block each other while each waits for a resource that the other session is holding.
  • PCM - PCM (Parallel Cache Management) These are locks that protect one or more cached data blocks in the buffer cache across multiple instances, also used in OPS.

DML Locks

There are two main types of DML locks: TX (transaction) and TM (DML enqueue). A TX lock is acquired when a transaction initiates its first change and is held until the transaction performs a COMMIT or ROLLBACK. It is used as a queuing mechanism so that other sessions can wait for the transaction to complete. A TM lock is used to ensure that the structure of the table is not altered while you are modifying its contents.

The complete set of DML table lock modes is:

Row Share - permits concurrent access but prohibits others from locking the table for exclusive access
Row Exclusive - same as Row Share but also prohibits locking in share mode
Share - permits concurrent queries but prohibits updates to the table
Share Row Exclusive - prevents others from locking in share mode or updating rows anywhere in the table
Exclusive - permits queries but prohibits any other DML against the table

The locks taken by DML statements are:

RS (table) and RX (row) - select ... for update;  or  lock table ... in row share mode;
RX (table) and RX (row) - any insert, update or delete
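
For reference, each of these table lock modes can also be requested explicitly with LOCK TABLE (emp is just an example table; the table locks are released on commit or rollback):

    lock table emp in row share mode;
    lock table emp in row exclusive mode;
    lock table emp in share mode;
    lock table emp in share row exclusive mode;
    lock table emp in exclusive mode;
    rollback;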

Below is some useful SQL that can be used to identify locks, transaction IDs and who is blocking whom.

Useful SQL
Identify locks and transaction IDs:
    select username,
           v$lock.sid,
           trunc(id1/power(2,16)) rbs,
           bitand(id1, to_number('ffff', 'xxxx'))+0 slot,
           id2 seq,
           lmode,
           request
    from v$lock, v$session
    where v$lock.type = 'TX'
    and v$lock.sid = v$session.sid
    and v$session.username = USER;

Identify who is blocking whom:
    select (select username from v$session where sid=a.sid) blocker,
           a.sid,
           ' is blocking ',
           (select username from v$session where sid=b.sid) blockee,
           b.sid
    from v$lock a, v$lock b
    where a.block = 1
    and b.request > 0
    and a.id1 = b.id1
    and a.id2 = b.id2;
Using NOWAIT

select * from employee for update nowait;
select * from employee for update wait 10;

Note: the first command returns an error immediately if the row is locked; the second waits up to 10 seconds for the lock to be released before returning an error.

Useful Views
V$TRANSACTION

lists the active transactions in the system

V$SESSION lists session information for each current session.
V$LOCK lists the locks currently held by the Oracle Database and outstanding requests for a lock or latch.
V$LOCKED_OBJECT lists all locks acquired by every transaction on the system. It shows which sessions are holding DML locks (that is, TM-type enqueues) on what objects and in what mode.
DBA_LOCK lists all locks or latches held in the database, and all outstanding requests for a lock or latch
DBA_BLOCKERS displays a session if it is not waiting for a locked object but is holding a lock on an object for which another session is waiting
DBA_DDL_LOCKS lists all DDL locks held in the database and all outstanding requests for a DDL lock
DBA_DML_LOCKS lists all DML locks held in the database and all outstanding requests for a DML lock.

DDL Locks

DDL locks are automatically placed against objects during a DDL operation to protect them from changes by other sessions.

There are three types of DDL locks

  • Exclusive DDL Locks - these prevent other sessions from gaining a DDL lock or TM lock themselves. You can query the table but not modify it. Exclusive locks are normally held until the statement has finished; however, in some cases you can use the ONLINE option, which takes only a low-level lock - this still blocks other DDL operations but allows DML to occur normally.
  • Share DDL Locks - these protect the structure of the referenced object against modification by other sessions, but allow modification of the data. In other words, shared DDL locks allow you to modify the contents of a table but not its structure.
  • Breakable Parse Locks - these allow an object, such as a query plan cached in the shared pool, to register its reliance on some other object. If you perform DDL against that object, Oracle will review the list of objects that have registered their dependence on it and invalidate them. Hence these locks are breakable - they do not prevent the DDL from occurring. A breakable parse lock is taken when a session parses a statement: a parse lock is taken against every object referenced by that statement. These locks allow the parsed, cached statement to be invalidated (flushed from the shared pool) if a referenced object is dropped or altered in some way. Use the SQL below to identify parse locks on views, procedures, grants, etc.
Identify parse and DDL locks:
    select * from dba_ddl_locks;

Latches and Internal Locks (Enqueues)

Latches are locks that are held for a short period of time, for example the time it takes to modify an in-memory data structure. They are used to protect certain memory structures such as the database block buffer cache or the library cache in the shared pool. They are lightweight, low-level serialization mechanisms that protect the in-memory data structures of the SGA. They do not support queuing and do not protect database objects such as tables or data files.

Enqueues are another, more sophisticated, serialization device, used for example when updating rows in a database table. The requestor queues up and waits for the resource to become available, so these are not as fast as a latch.

It is possible to use manual locking with the FOR UPDATE clause or the LOCK TABLE statement, or you can create your own locks by using the DBMS_LOCK package (a sketch follows).
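
A minimal sketch of a user-defined lock with DBMS_LOCK (the lock name 'MY_APP_LOCK' and the 10-second timeout are illustrative; the session needs execute privilege on DBMS_LOCK):

    declare
       l_handle varchar2(128);
       l_status integer;
    begin
       -- turn an arbitrary lock name into a lock handle
       dbms_lock.allocate_unique('MY_APP_LOCK', l_handle);
       -- request the lock in exclusive mode, waiting up to 10 seconds
       l_status := dbms_lock.request(lockhandle        => l_handle,
                                     lockmode          => dbms_lock.x_mode,
                                     timeout           => 10,
                                     release_on_commit => true);
       if l_status = 0 then
          dbms_output.put_line('Lock acquired');
       else
          dbms_output.put_line('Lock not acquired, status = ' || l_status);
       end if;
    end;
    /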

Deadlocks

A deadlock occurs when two sessions block each other while each waits for a resource that the other session is holding.

Multi-versioning

Oracle operates a multi-version read-consistent concurrency model. Oracle provides:

  • Read-consistent queries: Queries that produce consistent results with respect to a point in time by using rollback segments.
  • Non-Blocking queries: Queries are never blocked by writers of data, as they would be in other databases.

When Oracle reads a table it uses the rollback (undo) segments to reconstruct any rows whose data has changed since the read started. This allows a point-in-time read of a table, and it also means Oracle does not have to lock a table while reading it, even for large tables.
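
A small illustration of this (big_table is a hypothetical table and the two sessions are separate SQL*Plus sessions): a query's result set is fixed as of the point it starts, even if another session deletes the data and commits.

    -- Session 1
    variable c refcursor
    begin
       open :c for select count(*) from big_table;  -- result is as of this point in time
    end;
    /
    -- Session 2
    delete from big_table;
    commit;
    -- Session 1
    print c       -- still shows the original count, reconstructed from undo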

Transaction and Row Locks

I am now going to describe in detail how a lock works; you don't need this detail, but it's good to understand what is going on under the covers. Row-level locks protect selected rows in a data block during a transaction. A transaction acquires a transaction enqueue and an exclusive lock for each individual row modified by one of the following:

  • Insert
  • Delete
  • Update
  • Select with for update

These locks are stored within the data block, and each lock refers to the transaction enqueue; because they are stored in the block they have a database-wide view. The lock is held until either a commit or a rollback is executed; SMON also acquires it in exclusive mode when recovering (undoing) a transaction. Transaction locks are used as a queuing mechanism for processes awaiting the release of an object locked by a transaction in progress.

Every data block (except for temporary and rollback segments) has a number of predefined transaction slots. Undo segments have a different type of transaction slot called transaction tables. The transaction slots are otherwise known as interested transaction lists (ITLs), and the initial number is controlled by the INITRANS parameter (the default is 2 for tables and 3 for indexes). A transaction slot uses 24 bytes of free space in the block, and the maximum number of transaction slots is controlled by the MAXTRANS parameter; however, only up to 50% of the block can be used for transaction slots.
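
As an illustration (the table name and values are made up), the initial number of ITL slots can be set when the segment is created:

    create table order_lines (
       order_id number,
       line_no  number,
       qty      number )
    initrans 4;    -- reserve four ITL slots per block for concurrent transactions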

An ITL slot is required for every transaction; it contains the transaction ID (XID), which is a pointer to an entry in the transaction table of a rollback segment. You can still read the data, but other processes wanting to change it must wait until the lock is released (commit or rollback). The ITL entry contains the XID, the undo byte address (UBA), flags indicating the transaction status (Flag), a lock count (Lck) showing the number of rows locked by this transaction within the block, and the SCN at which the transaction last updated the block. Basically, the XID identifies the undo information for that transaction.

You can use the view X$KTUXE to obtain information on the number of rows that are affected; the view V$TRANSACTION can also be used to get more details on a transaction.
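
A sketch of joining V$TRANSACTION to V$SESSION to see who has an open transaction and how much undo it is using:

    select s.sid, s.username, t.xidusn, t.xidslot, t.xidsqn,
           t.used_ublk, t.used_urec, t.status
      from v$transaction t, v$session s
     where t.addr = s.taddr;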

When the transaction completes, Oracle performs the bare minimum to commit it: it updates the flag in the transaction table and the block is not revisited. This is known as a fast commit; during this time the ITL in the data block is still pointing to the transaction table of the corresponding rollback segment. If another transaction wants to use the block (assuming the change has not been committed), it sees that the ITL points to a rollback segment, makes a copy of the block in memory, gets the UBA from the ITL, reads the data from the undo and uses it to roll back the change described by the undo. If the transaction has committed, the rows are no longer locked, but the lock byte in the row header is not cleared until the next time a DML action is performed on the block. The block cleanout is therefore delayed by some discrete time interval because of the fast commit; this is called delayed block cleanout. The cleanout operation closes the open ITLs and generates redo information, as a block cleanout may involve updating the block with a new SCN - this is why you sometimes see redo generation for select statements.


Oracle Transactions

A transaction is a logical piece of work consisting of one or more SQL statements. A transaction is started whenever data is read or written, and it is ended by a COMMIT or ROLLBACK. DDL statements always perform a commit first; this is called an implicit commit because the user did not issue the commit.

Oracle uses transaction locking and multi-version concurrency control (using undo records) to ensure serializability of transactions; this prevents user conflicts while ensuring database consistency.

Transaction Properties

Database transactions should exhibit attributes described by the ACID properties:

  • Atomicity - A transaction either happens completely, or none of it happens
  • Consistency - A transaction takes the database from one consistent state to the next
  • Isolation - The effects of a transaction are not visible to other transactions until the transaction has committed.
  • Durability - Once the transaction is committed it is permanent; all changes are written to the redo log first and then to the data files.

Transaction Concurrency Control

Oracle uses locking to ensure data consistency, but the locking is done in the least restrictive fashion, with the goal of maintaining the maximum amount of concurrency.

Concurrency problems can be any of the following

Dirty Reads - occur when a transaction reads data that has been updated by an ongoing transaction but has not yet been committed to the database; it is possible that the other transaction may be rolled back.
Phantom Reads - are caused by the appearance of new rows between two database operations in a transaction.
Lost Updates - occur when two transactions read and then update the same data; the second update silently overwrites (loses) the first.
Non-Repeatable Reads - when a transaction finds that data it has read previously has been modified by some other transaction, you have a non-repeatable (fuzzy) read. Basically, you read data at one time and it is different when you read it again.

To overcome the above problems you could serialize all transactions, making sure that data stays consistent; however, this does not scale well. Oracle instead provides isolation levels and manages undo data.

Isolation Levels

The main isolation levels are the following

Serializable - the transaction will lock all the tables it is accessing to prevent other transactions from updating data until it either rolls back or commits
Repeatable Read - a transaction that reads the same data from a table twice at different points in time will find the same values each time; both dirty reads and non-repeatable reads are avoided with this level of isolation
Read Uncommitted - allows a transaction to read another transaction's intermediate (uncommitted) values before it commits
Read Committed - guarantees that a transaction only reads data that has been committed by other transactions
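
Oracle itself supports the read committed (the default) and serializable isolation levels, plus a read-only mode; they can be set per transaction or per session, for example:

    set transaction isolation level serializable;
    -- or, for the whole session
    alter session set isolation_level = serializable;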

Oracle uses locks and a multi-version concurrency control system. It uses row-level locking (it never uses lock escalation), it automatically places the locks for you and stores the lock information in the data block, and locks are held until the transaction is either committed or rolled back. With multi-version concurrency control, Oracle writes the original data to an undo record in the undo tablespace before changing it; queries then have a consistent view of the data, which provides read consistency - they only see data as of a single point in time. For more information see Oracle locking.

Oracle Locks

There are a number of different lock types in Oracle, and a number of views that provide information about locks.

DML Locks - Oracle uses row-level locks to protect a row while it is being changed; the lock will never block a reader of the same row. A table-level lock is also placed, but this only ensures that no DDL is run against the table.
DDL Locks - when changing table attributes Oracle places an exclusive lock on the table to prevent any modifications to the rows. A DDL lock is also taken during DML transactions to make sure the table structure is not changed while data is being changed or inserted.
Latches - protect the memory structures within the SGA; they control the processes that access those memory areas.
Internal Locks - used by Oracle to protect access to structures such as data files, tablespaces and rollback segments.
Distributed Locks - specialized locking mechanisms used in distributed systems.
Blocking Locks - occur when a lock placed on an object by one user prevents other users from accessing the same object.
Deadlocks - occur when two sessions block each other while each waits for a resource that the other session is holding. Oracle always steps in to resolve the issue by rolling back one of the statements; check the alert log and trace files for deadlock details.
Useful Views
DBA_LOCK lists all locks or latches held in the database, and all outstanding requests for a lock or latch
DBA_WAITERS shows all the sessions that are waiting for a lock
DBA_BLOCKERS displays a session if it is not waiting for a locked object but is holding a lock on an object for which another session is waiting
V$LOCK lists the locks currently held by the Oracle Database and outstanding requests for a lock or latch
V$SESSION lists session information for each current session

See Oracle Locking for more information.

Integrity Constraints and Transaction

See constraints for more information about deferrable and immediate constraints.

Autonomous Transactions

You can create a transaction within a transaction that will commit or roll back independently of its parent transaction. Autonomous transactions can be used in the following:

  • top-level anonymous blocks
  • local (a procedure in a procedure), stand-alone or packaged functions and procedures
  • methods of object types
  • database triggers
example

create or replace procedure autonomous_example
as
   pragma autonomous_transaction;
begin
   insert into test_table values ('Autonomous insert');
   commit;
end;
/

The pragma directive tells Oracle that this is a new autonomous transaction and that it is independent of its parent.

A trigger cannot normally contain a COMMIT or ROLLBACK statement; however, by using an autonomous transaction you can overcome this limitation. It is considered bad practice, but it is possible.

Trigger with a commit

create table tab1 (col1 number);
create table log (timestamp date, operation varchar2(2000));

create trigger test_trig
after insert on tab1
begin
insert into log values (SYSDATE, 'Insert and commit via trigger');
commit;
end;
/

SQL> insert into tab1 values (1);
insert into tab1 values (1)
*
ERROR at line 1:
ORA-04092: cannot COMMIT in a trigger
ORA-06512: at "TEST01.TEST_TRIG", line 3
ORA-04088: error during execution of trigger 'TEST01.TEST_TRIG'

Overcome commit problem

create or replace trigger test_trig
  after insert on tab1
  declare
  pragma autonomous_transaction;
begin
  insert into log values (SYSDATE, 'Insert and commit using autonomous transaction');
  commit; -- only allowed in autonomous triggers
end;
/

Undo Data

Undo data provides read consistency; there are two ways to control undo: manual or automatic. See undo data for more details.

Oracle Transaction

Simple Oracle transaction

  1. User requests a connection to oracle
  2. A new dedicated server process is started for the user
  3. User executes a statement to insert data in to a table
  4. Oracle checks the user's privileges; it first checks the dictionary cache for this information (a cache hit) and, if not found, reads it from disk.
  5. Oracle checks to see if the SQL statement has been parsed before (in the library cache); if it has, this is called a soft parse, otherwise the statement has to be parsed from scratch - a hard parse.
  6. Oracle creates a private SQL area in the user session's PGA
  7. Oracle checks to see if the data is in the buffer cache, otherwise perform a read from the data file
  8. Oracle will then apply row-level locks where needed to prevent others changing the row (select statements are still allowed on the row)
  9. Oracle then writes the change vectors to the redo log buffer
  10. Oracle then modifies the row in the data buffer cache
  11. The user commits the transaction making it permanent, the row-level locks are released
  12. The log writer process immediately writes out the changed data in the redo log buffers to the online redo log files, in other words the change is now recoverable.
  13. Oracle informs the user process that the transaction was completed successfully
  14. It may be some time before the database writer writes the changed buffers from the buffer cache out to the data files.

Note: if the user's transaction was an update, the before-image of the row would have been written to the undo segment (via the buffer cache); this would be used if the user rolls back the change, or if another user runs a select on that data before the new update is committed.


Oracle Enterprise Manager (OEM)

Oracle Enterprise Manager is Oracle's GUI-based database management tool. It is possible to manage the Oracle environment using SQL and PL/SQL alone, but OEM gives you a console framework to help you with almost all of your day-to-day management activities, including tasks such as backup, recovery, export, import and data loading; it also makes it easier to use the Oracle advisors.

There are two versions of OEM: a single-instance version called Database Control, and a system-wide version that lets you manage all your systems called Grid Control. Most DBAs do not like using GUIs, but it makes sense to use the GUI for the easier day-to-day administration tasks like adding users, resizing tablespaces, etc. I think that knowing both SQL/PL/SQL and the GUI will make you a better DBA in the long run.

The benefits of using the OEM

Out-of-the-box Management - Installing and configuring OEM (both Database Control and Grid Control) is a breeze; if you use the DBCA to install Oracle it will automatically be installed for you, and configuring OEM is a trivial task.
Web-based Management - OEM can be configured to use either HTTP or HTTPS, so security can be tightened if required. Being web based, all you need is a web browser to access OEM.
Real-Time Monitoring - Normally you have to write scripts or use a 3rd-party tool to perform real-time monitoring, which costs additional money or time; OEM performs real-time monitoring right from the off. The hang analysis feature introduced in 10.2 helps during a database hang, when SQL scripts can't get information back to you quickly: by attaching directly to the SGA it bypasses SQL and helps you out when normal SQL cannot.
Complete Environment Monitoring

A poor performing database could be the result of a problem anywhere in the application stack (web server, app server, etc). OEM can monitor the performance of all the components in the stack not just the Oracle database. It can monitor

  • The entire Platform
  • End user experience
  • Systems and web applications availability
  • The extended network components
  • Business transactions
  • Historical data analysis and change tracking
Application Performance Monitoring

Oracle uses alert and notification systems to monitor the database; these alerts can identify problems quickly, allowing a proactive approach, and thresholds can easily be adjusted within OEM. Among other things, it:

  • Monitors the performance and availability of the system
  • Indicates outages and bottlenecks throughout the application system
  • Analyzes the root cause of performance with drill-downs
  • Minimizes application downtime through the use of efficient problem-resolution tools.
Scalable Management Capability - OEM is highly scalable; the only requirement is that any new server needs the management agent installed and started.
Consolidated Database Management - The OEM home page contains everything you need for a quick overview of your environment, which means that any problems can be identified very quickly; a few more clicks and you have advanced performance charts available.
Integrated System Management - OEM uses the Smart Plug-In (SPI) to provide integration with system-wide monitoring tools such as HP OpenView.

OEM Database Control

You can automatically install OEM when installing Oracle via the DBCA - just select the 'use Database Control for database management' option.

Manually configure OEM

unix: export ORACLE_SID=D01
windows: set ORACLE_SID=D01

emca -config dbcontrol db

Start dbconsole emctl start dbconsole
Stop dbconsole emctl stop dbconsole
dbconsole status emctl status dbconsole
Start management agent emctl start agent
Stop management agent emctl stop agent
Management agent status emctl status agent

Main OEM Page

Please refer to the Oracle web site for details on how to navigate the main OEM page and what is available in OEM.

OEM Grid Control

OEM Grid Control can manage a number of systems across the environment, which means easier management. Grid Control has a number of components:

  • Grid Control - the main OEM web-based interface for monitoring the enterprise
  • Management Agent - monitors the host and sends information to the OEM management Service
  • Management Service - This is the middle tier (OMS) and stores the data collected in a repository, which is used by the Grid Control

Note that the grid control product is a licensed product which you can obtain from Oracle.

If possible it is best to install Grid Control on a server of its own; the number of servers you wish to monitor will determine the performance and disk space required for that server.

There are a number of commands that can be used to control the Oracle Management Services (OMS)

Status # emctl status oms
Start # emctl start oms
Stop # emctl stop oms

To install Grid Control follow below

Grid Control Installation
  1. Make sure that the Oracle CD is in the CD Drive and run the following

    ./runInstaller
  2. At the Welcome screen click Next
  3. Accept the default Operating System group Name, click Next
  4. You now need to run a script called orainstRoot.sh as user root in another terminal session.

    # cd $ORACLE_BASE/oraInventory
    # ./orainstRoot.sh
  5. Now click continue back in the Installer window
  6. Now choose the directory for the OEM files and click Next
  7. If using a new server select the option that creates a new Oracle database, click Next
  8. It will now perform prerequisite check, click Next
  9. Choose a password for SYSMAN and click Next
  10. Select passwords for SYS, SYSTEM and DBSNMP, click Next
  11. You can alter the Metalink and Proxy information if so desired, click Next
  12. Choose the SID and the Global database name for the database, click next
  13. Specify the location for all the database files, click Next
  14. Review the components and then click Next
  15. You will now watch while Grid Control is installed, you will be asked to run root.sh as user root
  16. A window will be displayed showing the various configuration assistants, click OK
  17. Run the orainstRoot.sh script again as user root, then click Next
  18. This screen is important as it details all port numbers and links to the web page (make note)
  19. Click Exit to end the installation

To install the OEM Management agent only

Management Agent Installation   1-7. Use the preceding first seven steps, then:
  1. In the select a product window, choose the additional management agent installation type and click Next
  2. Specify the hostname and click Next
  3. Click Next after reading the security page
  4. Review the summary and click Next to start the install
  5. Click Exit to finish the install
  6. In Grid Control click on the Targets tab; you will see all the discovered hosts

Saturday, 13 July 2013

Oracle Memory Architecture

Oracle uses three kinds of memory structures:

SGA (System Global Area) - a large area of memory that all the Oracle background processes access.
PGA (Process Global Area) - memory that is private to a single process or thread and is not accessible by any other process or thread.
UGA (User Global Area) - memory that is associated with your session; it can be found in the PGA or the SGA depending on whether you are connected to the database via a shared server:

  • Shared Server - the UGA will be in the SGA
  • Dedicated Server - the UGA will be in the PGA

SGA

There are several memory structures that make up the System Global Area (SGA). The SGA stores many internal data structures that all processes need access to, caches data from disk, caches redo data before it is written to disk, holds parsed SQL plans, and so on.

Shared Pool

The shared pool consists of the following areas:

Library cache - includes the shared SQL area, private SQL areas, PL/SQL procedures and packages, and control structures such as locks and library cache handles. Oracle code is first parsed, then executed; the parsed code is stored in the library cache. Oracle first checks the library cache to see if there is an already parsed, ready-to-execute form of the statement in there; if there is, this reduces CPU time considerably and is called a soft parse. If Oracle has to parse the statement, this is called a hard parse. If there is not enough room in the cache, Oracle will age out older parsed code; obviously it is better to keep as much parsed code in the library cache as possible. Keep an eye on library cache misses, which are an indication that a lot of hard parsing is going on.

Dictionary cache - caches information from the data dictionary tables and views: information about the database, its structures, privileges and users. When statements are issued, Oracle checks permissions, access rights, etc. and obtains this information from its dictionary cache; if the information is not in the cache then it has to be read from disk and placed into the cache. The more information held in the cache, the less Oracle has to access the slow disks.

The SHARED_POOL_SIZE parameter determines the size of the shared pool; there is no way to adjust the two caches independently, you can only adjust the shared pool size as a whole.

The shared pool uses a LRU (least recently used) list to maintain what is held in the buffer, see buffer cache for more details on the LRU.

You can clear down the shared pool area by using the following command

    alter system flush shared_pool;

Buffer cache

This area holds copies of data blocks read from the data files. The buffers in the cache are organized into two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain modified data not yet written to disk.

The LRU list has the following

  • free buffers hold no useful data and can be reused
  • pinned buffers actively being used by user sessions
  • dirty buffers contain data that has been read from disk and modified but hasn't been written to disk

It's the database writer's job to make sure that there are enough free buffers available to user sessions; if there aren't, it will write dirty buffers out to disk to free up the cache.

There are three buffer pools:

  • Default buffer cache, which is everything not assigned to the keep or recycle buffer pools, DB_CACHE_SIZE
  • Keep buffer cache which keeps the data in memory (goal is to keep warm/hot blocks in the pool for as long as possible), DB_KEEP_CACHE_SIZE.
  • Recycle buffer cache, which removes data from the cache immediately after use (the goal here is to age out blocks as soon as they are no longer needed), DB_RECYCLE_CACHE_SIZE.

The cache for the standard block size is sized by DB_CACHE_SIZE; if tablespaces are created with a different block size then you must also configure a cache to match that block size (an example follows the list below):

DB_2K_CACHE_SIZE (used with tablespace block size of 2k)
DB_4K_CACHE_SIZE (used with tablespace block size of 4k)
DB_8K_CACHE_SIZE (used with tablespace block size of 8k)
DB_16K_CACHE_SIZE (used with tablespace block size of 16k)
DB_32K_CACHE_SIZE (used with tablespace block size of 32k)
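
As an illustration (the tablespace name, sizes and datafile path are made up), the matching cache must be configured before a tablespace with a non-standard block size can be created:

    alter system set db_16k_cache_size = 64M;

    create tablespace data_16k
       datafile '/u01/oradata/D01/data_16k01.dbf' size 100M
       blocksize 16K;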

The buffer cache hit ratio is used to determine if the buffer cache is sized correctly; the higher the value, the more is being read from the cache rather than from disk.

     hit rate = (1 - (physical reads / logical reads)) * 100
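
A sketch of calculating this from V$SYSSTAT, taking logical reads as 'db block gets' plus 'consistent gets':

    select round((1 - (phy.value / (cur.value + con.value))) * 100, 2) "Hit Ratio %"
      from v$sysstat cur, v$sysstat con, v$sysstat phy
     where cur.name = 'db block gets'
       and con.name = 'consistent gets'
       and phy.name = 'physical reads';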

You can clear down the buffer pool area by using the following command

    alter system flush buffer_cache;

Redo buffer

The redo buffer is where data that needs to be written to the online redo logs will be cached temporarily before it is written to disk, this area is normally less than a couple of megabytes in size. These entries contain necessary information to reconstruct/redo changes by the INSERT, UPDATE, DELETE, CREATE, ALTER and DROP commands.

The contents of this buffer are flushed:

  • Every three seconds
  • Whenever someone commits a transaction
  • When it gets one-third full or contains 1MB of cached redo log data
  • When LGWR is asked to switch logs

Use the LOG_BUFFER parameter to adjust it, but be careful about making it too large: although it can reduce your I/O, commits will take longer.

Large Pool

This is an optional memory area that provides large allocations of memory for:

  • Shared Server - to allocate the UGA region in the SGA
  • Parallel execution of statements - to allow for the allocation of inter-processing message buffers, used to coordinate the parallel query servers.
  • Backup - for RMAN disk I/O buffers
The large pool is basically a non-cached version of the shared pool.

Use the LARGE_POOL_SIZE parameter to adjust it.

Java Pool

Used to execute java code within the database.

Use the JAVA_POOL_SIZE parameter to adjust it (default is 20MB).

Streams Pool

Streams are used to enable data sharing between databases or application environments.

Use the STREAMS_POOL_SIZE parameter to adjust it.

The fixed SGA contains a set of variables that point to the other components of the SGA, and variables that hold the values of various parameters. The area is a kind of bootstrap section of the SGA - something that Oracle uses to find the other bits and pieces of the SGA.

For more information regarding setting up the SGA click here.

PGA and UGA

The PGA (Process Global Area) is a piece of memory that is associated with a single process or thread; it is not accessible by any other process or thread. Note that each of Oracle's background processes has its own PGA. The UGA (User Global Area) is your session state - the area of memory accessed by your current session. Depending on the connection type, the UGA is located in the SGA (shared server), where it is accessible by any of the shared server processes, or in the PGA (dedicated server), because a dedicated connection does not use shared servers.

  • Shared server - UGA will be part of the SGA
  • Dedicated server - UGA will be the PGA
Memory Area                                                   Dedicated Server   Shared Server
Nature of session memory                                      Private            Shared
Location of the persistent area                               PGA                SGA
Location of part of the runtime area for SELECT statements    PGA                PGA
Location of the runtime area for DML/DDL statements           PGA                PGA

Oracle creates a PGA area for each user session; this area holds data and control information and is used exclusively by that session. User cursors and sort operations are all stored in the PGA. The PGA is split into two areas:

Session information (runtime area) - in an instance running with a shared server, additional memory is required for the user's session, such as private SQL areas and other session information.
Stack space (private SQL area) - the memory allocated to hold a session's variables, arrays, etc. and other information relating to the session.

Automatic PGA Management

To reduce response times, sorts should be performed entirely in the PGA work area (optimal mode); otherwise the sort spills to disk (one-pass / multi-pass mode), which reduces performance. There is therefore a direct relationship between the size of the PGA and query performance. You can manually tune the following to increase performance:

  • sort_area_size - total memory that will be used to sort information before swapping to disk
  • sort_area_retained_size - memory that is used to retain data after a sort
  • hash_area_size - memory that would be used to store hash tables
  • bitmap_merge_area_size - memory Oracle uses to merge bitmaps retrieved from a range scan of the index.

Starting with Oracle 9i there is a new way to manage the above settings: let Oracle manage the PGA automatically. By setting the following parameters, Oracle will automatically adjust the PGA work areas based on user demand.

  • workarea_size_policy - you can set this option to manual or auto (default)
  • pga_aggregate_target - controls how much to allocate the PGA in total

Oracle will try to keep total PGA usage under the target value, but if you exceed this value Oracle will perform multi-pass (disk) operations.
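
For illustration only (the 512M target is an arbitrary value), automatic PGA management is enabled with:

    alter system set workarea_size_policy = auto;
    alter system set pga_aggregate_target = 512M;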

System Parameters
workarea_size_policy - manual or auto (default)
pga_aggregate_target - total amount of memory allocated to the PGA

PGA/UGA amount used:
    select a.name, to_char(b.value, '999,999,999') value
    from v$statname a, v$mystat b
    where a.statistic# = b.statistic#
    and a.name like '%ga memory%';

Display if using memory or disk sorts:
    set autotrace traceonly statistics;
    set autotrace off;

Display background process PGA memory usage:
    select program, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process;

Oracle Processes (oracle background processes)

Oracle server processes perform all the database operations, such as inserting and deleting data. The Oracle processes, working with the SGA (Oracle's memory structures), manage the database.

There are two types of Oracle process

  • User process - Responsible for running the application that connects to the database
  • Oracle server process - Performs the Oracle tasks that manage the database.

There are a number of server processes that could be running. On Windows there is only one process, called Oracle, which has one thread for each of the processes below.

Process Monitor
PMON

Responsible for cleaning up after abnormally terminated connections.
Responsible for monitoring other server processes and restarting them if necessary
Registers the instance with the listener dynamically (dynamic service registration).
Restarts failed server processes and dispatcher processes

System Monitor
SMON

Temporary space cleanup
Crash recovery upon restart
Coalescing free space
Recovering transactions active against unavailable files
Instance recovery of a failed node in OPS (Oracle Parallel Server)
Cleans up OBJ$ (a low-level data dictionary table)
Shrinks rollback segments
Takes rollback segments offline

Other processes call the SMON process when required.

Distributed database recovery
RECO
Recovers transactions that are left in a prepared state because of a crash or loss of connection during a two-phase commit.
Checkpoint process
CKPT

The checkpoint process is charged with instructing the database block writers to write the database buffer cache to disk; it then updates the data file headers and the control file to indicate when the checkpoint was performed. There is a relationship between checkpoints and recovery time: the more frequent the checkpointing, the less recovery time is needed when a crash occurs.

The ckpt process does not do the checkpoint but assists with the checkpointing process by updating the file headers of the data files.

A checkpointing process involves the following:

  • Flushing the redo log buffers to the redo log files
  • Writing a checkpoint record to the redo log file
  • Flushing the dirty database buffers to the data files
  • Updating the data file headers and control files after the checkpoint completes
Database block writer
DBWn

Responsible for writing dirty blocks to disk when free space in the database buffer cache is low; it writes the dirty blocks from the buffer cache out to disk. The cache uses the LRU (least recently used) algorithm, which retains data in memory based on how long it has been since someone asked for it. The database buffer cache is flushed to disk:

  • when the database issues a checkpoint
  • when a server process can't find a clean reusable buffer after checking a threshold number of buffers
  • every 3 seconds
  • a user process has searched too long for a free buffer when reading a buffer into the buffer cache
  • Instance is shutdown
  • tablespace is put in backup mode or offline
  • segment is dropped

If you have multiple CPUs then it is advisable to run multiple database writers. Use the DB_WRITER_PROCESSES parameter to increase the number of database writers; the instance has to be restarted for the change to take effect.

Log writer
LGWR

Responsible for flushing to disk the contents of the redo log buffer located in the SGA. Both committed and uncommitted changes are written to the redo log buffer. The redo log buffer is flushed to disk before the data blocks are written to disk. The redo log buffer is flushed to disk

  • every 3 seconds
  • whenever a user commits a transaction
  • when the redo log buffer is a third full or contains 1 Mb of buffered data
  • before the DBWn process writes when a checkpoint occurs
Archive process
ARCn

Used when the database is in archivelog mode; it copies an online redo log file to another location when LGWR has filled it, and these archived log files can be used to perform media recovery. There can be a maximum of ten archive processes (ARC0-ARC9). The LOG_ARCHIVE_MAX_PROCESSES parameter determines how many archive processes are started (default is 1).

Manageability Monitor
MMON
Collects statistics to help the database manage itself. The MMON process collects the AWR (automatic workload repository) snapshot information which is used by the ADDM (automatic database diagnostic monitor), also MMON issues alerts when database thresholds are exceeded.
Manageability Monitor Light
MMNL
The process flushes ASH information to disk when the buffer is full, it also captures session history and database metrics.
Memory Manager
MMAN
Uses the metrics collected to determine the ideal distribution of memory within Oracle. It constantly monitors the database and adjusts the memory allocations according to workloads.
Job Queue Coordination
CJQ0
Used to schedule and run user jobs. It spawns job queue slave processes (J000-J999) which actually run the job.
Job Queue Process
J000-J999
These processes are what actually run the schedule jobs requested by CJQ0.
File Mapping Monitor
FMON
Maps files to intermediate storage layers and physical devices. Results are exposed through the file mapping views populated via the DBMS_STORAGE_MAP package. Generally the 3rd-party LVM (logical volume manager) supplier will provide a driver to map against.
Recovery Writer
RVWR
This process is started when you implement flashback logging, it logs the before image (taken from the flashback buffers) of an oracle block before it is changed, this is written to the flashback log files.
Change Tracking Writer
CTWR
This process tracks any data blocks that have changed which then RMAN can use to speed up backups as it will no longer need to read the entire data file to see what has changed.
Queue Monitor Coordinator
QMNC
Spawns and coordinates queue slave processes.
Block server process
BSP
Used in OPS; keeps each server's SGA in the cluster consistent with the others.
Lock monitor process
LMON
Used in OPS and monitors all instances in a cluster to detect a failure of an instance.
Lock manager daemon
LMD
Used in OPS and controls the global locks and global resources for the block buffer cache in a clustered environment.
Lock process
LCKn
Used in OPS and is the same as the LMD daemon but handles requests for all global resources other than database block buffers
Dispatcher process
Dnnn
Dispatcher processes that are used when using a shared server environment
Shared Server process
Snnn
Shared Server processes that are used when using a shared server environment
Oracle process spawner
PSP0
Process spawner has the job of creating and managing other Oracle processes.
Oracle shadow process
SHAD
Oracle's shadow process, could not find much on this process
Streams Advanced Queuing process
q000 - q???
I believe this is something to do with Oracle Streams Advanced Queuing

There are a number of useful commands and views you can use to get information regarding the running processes.

Useful SQL
Display all background processes:
    select name, description from v$bgprocess;

Display process memory usage:
    select program, pid, spid, username, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process;

Display user processes and their current command:
    select
  substr(s.username,1,18) username,
  substr(s.program,1,20) program,
  decode(s.command,
    0,'No Command',
    1,'Create Table',
    2,'Insert',
    3,'Select',
    6,'Update',
    7,'Delete',
    9,'Create Index',
    15,'Alter Table',
    21,'Create View',
    23,'Validate Index',
    35,'Alter Database',
    39,'Create Tablespace',
    41,'Drop Tablespace',
    40,'Alter Tablespace',
    53,'Drop User',
    62,'Analyze Table',
    63,'Analyze Index',
        s.command||': Other') command
from
  v$session s,
  v$process p,
  v$transaction t,
  v$rollstat r,
  v$rollname n
where s.paddr = p.addr
and s.taddr = t.addr (+)
and t.xidusn = r.usn (+)
and r.usn = n.usn (+)
order by 1
;
Useful Views
V$BGPROCESS displays information about the background processes
V$PROCESS contains information about the currently active processes