The DBA, the application developer, management, the system administrator, and the network administrator.
Considered in terms of the ROI (Return on Investment) approach outlined here, the tuning recommendations are, in order:
Other important OS and Oracle I/O tuning issues include read-ahead capabilities, asynchronous I/O, multiblock reads, RAID stripe sizes, disk geometry issues, controller issues, and many more.
There are different ways of determining the goals of a performance tuning effort: consider the application type, and sample the database on various quantitative measures to further define the tuning goals:
In most systems, throughput and response time run counter to one another as tuning goals. If response time is high (bad), throughput might be high (good); if throughput is low (bad), response time might be low (good).
Typically, OLTP systems want low response time or high throughput in terms of transactions, depending on the application's needs. A DSS wants low response time, and a batch system normally wants lower wall-clock times.
Always consider the two central tuning goals:
Also consider these general-purpose tuning goals:
The two main configurations for running a database are ARCHIVELOG and NOARCHIVELOG. Assuming a recovery operation is needed, the following scenarios could arise:
If the database is in ARCHIVELOG mode, there are two possibilities:
If the database is in NOARCHIVELOG:
You benefit from not having to save all the online redo log files. On heavily accessed systems this configuration alleviates disk-space usage, because in ARCHIVELOG mode every online redo log file is eventually backed up; these archived redo log files can easily fill up a whole disk (10-12 GB) in a matter of hours. However, the disadvantage of running this configuration is that in the event of a failure your only means of recovery is your last backup. Oracle cannot help in this case because it does not have a history of previous redo log files, the ones that are archived in ARCHIVELOG mode.
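A quick way to confirm which mode a given database is running in is to query V$DATABASE; the LOG_MODE column reports ARCHIVELOG or NOARCHIVELOG:
select log_mode from v$database;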
The alert log records the commands and command results of major events in the life of the database, e.g., tablespace creation, redo log switches, recovery operations, and database startups.
This file is written to the location specified by the parameter BACKGROUND_DUMP_DEST in the init.ora file.
When a background process is terminated or abnormally aborts an operation, it usually produces a trace file containing the error message that caused the failure, dumps of the current process stacks, the currently executing cursors, and much other information pertinent to the problem. A background process dump is saved to the directory specified by the background_dump_dest parameter in the init.ora file; these dumps are commonly known as "detached process dumps". A user process dump is saved to the directory specified by the user_dump_dest parameter in the init.ora file; these are, of course, known as "user process dumps".
It is important to collect these files and forward them to Oracle Support, as they may help resolve the problem. The background processes also write an entry to the alert.log file for the database.
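To see where a given instance is writing these trace files, the dump destination parameters can be checked directly, for example through V$PARAMETER:
select name, value from v$parameter where name in ('background_dump_dest', 'user_dump_dest');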
All the Oracle products, such as Server Manager and Enterprise Manager, rely on the V$ dynamic performance views. These views cover the instance, database, memory, disk, user, session, and contention aspects of performance, and they are based on the internal X$ base tables. Query the view V$FIXED_TABLE to get a listing of the available V$ views.
The V$ views are called dynamic because they are populated at instance startup and are truncated at shutdown. The V$ views also form the basis of the standard Oracle tuning scripts, UTLBSTAT/UTLESTAT, which query them using SQL scripts and format the output that is returned. Therefore, if UTLBSTAT/UTLESTAT do not give you what you want, you can use Server Manager and the V$ views to either supplement or supplant those utilities.
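For example, a listing of the available V$ views can be pulled from V$FIXED_TABLE as mentioned above (the exact list varies by release):
select name from v$fixed_table where name like 'V$%' order by name;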
This is the most commonly used diagnostic utility. The DBA runs UTLBSTAT before running his or her application or simulation. The utlbstat script builds the beginning tables necessary to collect and store the performance data. Then the DBA runs utlestat, which builds the ending tables and the difference tables, computes the performance differences (deltas) between the utlbstat run and the utlestat run, formats the output data (including comments and some explanations), and writes it to the default file, report.txt. This file must be interpreted by the DBA, either directly or indirectly (by taking some of the output values given and using them as inputs into simple formulas).
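As a rough sketch, the scripts are typically run from Server Manager or SQL*Plus while connected with DBA privileges; exact paths vary by platform and release (? expands to ORACLE_HOME in SQL*Plus):
@?/rdbms/admin/utlbstat
-- run the application or simulation for a representative period, then:
@?/rdbms/admin/utlestat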
Interpretation of this data means comparing the final figures to more or less established guidelines, keeping the ROI strategy in mind, and categorizing the findings as acceptable or not for each given area of performance.
Enterprise Manager "performance pack" is extremly useful.The components in the performance pack help analyze your logicaland physical design. They also monitor locks, a variety of performance issues (throughput, redo, rollback,I/O, memory,etc),the top user sessions with regard to resource consumption, you tablespace storage (data files,fragmentation,etc), and application events through tracing.
Latches are used to control access to shared structures. Latches are implemented using semaphores at the OS level. Latches are locks that are held for a very small amount of time.
Before a process gets access to a shared structure protected by a latch, it has to first acquire the latch. The latch could currently be free (that is, no other process is accessing the latch), in which case the process gets the latch immediately, holds it for the period of time it requires, and then relinquishes it. In case the latch is already held by another process, the process has two options: it can spin, retrying the latch on each iteration up to the limit set by the spin_count parameter in the init.ora file, or it can sleep. In practice the two are combined: with every spin the process tries to acquire the latch; if it has not succeeded by the time spin_count is reached, the process goes to sleep for a specified amount of time, wakes up, and repeats the cycle. To view latches, the following views are used:
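For instance, assuming the standard latch views V$LATCH, V$LATCHNAME, and V$LATCHHOLDER (the usual starting points), overall latch contention can be summarized with a query such as:
select name, gets, misses, sleeps from v$latch order by misses desc;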
The Important latches:
There are approximately 52 types of latches in an Oracle installation. However, the following latches are of significant importance in any tuning job.
Cache buffers LRU chain latch: This latch protects the LRU list of the db block buffers in the buffer cache. The buffer cache, whose size is defined by the parameter db_block_buffers, resides in the SGA and contains a cached copy of data read from the data files. The buffer cache is organized in two lists: the dirty list and the LRU list. The dirty list contains the buffers that have been modified but not yet written to disk. The LRU list is comprised of the pinned buffers, the dirty buffers that have not yet been moved to the dirty list, and the free buffers. The pinned buffers are buffers that are currently being accessed by other processes. The dirty buffers contain modified data that is to be written to disk; they subsequently get moved to the dirty list. The free buffers are the buffers that are available for use.
When a process needs to read data from disk that is not already in the cache, it needs a free buffer into which to read the new data. It scans the LRU list to check for free buffers. If there are excessive requests for free buffers in the buffer cache, there will be high access to the LRU list, causing contention for the cache buffers LRU chain latch. Contention for this latch can be minimized with the db_block_lru_latches parameter in the init.ora file: increasing this parameter reduces contention for the latch. The maximum value for this parameter is double the number of CPUs. The basic reason for contention for this latch is a high demand for free buffers. You can optimize the SQL statements to minimize the demand for free buffers, or increase the db_block_buffers parameter to increase the number of buffers available on the system. NOTE: The SGA must fit into contiguous chunks of real memory, so if the buffer cache is enlarged, you must ensure there is enough contiguous memory available on the system to service the increase.
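A rough check of this latch and the related settings, using the latch and parameter names from this discussion, might look like:
select name, gets, misses, sleeps from v$latch where name = 'cache buffers lru chain';
select name, value from v$parameter where name in ('db_block_buffers', 'db_block_lru_latches');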
Redo allocation and redo copy latches: If the size of a redo entry is smaller than the value of the log_small_entry_max_size parameter, the process will use the redo allocation latch. If the size is greater than this value, the entry is copied using the redo copy latch. A quick way to check whether there is any contention on the redo log buffer is to check whether there are any waits associated with writing to the redo log buffer. This can be done using the V$SYSSTAT view:
select name, value from v$sysstat where name = 'redo log space requests';
The size of the redo log buffer will have to be increased if the number of waits is too high.
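The redo log buffer itself is sized by the init.ora parameter log_buffer; as a minimal sketch, its current value can be checked before deciding to raise it:
select name, value from v$parameter where name = 'log_buffer';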
Contention for the redo allocation latch: Contention for the redo allocation latch can be reduced on a multi-CPU system by forcing processes to use the redo copy latch instead. Because there can be multiple redo copy latches, the copy will be done more efficiently. The number of redo copy latches is defined by the parameter log_simultaneous_copies. The maximum number of latches available on the system is double the number of CPUs. For a single-CPU system, this value is 0, and the redo allocation latch will be used. If there is contention for the redo allocation latch, the value of log_small_entry_max_size can be decreased from its current value so that the redo copy latch is used.
Contention for the redo copy latch: If the system is facing contention for the redo copy latch, it can be reduced by either increasing the value of log_small_entry_max_size (so that the redo allocation latch is used) or increasing the value of log_simultaneous_copies (so that more redo copy latches are available).
The init.ora parameter log_entry_prebuild_threshold can be increased so that the data written to the redo log buffer is grouped and written out together. By increasing this parameter, a number of write operations can be grouped and written out in one operation, thereby reducing requests for these latches and thus contention.
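To gauge contention on these two latches, a query along the following lines can be used; note that redo copy latches are normally requested in no-wait mode, so their contention shows up in the immediate columns (a sketch, not a definitive diagnostic):
select name, gets, misses, immediate_gets, immediate_misses, sleeps from v$latch where name in ('redo allocation', 'redo copy');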
Library cache latch: The library cache includes the shared SQL area, private SQL areas, PL/SQL procedures and packages, and other control structures. The shared SQL area contains SQL statements that are shared among multiple sessions; by increasing the sharing of these statements, contention for this latch can be reduced. Contention for this latch occurs when there is a lot of demand for space in the library cache. Very high parsing rates and heavy demand for opening new cursors because of low sharing among processes are some of the common causes of contention for this latch. Contention can be avoided by writing code that can be shared by multiple sessions. This means using consistent coding conventions (for example, all capitals for DML keywords, or only the first letter capitalized), because to the parsing engine even a single difference in upper or lower case generates a different hash value. Even an extra space in a select statement causes the hash to be different.
Using bind variables: Using bind variables prevents multiple copies of the same select statement from being loaded into the shared pool, allowing the statement to be shared. For example:
select sal from employee where emp_id = :emp_id;
Pinning frequently used objects such as procedures and packages. The advantage is that these objects will never be flushed out of the shared pool. These objects can be identified by:
select name, executions from V$db_object_cache where executions > <threshold value> order by 2 desc;
And to pin these objects in the shared pool:
dbms_shared_pool.keep('object name','P');
To check the objects in the shared pool that are not pinned:
select name, type, kept, sharable_mem from v$db_object_cache where kept = 'NO' order by sharable_mem desc;
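After applying measures such as bind variables and pinning, library cache health can be rechecked; as a sketch (acceptable values depend on the application), reload activity and hit ratios are available in V$LIBRARYCACHE and the latch statistics in V$LATCH:
select namespace, gets, gethitratio, pins, pinhitratio, reloads from v$librarycache;
select name, gets, misses, sleeps from v$latch where name = 'library cache';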