Oracle db files too high

How can I know which SQL is responsible for "db file sequential read" wait events, so that I can tune it? This is one of the very useful comments by Tom on his site, and I have decided to share it with others. The SQL statement associated with this event can be identified through Oracle's dynamic performance views.
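A sketch of the usual approach, joining V$SESSION to V$SQL (column availability varies slightly by Oracle release; verify against your version):

```sql
-- Find the SQL behind sessions currently waiting on
-- "db file sequential read" (single-block reads, typically via an index).
SELECT a.sid, a.event, b.sql_text
  FROM v$session a,
       v$sql     b
 WHERE a.sql_address    = b.address
   AND a.sql_hash_value = b.hash_value
   AND a.event          = 'db file sequential read';
```

On older releases the wait event is exposed through V$SESSION_WAIT rather than V$SESSION, so the join may need to go through that view instead.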

If you multiplex your redo log files, then the likelihood of losing all of your redo log files is low, so you can store datafiles on the same drive as some redo log files.

Guidelines for Managing Datafiles

Datafiles are physical files of the operating system that store the data of all logical structures in the database.

Note: Tempfiles are a special class of datafiles that are associated only with temporary tablespaces. Information in this chapter applies to both datafiles and tempfiles except where differences are noted.

Every datafile has two file numbers. The absolute file number uniquely identifies a datafile in the database; it can be used in many SQL statements that reference datafiles in place of the file name. The relative file number uniquely identifies a datafile within a tablespace. For small and medium size databases, relative file numbers usually have the same value as the absolute file number.

However, when the number of datafiles in a database exceeds a threshold (typically 1023), the relative file number differs from the absolute file number. Note: One means of controlling the number of datafiles in your database and simplifying their management is to use bigfile tablespaces.
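Both file numbers can be seen for existing datafiles by querying the DBA_DATA_FILES dictionary view:

```sql
-- FILE_ID is the absolute file number; RELATIVE_FNO is the relative one.
SELECT file_name, file_id, relative_fno, tablespace_name
  FROM dba_data_files
 ORDER BY file_id;
```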

Bigfile tablespaces comprise a single, very large datafile and are especially useful in ultra large databases and where a logical volume manager is used for managing operating system files. Bigfile tablespaces are discussed in "Bigfile Tablespaces".

Consider Possible Limitations When Adding Datafiles to a Tablespace

You can add datafiles to traditional smallfile tablespaces, subject to the following limitations: operating systems often impose a limit on the number of files a process can open simultaneously.
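A minimal sketch of creating a bigfile tablespace (Oracle 10g and later; the name, path, and sizes here are illustrative):

```sql
-- One very large datafile instead of many small ones.
CREATE BIGFILE TABLESPACE big_ts
  DATAFILE '/u01/oradata/db1/big_ts01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;
```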

Information in a redo log file is written sequentially. This sequential writing can take place much faster if there is no concurrent activity on the same disk. Dedicating a separate disk to redo log files usually ensures that LGWR runs smoothly with no further tuning attention.

Performance bottlenecks related to LGWR are rare. Dedicating separate disks to datafiles and mirroring redo log files are important safety precautions. These steps ensure that the datafiles and the redo log files cannot both be lost in a single disk failure. Mirroring redo log files ensures that a redo log file cannot be lost in a single disk failure.

For example, if your system has 4 groups with 2 members each, then the following scenario should be used to separate disk access. The accompanying figure illustrates how redo members should be distributed across disks to minimize contention.

In this example, LGWR switched out of log group 1 (members 1a and 1b) and is now writing to log group 2 (members 2a and 2b). Concurrently, the archiver process is reading from group 1 and writing to its archive destination. Note how the redo log files are isolated from contention.
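A layout like this can be sketched with ALTER DATABASE ADD LOGFILE; the paths and sizes below are hypothetical, with each group's two members on different disks:

```sql
-- Members within a group are mirrored across disks;
-- consecutive groups alternate disks so LGWR and ARCH do not collide.
ALTER DATABASE ADD LOGFILE GROUP 1
  ('/disk1/oradata/log1a.rdo', '/disk2/oradata/log1b.rdo') SIZE 50M;
ALTER DATABASE ADD LOGFILE GROUP 2
  ('/disk3/oradata/log2a.rdo', '/disk4/oradata/log2b.rdo') SIZE 50M;
```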

Note: Mirroring redo log files, or maintaining multiple copies of each redo log file on separate disks, does not slow LGWR considerably. LGWR writes to each disk in parallel and waits until each part of the parallel write is complete. Because the time required to perform a single-disk write may vary, increasing the number of copies increases the likelihood that one of the single-disk writes in the parallel write takes longer than average.

A parallel write does not take longer than the longest possible single-disk write. There may also be some overhead associated with parallel writes on your operating system. Striping Table Data Striping, or spreading a large table's data across separate datafiles on separate disks, can also help to reduce contention.

See Also: This strategy is fully discussed in the section "Striping Disks".

Separating Tables and Indexes

It is not always necessary to separate a frequently used table from its index: during the course of a transaction, the index is read first, and then the table. However, for very high-throughput OLTP systems, separating indexes from tables may be required; both reads then happen faster because one disk head is on the index data and the other is on the table data.

The idea of separating objects accessed simultaneously applies to indexes as well. For example, if a SQL statement uses two indexes at the same time, then performance is improved by having each index on a separate disk. Also, avoid having several heavily accessed tables on the same disk. This requires strong knowledge of the application access patterns. The use of partitioned tables and indexes can improve performance of operations in a data warehouse.

Divide a large table or index into multiple physical segments residing in different tablespaces. This permits multiple processes to access different portions of the table concurrently without disk contention. Tables that contain large object (LOB) datatypes should be placed into a separate tablespace as well.
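A minimal sketch of such a division using range partitioning, with each partition in its own tablespace (table, column, and tablespace names are made up):

```sql
-- Each partition can live in a tablespace placed on a separate disk,
-- so concurrent scans of different date ranges hit different spindles.
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p1998 VALUES LESS THAN (TO_DATE('1999-01-01','YYYY-MM-DD'))
    TABLESPACE sales_ts1,
  PARTITION p1999 VALUES LESS THAN (TO_DATE('2000-01-01','YYYY-MM-DD'))
    TABLESPACE sales_ts2
);
```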

Striping is particularly helpful in optimizing random access to tables with many rows. Striping can be done either manually, as described below, or through operating system striping utilities. Many operating systems provide the ability to stripe a heavily used container file across many physical devices. However, such techniques are productive only where the load redistribution eliminates or reduces some form of queue.

Where larger numbers of physical drives are available, consider dedicating two drives to carrying redo logs (two, because redo logs should always be mirrored, either by the operating system or by using Oracle redo log group features).

Because redo logs are written serially, drives dedicated to redo log activity normally require limited head movement. This significantly accelerates log writing. When archiving is used, the archiver can be kept from contending with LGWR by placing the logs on alternating drives.

The process of writing to each mirror is normally done in parallel, and does not cause a bottleneck.

The number of disks needed to stripe an object manually depends on two factors: the size of the object and the size of each disk. For example, if the system is configured with 1GB or 2GB disks, then a given object may require 5 or 3 disks, respectively. To stripe manually, store each extent in a separate datafile, with each datafile on a separate disk. The table extents should be slightly smaller than the datafiles in the tablespace to allow for overhead.
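A hypothetical sketch of this manual-striping pattern for a dictionary-managed tablespace (all names, paths, and sizes are invented; extents are sized slightly smaller than the datafiles so each extent lands in its own file):

```sql
-- Two datafiles on separate disks form the striped tablespace.
CREATE TABLESPACE stripe_ts
  DATAFILE '/disk1/oradata/stripe01.dbf' SIZE 501M,
           '/disk2/oradata/stripe02.dbf' SIZE 501M;

-- MINEXTENTS 2 preallocates both extents, one per datafile.
CREATE TABLE big_table (id NUMBER, payload VARCHAR2(2000))
  TABLESPACE stripe_ts
  STORAGE (INITIAL 499M NEXT 499M MINEXTENTS 2 PCTINCREASE 0);
```

With locally managed tablespaces and modern volume managers this manual technique is rarely needed, but it illustrates the idea.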

Striping Disks with Operating System Software As an alternative to striping disks manually, use operating system utilities or third-party tools, such as logical volume managers, or use hardware-based striping.

Stripe size is affected by the Oracle block size and the database access methods. For random reads and writes, the minimum stripe size is twice the Oracle block size. Striping assumes uniform access to the data; if the stripe size is too large, then a hot spot may appear on one disk or on a small number of disks.

Avoid this by reducing the stripe size, thus spreading the data over more disks. Consider an example in which 100 rows of fixed size are evenly distributed over 5 disks, with each disk containing 20 sequential rows.

If some rows, say rows 35 through 55, are accessed much more frequently than the rest, then at a high rate of concurrency the system may not be able to achieve the desired level of performance. Correct this problem by spreading rows 35 through 55 across more disks. In the current example, if there were two rows per block, then rows 35 and 36 could be placed on one disk, and rows 37 and 38 on a different disk.

Redundant arrays of inexpensive disks (RAID) configurations provide improved data reliability. RAID 5 provides good reliability, and sequential reads benefit the most.

Write performance may suffer with RAID 5, so this configuration is not recommended for write-heavy applications. Oracle recommends that you do not place production database files on RAID 0 systems, because RAID 0 provides no redundancy. Optimal stripe size is a function of three things, among them how well the physical stripe boundaries match the block size boundaries.

However, keep the following techniques in mind: failing here causes the same performance penalty as the one described above.

Avoiding Dynamic Space Management When you create an object, such as a table or rollback segment, Oracle allocates space in the database for the data. This space is called a segment. If subsequent database operations cause the data volume to increase and exceed the space allocated, then Oracle extends the segment.

Dynamic extension reduces performance because Oracle must issue SQL statements to allocate the additional space. These SQL statements are called recursive calls because Oracle issues them itself. Recursive calls are also generated by other activities, such as the firing of database triggers and the enforcement of referential integrity constraints. You can monitor the recursive calls statistic over a period of time.
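The statistic is exposed in the V$SYSSTAT dynamic performance view; a query of the following form, sampled at intervals, shows how fast it is growing:

```sql
-- "recursive calls" accumulates since instance startup;
-- sample it periodically and compare the deltas.
SELECT name, value
  FROM v$sysstat
 WHERE name = 'recursive calls';
```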

If Oracle continues to make excessive recursive calls while your application is running, then determine whether these recursive calls are due to an activity, other than dynamic extension, that generates recursive calls. If you determine that the recursive calls are caused by dynamic extension, then reduce this extension by allocating larger extents. Choose storage parameter values so that Oracle allocates extents large enough to accommodate all your data when you create the object.

Larger extents tend to benefit performance for the following reasons: segments with larger extents are less likely to be extended. However, because large extents require more contiguous blocks, Oracle may have difficulty finding enough contiguous space to store them.
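A hypothetical example of choosing storage parameters up front so the segment rarely extends (dictionary-managed tablespaces; the table name, tablespace, and sizes are invented):

```sql
-- Preallocate extents large enough for the expected data volume,
-- so dynamic extension is rare.
CREATE TABLE orders (order_id NUMBER, order_date DATE)
  TABLESPACE data_ts
  STORAGE (INITIAL 100M NEXT 100M PCTINCREASE 0);
```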

To determine whether to allocate only a few large extents or many small extents, evaluate the benefits and drawbacks of each in consideration of plans for the growth and use of your objects.

Automatically resizable (AUTOEXTEND) datafiles can also cause problems with dynamic extension. Instead of relying on automatic growth, manually allocate more space to a datafile during times when the system is relatively inactive. Even though an object may have unlimited extents, this does not mean that having a large number of small extents is acceptable. For optimal performance you may decide to reduce the number of extents. Extent maps list all extents for a particular segment.

The number of extent entries per Oracle block depends on the operating system block size and the platform. Although an extent is a data structure inside Oracle, the size of this data structure depends on the platform; this in turn affects the number of extents Oracle can store in a single operating system block. Avoid dynamic extension in dictionary-mapped tablespaces.

For dictionary-mapped tablespaces, do not let the number of extents exceed 1,000. If extent allocation is local, then do not have more than 2,000 extents. Having too many extents reduces performance when dropping or truncating tables. For a table that is never full-table scanned, it makes no difference in terms of query performance whether the table has one extent or multiple extents.
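Segments approaching these limits can be found with DBA_EXTENTS; the threshold below matches the dictionary-managed guideline above:

```sql
-- Find segments with many extents, candidates for reorganization.
SELECT owner, segment_name, segment_type, COUNT(*) AS extent_count
  FROM dba_extents
 GROUP BY owner, segment_name, segment_type
HAVING COUNT(*) > 1000
 ORDER BY extent_count DESC;
```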

The performance of searches using an index is not affected by the index having one extent or multiple extents. Using more than one extent in a table, cluster, or temporary segment does not affect the performance of full scans on a multi-user system. Using more than one extent in a table, cluster, or temporary segment does not materially affect the performance of full scans on a dedicated single-user batch processing system, if the extents are properly sized and if the application is designed to avoid expensive DDL operations.

For rollback segments, many extents are preferable to few extents. Having many extents reduces the number of recursive SQL calls to perform dynamic extent allocations on the segments. Rollback segment size is determined by the rollback segment's storage parameter values. Your rollback segments must be large enough to hold the rollback entries for your transactions.

As with other objects, you should avoid dynamic space management in rollback segments. If you do not explicitly assign a transaction to a rollback segment, then Oracle automatically assigns it to a rollback segment.

Note : If you are running multiple concurrent copies of the same application, then be careful not to assign the transactions for all copies to the same rollback segment. This leads to contention for that rollback segment. Assign large rollback segments to transactions that modify data that is concurrently selected by long queries.

Such queries may require access to rollback segments to reconstruct a read-consistent version of the modified data. The rollback segments must be large enough to hold all the rollback entries for the data while the query is running. Assign large rollback segments to transactions that modify large amounts of data.
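Explicit assignment, on releases that still use rollback segments rather than automatic undo management, is done with SET TRANSACTION; the segment and table names below are hypothetical:

```sql
-- Must be the first statement of the transaction.
SET TRANSACTION USE ROLLBACK SEGMENT rbs_big;

-- A bulk update that generates large rollback entries.
UPDATE accounts SET balance = balance * 1.01;
COMMIT;
```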

A large rollback segment can improve the performance of such a transaction, because the transaction generates large rollback entries. If a rollback entry does not fit into a rollback segment, then Oracle extends the segment.

Dynamic extension reduces performance and should be avoided whenever possible. OLTP applications are characterized by frequent concurrent transactions, each of which modifies a small amount of data. Assign OLTP transactions to small rollback segments, provided that their data is not concurrently queried. Small rollback segments are more likely to remain stored in the buffer cache where they can be accessed quickly. A typical OLTP rollback segment might have 2 extents, each approximately 10 kilobytes in size.

To best avoid contention, create many rollback segments and assign each transaction to its own rollback segment. If an UPDATE statement increases the amount of data in a row so that the row no longer fits in its data block, then Oracle tries to find another block with enough free space to hold the entire row.

If such a block is available, then Oracle moves the entire row to the new block. This is called migrating a row. If the row is too large to fit into any available block, then Oracle splits the row into multiple pieces and stores each piece in a separate block.

This is called chaining a row. Rows can also be chained when they are inserted. Dynamic space management, especially migration and chaining, is detrimental to performance. You can identify migrated and chained rows with the ANALYZE ... LIST CHAINED ROWS statement, which collects information about each migrated or chained row and places this information into a specified output table.

Manually reducing the sizes of datafiles allows you to reclaim unused space in the database. This is useful for correcting errors in estimates of space requirements.
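A minimal sketch of collecting this information (the CHAINED_ROWS output table is created by Oracle's utlchain.sql script; the analyzed table name is illustrative):

```sql
-- Record migrated/chained rows of ORDERS into CHAINED_ROWS.
ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows;

-- Inspect the affected rows by ROWID.
SELECT owner_name, table_name, head_rowid
  FROM chained_rows
 WHERE table_name = 'ORDERS';
```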

For example, if a tablespace now stores smaller objects than it once did, its datafile can be reduced in size. Note: It is not always possible to decrease the size of a file to a specific value. See Also: For more information about the implications resizing files has for downgrading, see Oracle8 Migration. This section describes ways to alter datafile availability.
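Shrinking is done with ALTER DATABASE DATAFILE ... RESIZE; the path and size here are illustrative:

```sql
-- Fails with ORA-03297 if allocated extents lie beyond the new size.
ALTER DATABASE DATAFILE '/u02/oradata/db1/users01.dbf' RESIZE 100M;
```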

In very rare situations, you might need to bring specific datafiles online (make them available) or take specific files offline (make them unavailable). For example, when Oracle has problems writing to a datafile, it can automatically take the datafile offline.

You might need to take the damaged datafile offline or bring it online manually. Note: You can make all datafiles in a tablespace, other than the files in the SYSTEM tablespace, temporarily unavailable by taking the tablespace offline. You must leave these files in the tablespace to bring the tablespace back online. Offline datafiles cannot be accessed. Bringing a datafile in a read-only tablespace online makes the file readable.
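The manual commands look like the following sketch (the path is invented; taking an individual datafile offline this way requires ARCHIVELOG mode, and the file generally needs media recovery before it can come back online):

```sql
-- Take the damaged datafile offline.
ALTER DATABASE DATAFILE '/u02/oradata/db1/users01.dbf' OFFLINE;

-- After recovering the file, bring it back online.
ALTER DATABASE DATAFILE '/u02/oradata/db1/users01.dbf' ONLINE;
```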

No one can write to the file unless its associated tablespace is returned to the read-write state. You can perform these operations only when the database is open in exclusive mode. See Also: For more information about bringing datafiles online during media recovery, see Oracle8 Backup and Recovery Guide. When the database is in NOARCHIVELOG mode, you can also take a datafile offline and drop it immediately. This section describes the various aspects of renaming and relocating datafiles.

You can rename datafiles to change either their names or their locations; Oracle provides options for both. Renaming and relocating datafiles with these procedures only changes the pointers to the datafiles, as recorded in the database's control file; it does not physically rename any operating system files, nor does it copy files at the operating system level.

Therefore, renaming and relocating datafiles involve several steps. Read the steps and examples carefully before performing these procedures. The new file must already exist; this command does not create a file.
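Those steps can be sketched as follows (paths and the tablespace name are illustrative; the operating-system copy in step 2 is done outside SQL):

```sql
-- 1. Take the tablespace offline so its files are not in use.
ALTER TABLESPACE users OFFLINE NORMAL;

-- 2. Copy or move the file at the operating system level, e.g.:
--      mv /u01/oradata/db1/users01.dbf /u02/oradata/db1/users01.dbf

-- 3. Update the control file pointer; the new file must already exist.
ALTER TABLESPACE users
  RENAME DATAFILE '/u01/oradata/db1/users01.dbf'
              TO  '/u02/oradata/db1/users01.dbf';

-- 4. Bring the tablespace back online.
ALTER TABLESPACE users ONLINE;
```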
