STANDARD EM QUERY WITH SQL_ID dx4nqvbtu06bx IS VERY SLOW AND USES LOTS OF TEMPORARY SPACE
ISSUE
The body of the query is the following
SELECT MEDIA FROM V$BACKUP_PIECE_DETAILS WHERE SESSION_KEY=:B3 AND SESSION_RECID=:B2 AND SESSION_STAMP=:B1 AND DEVICE_TYPE = 'SBT_TAPE' AND ROWNUM = 1
SYMPTOMS
Sat May 23 07:00:48 2020
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP02
Sat May 23 07:09:32 2020
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP02
Sat May 23 07:09:34 2020
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP02
Sat May 23 07:09:39 2020
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP02
NOTE: If your temporary tablespace is large enough, no such errors will appear in the alert log, but you can still confirm the high temp usage by querying the TEMP_USAGE view
SELECT * FROM TEMP_USAGE WHERE STATUS='ACTIVE' AND TEMP_USED_GB>0;
USERNAME SID SERIAL# PID STATUS SQL_ID ELAPSED_SEC EVENT CONNECTED_FROM TABLESPACE SEGTYPE TEMP_USED_GB PERCENT_USED
-------- ----- ------- ---- -------- ------------- ----------- --------------------------- -------------- ----------- ------- ------------ ------------
DBSNMP 634 5349 6362 ACTIVE dx4nqvbtu06bx 595 direct path write temp dbpilot.net TEMP02 HASH 106.9 83.5
DBSNMP 2907 56461 1191 INACTIVE 6 SQL*Net message from client dbpilot.net TEMP02 LOB_DATA 0 0
In this example the query dx4nqvbtu06bx is using 83.5% (106.9 GiB) of all available space in the TEMP02 tablespace. That is a huge red flag.
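The TEMP_USAGE view shown above is a custom object, not a standard dictionary view. A minimal sketch of an equivalent query built on the standard V$TEMPSEG_USAGE and V$SESSION views (the block-size arithmetic assumes an 8 KB database block size; adjust it for your database):

```sql
-- Hedged sketch: per-session temporary space usage from the standard dictionary.
-- Assumes an 8 KB block size; change 8192 if your db_block_size differs.
SELECT s.username,
       s.sid,
       s.serial#,
       s.sql_id,
       u.tablespace,
       u.segtype,
       ROUND(u.blocks * 8192 / 1024 / 1024 / 1024, 1) AS temp_used_gb
FROM   v$tempseg_usage u
       JOIN v$session s ON s.saddr = u.session_addr
WHERE  s.status = 'ACTIVE'
ORDER  BY temp_used_gb DESC;
```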
At the same time, in your AWR reports you observe the following:
1. A high value for the “direct path write temp” wait event
2. Very high elapsed time for the statement with sql_id dx4nqvbtu06bx
In the execution details of the dx4nqvbtu06bx query (available in the V$SQL view) you notice that most of the query's elapsed time is IO wait time.
**************************************** : ******************
SQL_ID                                   : dx4nqvbtu06bx
Child#                                   : 0
Cursor Status                            : VALID
Is Cursor Obsolete?                      : N
Parsing Schema                           : DBSNMP
System Timestamp                         : 08-05-2020 11-52-50
First Run                                : 2020-04-26/18:25:08
Last Active Time                         : 08-05-2020 11-52-48
Elapsed Time Total(s)                    : 520116.41
CPU Time Total(s)                        : 207325.58
IO Wait Time Total(s)                    : 319430.49
Concurrency Wait Time Total(s)           : 5.66
Application Wait Time Total(s)           : 0
Cluster Wait Time Total(s)               : 0
Executions                               : 942
Elapsed Time per Exec(s)                 : 552.14
CPU Time per Exec(s)                     : 220.09
IO Wait Time per Exec(s)                 : 339.1
Concurrency Wait Time per Exec(s)        : .01
Application Wait Time per Exec(s)        : 0
Cluster Wait Time per Exec(s)            : 0
                                         :
Disk Reads                               : 154609
Buffer Gets                              : 455867
Direct Writes                            : 2175471508
Row Processed                            : 0
                                         :
Number of latches                        : 0
Sql Profile                              :
Sql Command Type                         : 3
**************************************** : ******************
In this example, the IO wait time of the query is about 61% of its total elapsed time, which is very bad.
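The per-execution figures above can be pulled straight from V$SQL. A minimal sketch (the time columns in V$SQL are in microseconds):

```sql
-- Hedged sketch: elapsed, CPU and IO wait time per execution for one SQL_ID.
-- All V$SQL time columns are in microseconds, hence the division by 1e6.
SELECT sql_id,
       child_number,
       executions,
       ROUND(elapsed_time / 1e6, 2)                                AS elapsed_s,
       ROUND(user_io_wait_time / 1e6, 2)                           AS io_wait_s,
       ROUND(user_io_wait_time / NULLIF(elapsed_time, 0) * 100, 1) AS io_wait_pct,
       ROUND(elapsed_time / 1e6 / NULLIF(executions, 0), 2)        AS elapsed_per_exec_s
FROM   v$sql
WHERE  sql_id = 'dx4nqvbtu06bx';
```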
SOLUTION
Download and apply patch 14078947 to the target database, or apply one of the three available workarounds:
(1) Use a hint in the query (see the DBMS_SPM package)
SELECT /*+ rule */ MEDIA FROM V$BACKUP_PIECE_DETAILS WHERE SESSION_KEY=:B3 AND SESSION_RECID=:B2 AND SESSION_STAMP=:B1 AND DEVICE_TYPE = 'SBT_TAPE' AND ROWNUM = 1;
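Since this statement is issued by Enterprise Manager, its text cannot be edited directly; the hint has to be injected from outside, for example via DBMS_SPM as mentioned above. Another hedged option on 11g is a SQL patch created with the internal DBMS_SQLDIAG_INTERNAL package (the patch name below is arbitrary, and because the package is internal its use is best confirmed with Oracle Support first):

```sql
-- Hedged sketch: attach the RULE hint to the EM statement without changing its text.
-- DBMS_SQLDIAG_INTERNAL is an internal package; 'em_backup_piece_rule_patch' is
-- a made-up name for illustration.
BEGIN
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_text  => q'[SELECT MEDIA FROM V$BACKUP_PIECE_DETAILS WHERE SESSION_KEY=:B3 AND SESSION_RECID=:B2 AND SESSION_STAMP=:B1 AND DEVICE_TYPE = 'SBT_TAPE' AND ROWNUM = 1]',
    hint_text => 'RULE',
    name      => 'em_backup_piece_rule_patch');
END;
/
```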
(2) Execute the following against the target database:
exec dbms_stats.DELETE_TABLE_STATS('SYS','X$KCCRSR');
exec dbms_stats.LOCK_TABLE_STATS('SYS','X$KCCRSR');
alter system flush shared_pool;
alter system flush shared_pool;
NOTE: I tested this workaround and in practice it doesn't help much. The query still uses lots of temporary tablespace.
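Whether the DELETE/LOCK calls took effect can be verified in DBA_TAB_STATISTICS, which also covers fixed (X$) tables. A minimal verification sketch:

```sql
-- Hedged sketch: confirm that statistics on X$KCCRSR are gone and locked.
SELECT owner, table_name, object_type, num_rows, stattype_locked, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'SYS'
AND    table_name = 'X$KCCRSR';
```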
(3) Disable the metric which executes this query from Enterprise Manager. Please refer to MOS Doc ID 2235163.1.
RESULT
After patch 14078947 was applied to our database, the performance of the query dx4nqvbtu06bx improved dramatically.
**************************************** : ******************
SQL_ID                                   : dx4nqvbtu06bx
Child#                                   : 0
Cursor Status                            : VALID
Is Cursor Obsolete?                      : N
Parsing Schema                           : DBSNMP
System Timestamp                         : 15-07-2020 12-08-31
First Run                                : 2020-06-20/17:03:10
Last Active Time                         : 15-07-2020 12-03-12
Elapsed Time Total(s)                    : 2422.46
CPU Time Total(s)                        : 1269.47
IO Wait Time Total(s)                    : .21
Concurrency Wait Time Total(s)           : 0
Application Wait Time Total(s)           : 0
Cluster Wait Time Total(s)               : 0
Executions                               : 6002
Elapsed Time per Exec(s)                 : .4
CPU Time per Exec(s)                     : .21
IO Wait Time per Exec(s)                 : 0
Concurrency Wait Time per Exec(s)        : 0
Application Wait Time per Exec(s)        : 0
Cluster Wait Time per Exec(s)            : 0
                                         :
Disk Reads                               : 223
Buffer Gets                              : 9899
Direct Writes                            : 0
Row Processed                            : 6002
                                         :
Number of latches                        : 0
Sql Profile                              :
Sql Command Type                         : 3
**************************************** : ******************
Just compare the "Elapsed Time per Exec(s)" and "IO Wait Time per Exec(s)" values before and after.
REFERENCES
EM 13c: After Upgrade to Enterprise Manager 13.2 Cloud Control, a Recurring Query in 11.2.0.4 Target Database (SELECT MEDIA FROM V$BACKUP_PIECE_DETAILS) Takes Five Minutes, Using Huge Amount of Temp Space (Doc ID 2201982.1)
Bug 14078947 – Select from V$RMAN_STATUS very slow when database is open (Doc ID 14078947.8)
Queries on V$RMAN_STATUS are Very Slow Even After GATHER_FIXED_OBJECTS_STATS is Run (Doc ID 1525917.1)
EM13c : A Recurring Query In 11.2.0.4 Database has performance issues (SELECT MEDIA FROM V$BACKUP_PIECE_DETAILS) (Doc ID 2235163.1)
Platform : Oracle Linux Server 7.7
Database : Oracle database 11.2.0.4
OEM : Enterprise Manager Cloud Control 13c Release 3