ISSUE

The standard recurring query of Enterprise Manager Cloud Control 13c Release 3 with sql_id dx4nqvbtu06bx runs slowly and consumes a huge amount of temporary tablespace when executed against a database of release 11.2.0.4. The query dx4nqvbtu06bx is executed every 15-20 minutes.

The text of the query is the following:

SELECT MEDIA FROM V$BACKUP_PIECE_DETAILS WHERE SESSION_KEY=:B3 AND SESSION_RECID=:B2 AND SESSION_STAMP=:B1 
AND DEVICE_TYPE = 'SBT_TAPE' AND ROWNUM = 1

 
SYMPTOMS

In the alert log of your database you start noticing frequently generated ORA-1652 errors related to your temporary tablespace:

Sat May 23 07:00:48 2020
ORA-1652: unable to extend temp segment by 128 in tablespace                 TEMP02
Sat May 23 07:09:32 2020
ORA-1652: unable to extend temp segment by 128 in tablespace                 TEMP02
Sat May 23 07:09:34 2020
ORA-1652: unable to extend temp segment by 128 in tablespace                 TEMP02
Sat May 23 07:09:39 2020
ORA-1652: unable to extend temp segment by 128 in tablespace                 TEMP02
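
Whether the temporary tablespace is really exhausted can be checked quickly; here is a minimal sketch against V$TEMP_SPACE_HEADER (the GiB arithmetic and column aliases are ours):

SELECT tablespace_name,
       ROUND(SUM(bytes_used) / 1024 / 1024 / 1024, 1) AS used_gb,
       ROUND(SUM(bytes_free) / 1024 / 1024 / 1024, 1) AS free_gb
  FROM v$temp_space_header
 GROUP BY tablespace_name;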

 
NOTE: If your temporary tablespace is big enough, there will be no such errors in the alert log, but you can still confirm the high temp usage by querying the TEMP_USAGE view:

SELECT * FROM TEMP_USAGE;

USERNAME   SID SERIAL# PID  STATUS   SQL_ID        ELAPSED_SEC EVENT                       CONNECTED_FROM TABLESPACE  SEGTYPE  TEMP_USED_GB PERCENT_USED
-------- ----- ------- ---- -------- ------------- ----------- --------------------------- -------------- ----------- -------- ------------ ------------
DBSNMP     634    5349 6362 ACTIVE   dx4nqvbtu06bx         595 direct path write temp      dbpilot.net    TEMP02      HASH            106.9         83.5
DBSNMP    2907   56461 1191 INACTIVE                           6 SQL*Net message from client dbpilot.net    TEMP02      LOB_DATA            0            0

In this example, dx4nqvbtu06bx is using 106.9 GiB, 83.5% of all available space in the TEMP02 tablespace. That is a huge red flag.
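
TEMP_USAGE is a convenience view rather than a standard dictionary view. If you do not have one, a similar picture can be obtained directly from V$TEMPSEG_USAGE joined to V$SESSION (a minimal sketch; the column aliases and the GiB arithmetic are ours):

SELECT s.username,
       s.sid,
       s.serial#,
       s.status,
       s.sql_id,
       u.tablespace,
       u.segtype,
       ROUND(u.blocks * t.block_size / 1024 / 1024 / 1024, 1) AS temp_used_gb
  FROM v$tempseg_usage u
  JOIN v$session       s ON s.saddr   = u.session_addr
                        AND s.serial# = u.session_num
  JOIN dba_tablespaces t ON t.tablespace_name = u.tablespace
 ORDER BY temp_used_gb DESC;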

 
At the same time, in your AWR reports you observe the following:

1. A high value for the “direct path write temp” wait event
2. Very high elapsed time for the statement with sql_id dx4nqvbtu06bx

In the execution details of the dx4nqvbtu06bx query (available in the V$SQL view) you notice that most of the query's elapsed time is I/O wait time.
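
The formatted listing below was produced by a reporting script, but the underlying numbers can be pulled straight from V$SQL with something like this (a minimal sketch; V$SQL reports times in microseconds, hence the division):

SELECT sql_id,
       child_number,
       executions,
       ROUND(elapsed_time      / 1000000, 2) AS elapsed_total_s,
       ROUND(cpu_time          / 1000000, 2) AS cpu_total_s,
       ROUND(user_io_wait_time / 1000000, 2) AS io_wait_total_s,
       ROUND(elapsed_time / 1000000 / NULLIF(executions, 0), 2) AS elapsed_per_exec_s,
       disk_reads,
       buffer_gets,
       direct_writes,
       rows_processed
  FROM v$sql
 WHERE sql_id = 'dx4nqvbtu06bx';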

**************************************** : ******************
                                  SQL_ID : dx4nqvbtu06bx
                                  Child# : 0
                           Cursor Status : VALID
                     Is Cursor Obsolete? : N
                          Parsing Schema : DBSNMP
                        System Timestamp : 08-05-2020 11-52-50
                               First Run : 2020-04-26/18:25:08
                        Last Active Time : 08-05-2020 11-52-48
                   Elapsed Time Total(s) : 520116.41
                       CPU Time Total(s) : 207325.58
                   IO Wait Time Total(s) : 319430.49
          Concurrency Wait Time Total(s) : 5.66
          Application Wait Time Total(s) : 0
              Cluster Wait Time Total(s) : 0
                              Executions : 942   
                Elapsed Time per Exec(s) : 552.14
                    CPU Time per Exec(s) : 220.09
                IO Wait Time per Exec(s) : 339.1 
       Concurrency Wait Time per Exec(s) : .01
       Application Wait Time per Exec(s) : 0
           Cluster Wait Time per Exec(s) : 0
                                         :
                              Disk Reads : 154609
                             Buffer Gets : 455867
                           Direct Writes : 2175471508
                           Row Processed : 0
                                         :
                       Number of latches : 0
                             Sql Profile :
                        Sql Command Type : 3
**************************************** : ******************

 
In this example, the I/O wait time of the query is about 61% of the total elapsed time (319430.49s out of 520116.41s). The query is clearly I/O-bound, and that is very bad.

 
 
SOLUTION
 
Download and apply patch 14078947 to the target database, or use one of the four available workarounds:

(1) Use the RULE hint in the query (the query text comes from Enterprise Manager and cannot be edited directly, so the hinted plan has to be attached with the DBMS_SPM package):

SELECT /*+ rule */ MEDIA FROM V$BACKUP_PIECE_DETAILS
WHERE SESSION_KEY=:B3 AND SESSION_RECID=:B2 AND SESSION_STAMP=:B1 AND DEVICE_TYPE = 'SBT_TAPE' AND ROWNUM = 1;
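
One possible way to attach it is to run the hinted query manually once and then load its plan as a fixed SQL plan baseline for the original statement. This is a hedged sketch, not the exact MOS recipe: the substitution variables are placeholders you would look up yourself in V$SQL and DBA_SQL_PLAN_BASELINES.

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- 1. Create a baseline (and thus an sql_handle) for the original statement.
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'dx4nqvbtu06bx');
  -- 2. Attach the plan of the manually executed hinted run to that handle
  --    and fix it, so the optimizer always uses the hinted plan.
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => '&hinted_sql_id',         -- sql_id of the hinted run
               plan_hash_value => &hinted_plan_hash_value,  -- its plan hash value
               sql_handle      => '&original_sql_handle',   -- from dba_sql_plan_baselines
               fixed           => 'YES');
END;
/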

 
(2) Execute the following against the target database:

-- X$KCCRSR is the fixed table behind V$RMAN_STATUS: drop its misleading statistics
exec dbms_stats.DELETE_TABLE_STATS('SYS','X$KCCRSR');
-- lock the stats so a future gather does not bring the problem back
exec dbms_stats.LOCK_TABLE_STATS('SYS','X$KCCRSR');
-- flush the shared pool so the cursor is reparsed without the bad stats
alter system flush shared_pool;

 
(3) Disable the metric that executes this query in Enterprise Manager. Please refer to MOS Doc ID 2235163.1.

(4) Gather fixed object, schema, and dictionary statistics:

exec dbms_stats.gather_fixed_objects_stats;
exec dbms_stats.gather_schema_stats('SYSTEM');
exec dbms_stats.gather_schema_stats('SYS');
exec dbms_stats.gather_dictionary_stats;

 
RESULT
 
After patch 14078947 was applied to our database, the performance of the query dx4nqvbtu06bx improved dramatically.

**************************************** : ******************
                                  SQL_ID : dx4nqvbtu06bx
                                  Child# : 0
                           Cursor Status : VALID
                     Is Cursor Obsolete? : N
                          Parsing Schema : DBSNMP
                        System Timestamp : 15-07-2020 12-08-31
                               First Run : 2020-06-20/17:03:10
                        Last Active Time : 15-07-2020 12-03-12
                   Elapsed Time Total(s) : 2422.46
                       CPU Time Total(s) : 1269.47
                   IO Wait Time Total(s) : .21
          Concurrency Wait Time Total(s) : 0
          Application Wait Time Total(s) : 0
              Cluster Wait Time Total(s) : 0
                              Executions : 6002
                Elapsed Time per Exec(s) : .4
                    CPU Time per Exec(s) : .21
                IO Wait Time per Exec(s) : 0
       Concurrency Wait Time per Exec(s) : 0
       Application Wait Time per Exec(s) : 0
           Cluster Wait Time per Exec(s) : 0
                                         :
                              Disk Reads : 223
                             Buffer Gets : 9899
                           Direct Writes : 0
                           Row Processed : 6002
                                         :
                       Number of latches : 0
                             Sql Profile :
                        Sql Command Type : 3
**************************************** : ******************

 
Just compare “Elapsed Time per Exec(s)” and “IO Wait Time per Exec(s)” before and after: elapsed time per execution dropped from 552.14s to .4s, and I/O wait per execution from 339.1s to effectively zero.

 
REFERENCES
 
EM 13c: After Upgrade to Enterprise Manager 13.2 Cloud Control, a Recurring Query in 11.2.0.4 Target Database (SELECT MEDIA FROM V$BACKUP_PIECE_DETAILS) Takes Five Minutes, Using Huge Amount of Temp Space (Doc ID 2201982.1)
Bug 14078947 – Select from V$RMAN_STATUS very slow when database is open (Doc ID 14078947.8)
Queries on V$RMAN_STATUS are Very Slow Even After GATHER_FIXED_OBJECTS_STATS is Run (Doc ID 1525917.1)
EM13c : A Recurring Query In 11.2.0.4 Database has performance issues (SELECT MEDIA FROM V$BACKUP_PIECE_DETAILS) (Doc ID 2235163.1)

 
 

Version  : 15:20 15.07.2020
Platform : Oracle Linux Server 7.7
Database : Oracle database 11.2.0.4
OEM      : Enterprise Manager Cloud Control 13c Release 3