Tales From A Lazy Fat DBA

It's all about databases & their performance, troubleshooting & much more … ¯\_(ツ)_/¯

Huge Archive/Redo Generation in System!

Posted by FatDBA on April 14, 2015

On one of our production databases we noticed a sudden surge in archive log generation, which flooded the archive destination and started hampering the performance and availability of the system.
The stats below show the hourly archival generation, which rose from an average of about 25 archives a day to a peak of 609 redo files a day.

DB DATE       TOTAL  H00 H01 H02 H03 H04 H05 H06 H07 H08 H09 H10 H11 H12 H13 H14 H15 H16 H17 H18 H19 H20 H21 H22 H23
-- ---------- ------ --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
1 2015-04-04      5   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   1   0   2   0   1
1 2015-04-05     19   0   0   2   0   0   3   1   0   0   1   0   2   1   0   2   0   1   2   0   1   0   2   1   0
1 2015-04-06     27   1   0   2   0   1   2   0   1   0   1   1   2   1   1   2   1   2   2   0   1   1   2   2   1
1 2015-04-07     33   0   1   2   0   1   2   0   1   1   0   1   2   1   0   2   1   1   3   1   1   2   4   3   3
1 2015-04-08    136   3   3   5   3   3   5   4   4   4   4   5   7   5   6   8   7   7   8   7   7   7   9   8   7
1 2015-04-09    284   8   9  10   9   9  11   9   9  10  11  11  14  12  12  14  13  12  14  13  17  14  15  15  13
1 2015-04-10    345  14  14  16  14  13  14  13  12  13  13  13  16  13  14  15  14  16  16  15  14  15  17  16  15
1 2015-04-11    428  16  16  17  16  16  17  17  16  18  17  18  19  18  18  19  18  18  20  18  18  20  20  19  19
1 2015-04-12    609  19  19  21  21  21  22  21  21  21  21  20  23  22  29  30  31  32  34  31  30  30  31  30  29
1 2015-04-13    277  25  24  25  25  25  26  25  25  25  25  24   3   0   0   0   0   0   0   0   0   0   0   0   0
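For reference, a report in this shape can be produced from v$log_history. The sketch below is my own (the exact script behind the report above may differ); it assumes a single-instance database and pivots log switches by hour for the last ten days:

```sql
-- Hourly count of archived logs per day, derived from v$log_history.
-- A sketch; adjust the sysdate window and thread# handling for RAC.
set lines 200 pages 100
select thread# db,
       to_char(first_time,'YYYY-MM-DD') dt,
       count(*) total,
       sum(decode(to_char(first_time,'HH24'),'00',1,0)) h00,
       sum(decode(to_char(first_time,'HH24'),'01',1,0)) h01,
       sum(decode(to_char(first_time,'HH24'),'02',1,0)) h02,
       sum(decode(to_char(first_time,'HH24'),'03',1,0)) h03,
       sum(decode(to_char(first_time,'HH24'),'04',1,0)) h04,
       sum(decode(to_char(first_time,'HH24'),'05',1,0)) h05,
       sum(decode(to_char(first_time,'HH24'),'06',1,0)) h06,
       sum(decode(to_char(first_time,'HH24'),'07',1,0)) h07,
       sum(decode(to_char(first_time,'HH24'),'08',1,0)) h08,
       sum(decode(to_char(first_time,'HH24'),'09',1,0)) h09,
       sum(decode(to_char(first_time,'HH24'),'10',1,0)) h10,
       sum(decode(to_char(first_time,'HH24'),'11',1,0)) h11,
       sum(decode(to_char(first_time,'HH24'),'12',1,0)) h12,
       sum(decode(to_char(first_time,'HH24'),'13',1,0)) h13,
       sum(decode(to_char(first_time,'HH24'),'14',1,0)) h14,
       sum(decode(to_char(first_time,'HH24'),'15',1,0)) h15,
       sum(decode(to_char(first_time,'HH24'),'16',1,0)) h16,
       sum(decode(to_char(first_time,'HH24'),'17',1,0)) h17,
       sum(decode(to_char(first_time,'HH24'),'18',1,0)) h18,
       sum(decode(to_char(first_time,'HH24'),'19',1,0)) h19,
       sum(decode(to_char(first_time,'HH24'),'20',1,0)) h20,
       sum(decode(to_char(first_time,'HH24'),'21',1,0)) h21,
       sum(decode(to_char(first_time,'HH24'),'22',1,0)) h22,
       sum(decode(to_char(first_time,'HH24'),'23',1,0)) h23
from   v$log_history
where  first_time > sysdate - 10
group  by thread#, to_char(first_time,'YYYY-MM-DD')
order  by 2
/
```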

During the investigation we found two sessions, SID 3 and SID 13, generating a huge amount of redo during the period. The query below ranks sessions by the redo they have generated (in MB):

select s.username, s.osuser, s.status, s.sql_id, sr.* from
(select sid, round(value/1024/1024) as "RedoSize(MB)"
from v$statname sn, v$sesstat ss
where sn.name = 'redo size'
and ss.statistic# = sn.statistic#
order by value desc) sr,
v$session s
where sr.sid = s.sid
and rownum <= 10

USERNAME   OSUSER                         STATUS       SQL_ID               SID RedoSize(MB)
---------- ------------------------------ ------------ ------------- ---------- ------------
oracle                         ACTIVE                              1            0
testuser    testadm                         INACTIVE     apnx8grhadf80          2            0
testuser    testadm                         ACTIVE                             3        90037
testuser    testadm                         INACTIVE                            6            0
testuser    testadm                         INACTIVE                            7            0
testuser    testadm                         INACTIVE     apnx8grhadf80          8            0
testuser    testadm                         INACTIVE                            9            0
testuser    testadm                         INACTIVE                           10            8
testuser    testadm                         ACTIVE       14f48saw6n9d1         13       189923
testuser    testadm                         INACTIVE                           15            0

Let's investigate and dig deeper!
The first step is to collect details of the objects that are changing most frequently, i.e. those responsible for the most db block changes. The script below queries gv$segment_statistics and ranks the top segments for a chosen statistic.

prompt  To show all segment level statistics in one screen
prompt
set lines 140 pages 100
col owner format A12
col object_name format A30
col statistic_name format A30
col object_type format A10
col value format 99999999999
col perc format 99.99
undef statistic_name
break on statistic_name
with  segstats as (
select * from (
select inst_id, owner, object_name, object_type , value ,
rank() over (partition by  inst_id, statistic_name order by value desc ) rnk , statistic_name
from gv$segment_statistics
where value >0  and statistic_name like '%'||'&&statistic_name'||'%'
) where rnk <31
)  ,
sumstats as ( select inst_id, statistic_name, sum(value) sum_value from gv$segment_statistics group by statistic_name, inst_id)
select a.inst_id, a.statistic_name, a.owner, a.object_name, a.object_type,a.value,(a.value/b.sum_value)*100 perc
from segstats a ,   sumstats b
where a.statistic_name = b.statistic_name
and a.inst_id=b.inst_id
order by a.statistic_name, a.value desc
/

INST_ID|STATISTIC_NAME                |OWNER       |OBJECT_NAME                   |OBJECT_TYP|       VALUE|  PERC
----------|------------------------------|------------|------------------------------|----------|------------|------
1|db block changes              |testuser     |PZ214                          |TABLE     |  2454298704| 71.83
1                               |testuser     |T94                           |TABLE     |    23416784|   .69
1                               |testuser     |PZ978                          |TABLE     |    19604784|   .57
1                               |testuser     |PZ919                          |TABLE     |    18204160|   .53
1                               |testuser     |T85                           |TABLE     |    15616624|   .46
1                               |testuser     |IH94                          |INDEX     |    14927984|   .44
1                               |testuser     |IPZ978                         |INDEX     |    14567840|   .43
1                               |testuser     |I296_1201032811_1             |INDEX     |    14219072|   .42
1                               |testuser     |PZ796                          |TABLE     |    13881712|   .41
1                               |testuser     |H94                           |TABLE     |    13818416|   .40
1                               |testuser     |I312_3_1                      |INDEX     |    12247776|   .36
1                               |testuser     |I312_6_1                      |INDEX     |    11906992|   .35
1                               |testuser     |I312_7_1                      |INDEX     |    11846864|   .35
1                               |testuser     |IPZ412                         |INDEX     |    11841360|   .35
1                               |testuser     |I178_1201032811_1             |INDEX     |    11618160|   .34
1                               |testuser     |PZ972                          |TABLE     |    11611392|   .34
1                               |testuser     |H312                          |TABLE     |    11312656|   .33
1                               |testuser     |IPZ796                         |INDEX     |    11292912|   .33
1                               |testuser     |I188_1101083000_1             |INDEX     |     9772816|   .29
1                               |testuser     |PZ412                          |TABLE     |     9646864|   .28
1                               |testuser     |IH312                         |INDEX     |     9040944|   .26
1                               |testuser     |I189_1201032712_1             |INDEX     |     8739376|   .26
1                               |testuser     |SYS_IL0000077814C00044$$      |INDEX     |     8680976|   .25
1                               |testuser     |I119_1000727019_1             |INDEX     |     8629808|   .25
1                               |testuser     |I119_1101082009_1             |INDEX     |     8561520|   .25
1                               |testuser     |I312_1705081004_1             |INDEX     |     8536656|   .25
1                               |testuser     |I216_1201032712_1             |INDEX     |     8306016|   .24
1                               |testuser     |I119_1404062203_1             |INDEX     |     8289520|   .24
1                               |testuser     |PZ988                          |TABLE     |     8156352|   .24
1                               |testuser     |I85_1703082001_1              |INDEX     |     8126528|   .24

In this scenario the LogMiner utility is of great help. Below is a quick way to mine an archived log:

SQL> begin
sys.dbms_logmnr.ADD_LOGFILE ('/vol2/oracle/arc/testdb/1_11412_833285103.arc');
end;
/
begin
sys.dbms_logmnr.START_LOGMNR;
end;
/
PL/SQL procedure successfully completed.

Rather than hard-coding 512 bytes as the redo block size, you can use the following SQL statement to identify it:

SQL> select max(lebsz) from x$kccle;

MAX(LEBSZ)
----------
512

1 row selected.

I always prefer to materialize the data from the v$logmnr_contents dynamic performance view into a table rather than querying the view repeatedly; the view is only valid while the LogMiner session is active, and direct access can be slow and hazardous.

SQL> CREATE TABLE redo_analysis_212_2 nologging AS
SELECT data_obj#, oper,
rbablk * le.bsz + rbabyte curpos,
lead(rbablk*le.bsz+rbabyte,1,0) over (order by rbasqn, rbablk, rbabyte) nextpos
FROM
( SELECT DISTINCT data_obj#, operation oper, rbasqn, rbablk, rbabyte
FROM v$logmnr_contents
ORDER BY rbasqn, rbablk, rbabyte
) ,
(SELECT MAX(lebsz) bsz FROM x$kccle ) le
/

Table created.
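Once the data has been materialized, it is good practice to close the LogMiner session so its resources are released (v$logmnr_contents is only populated while the session is active anyway):

```sql
-- Detach the LogMiner session and free its resources.
begin
  sys.dbms_logmnr.END_LOGMNR;
end;
/
```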

You can now query the table to get the mining details.

set lines 120 pages 40
column data_obj# format  9999999999
column oper format A15
column object_name format A60
column total_redo format 99999999999999
compute sum label 'Total Redo size' of total_redo on report
break on report
spool /tmp/redo_212_2.lst
select data_obj#, oper, obj_name, sum(redosize) total_redo
from
(
select data_obj#, oper, obj.name obj_name, nextpos-curpos-1 redosize
from redo_analysis_212_2 redo1, sys.obj$ obj
where redo1.data_obj# = obj.obj# (+)
and nextpos != 0 -- for the boundary condition
and redo1.data_obj# != 0
union all
select data_obj#, oper, 'internal', nextpos-curpos redosize
from redo_analysis_212_2 redo1
where redo1.data_obj# = 0
and nextpos != 0
)
group by data_obj#, oper, obj_name
order by 4
/

DATA_OBJ#|OPER           |OBJ_NAME                      |     TOTAL_REDO
-----------|---------------|------------------------------|---------------
78236|UPDATE         |PZ716                          |         132584
78227|INTERNAL       |PZ214                          |         142861
738603|DIRECT INSERT  |WRH$_ACTIVE_SESSION_HISTORY   |         170764
78546|INSERT         |PZ412                          |         179476
78101|UPDATE         |PZ989                          |         191276
78546|LOB_WRITE      |PZ412                          |         220850
78546|UPDATE         |PZ412                          |         314460
78038|UPDATE         |PZ972                          |         322060
77814|UPDATE         |PZ919                          |         375863
78227|LOB_WRITE      |PZ214                          |         399417
77814|LOB_WRITE      |PZ919                          |         407572
0|START          |internal                      |         760604
0|COMMIT         |internal                      |        2654020
78227|UPDATE         |PZ214                          |      452580201
           |               |                              |---------------
Total Redo |               |                              |      461746150

259 rows selected.

SQL> select OWNER,OBJECT_NAME,OBJECT_TYPE,CREATED,LAST_DDL_TIME,STATUS,TIMESTAMP from dba_objects where OBJECT_ID='78227';
rows will be truncated

OWNER       |OBJECT_NAME                                                 |OBJECT_TYP|CREATED    |STATUS
------------|------------------------------------------------------------|----------|-----------|-----------
testuser    |PZ214                                                       |TABLE     |04-DEC-2013|VALID

1 row selected.

The mining results make it clear that of the roughly 460 MB of archived log that was mined, about 450 MB was generated by UPDATEs on the object PZ214. We now have enough proof in hand to share with the application/development teams to investigate the issue.

After a parallel investigation we ultimately found that a feature enabled on the application side had caused this redo swamp; once it was rectified, archive generation returned to normal, as the stats below show.

DB DATE       TOTAL  H00 H01 H02 H03 H04 H05 H06 H07 H08 H09 H10 H11 H12 H13 H14 H15 H16 H17 H18 H19 H20 H21 H22 H23
-- ---------- ------ --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
1 2015-04-04      5   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   1   0   2   0   1
1 2015-04-05     19   0   0   2   0   0   3   1   0   0   1   0   2   1   0   2   0   1   2   0   1   0   2   1   0
1 2015-04-06     27   1   0   2   0   1   2   0   1   0   1   1   2   1   1   2   1   2   2   0   1   1   2   2   1
1 2015-04-07     33   0   1   2   0   1   2   0   1   1   0   1   2   1   0   2   1   1   3   1   1   2   4   3   3
1 2015-04-08    136   3   3   5   3   3   5   4   4   4   4   5   7   5   6   8   7   7   8   7   7   7   9   8   7
1 2015-04-09    284   8   9  10   9   9  11   9   9  10  11  11  14  12  12  14  13  12  14  13  17  14  15  15  13
1 2015-04-10    345  14  14  16  14  13  14  13  12  13  13  13  16  13  14  15  14  16  16  15  14  15  17  16  15
1 2015-04-11    428  16  16  17  16  16  17  17  16  18  17  18  19  18  18  19  18  18  20  18  18  20  20  19  19
1 2015-04-12    609  19  19  21  21  21  22  21  21  21  21  20  23  22  29  30  31  32  34  31  30  30  31  30  29
1 2015-04-13    371  25  24  25  25  25  26  25  25  25  25  24  28  26  19  10   1   2   3   2   1   1   2   2   0
1 2015-04-14      7   1   0   2   0   1   2   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
