Standby/Data Guard: RFS Process not working
Posted by FatDBA on November 12, 2013
Not feeling well today!! 😦 But it's been a long time since I wrote my last article on Oracle database issues, so here I am again with a problem we faced some time ago in the production database of an erstwhile customer.
It was a typical physical standby environment running in Maximum Performance mode. The story begins on one fine day when we started seeing issues after a small network outage blocked the redo stream carrying logs from the production server to the standby. The bad part: it happened during peak hours, so we had a limited amount of time to rectify the situation.
Issue: RFS Process not working
Problem Description:
The filesystem containing the archive destination on the DR server was not accessible, so log shipping stopped. We deferred log shipping on the production server. After the filesystem was back, we re-enabled log shipping, but the RFS process on the DR server was not running. The problem had occurred while log 24717 was being shipped. We then queried:
SQL> Select process, status, sequence# from v$managed_standby;
The status for MRP showed that it was waiting for a gap. We then manually shipped the missing log file and applied it. When we re-enabled shipping, we found that the RFS process had still not started. There were no errors in the alert log on the DR side, but we found a trace file on the production server with the following messages:
tkcrrsarc: (WARN) Failed to find ARCH for message (message:0xa)
tkcrrpa: (WARN) Failed initial attempt to send ARCH message (message:0xa)
So we suspected that the issue was with the archiver process.
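For reference, the defer/re-enable and manual gap handling would look roughly like the sketch below; LOG_ARCHIVE_DEST_2 and the archived-log path are placeholders, not the exact values from this environment.

On the primary (defer redo transport while the DR filesystem was unavailable, re-enable it afterwards):

SQL> alter system set log_archive_dest_state_2=defer scope=both;
SQL> alter system set log_archive_dest_state_2=enable scope=both;

On the standby (confirm the gap, then register the manually copied archived log so MRP can apply it):

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;
SQL> alter database register logfile '/path/to/arch_1_24717.arc';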
Solution:
Check whether an archiver process is available to ship log files. You can identify this by querying the V$ARCHIVE_PROCESSES view.
SQL> Select * from v$archive_processes;
The output has the following columns:
Process: the archiver process number.
Status: this should be ACTIVE.
Log Sequence: the sequence number of the log the archiver is currently shipping; 0 if it is not shipping any log.
State: IDLE if the archiver is not shipping any log, BUSY if it is.
In our case we had two archiver processes running, and the status of both was ACTIVE. The log sequence of the first archiver was 0 and its state was IDLE, so it was healthy. However, the log sequence of the second archiver was 24717 and its state was BUSY. This was interesting, because the problem had occurred while that archiver was transferring log 24717; the log had since been manually shipped and applied, yet the process still showed that it was shipping log 24717.
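Reconstructed from the description above (illustrative, not a verbatim capture of the original output), the two archiver rows looked something like this:

SQL> Select process, status, log_sequence, state from v$archive_processes;

PROCESS STATUS  LOG_SEQUENCE STATE
------- ------- ------------ -----
      1 ACTIVE             0 IDLE
      2 ACTIVE         24717 BUSY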
So we decided to increase the number of archiver processes from two to four.
SQL> alter system set log_archive_max_processes=4 scope=both;
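To confirm the new value took effect (a routine check, not specific to this incident):

SQL> show parameter log_archive_max_processes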
When we queried v$archive_processes again, the 3rd and 4th archiver processes were ready to ship logs 24718 and 24719 respectively, with their state showing IDLE.
We enabled log shipping, the RFS process on the DR server started, and log shipping went smoothly.
However, the 2nd archiver process was still showing the same log sequence (24717) and state (BUSY), so we killed that archiver process. And we were all good!
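I no longer have the exact commands handy, but locating the stuck archiver's OS process and killing it would look roughly like this (the archiver background process is normally respawned automatically after being killed):

SQL> select b.name, p.spid, s.sid, s.serial#
     from v$bgprocess b, v$process p, v$session s
     where b.paddr = p.addr
     and p.addr = s.paddr
     and b.name like 'ARC%';

Then, at the OS level, kill the stuck one using its SPID:

$ kill -9 <spid_of_stuck_arc_process>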
Hope That Helped
Prashant Dixit
prince said
This is cool stuff…
Gaurav Bhaskar said
Good explanation. Keep it up!