Tales From A Lazy Fat DBA

Den of an Oracle DB Performance & Troubleshooting freak … \,,/

  Disclaimer!

    FatDBA or Oracle 'Ant' is an independent blog. The experiences, test cases, views, and opinions expressed on this website are my own and do not reflect the views or opinions of my employer.

    This site is independent of and does not represent Oracle Corporation in any way. Oracle does not officially sponsor, approve, or endorse this site or its content.
    Product and company names mentioned in this website may be the trademarks of their respective owners.

Archive for the ‘Basics’ Category

Core/Basics/Theory

No active extraction maps – Golden Gate STATS command, what’s that ?

Posted by FatDBA on April 8, 2018

Hey Mates,

Not sure if you have ever encountered a situation where the GoldenGate STATS command returns the message "No active extraction maps" for one of your GoldenGate processes.
For GG newcomers, the STATS command displays statistics for one or more Extract (including data pump) and Replicat groups; the output covers the DML and DDL operations that are included in the Oracle GoldenGate configuration.
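
For reference, here is a hedged sketch of the STATS variations I use most often (the group and table names are the ones from this post; verify the exact combinations against the reference guide for your GoldenGate version):

GGSCI> stats extract GGTUNEX1, totalsonly SCOTT.*
GGSCI> stats extract GGTUNEX1, latest, table SCOTT.EMP
GGSCI> stats replicat GGTUNRP1, reportrate min
GGSCI> stats extract GGTUNEX1, reset

TOTALSONLY summarises the counters per table, LATEST limits the output to the most recent interval, REPORTRATE reports rates instead of raw counts, and RESET clears the LATEST counters.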

Now, coming back to the error scenario: let's see where we got that message and what it means.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 20> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     GGTUNEX1    00:00:03      00:00:08
EXTRACT     RUNNING     GGTUNPU1    00:00:00      00:00:01
REPLICAT    RUNNING     GGTUNRP1    00:00:00      00:00:03

Now, when I tried to check statistics for my processes, I got a message saying there are "No active extraction maps".

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 18> stats GGTUNPU1

Sending STATS request to EXTRACT GGTUNPU1 ...
No active extraction maps.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 22> stats GGTUNPU1

Sending STATS request to EXTRACT GGTUNPU1 ...
No active extraction maps.

So what does that mean?
The message indicates that nothing has been processed yet by the Extract or Replicat. Once data for the mapped tables is processed (captured from or applied to the database), the STATS command produces processing statistics.
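
If you suspect a configuration problem rather than simple inactivity, it is also worth double-checking that the process really carries TABLE/MAP statements for the objects you expect; a quick, hedged way to eyeball that from GGSCI:

GGSCI> view params GGTUNPU1
GGSCI> info extract GGTUNPU1, detail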

Let me run some DML against a source table that is part of the replication and see whether anything shows up in the STATS results.

SQL> update emp set ENAME='KARTIKEY' where EMPNO=8090;
1 row updated.

SQL> commit;
Commit complete.

Let’s try now.

GGSCI (rac1.localdomain as ggadmin1@dixitdb1) 43> stats GGTUNEX1

Sending STATS request to EXTRACT GGTUNEX1 ...

Start of Statistics at 2018-04-05 04:03:08.

Output to /acfsmounts/acfsggv1/ggdir/dirdat/s1:
Extracting from SCOTT.EMP to SCOTT.EMP:

*** Total statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Daily statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Hourly statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Latest statistics since 2018-04-05 03:59:38 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

End of Statistics.

Yup, we’ve got stats for the process now after the change.

Hope It Helps
Prashant

Posted in Basics | Tagged: | Leave a Comment »

Some new features of Oracle Database 18c

Posted by FatDBA on February 28, 2018

Hey Everyone,

Today's post discusses a few of the new features (small but nice, and not in depth) of the all-new Oracle 18c that I have tested.

0. Oracle Database 18c is the first version of the product to follow a yearly release pattern. Yup, that’s correct!

1. Read Only Oracle Home (ROOH)
Yes, finally we have the RO Oracle Homes.
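
From what I have seen, the switch is driven by the roohctl utility shipped with 18c; treat the exact invocation below as an assumption to verify against the documentation for your release:

$ $ORACLE_HOME/bin/roohctl -enable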

2. Columnar Format Exadata Flash Cache Enhancements
With Oracle Database 18c we now support accessing non-HCC objects in the Exadata Flash cache in full Database In-Memory columnar format. In 12.2 this was restricted to just HCC objects.

3. Scalable Sequences
A scalable sequence is particularly efficient when used to generate unordered primary or unique keys for data ingestion workloads with a high level of concurrency. Scalable sequences significantly reduce sequence and index block contention.
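
A minimal sketch of the new syntax (the sequence name and options are illustrative):

SQL> CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1 SCALE EXTEND;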

4. Converting Normal or High Redundancy Disk Groups to Flex Disk Groups without Restricted Mount
You can convert a conventional disk group (one created before Oracle Database 18c) to an Oracle ASM flex disk group without using the restrictive mount (MOUNTED RESTRICTED) option.

5. ALTER SYSTEM CANCEL SQL
Another way to kill/cancel a SQL in a session.
The syntax would be like …

ALTER SYSTEM CANCEL SQL 'SID, SERIAL[, @INST_ID][, SQL_ID]';
e.g. ALTER SYSTEM CANCEL SQL '448, 98175, @1, 761hchah78addfj';

6. The default value of PARALLEL_THREADS_PER_CPU is finally 1!
PARALLEL_THREADS_PER_CPU describes the number of parallel execution processes or threads that a CPU can handle during parallel execution. Good Move! 🙂

7. For users of Exadata and Real Application Clusters (RAC), Oracle Database 18c brings changes that will enable a significant reduction in the amount of undo that needs to be transferred across the interconnect. It achieves this by using RDMA, over the Infiniband connection, to access the undo blocks in the remote instance. This feature combined with a local commit cache significantly improves the throughput of some OLTP workloads when running on top of RAC.

8. Authenticate and authorize Oracle users directly with Microsoft Active Directory.

9. New Oracle Spatial features in Oracle Database 18c include distributed transactions, sharding, and an easier-to-use web services admin console.

 

Hope It Helps
Prashant Dixit

Posted in Basics | Tagged: , | 2 Comments »

All about Oracle 12c Database In-Memory!

Posted by FatDBA on December 27, 2017

Hi Mates,

Today I will discuss the Database In-Memory option of Oracle 12c. I know I am a little late to write about this feature, but I recently implemented it for a customer in the Latin America region, which gave me the exposure to understand it better.

So, let's start with what is most probably my last post of the year!

Recently I got a chance to test and benchmark the performance gains in one of our pre-production environments. The feature applies to tablespaces, tables, materialized views and (sub)partitions (except objects owned by SYS and stored in the SYSTEM or SYSAUX tablespaces). To understand the feature, I would first like to shed some light on the two ways Oracle stores table data on disk and in memory: the conventional row format and the new In-Memory column format.

Row arrangement: the traditional method of storing data in row format. It is best for OLTP systems, since a query can quickly fetch all of the columns of a record.
Column-based arrangement: records are stored column by column in a separate column store. This works well for OLAP systems, where a large set of rows is scanned but only a few columns are needed.

Based on the two arrangements above, it is clear that the row-based method is best for DML and the column-based arrangement is good when selecting a large portion of the data; each has its own pros and cons. Starting with Oracle 12.1.0.2 we have the new Database In-Memory feature, which takes the best of both approaches: it keeps data in both row and column formats, and the optimizer automatically routes each query to the right format depending on the workload (OLTP versus analytical processing).

The In-Memory feature uses the IM column store, a new component of the SGA called the In-Memory Area, whose size is controlled by the INMEMORY_SIZE parameter.

The IM column store is populated by background worker processes (w001, w002 and so on); each worker process builds IM Compression Units, or IMCUs.

So, enough of the background and internals; let's jump to the real work and see how this practically works.
First, check the minimum compatibility of the database:

COMPATIBLE = 12.1.0.0.0

Let's first enable the In-Memory column store. The related parameters are listed below.

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_adg_enabled                 boolean     TRUE
inmemory_clause_default              string
inmemory_expressions_usage           string      ENABLE
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     0
inmemory_query                       string      ENABLE
inmemory_size                        big integer 0
inmemory_trickle_repopulate_servers_ integer     1
percent
inmemory_virtual_columns             string      MANUAL
optimizer_inmemory_aware             boolean     TRUE

Before I show how to enable it, here are the database startup details with In-Memory disabled.

SQL> startup
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size                  8620032 bytes
Variable Size             771753984 bytes
Database Buffers          402653184 bytes
Redo Buffers                8155136 bytes
Database mounted.
Database opened.
SQL>

Okay, now let's enable it! But first, a quick word on which objects make good candidates for the pool: in my opinion the best candidates are very hot data, large segments (not less than 1 MB) and objects hit by analytical queries; you can identify them from sources such as Oracle segment statistics, column scan counts and AWR reports.
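
A quick way to shortlist such segments is a simple dictionary query; this is a minimal sketch, with the schema name DIXIT and the 1 MB threshold purely illustrative:

SELECT owner, segment_name, ROUND(bytes/1024/1024) AS size_mb
FROM   dba_segments
WHERE  owner = 'DIXIT'
AND    segment_type = 'TABLE'
AND    bytes >= 1024*1024
ORDER  BY bytes DESC;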

SQL> alter system set inmemory_size=250m scope=spfile;

System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size                  8620032 bytes
Variable Size             822085632 bytes
Database Buffers           83886080 bytes
Redo Buffers                8155136 bytes
In-Memory Area            268435456 bytes
Database mounted.
Database opened.

Now I will try to load something into memory.

SELECT table_name,inmemory,inmemory_priority,inmemory_distribute,inmemory_compression,inmemory_duplicate 
FROM user_tables WHERE table_name='TEST1';

TABLE_NAME           INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------- -------- -------- --------------- ----------------- -------------
TEST1                DISABLED

00:14:42 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 DONE                         >>>>>>> 1MB pool is used to store the column formatted data.
64KB POOL                     50331648          0 DONE                         >>>>>>> 64 KB pool is used to store the metadata about its residents.

Now I am moving the table discussed above into memory, either with DBMS_INMEMORY.POPULATE (which forces population once the table carries the INMEMORY attribute):

exec DBMS_INMEMORY.POPULATE(schema_name => 'DIXIT', table_name => 'TEST1');

or using

00:15:01 SQL> alter table test1 inmemory;

Table altered.


00:17:59 SQL> SELECT table_name,inmemory,inmemory_priority,inmemory_distribute,inmemory_compression,inmemory_duplicate
FROM user_tables WHERE table_name='TEST1';

TABLE_NAME           INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
-------------------- -------- -------- --------------- ----------------- -------------
TEST1                ENABLED  NONE     AUTO            FOR QUERY LOW     NO DUPLICATE




00:20:17 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 DONE
64KB POOL                     50331648          0 DONE


00:18:25 SQL> select owner, segment_name, populate_status from v$im_segments;

no rows selected

The results above show that no segment has been added to the pool yet. Now let's query the table and check again.

00:20:26 SQL> select count(*) from test1;

  COUNT(*)
----------
   1000000



00:20:48 SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016          0 POPULATING
64KB POOL                     50331648          0 POPULATING


Okay, now it's doing something, as the status has changed from DONE to POPULATING. Let's repeat the same command again.

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
1MB POOL                     200278016   49283072 DONE
64KB POOL                     50331648     458752 DONE

Okay, it's there in the pool now! Let's check what we have inside the In-Memory area.

00:23:56 SQL> select owner, segment_name, populate_status from v$im_segments;

OWNER      SEGMENT_NAME         POPULATE_STAT
---------- -------------------- -------------
DIXIT      TEST1                COMPLETED

To confirm the table is fully populated, we can look at v$im_segments_detail, comparing the number of blocks in memory with the number of blocks in the TEST1 table.

SELECT m.inst_id, 
       m.blocksinmem, 
       m.datablocks 
FROM   v$im_segments_detail m, 
       user_objects o 
WHERE  m.dataobj = o.object_id 
AND    o.object_name = 'TEST1';

   INST_ID BLOCKSINMEM DATABLOCKS
---------- ----------- ----------
         1        6922       6922

Let's see whether any change is captured in the execution plan of the query.

SQL> explain plan for select count(*) from test1;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3896847026

-----------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    71   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE             |       |     1 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| TEST1 |  1000K|    71   (2)| 00:00:01 |
-----------------------------------------------------------------------------

9 rows selected.

Now we have a new operation in the plan, 'TABLE ACCESS INMEMORY FULL', which tells us the object was accessed directly from memory.

You can compare the response times with/without the In-Memory Store by simply disabling In-Memory functionality in your session:

ALTER SESSION SET inmemory_query = ENABLE;
ALTER SESSION SET inmemory_query = DISABLE;

You can remove objects from the pool at any time; a minimal sketch follows, and after that a few other In-Memory options are shown.
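
Assuming the TEST1 table used throughout this post, taking it back out of the column store is a single statement:

SQL> ALTER TABLE TEST1 NO INMEMORY;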

SQL> ALTER TABLESPACE example INMEMORY;

Enabling the In-Memory attribute on the TEST1 table but excluding the “dest_id” column

SQL> ALTER TABLE TEST1 INMEMORY NO INMEMORY(dest_id);

Enabling the In-Memory option for table TEST1 and setting the priority to CRITICAL. With this priority set on an object (other levels are HIGH, MEDIUM and LOW), it is populated immediately after the database is opened.

SQL> ALTER TABLE TEST1 INMEMORY PRIORITY CRITICAL;

There are a few other options, such as compression (objects are compressed during population) and In-Memory joins and scans; they will be covered in a separate post, or you can read the official In-Memory documentation to understand them.
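
As a quick, hedged illustration of the compression option (MEMCOMPRESS levels such as FOR QUERY LOW/HIGH and FOR CAPACITY LOW/HIGH exist; TEST1 is the demo table from above):

SQL> ALTER TABLE TEST1 INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;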

But I would like to discuss a little about how the In-Memory option works in a RAC environment.

When a user queries Database In-Memory on RAC, a serial query only accesses the fraction of the data populated on its own node; IMCUs (IM Compression Units) do not travel across the interconnect via cache fusion, so the remainder of the data is fetched from disk.
Parallel execution helps here, because it starts multiple processes and ensures that at least one parallel server slave is allocated on each RAC instance. For that we have to set parallel_degree_policy (Auto DOP) to AUTO, which makes the query coordinator IMCU-aware so that it automatically starts parallel server processes on the correct instances.
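
A minimal sketch of the settings mentioned above (the DISTRIBUTE clause is shown only for illustration; AUTO is its default):

SQL> ALTER SYSTEM SET parallel_degree_policy = AUTO;
SQL> ALTER TABLE TEST1 INMEMORY DISTRIBUTE AUTO;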

Now, after all the discussion, it is time to share my final words on the subject.
Caching and pooling have been part of Oracle (and of most databases) for a very long time, so why In-Memory now? We already have areas such as the KEEP/RECYCLE buffer pools and the result cache to hold cached data or results.
My answer is that the Oracle In-Memory column store loads objects into memory in a compressed columnar format, which makes scans perform much better than on-disk reads and delivers a real performance boost. Use it when you have a SQL performance problem, the usual tuning methods have failed to fix it, and the In-Memory performance benefit outweighs the additional Oracle license cost.

Hope It Helps
Prashant Dixit

Posted in Advanced, Basics | Tagged: , | Leave a Comment »

MySQL – How to use LOAD DATA INFILE and INTO OUTFILE

Posted by FatDBA on December 20, 2017

Today I will discuss a useful, script/SQL-based data export/import method in MySQL: SELECT ... INTO OUTFILE and LOAD DATA INFILE.

Let's first create an export file for the table using SELECT ... INTO OUTFILE; here you can specify the location of the export file.

mysql> select * from country into outfile 'countrycreate.sql';
Query OK, 109 rows affected (0.00 sec)

-rw-rw-rw-. 1 mysql mysql 3.6K Dec 20 01:07 countrycreate.sql
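
By default the dump is tab-delimited. If you prefer an explicit CSV-style layout, you can spell out the delimiters; a hedged sketch (the /tmp path is illustrative, and on recent versions the target directory is subject to the secure_file_priv setting):

SELECT * FROM country
INTO OUTFILE '/tmp/country.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';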

Since SELECT ... INTO OUTFILE does not capture the table definition, you should always make sure you keep a copy of the table definition so the file can be restored later.

bash-4.1$ mysqldump -u root -p --no-data dixit country > /var/lib/mysql/dixit/countryschemadef.sql
Enter password:

-rw-rw-rw-. 1 mysql mysql 3.6K Dec 20 01:07 countrycreate.sql
-rw-r--r--. 1 mysql mysql 1.6K Dec 20 01:10 countryschemadef.sql

Let's see the contents of this newly created file.

bash-4.1$ more countryschemadef.sql
-- MySQL dump 10.13  Distrib 5.7.20, for Linux (x86_64)
--
-- Host: localhost    Database: dixit
-- ------------------------------------------------------
-- Server version       5.7.20

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

--
-- Table structure for table `country`
--

DROP TABLE IF EXISTS `country`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `country` (
  `country_id` int(11) DEFAULT NULL,
  `country` text,
  `last_update` text
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;

/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

-- Dump completed on 2017-12-20  1:10:20

Let's create a new database and load the table data into it (the table definition from countryschemadef.sql needs to be applied to the new database first).


bash-4.1$ mysqladmin -u root -p create dixit2
Enter password:


bash-4.1$ mysql -u root -p dixit2
Enter password:

mysql> load data infile '/var/lib/mysql/dixit/countrycreate.sql' into table country;
Query OK, 109 rows affected (0.01 sec)
Records: 109  Deleted: 0  Skipped: 0  Warnings: 0

mysql>
mysql>
mysql> select count(*) from country;
+----------+
| count(*) |
+----------+
|      109 |
+----------+
1 row in set (0.00 sec)

All set!

Hope It Helps!
Prashant Dixit

Posted in Basics | Tagged: , | Leave a Comment »

MySQL ERROR 1054 (42S22): Unknown column ‘Password’ in ‘field list’ – Version 5.7

Posted by FatDBA on November 27, 2017

mysql> update mysql.user set Password = PASSWORD('mysql') where user ='root';
ERROR 1054 (42S22): Unknown column 'Password' in 'field list'

WHY?
This was working fine in other MySQL instances where I had earlier versions installed, so why not this one? Puzzled and perplexed!
Let me check the version information of this instance.

mysql> show variables like "%version%";
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| innodb_version          | 5.7.20                       |
| protocol_version        | 10                           |
| slave_type_conversions  |                              |
| tls_version             | TLSv1,TLSv1.1                |
| version                 | 5.7.20                       |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | Linux                        |
+-------------------------+------------------------------+
8 rows in set (0.01 sec)

Well, starting with MySQL 5.7 the Password column of the mysql.user table has been removed and replaced with 'authentication_string'.
So the new syntax for this password reset looks like this ...

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql>

mysql> update user set authentication_string=password('mysql') where user='root';
Query OK, 2 rows affected, 1 warning (0.00 sec)
Rows matched: 3 Changed: 2 Warnings: 1
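
Two related notes, as a hedged aside: after updating mysql.user directly you still need to flush the privilege tables, and from 5.7.6 onwards the cleaner, supported way is ALTER USER (the 'localhost' host below is an assumption about your root account):

mysql> flush privileges;

mysql> alter user 'root'@'localhost' identified by 'mysql';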


Hope That Helps
Prashant Dixit

Posted in Basics | Tagged: | 2 Comments »

New HISTORY command in SQLPlus on 12c Release 2 :)

Posted by FatDBA on July 5, 2017

The newly released Oracle Database 12c Release 2 comes with more than 300 new features, and this is one of the coolest (non-vital) ones I have noticed: a HISTORY command for SQL*Plus ... finally!!
Being a great fan of the command prompt, I thought I would share this one. No need for the rlwrap RPMs any more.

Let me show you how to use this feature.

First you will have to enable this feature.

SQL> history
SP2-1650: History is off, use "SET HIST[ORY] ON" to enable History.
SQL> set history on
SQL>

Verify if it is enabled.

SQL> show history
history is ON and set to "100"

Execute some command on the prompt and check the history.

SQL> history
1 select count(*) from dba_segments;
2 select * from tab;
3 select * from dixit.tab;
4 select * from tab.dixit;
5 select * from dixit.t214;

If you want to re-run the command at position 1, do it like this ...

SQL> history 1 run

  COUNT(*)
----------
     11393

If there is a need to edit the command at position 5:

SQL> history 5 edit

SQL> history
1 select count(*) from dba_segments;
2 select * from tab;
3 select * from dixit.tab;
4 select * from tab.dixit;
5 select * from dixit.t214;
6 show history
7 select count(*) from dixit.t214;
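
A couple of related knobs, offered as a hedged sketch: the history depth can be changed with SET HISTORY, and the list can be wiped with HISTORY CLEAR.

SQL> set history 500
SQL> history clear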


Hope This Helps
Prashant Dixit

Posted in Basics | Leave a Comment »

What’s new with Oracle 12C Release 2: RMAN Recovery Options

Posted by FatDBA on March 24, 2017

Hi Everyone,

Oracle introduced the long-awaited Database 12c Release 2 earlier this year at Oracle OpenWorld. The first major release since 2013, originally announced in beta at Oracle OpenWorld 2015, may have left some customers scratching their heads after hearing that the latest release would follow what Oracle is calling "cloud-first availability".

Some of the key Oracle Database 12c Release 2 highlights are:
Multitenancy improvements, which allow more pluggable databases per container database: 4,096 to be exact, up from 252 in 12.1.
Performance improvements with SQL Performance Analyzer, DB Replay and SQL Plan Management, plus faster query execution thanks to the Optimizer Statistics Advisor and improved SQL join processing.
Availability enhancements in RMAN such as table recovery and transporting data across platforms, enhancements to online operations, and ongoing improvements in Oracle Data Pump, SQL*Loader and external tables.

One of the coolest features I noticed during recent tests of Oracle 12c R2 is related to RMAN recovery.
With this newest offering from Oracle you can now "RECOVER DATABASE UNTIL AVAILABLE REDO" 🙂
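
A minimal sketch of how that might look in an incomplete-recovery scenario (assuming the database has already been restored and mounted):

RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE UNTIL AVAILABLE REDO;
RMAN> ALTER DATABASE OPEN RESETLOGS;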

Hope That Helps
Prashant Dixit

Posted in Basics | Leave a Comment »

Part 1: ASM Installation on 11gR2 (VMWare)

Posted by FatDBA on January 10, 2016

Hello Everyone,
Today I would like to start a series describing Oracle Automatic Storage Management (Oracle ASM) concepts and giving an overview of Oracle ASM features. Follow-up posts will cover subjects like installation, configuration, administration/management, monitoring, troubleshooting and optimization.

In this maiden post (Part 1) I would like to discuss ASM installation and related areas.

Prerequisites:
I am assuming you already have the OS ready with all required packages pre-installed before we begin the ASM installation on top of it. Apart from that, I will start right from scratch.

Step 1:
Preparing Disks or Partitions which will be used while creating the ASM diskgroups.
I've created 3 persistent disks, each 4 GB in size, from the VM disk (I will perform all steps in a VM environment).

This is how the VM settings will look once you are done with the disk creation (see the screenshot below).
*Forget about the fifth 10 GB hard disk for now; I will explain its usage later in the series.

[Screenshot: VM settings showing the newly added disks]

Once you have the disks created, you next have to partition them with the fdisk command to make them usable. fdisk -l displays the status of the newly created disks:
/dev/sdb, /dev/sdc, /dev/sdd – each 4 GB (4294 MB) in size.

[root@localhost ~]# fdisk -l

Disk /dev/sda: 91.3 GB, 91268055040 bytes, 178257920 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000aab6c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *         2048     1026047      512000   83  Linux
/dev/sda2          1026048   178257919    88615936   8e  Linux LVM

Disk /dev/sdb: 4294 MB, 4294967296 bytes, 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
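
Partitioning each disk is then a short interactive fdisk session; the keystroke summary below is only a sketch of the single-partition layout I use for this demo (repeat for /dev/sdc and /dev/sdd):

[root@localhost ~]# fdisk /dev/sdb
   n        <-- create a new partition
   p        <-- make it primary
   1        <-- partition number 1
   <Enter>  <-- accept the default first sector
   <Enter>  <-- accept the default last sector (use the whole 4 GB disk)
   w        <-- write the partition table and exit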


Posted in Advanced, Basics | Tagged: | Leave a Comment »

runInstaller Error: An unexpected error has been detected by HotSpot Virtual Machine

Posted by FatDBA on January 6, 2016

Hello Everyone,
Installing Oracle software using the GUI method requires calling the runInstaller script, and it is normally an easy step if you have the proper permissions and DISPLAY settings in place.
But here I would like to discuss a case where I spent several hours fixing an error that occurred every time I called the runInstaller script, even after setting all required permissions and DISPLAY variables.

It failed to render the installer and created a log file under the /tmp directory with the contents shown below.

#
# An unexpected error has been detected by HotSpot Virtual Machine:
#
# SIGSEGV (0xb) at pc=0x0000003e2ce14d70, pid=4000, tid=140717162321680
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.5.0_51-b10 mixed mode)
# Problematic frame:
# C [ld-linux-x86-64.so.2+0x14d70]
#

————— T H R E A D —————

Current thread (0x000000004220d3f0): JavaThread “AWT-EventQueue-0” [_thread_in_native, id=4014]

siginfo:si_signo=11, si_errno=0, si_code=128, si_addr=0x0000000000000000

Registers:
RAX=0x0000000000000001, RBX=0x000000004216ae50, RCX=0x000000009eba2203, RDX=0x000000000fabfbff
RSP=0x00007ffb44792278, RBP=0x00007ffb447923c0, RSI=0x0000000000000000, RDI=0x0000000000000058
R8 =0x0000000000000000, R9 =0x0000000000000000, R10=0x00007ffb447921f0, R11=0x000000004216ae50
R12=0x00007ffb447923e8, R13=0x0000000041f85330, R14=0x0000000000000000, R15=0x0000000000000000
RIP=0x0000003e2ce14d70, EFL=0x0000000000010202, CSGSFS=0x0000000000000033, ERR=0x0000000000000000
TRAPNO=0x000000000000000d

Top of Stack: (sp=0x00007ffb44792278)
0x00007ffb44792278: 0000003e2ce0aaea 0000000000000000

Signal Handlers:
SIGSEGV: [libjvm.so+0x67ed60], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGBUS: [libjvm.so+0x67ed60], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGFPE: [libjvm.so+0x582020], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGPIPE: [libjvm.so+0x582020], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGILL: [libjvm.so+0x582020], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGUSR2: [libjvm.so+0x583ed0], sa_mask[0]=0x00000000, sa_flags=0x14000004
SIGHUP: [libjvm.so+0x5839a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGINT: [libjvm.so+0x5839a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGQUIT: [libjvm.so+0x5839a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004
SIGTERM: [libjvm.so+0x5839a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x14000004

————— S Y S T E M —————

OS:Red Hat Enterprise Linux Server release 6.0 (Santiago)

uname:Linux 2.6.32-573.12.1.el6.x86_64 #1 SMP Tue Dec 15 06:42:08 PST 2015 x86_64
libc:glibc 2.12 NPTL 2.12
rlimit: STACK 10240k, CORE 0k, NPROC 16384, NOFILE 65536, AS infinity
load average:0.09 0.06 0.08

CPU:total 1 em64t

Memory: 4k page, physical 2046684k(69828k free), swap 2031612k(2031612k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (1.5.0_51-b10) for linux-amd64, built on Jun 6 2013 09:59:46 by java_re with gcc 3.2.2 (SuSE Linux)

time: Sat Jan 2 23:09:21 2016
elapsed time: 2 seconds

The workaround is to set the LD_BIND_NOW environment variable to 1, as shown below, and re-launch the installer.

bash-4.1$ export LD_BIND_NOW=1
bash-4.1$ ./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB. Actual 27339 MB Passed
Checking swap space: must be greater than 150 MB. Actual 4031 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-07-01_03-29-40AM. Please wait …
bash-4.1$

This bug seems to have been reported on 11.2.0.1 and 11.2.0.3.

Hope That Helps
Prashant Dixit

Posted in Advanced, Basics | Tagged: , | 1 Comment »

Using DBCA (DB Create) in Silent Mode.

Posted by FatDBA on August 23, 2013

DBCA is one of the most important and easiest utilities for creating new databases. Most of us have used it in GUI mode during our careers, but there is also an option to launch it in non-graphical (non-interactive) mode. The steps below can be used to create a database in silent/non-interactive mode, so you can run DBCA in text mode from your favourite SSH client/terminal (PuTTY, SecureCRT, SSH Tectia Client, etc.) and save yourself from the 'old' manual database creation method (running catproc and catalog SQL scripts and creating dump directories by hand).

To create a database, DBCA requires a response file that includes all necessary details such as GDBNAME (Global Database Name), SID, listener name, and the SYS/SYSTEM/DBSNMP/SYSMAN passwords.
Browse to the response file location (ORACLE_HOME/assistants/dbca) and make all mandatory changes.

Step 1: Location of response file
[oracle@prashant dbca]$ pwd
/u01/app/oracle/product/11.2.0/db_1/assistants/dbca    — Default Location of Response File.

[oracle@prashant dbca]$ ls
dbca.rsp  doc  jlib  logs  templates

Create a copy of original file.
[oracle@prashant dbca]$ cp dbca.rsp  dbcacreatedb.rsp

[oracle@prashant dbca]$ ls
dbcacreatedb.rsp  dbca.rsp  doc  jlib  logs  templates

Step 2:
Edit all mandatory changes in response file.
GDBNAME = "silent"
SID = "silent"
SYSPASSWORD = "oracle90"
SYSTEMPASSWORD = "oracle90"
SYSMANPASSWORD = "oracle90"
DBSNMPPASSWORD = "oracle90"
CHARACTERSET   = "US7ASCII" (Default)
NATIONAL CHARACTERSET = "UTF8" (Default)

There are various sections inside the .rsp response file, such as createDatabase, deleteDatabase, createTemplateFromDB and createCloneTemplate. In this post we talk only about 'createDatabase', which is the area of interest here (it is used to create a new database).

All Examples :
—————————–
1. dbca -progress_only -responseFile <response file>
Display a progress bar depicting progress of database creation
process.

2. dbca -silent -responseFile <response file>
Creates database silently. No user interface is displayed.

3. dbca -silent -createDatabase -cloneTemplate
-responseFile <response file>    
Creates database silently with clone template. The template in
responsefile is a clone template.

4. dbca -silent -deleteDatabase -responseFile <response file>
Deletes database silently.
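
As a hedged aside, DBCA can also run fully from the command line without a response file; the flag names and values below are purely illustrative, so verify them with dbca -help on your release first:

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName silent2 -sid silent2 \
  -sysPassword oracle90 -systemPassword oracle90 \
  -characterSet AL32UTF8 \
  -totalMemory 1024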

Step 3:

So, we are going to create the database using the -createDatabase option (as in example 3 above) but without -cloneTemplate. This creates the database silently and shows progress in percentages.
[oracle@prashant dbca]$ dbca -silent -createDatabase -responseFile dbcacreatedb.rsp
Copying database files
1% complete
3% complete
11% complete
18% complete
26% complete
37% complete
Creating and starting Oracle instance
40% complete
45% complete
50% complete
55% complete
56% complete
60% complete
62% complete
Completing Database Creation
66% complete
70% complete
73% complete
85% complete
96% complete
100% complete
Look at the log file “/u01/app/oracle/cfgtoollogs/dbca/silent/silent.log” for further details.

It seems that the database creation completed successfully. Let's check the creation log file.
File: “/u01/app/oracle/cfgtoollogs/dbca/silent/silent.log”

Okay, while checking the log file I found these entries recorded after the database was created:

Database creation complete. For details check the log-files at:
/u01/app/oracle/cfgtoollogs/dbca/silent.
Database Information:
Global Database Name:silent
System Identifier(SID):silent

[oracle@prashant dbca]$ cd /u01/app/oracle/cfgtoollogs/dbca/silent
[oracle@prashant silent]$ ls -ltr
total 104
-rw-r----- 1 oracle oinstall  1372 Aug 23 11:47 rmanRestoreDatafiles.sql
-rw-r----- 1 oracle oinstall   330 Aug 23 11:49 CloneRmanRestore.log
-rw-r----- 1 oracle oinstall  1135 Aug 23 11:50 cloneDBCreation.log
-rw-r----- 1 oracle oinstall     8 Aug 23 11:50 postScripts.log
-rw-r----- 1 oracle oinstall     6 Aug 23 11:50 lockAccount.log
-rw-r----- 1 oracle oinstall   349 Aug 23 11:51 postDBCreation.log
-rw-r----- 1 oracle oinstall   458 Aug 23 11:51 OraDb11g_home1_silent_creation_checkpoint.xml
-rw-r----- 1 oracle oinstall 66392 Aug 23 11:51 trace.log
-rw-r----- 1 oracle oinstall   654 Aug 23 11:51 silent.log

* There are some interesting files generated during DB creation like CloneRmanRestore.log, cloneDBCreation.log, postDBCreation.log and rmanRestoreDatafiles.sql.

Let’s check our new database ‘Silent’.
[oracle@prashant silent]$ ps -ef|grep pmon
oracle   12671     1  0 Aug20 ?        00:00:03 ora_pmon_sairam
oracle   20771     1  0 11:50 ?        00:00:00 ora_pmon_silent
oracle   22315 18308  0 13:06 pts/2    00:00:00 grep pmon

Okay, we have a PMON process for the database (silent).
[oracle@prashant silent]$ . oraenv
ORACLE_SID = [sairam] ? silent
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle
[oracle@prashant silent]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 23 13:07:04 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
SILENT    READ WRITE

 

Thanks
Prashant Dixit
“Sharing is Good”

Posted in Advanced, Basics | Tagged: , , | Leave a Comment »

 