Step-by-step guide for upgrading and migrating to Oracle Database 12c Release 2 (12.2)

Create the directories for the new Oracle Grid home and the new database home.

mkdir -p /u01/app/12.2.0/grid
chown grid:oinstall /u01/app/12.2.0/grid
mkdir -p /u01/app/oracle/product/12.2.0/db_1
chown oracle:oinstall /u01/app/oracle/product/12.2.0/db_1

Apply the latest patch:

Remove the old OPatch version from both homes, then copy in the new one:

rm -rf /u01/app/12.1.0/grid/OPatch
rm -rf /u01/app/oracle/product/12.1.0/db_1/OPatch

[root@ggsrv1 ~]# cd /media/sf_Software_and_seeds/Patch_software
[root@ggsrv1 Patch_software]# cp -R OPatch /u01/app/12.1.0/grid/
[root@ggsrv1 Patch_software]# cp -R OPatch /u01/app/oracle/product/12.1.0/db_1/
[root@ggsrv1 Patch_software]# chown -R grid:oinstall /u01/app/12.1.0/grid/OPatch
[root@ggsrv1 Patch_software]# chown -R oracle:oinstall /u01/app/oracle/product/12.1.0/db_1/OPatch
[root@ggsrv1 Patch_software]# /u01/app/12.1.0/grid/OPatch/opatch version

As root, apply the patch to both homes using opatchauto:

/u01/app/12.1.0/grid/OPatch/opatchauto apply /media/sf_Software_and_seeds/Patch_software/21255373 -log /tmp/21255373_response.txt

Upgrade Oracle Grid Infrastructure

Note: Oracle Grid Infrastructure 12.2 adds new features that require significantly more disk space in the CRS disk group, so you must expand the CRS disk group before you can upgrade Oracle Grid to the new release. In this environment, the CRS disk group was increased to nearly 50 GB.
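A disk can be added to the CRS disk group from the ASM instance; a sketch, where the disk path is illustrative and must be replaced with a candidate disk on your system:

```sql
-- Connect to the ASM instance as SYSASM; the disk path below is an example only
ALTER DISKGROUP CRS ADD DISK '/dev/oracleasm/disks/CRS2';

-- Verify the new size and free space of the disk group
SELECT name, total_mb, free_mb FROM v$asm_diskgroup WHERE name = 'CRS';
```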

Before proceeding with the upgrade, you must shut down all Oracle databases that use Oracle ASM for storage.


Log in as the grid user, open a terminal window, and change to the directory where the Oracle Grid Infrastructure installation file was copied.

[root@ggsrv1 ~]# sudo su - grid
[grid@ggsrv1 ~]$ cd Downloads/
[grid@ggsrv1 Downloads]$ ls
[grid@ggsrv1 Downloads]$

Unzip the file to the new Grid home directory.

[grid@ggsrv1 Downloads]$ unzip <grid_software_zip> -d /u01/app/12.2.0/grid

In the terminal window, change the current directory to the new Oracle Grid home directory.

[grid@ggsrv1 Downloads]$ cd /u01/app/12.2.0/grid
[grid@ggsrv1 grid]$

Set the ORACLE_HOME variable to the new Oracle Grid home directory, then run the installer (gridSetup.sh in 12.2):

[grid@ggsrv1 grid]$ export ORACLE_HOME=/u01/app/12.2.0/grid
[grid@ggsrv1 grid]$ ./gridSetup.sh



In the PuTTY window, verify that the upgrade completed successfully:

/u01/app/12.2.0/grid/bin/crsctl check cluster -all
/u01/app/12.2.0/grid/bin/crsctl query crs activeversion
[root@ggsrv1 ~]# sudo su - grid
[grid@ggsrv1 ~]$ cp /u01/app/12.1.0/grid/network/admin/*.ora $ORACLE_HOME/network/admin

Upgrading Oracle Database

Installing Oracle Database 12.2 Software

cd /media/sf_staging/12.2/database/
mkdir -p /u01/app/oracle/product/12.2.0/db_1
chown oracle:oinstall /u01/app/oracle/product/12.2.0/db_1


In the PuTTY session, switch the current user to oracle, then run the Pre-Upgrade Information Tool (preupgrade.jar) with the following command. This is a single-line command; make sure you copy all of it before pasting it into the PuTTY window.

/u01/app/oracle/product/12.1.0/db_1/jdk/bin/java -jar /u01/app/oracle/product/12.2.0/db_1/rdbms/admin/preupgrade.jar FILE TEXT DIR /home/oracle/scripts

The command should give the following output:
Preupgrade generated files:


As sysdba, run the preupgrade_fixups.sql script:

sqlplus / as sysdba
SQL> @/home/oracle/scripts/preupgrade_fixups.sql
-- to obtain the list of invalid objects
-- compile invalid objects
-- to obtain the list of materialized views
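The checks mentioned in the comments above can be run, for example, with the standard dictionary views and the utlrp.sql recompile script:

```sql
-- List invalid objects
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';

-- Recompile all invalid objects
@?/rdbms/admin/utlrp.sql

-- List materialized views
SELECT owner, mview_name FROM dba_mviews;
```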

Gather optimizer statistics on the dictionary objects by running the following command as the SYS user. This is a recommended action to reduce the upgrade time.
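For example, as SYS:

```sql
-- Gather optimizer statistics on dictionary objects before the upgrade
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
```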


Purge the Recycle Bin
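For example, as SYS:

```sql
-- Purge all objects from every user's recycle bin
PURGE DBA_RECYCLEBIN;
```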


Make sure that the parameter SEC_CASE_SENSITIVE_LOGON is set to TRUE.
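A quick way to check the current value and, if needed, set it back to TRUE:

```sql
SHOW PARAMETER sec_case_sensitive_logon

-- If the parameter is FALSE, set it to TRUE
ALTER SYSTEM SET sec_case_sensitive_logon = TRUE SCOPE=BOTH;
```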


Upgrading Database

Start the DBUA utility by issuing the following commands:

export ORACLE_HOME=$ORACLE_BASE/product/12.2.0/db_1
$ORACLE_HOME/bin/dbua


Post-upgrade Tasks

SQL> select * from v$version;
SQL> set line 200
SQL> select * from v$version;
BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release - 64bit Production              0
PL/SQL Release - Production                                                    0
CORE      Production                                                                0
TNS for Linux: Version - Production                                            0
NLSRTL Version - Production                                                    0

In the PuTTY window, switch the current user to oracle and fix the ORACLE_HOME variable setting in .bash_profile.

vi .bash_profile

Copy the tnsnames.ora file from the old Oracle home directory to the upgraded Oracle home directory.

cp /u01/app/oracle/product/12.1.0/db_1/network/admin/*.ora /u01/app/oracle/product/12.2.0/db_1/network/admin

Make sure the current user in the PuTTY session is oracle, then start SQL*Plus and execute the postupgrade_fixups.sql script:

[oracle@ggsrv1 ~]$ sqlplus / as sysdba
SQL*Plus: Release Production on Wed Jan 2 00:52:21 2019
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production

SQL> @/home/oracle/scripts/postupgrade_fixups.sql
SQL> !

[oracle@ggsrv1 ~]$  srvctl status database -d db1
Database is running.

OCM 12c Preparation

Today (19th Dec 2018) I am starting my Oracle OCM 12c study project. I will list here the topics and some study guides for each of them, so I can exchange ideas with anyone who is also applying, interested, or already has these skills. Please comment on the post for each topic if you find any more valuable information to add.

My target is to take the exam by Dec/2019.

Exam dates:



Hadoop Installation with Single DataNode on Ubuntu (VirtualBox)

Download link!

The above document contains :

01) Updating the Ubuntu repository.
02) Installing openjdk-6-jdk.
03) Installing openssh-server.
04) Downloading and installing Hadoop.
05) Configuring the Hadoop environment.
06) Creating an SSH key.
07) Formatting the NameNode.
08) Starting the NameNode and DataNode.
09) Starting the TaskTracker and JobTracker.
10) Checking whether Hadoop started correctly.

How to solve an issue caused by a poor cardinality estimate in SQL Server

The table statistics indicate that the tables involved have very few rows, but in reality, the row count is much higher.

Updating statistics for the tables involved will improve performance.

Rebuild Table Statistics:

-- Execute the following query to rebuild the statistics held for the 
-- Proseware.Campaign and Proseware.CampaignResponse tables.
ALTER TABLE Proseware.Campaign REBUILD;
ALTER TABLE Proseware.CampaignResponse REBUILD;
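An alternative that refreshes the statistics directly, without rebuilding the table structures, is UPDATE STATISTICS; a sketch against the same tables:

```sql
-- Refresh all statistics on both tables using a full scan of the data
UPDATE STATISTICS Proseware.Campaign WITH FULLSCAN;
UPDATE STATISTICS Proseware.CampaignResponse WITH FULLSCAN;
```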

Let's re-run the query and capture the execution plan.

Compare the Execution Plan:

Note that the estimated and actual row counts now match almost exactly. The query will execute faster than it did previously. The execution plan includes a suggestion for an index that would further improve query performance.

Create an Azure SQL Database

This document contains the steps for:
1. Creating a free-trial Azure subscription.
2. Creating an Azure SQL Database.
3. Configuring firewall rules for your Azure SQL Database server.
4. Installing and connecting from a client tool.
5. Setting up Microsoft Visual Studio.

Create_an_Azure_SQL_Database <download link>

Soft parsing of SQL statements was consuming significant database time.

Every query must be parsed before it is executed. If some queries are used quite frequently, a good design reuses the parsed query: rather than parsing the same statement again (provided that bind variables are used), the existing parsed cursor is reused and then executed. In an ideal world, executes should outnumber parses; more parsing requires more CPU.

When executes exceed parses, the execute-to-parse ratio is positive. The ratio goes down when there are more parses than executes, which means some queries are parsed but rarely or never executed.

Causes could be one of the following:

  • There might be no "prepared statement caching" in the Java layer or in the JDBC connection pool.


Enable statement caching in Java (a minimal sketch using Oracle JDBC implicit statement caching; the connection variable is illustrative):

// Enable implicit statement caching on an Oracle JDBC connection
OracleConnection oconn = (OracleConnection) conn;
oconn.setImplicitCachingEnabled(true);
oconn.setStatementCacheSize(50);  // cache up to 50 parsed statements
  • It might be the case that cursors are closed before the query is executed again.
  • There might not be enough "session cached cursors" to hold the queries for reuse.


Fallback, if you cannot change the application (Java) to use statement caching:

session_cached_cursors = X

  • There might be cases where new queries come into play quite frequently.
  • There might be connection interruptions; these need to be investigated from the network end.
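To gauge how well the session cursor cache is working, the cache hits can be compared against total parse counts:

```sql
-- Compare session cursor cache hits with overall parse activity
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)',
                'parse count (hard)',
                'session cursor cache hits');
```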

The SQL query below helps identify SQL statements that are parsed but never or rarely executed:

set linesize 200;
set pagesize 1000;
col sql_text format a40;
SELECT sq.sql_text, st.executions_total, st.parse_calls_total
     , round(100*(1-(st.parse_calls_total/st.executions_total)),2) execute_to_parse
     , st.executions_delta, st.parse_calls_delta
     , round(100*(1-(st.parse_calls_delta/st.executions_delta)),2) delta_ratio
FROM   dba_hist_snapshot s, dba_hist_sqlstat st, dba_hist_sqltext sq
WHERE  s.snap_id = st.snap_id
AND    s.begin_interval_time >= to_date('2018-10-19 01:00 pm','YYYY-MM-DD HH:MI pm')
AND    s.end_interval_time   <= to_date('2018-10-19 03:00 pm','YYYY-MM-DD HH:MI pm')
AND    st.sql_id = sq.sql_id
AND    st.parsing_schema_name = '<schema_owner>'
AND    st.executions_total != 0
AND    st.executions_delta != 0
ORDER BY delta_ratio;

How to solve high ITL waits for given segments

When segments (a table and its indexes) encounter high ITL (Interested Transaction List) waits, it is usually because of frequent inserts against blocks that have insufficient space to support the concurrency demanded of them. When a transaction starts, it marks the blocks it is "interested in" modifying by placing an entry in each block's ITL. By default, the INITRANS value is 1 for a table and 2 for an index.

If there are insufficient free ITL slots, new ones are created in the free space reserved in the block. If that space runs out and too many concurrent DML transactions compete for the same data block, we observe contention on the wait event "enq: TX – allocate ITL entry".
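Segments currently suffering ITL waits can be identified from V$SEGMENT_STATISTICS, for example:

```sql
-- Find segments with ITL waits, worst first
SELECT owner, object_name, object_type, value AS itl_waits
FROM   v$segment_statistics
WHERE  statistic_name = 'ITL waits'
AND    value > 0
ORDER  BY value DESC;
```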


STATISTIC_NAME                   VALUE
--------------------------- ----------
logical reads                 25024144
db block changes               5300608
physical reads                  815272
physical writes                 260050
physical read requests          815272
physical write requests         234212
gc cr blocks received              281
gc current blocks received      698263
ITL waits                          575
row lock waits                   11068
space used                   108721232
space allocated              268435456

The wait event affecting COMMIT/ROLLBACK is "enq: TX – allocate ITL entry", as seen in the AWR report:


Enqueue Type (Request Reason)          Succ Gets   Failed Gets
TX-Transaction (allocate ITL entry)

Segments by ITL Waits (from AWR):

Tablespace Name   Object Name   Subobject Name   Obj. Type   ITL Waits   % of Capture

There is a small section of predefined ITL slots in the block header, and the ITL structure grows dynamically as concurrency occurs; if there is no space left to handle the concurrency, waits occur. The primary cause of ITL waits is that no free slots are available in the block's ITL. The lack of slots can be due to a low setting of INITRANS and MAXTRANS (which, respectively, reserve minimal ITL slots initially and place a hard limit on the number of transactions that can hold locks on a block), and/or a block so tightly packed that there is no room for the ITL to grow.

Setting a high value of INITRANS helps ensure there are enough free slots in the ITL to hold all concurrent transactions, with the goal of minimal or no dynamic extension of the ITL. However, it also means there is less space in the block for actual data, potentially increasing wasted space.

Another option is to make the data less packed so that the ITL can grow enough to accommodate surges. This can be done by increasing the PCTFREE, FREELISTS, and FREELIST GROUPS parameters for a table. A block then holds less data, leaving more room for the ITL to grow. As a direct result of the reduced packing, the table may also experience fewer buffer busy wait events, improving performance.

The main solution to this issue is to increase the ITL capability of the table or index by re-creating it and altering the INITRANS or PCTFREE parameter to be able to handle more concurrent transactions. This in turn will help to reduce “enq: TX – allocate ITL entry” wait events.

To reduce "enq: TX – allocate ITL entry" wait events, follow the steps below:

A combination of increasing both INITRANS and PCTFREE:

1) Set INITRANS to 50 and PCTFREE to 20:

alter table <table_name> PCTFREE 20  INITRANS 50;

2) Re-organize the table using move (alter table <table_name> move;)

3) Then rebuild all the indexes of the table as below

alter index <index_name>  rebuild PCTFREE 20 INITRANS 50;
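The new settings can then be verified from the data dictionary (the object names below are placeholders):

```sql
-- Confirm INITRANS and PCTFREE on the table and its indexes
SELECT table_name, ini_trans, pct_free
FROM   dba_tables
WHERE  table_name = '<TABLE_NAME>';

SELECT index_name, ini_trans, pct_free
FROM   dba_indexes
WHERE  table_name = '<TABLE_NAME>';
```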


Note: Moving the table

Advantages:
1. The operation is fast, especially compared to a shrink.
2. It can be performed using simple commands.

Drawbacks:
1. The table is not available for DML during the move operation.
2. Additional space is needed.
3. Indexes must be rebuilt manually after the move.

Additional note: ALTER TABLE ... MOVE ONLINE can be used only for:
-> Index-organized tables, and nested table storage tables that are index-organized. With the ONLINE clause, DML operations on the IOT are allowed while the primary key index of the table is rebuilt.
-> Moving partitions and subpartitions online.



My first Udemy course: Step-by-Step Instructions for Creating a Physical Standby Database (RMAN active duplication)


Kindly enroll in this course and give me your valuable suggestions, so I can improve the content of the course accordingly:

This course does not go into detail about what Oracle Data Guard is, but it helps you build a physical standby database as quickly as possible for your further practice.

What are the requirements?

Basic knowledge of Database Administration
Standard laptop / desktop for practicing course assignments
Windows / Mac / Linux PC, Minimum 8 GB RAM and 100 GB HDD Storage
Virtualization software VirtualBox – Create two machines
OEL 5.9 / 6.5 ISO image file to install Linux on Virtual Box

What am I going to get from this course?

Database creation using DBCA.
Preparing the Primary Database for Standby Database Creation.
Preparing the Standby system
Step-by-Step Instructions for Creating a Physical Standby Database (RMAN Active duplication)
Post-Creation Steps.
Bonus Topic: Oracle 12CR2: Creating PDBs from the Seed.

What is the target audience?

Anyone who would like to learn Oracle Data Guard technology
Working professionals who are looking for guidance on creating a Physical Standby Database

Oracle 12cR2: Creating a PDB from the Seed

• Create a PDB using SQL*Plus

• Delete a PDB using the DBCA

• Add the created Pluggable Database net service name in the tnsnames.ora file
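The first step can be sketched as follows; the PDB name, admin credentials, and file paths are illustrative and must be adapted to your environment:

```sql
-- Create a PDB from the seed (PDB$SEED); names and paths are examples only
CREATE PLUGGABLE DATABASE pdb2
  ADMIN USER pdbadmin IDENTIFIED BY Welcome1
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/CDB1/pdbseed/',
                       '/u01/app/oracle/oradata/CDB1/pdb2/');

-- Open the new PDB and save its state so it reopens after a CDB restart
ALTER PLUGGABLE DATABASE pdb2 OPEN;
ALTER PLUGGABLE DATABASE pdb2 SAVE STATE;
```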