
Soft parsing of SQL statements was consuming significant database time.

Every query must be parsed before it is executed. If some queries are used frequently, a good design reuses the parsed query: provided we are using bind variables, the same query is not parsed again; instead the existing parsed cursor is reused and executed. In an ideal world, executes should far outnumber parses, because more parsing requires more CPU.

Ideally, when executes outnumber parses, the execute-to-parse figure is positive. The ratio goes down if there are more parses than executes, which means some queries are parsed but rarely or never executed.
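The parse-once/execute-many idea, and the ratio formula used in the AWR query later in this section, can be sketched in a few lines. This is a toy cache only; Oracle's shared pool and JDBC statement caching are far more involved:

```python
def execute_to_parse(executions, parses):
    """Execute-to-parse figure as computed in the AWR query: 100 * (1 - parses/executions)."""
    return round(100 * (1 - parses / executions), 2)

class ToyStatementCache:
    """Parse a statement once, then reuse the parsed form on every later call."""
    def __init__(self):
        self.cache = {}
        self.parses = 0
        self.executions = 0

    def execute(self, sql):
        if sql not in self.cache:
            self.parses += 1                       # parse only once per distinct SQL text
            self.cache[sql] = sql.strip().upper()  # stand-in for a parsed cursor
        self.executions += 1                       # every call is an execution
        return self.cache[sql]

c = ToyStatementCache()
for _ in range(10):
    # with a bind variable there is only ONE distinct SQL text to parse
    c.execute("select * from emp where empno = :1")

print(c.parses, c.executions, execute_to_parse(c.executions, c.parses))  # 1 10 90.0
```

If the literal value were concatenated into the SQL text instead of bound, every call would miss the cache and the ratio would collapse toward zero.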

The cause could be one of the following:

  • There might be no "Prepared Statement caching" in the Java layer or in the JDBC connection pool.

Solution:

Enable Statement Caching in Java:

// Enable statement caching on an Oracle JDBC connection
((OracleConnection) connection).setStatementCacheSize(x);
((OracleConnection) connection).setImplicitCachingEnabled(true);
  • The cursor may have been closed before the query was executed.
  • There might not be enough "session cached cursors" to hold the queries for reuse.

 

Fallback if you cannot change the application (Java) to use statement caching:

session_cached_cursors = X
  • New queries may be coming into play quite frequently, so there is little to reuse.
  • There might be connection interruptions; this needs to be investigated from the network end.

The SQL query below will help identify the SQL statements that are parsed but never or rarely executed:

set linesize 200;
set pagesize 1000;
col sql_text format a40;
SELECT sq.sql_text, st.executions_total, st.parse_calls_total
, round( 100*(1-( st.parse_calls_total / st.executions_total )),2) execute_to_parse
, st.executions_delta, st.parse_calls_delta
, round( 100*(1-( st.parse_calls_delta / st.executions_delta )),2) delta_ratio
FROM DBA_HIST_SQLSTAT st
, DBA_HIST_SQLTEXT sq
, DBA_HIST_SNAPSHOT s
WHERE s.snap_id = st.snap_id
AND s.begin_interval_time >= to_date('2018-10-19 01:00 pm','YYYY-MM-DD HH:MI pm')
AND s.end_interval_time <= to_date('2018-10-19 03:00 pm','YYYY-MM-DD HH:MI pm')
AND st.sql_id = sq.sql_id
AND st.parsing_schema_name = '<schema_owner>'
AND st.executions_total != 0
AND st.executions_delta != 0
ORDER BY delta_ratio;

Oracle 12cR2: Creating a PDB from the Seed

• Create a PDB using SQL*Plus

• Delete a PDB using the DBCA

• Add the created Pluggable Database net service name in the tnsnames.ora file

Upgrade the Oracle RAC Grid Infrastructure from release 12.1.0.2 to release 12.2.0.1.

Upgrade Oracle Grid Infrastructure:

Note: Oracle Grid Infrastructure 12.2 has added new features that require plenty of disk space in the CRS disk group. Therefore, you must expand the size of the CRS disk group before you can upgrade Oracle Grid to the new release.

Expand the CRS disk group in ASM.
## Shut down srv1 and srv2.
## Add a new fixed-size sharable disk to srv1. Give it the name DISK4 and set its size to 40 GB.

In Oracle VirtualBox, click on srv1 >> click on Settings >> click on Storage in the right pane >> click on Controller: SATA >> click on Add Hard Disk >> click on Add a New Disk >> select the VDI option >> make it fixed-size >> enter the full-path filename of the disk >> set its size to 40 GB

## Once the disk is created, make it sharable.
File menu item >> Virtual Media Manager >> select the created disk DISK4 >> click on the Modify button >> choose the option to make this file shareable >> press Ok >> press Close
## Link the new disk to srv2.
Click on srv2 >> click on Settings >> click on Storage in the right pane >> click on Controller: SATA >> click on Add Hard Disk >> click on the Existing Disk button >> navigate to the new disk file and press OK


Start srv1 and wait for its OS to load.
## Start Putty and login to srv1 as root user. Format the added disk.
## display all the available disks:
ls -l /dev/sd*
# format the disk:
# answer "n", "p","1", default, default, "w" when prompted
fdisk /dev/sde

## Add the partitioned disk to the ASM recognized disk list.
oracleasm listdisks
oracleasm createdisk DISK4 /dev/sde1
oracleasm listdisks

Start srv2 and wait for its OS to load.
## Start Putty and login to srv2 as root user.
## Scan the ASM disks and make sure DISK4 is seen by srv2.

oracleasm scandisks
oracleasm listdisks

## Login to the VirtualBox window of srv1 as grid.

## Start asmca utility and add DISK4 to the CRS disk group.
Right-click on the CRS disk group >> select Add Disks >> select DISK4 then click on OK button
You should see the CRS disk group size increased to nearly 50GB, as shown in the following screenshot:

[Screenshot: up20.JPG]

## Make sure you have Putty sessions connected to srv1 and srv2 as root.
## Create the directory of the new Oracle Grid home. Make grid the owner of the home directory.

##srv1
mkdir -p /u01/app/12.2.0/grid
chown grid:oinstall /u01/app/12.2.0/grid

##Create the same directory in srv2.

ssh srv2
mkdir -p /u01/app/12.2.0/grid
chown grid:oinstall /u01/app/12.2.0/grid
exit

##In the VirtualBox window of srv1, login as grid user

##Unzip the linuxx64_12201_grid_home.zip file to the new Grid home directory.

unzip linuxx64_12201_grid_home.zip -d /u01/app/12.2.0/grid

##In the terminal windows, change the current directory to the new Oracle Grid directory.

cd /u01/app/12.2.0/grid
export ORACLE_HOME=/u01/app/12.2.0/grid
./gridSetup.sh

 

Click on the SSH Connectivity button.

Enter the OS grid user password.

Click on the Test button. If it reports that SSH connectivity is not configured, click on the Setup button.


 

Make sure asmadmin and asmdba are selected.


Make sure /u01/app/grid is selected.

Mark the check box "Automatically run configuration scripts".

Enter the root password


Set the upgrade on srv2 to Batch 2.

If you keep it in Batch 1, the system will not be available while the upgrade is going on.


If you receive an error about the package "cvuqdisk-1.0.10-1" being unavailable on the system, click on the "Fix and Check Again" button.

The following warnings can be ignored (on a production system they must be addressed):

– Memory is less than 8 GB

– resolv.conf Integrity

– (Linux)resolv.conf Integrity

Select the Ignore All check box, then click on the Next button.

Note: You could have run the Cluster Verification Utility (CVU) before running the installer. Running it from within the installer gives the same results.


Click on the Install button.


In the Putty window, verify that the upgrade completed successfully:

/u01/app/12.2.0/grid/bin/crsctl check cluster -all
/u01/app/12.2.0/grid/bin/crsctl query crs activeversion

How to Audit User Activity in Oracle Database

Step 1.

SQL> alter system set audit_trail=db scope=spfile;
System altered.

SQL> alter system set audit_sys_operations=true scope=spfile;

Step 2. Stop the database.

Step 3. Start the database.

Step 4. Check the parameters:
SQL> show parameter audit_
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/db/product/10.2.0/rdbms/audit
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string
audit_trail                          string      DB

Step 5. Log in as sysdba.

SQL> audit all by apps by access;

OR
Or audit only specific delete, drop, and alter operations:

SQL> audit DELETE ANY TABLE, DROP ANY INDEX, DROP ANY PROCEDURE, DROP ANY TABLE, DROP ANY VIEW, ALTER ANY PROCEDURE, ALTER ANY INDEX by apps by access whenever successful;

Step 6. Check the operations (log in as sysdba):

SQL> SELECT username, extended_timestamp, owner, obj_name, action_name
     FROM dba_audit_trail
     WHERE owner = 'APPS' AND extended_timestamp >= sysdate - 1
     ORDER BY timestamp;

SQL> SELECT count(1) FROM dba_audit_trail
     WHERE owner = 'APPS' AND extended_timestamp >= sysdate - 1;

SQL> SELECT count(1) FROM dba_audit_trail WHERE owner = 'APPS';

Step 7. Check how many records are in the audit table:
select count(1) from sys.aud$;

Some basic education on how to read an Oracle error stack

The errors are given in descending order: the topmost error is typically the application error. The errors beneath it go into more and more technical/explicit detail of what went wrong, usually down to the operating system level.

Thus the cause of the (application) exception at the top of the error stack is the (system) exception at the bottom of the stack. If we only look at the top and bottom errors, then:

ORA-00204: error in reading (block 3, # blocks 8) of control file

Oracle error, saying that it failed attempting to read a file. Why? The bottom error says:

O/S-Error: (OS 23) Data error (cyclic redundancy check).

O/S error obviously means that this is from the operating system. It tried to perform the file I/O requested by Oracle (application layer) and it failed. It reports a CRC data error on the file.

So is this an Oracle problem? No, the root error is from the o/s and it reports a CRC error on a file.

So should you have even asked for assistance here? Or should your first stop not have been going on Google (or Bing/Yahoo/whatever) and searching "+windows+" (the o/s in question) and "+cyclic redundancy check+" (the error reported by the o/s)? Perhaps even throw "+ntfs+" into the search too (assuming that this is the file system being used).

One of the most important skills for a software engineer is the ability to solve problems, not to memorise technical details (like what a CRC error is). The technical stuff you can look up in the reference docs and material. But if you do not try to understand a simple error stack and analyse the problem, you will never acquire the skill to solve problems, and you will always be reliant on the skills of others to spoonfeed you and tell you what the problem is.
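The reading discipline described above (symptom at the top, root cause at the bottom) can be captured in a tiny helper. This is an illustration only, using the two error lines quoted earlier:

```python
def read_error_stack(stack_text):
    """Return (application_error, root_cause) from an Oracle-style error stack.

    The first line is the symptom the application saw; the last line is
    usually the lowest-level (often OS) root cause.
    """
    errors = [line.strip() for line in stack_text.splitlines() if line.strip()]
    return errors[0], errors[-1]

stack = """\
ORA-00204: error in reading (block 3, # blocks 8) of control file
O/S-Error: (OS 23) Data error (cyclic redundancy check)."""

symptom, root_cause = read_error_stack(stack)
print("symptom:   ", symptom)
print("root cause:", root_cause)
```

Searching on the root cause (the bottom line), not the symptom, is what leads to the actual fix.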

CLONE ORACLE DATABASE WITH COLD BACKUP:

1. Identify and copy the database files

With the source database started, identify all of the database’s files. The following query will display all datafiles, tempfiles and redo logs:

set lines 100 pages 999
col name format a50

select name, bytes
from (select name, bytes
      from v$datafile
      union all
      select name, bytes
      from v$tempfile
      union all
      select lf.member "name", l.bytes
      from v$logfile lf, v$log l
      where lf.group# = l.group#) used
   , (select sum(bytes) as poo
      from dba_free_space) free
/

OR

SQL>Select name from v$datafile;

SQL>Select member from v$logfile;

Make sure that the clone databases file-system is large enough and has all necessary directories.

If the source database has a complex file structure, you might want to consider modifying the above SQL to produce a file copy script.
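One way to generate such a copy script is to feed the file names from v$datafile into a small helper. The paths and host name below are hypothetical examples, and the simple directory substitution assumes all files live under one source directory; adjust the mapping to your layout:

```python
def copy_script(files, src_dir, dst_dir, host=None):
    """Emit one cp (or scp, if a remote host is given) command per database file,
    rewriting the leading source directory to the target directory."""
    cmds = []
    for f in files:
        target = f.replace(src_dir, dst_dir, 1)  # rewrite only the leading directory
        if host:
            cmds.append(f"scp {f} {host}:{target}")
        else:
            cmds.append(f"cp {f} {target}")
    return cmds

# Hypothetical datafile names as returned by "select name from v$datafile"
files = ["/u01/oradata/BST2/system01.dbf", "/u01/oradata/BST2/users01.dbf"]
for cmd in copy_script(files, "/u01/oradata/BST2", "/u02/oradata/BST4", host="srv2"):
    print(cmd)
```

Remember the rules from the steps above still apply: the source database must be shut down before the copy, and the control files must be excluded from the list.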

Stop the source database with:

shutdown immediate

Copy, scp or ftp the files from the source database/machine to the target.

Do not copy the control files across. Make sure that the files have the correct permissions and ownership.

Start the source database up again

startup

2. Produce a pfile for the new database

This step assumes that you are using a spfile. If you are not, just copy the existing pfile.

From sqlplus:

create pfile='init.ora' from spfile;

This will create a new pfile in the $ORACLE_HOME/dbs directory.

Once created, the new pfile will need to be edited. If the cloned database is to have a new name, this will need to be changed, as will any paths. Review the contents of the file and make alterations as necessary.

Also think about adjusting memory parameters. If you are cloning a production database onto a slower development machine you might want to consider reducing some values.

Now open the parameter file in clone database and change the following parameters with the respective current location.

CONTROL FILES

BACKGROUND_DUMP_DEST

USER_DUMP_DEST

CORE_DUMP_DEST

LOG_ARCHIVE_DEST_1

Place the BST4 pfile in $ORACLE_HOME/dbs.

Note: Pay particular attention to the control file locations.

3. Create the clone controlfile

Create a control file for the new database. To do this, connect to the source database and request a dump of the current control file. From sqlplus:

alter database backup controlfile to trace as '/home/oracle/cr_.sql'

/

4. Edit the file

FROM: CREATE CONTROLFILE REUSE DATABASE "BST2" RESETLOGS ...

TO: CREATE CONTROLFILE SET DATABASE "BST4" RESETLOGS ...

Change the word 'REUSE' to 'SET' and 'BST2' to 'BST4'. Also change the datafile location paths to the BST4 database location.
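These two edits are mechanical enough to script. A sketch, using the BST2/BST4 names from the example (it assumes the old database name does not appear anywhere it should be kept):

```python
def rewrite_controlfile_script(text, old_db, new_db):
    """Apply the clone edits to a controlfile trace script:
    REUSE -> SET with the new DB name, then rename remaining paths."""
    text = text.replace(
        f'CREATE CONTROLFILE REUSE DATABASE "{old_db}"',
        f'CREATE CONTROLFILE SET DATABASE "{new_db}"')
    # also rewrites datafile/logfile paths that embed the old name
    return text.replace(old_db, new_db)

line = 'CREATE CONTROLFILE REUSE DATABASE "BST2" RESETLOGS'
print(rewrite_controlfile_script(line, "BST2", "BST4"))
```

Always eyeball the resulting script before running it; the trace file also contains commented-out NORESETLOGS variants that should be removed.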


Place the BST4 pfile in /DV1_u31/oraBST2/db/tech_st/10.2.0/dbs.

5. In clone database SERVER export ORACLE_SID environment variable and start the instance

$export ORACLE_SID=bst4

$sqlplus

Enter user-name: / as sysdba

SQL> startup nomount pfile='initBST4.ora';

6. Run create controlfile script to create the controlfile

SQL>@createcontrolfile.sql

Troubleshooting:

It is quite common to run into problems at this stage. Here are a couple of common errors and solutions:

ORA-01113: file 1 needs media recovery

You probably forgot to stop the source database before copying the files. Go back to step 1 and recopy the files.

ORA-01503: CREATE CONTROLFILE failed

ORA-00200: controlfile could not be created

ORA-00202: controlfile: '/u03/oradata/dg9a/control01.ctl'

ORA-27038: skgfrcre: file exists

Double check the pfile created in step 2. Make sure the control_files setting is pointing at the correct location. If the control_files setting is OK, make sure that the control files were not copied with the rest of the database files. If they were, delete or rename them.

7. Open the database

SQL>alter database open;

8. Perform a few checks

If the last step went smoothly, the database should be open.

It is advisable to perform a few checks at this point:

Check that the database has opened with:

select status from v$instance;

The status should be ‘OPEN’

Make sure that the datafiles are all ok:

select distinct status from v$datafile;

It should return only ONLINE and SYSTEM.

Take a quick look at the alert log too.

9. Set the databases global name

The new database will still have the source databases global name. Run the following to reset it:

alter database rename global_name to <new_global_name>

/

10. Create a spfile

From sqlplus:

create spfile from pfile;

11. Change the database ID

If RMAN is going to be used to back-up the database, the database ID must be changed.

If RMAN isn't going to be used, there is no harm in changing the ID anyway; it's good practice to do so.

From sqlplus:

shutdown immediate

startup mount

exit

From unix:

nid target=/

NID will ask if you want to change the ID. Respond with 'Y'. Once it has finished, start the database up again in sqlplus:

shutdown immediate

startup mount

alter database open resetlogs

/

12. Configure TNS

Add entries for new database in the listener.ora and tnsnames.ora as necessary.

13. Finished

That’s it!

How to switch the database to a new UNDO tablespace and drop the old one

STEP -1
=======

$ sqlplus / as sysdba

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1
SQL>

The current undo tablespace as suggested by the initialization parameter undo_tablespace is UNDOTBS1.
Leave this sysdba as is, open another console, log in as user SCOTT and initiate a transaction.

STEP -2
=======

-- Create a new undo tablespace

CREATE UNDO TABLESPACE undotbs2
DATAFILE '/d01/apps/oradata/oraxpo/undotbs201.dbf'
SIZE 50M AUTOEXTEND ON NEXT 5M;

Tablespace created.

STEP -3
=======

-- Switch the database to the new UNDO tablespace.

ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2 SCOPE=BOTH;

System altered.

STEP -4
=======

-- Try to drop the old tablespace; it fails.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

With ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2, the database UNDO tablespace is changed, and any new transaction's undo data will go to the new tablespace, i.e. UNDOTBS2. But the undo data for already pending transactions (e.g. the one initiated by SCOTT before the switch) is still in the old tablespace, with a status of PENDING OFFLINE. As long as it is there, you cannot drop the old tablespace.

STEP -5
=======

-- The query below shows the name of the UNDO segment in the UNDOTBS1 tablespace and its status.
Now let's see which users/sessions are running this pending transaction.

set lines 10000
column name format a10

SELECT a.name,b.status
FROM   v$rollname a,v$rollstat b
WHERE  a.usn = b.usn
AND    a.name IN (
SELECT segment_name
FROM dba_segments
WHERE tablespace_name = 'UNDOTBS1'
);

NAME       STATUS
---------- ---------------
_SYSSMU8$  PENDING OFFLINE

column username format a6

SELECT a.name,b.status , d.username , d.sid , d.serial#
FROM   v$rollname a,v$rollstat b, v$transaction c , v$session d
WHERE  a.usn = b.usn
AND    a.usn = c.xidusn
AND    c.ses_addr = d.saddr
AND    a.name IN (
SELECT segment_name
FROM dba_segments
WHERE tablespace_name = 'UNDOTBS1'
);

NAME       STATUS          USERNA        SID    SERIAL#
---------- --------------- ------ ---------- ----------
_SYSSMU8$  PENDING OFFLINE SCOTT         147          4

So this is SCOTT, with SID=147 and SERIAL#=4. Since we now know the user, we can go to him/her and request that the transaction be ended gracefully, i.e. issue a ROLLBACK or COMMIT. However, if this is not possible (say the user initiated the transaction and left for annual leave 🙂 and trust me, this happens), you may go ahead and kill the session to release the undo segments in the UNDOTBS1 tablespace.

SQL> alter system kill session '147,4';

System altered.

SELECT a.name,b.status , d.username , d.sid , d.serial#
FROM   v$rollname a,v$rollstat b, v$transaction c , v$session d
WHERE  a.usn = b.usn
AND    a.usn = c.xidusn
AND    c.ses_addr = d.saddr
AND    a.name IN (
SELECT segment_name
FROM dba_segments
WHERE tablespace_name = 'UNDOTBS1'
);

no rows selected

As we can see, once the session is killed we no longer see any segments occupied in the UNDOTBS1 tablespace.
Let's drop UNDOTBS1.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

If you are retaining undo data then you still won't be able to drop the tablespace, because it is still in use by undo_retention.
Let the UNDO_RETENTION time pass and then try to drop the tablespace. In my case it is 900 seconds, i.e. 15 minutes.
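The earliest time the old tablespace should become droppable can be estimated from the switch time plus undo_retention. This is a rough lower bound only; with automatic undo management Oracle may retain undo longer than the parameter value:

```python
from datetime import datetime, timedelta

def drop_eligible_at(switch_time, undo_retention_seconds):
    """Earliest estimated time the old undo tablespace should be droppable."""
    return switch_time + timedelta(seconds=undo_retention_seconds)

# Hypothetical example: switched at 13:00 with undo_retention = 900 s
switched = datetime(2018, 10, 19, 13, 0, 0)
print(drop_eligible_at(switched, 900))  # 15 minutes after the switch
```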

-- After 15 minutes.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

Oracle best practice: Primary and Standby archive crosscheck

Primary:

STEP -1
========

LOGIN TO THE PRIMARY SERVER

$ su - orapr1
Password:

STEP -2
=======

GET THE SEQUENCE MAX FROM V$LOG_HISTORY

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
76968

SQL> alter system switch logfile;

System altered.

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
76969

SQL> exit

=========================================

ON STANDBY SERVER:

$ ps -ef|grep pmon
oratst  2978     1   0   Sep 08 ?         147:34 ora_pmon_amantst
oracle  3039     1   0   Sep 08 ?         137:34 ora_pmon_airman
e460269 16109 16104   0 18:54:44 pts/1       0:00 grep pmon

$ su - oracle

Password:
mesg: cannot change mode

$ sqlplus

SQL*Plus: Release 10.2.0.3.0 - Production on Thu May 17 18:55:10 2012

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Enter user-name: /as sysdba

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8548

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8549

SQL> select * from v$archive_gap;
SQL> select sequence#, archived, applied, status from v$archived_log;
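The crosscheck boils down to comparing the max sequence on the primary with the max sequence received/applied on the standby. A sketch of that arithmetic (sequence numbers here are illustrative):

```python
def archive_lag(primary_max_seq, standby_max_seq):
    """Number of log sequences the standby still has to receive/apply."""
    return primary_max_seq - standby_max_seq

# After a log switch on the primary, the standby should catch up and
# the lag should return to 0; a persistent positive lag means a gap.
print(archive_lag(8549, 8548))
```

v$archive_gap, as queried above, reports the low/high sequence of any detected gap directly; the subtraction is just the quick sanity check a DBA does by eye.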

Oracle exp and imp: oracle9i to 11g

Step 1
======
Set the character set on the source server:
===========================================

In the source database:
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
Do the actual export (exp parfile=exp.par)

Step 2
======
Export the data from the source database:
=========================================

exp system/system123 file=NAS30.dmp log=NAS30.log owner=NAS30 buffer=100000000 grants=y rows=y statistics=none direct=y

Step 3
======
Extract the tablespace and user creation scripts from the source database with the help of the scripts below:
==============================================================================================================

For tablespace:
===============

select 'create tablespace ' || df.tablespace_name || chr(10)
|| ' datafile ''' || df.file_name || ''' size ' || df.bytes
|| decode(autoextensible,'N',null, chr(10) || ' autoextend on maxsize '
|| maxbytes)
|| chr(10)
|| 'default storage ( initial ' || initial_extent
|| decode (next_extent, null, null, ' next ' || next_extent )
|| ' minextents ' || min_extents
|| ' maxextents ' || decode(max_extents,'2147483645','unlimited',max_extents)
|| ') ;'
from dba_data_files df, dba_tablespaces t
where df.tablespace_name=t.tablespace_name
/

For the Users:
==============

set pagesize 0
set escape on
select 'create user ' || U.username || ' identified ' ||
DECODE(password,
NULL, 'EXTERNALLY',
' by values ' || '''' || password || ''''
)
|| chr(10) ||
'default tablespace ' || default_tablespace || chr(10) ||
'temporary tablespace ' || temporary_tablespace || chr(10) ||
' profile ' || profile || chr(10) ||
'quota ' ||
decode ( Q.max_bytes, -1, 'UNLIMITED', NULL, 'UNLIMITED', Q.max_bytes) ||
' on ' || default_tablespace ||
decode (account_status,'LOCKED', ' account lock',
'EXPIRED', ' password expire',
'EXPIRED \& LOCKED', ' account lock password expire',
null)
||
';'
from dba_users U, dba_ts_quotas Q
-- Comment this clause out to include system & default users
where U.username not in ('SYS','SYSTEM')
and U.username=Q.username(+) and U.default_tablespace=Q.tablespace_name(+)
;
set pagesize 100
set escape off

Step 4
======

FTP all the dumps and the tablespace/user scripts to the destination server:
=============================================================================

FTP:-
======
1.    Copy the file to tmp location.

#cp filename /tmp

2.    Change that file permission in the tmp location

#chmod 755 filename

3.    Open the command prompt.

In Run type CMD
>ftp source/destination ip address
>username
>password
>cd /tmp (file location)
>ls -lrt (To see all the files)
>lcd “location” (location is where you want to copy the file)
>binary (binary mode transfer)
>get filename (if it's the source IP address)
>put filename (if it's the destination IP address)
If you want to copy many files, use mget/mput with a file extension pattern (e.g. mget *.dmp).
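The interactive session above can also be captured as a generated command sequence and fed to the ftp client. A sketch only, with hypothetical paths; whether you get or put depends on which side you connected from:

```python
def ftp_session(files, local_dir, direction="get"):
    """Build the FTP command sequence mirroring the manual steps above."""
    cmds = ["cd /tmp",            # remote location of the staged files
            f"lcd {local_dir}",   # local directory to copy into/from
            "binary"]             # binary mode transfer for dump files
    verb = "get" if direction == "get" else "put"
    if len(files) == 1:
        cmds.append(f"{verb} {files[0]}")
    else:
        cmds.append(f"m{verb} " + " ".join(files))  # mget/mput for many files
    return cmds

for cmd in ftp_session(["NAS30.dmp"], "/u01/stage"):
    print(cmd)
```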

Step 5
======

1) Install the Oracle 11g binaries on the destination server.

2) Create the database on the installed 11g home.

3) Run the scripts for creating the tablespaces and users.

Step 6
======
Set the character set on the destination server:
================================================

In the destination database:
export NLS_LANG=AMERICAN_AMERICA.UTF8
Do the actual import (imp parfile=imp.par)

Step 7
======

Import the dump file on the destination server:
===============================================

imp nas30/nas30 file=NAS30.dmp log=imp_nas30.log fromuser=nas30 touser=imp_test buffer=100000000 grants=y ignore=y rows=y commit=y statistics=none