
How to switch the database to a new UNDO tablespace and drop the old one

STEP -1
=======

$ sqlplus / as sysdba

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1
SQL>

The current undo tablespace as suggested by the initialization parameter undo_tablespace is UNDOTBS1.
Leave this sysdba session as it is, open another console, log in as user SCOTT, and initiate a transaction (without committing it).

STEP -2
=======

-- Create a new undo tablespace

CREATE UNDO TABLESPACE undotbs2
DATAFILE '/d01/apps/oradata/oraxpo/undotbs201.dbf'
SIZE 50M AUTOEXTEND ON NEXT 5M;

Tablespace created.

STEP -3
=======

-- Switch the database to the new UNDO tablespace.

ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2 SCOPE=BOTH;

System altered.
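To confirm the switch, you can re-check the parameter from the same sysdba session:

```sql
SQL> show parameter undo_tablespace

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace                      string      UNDOTBS2
```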

STEP -4
=======

-- Try to drop the old tablespace; it fails.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

With ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2, the database UNDO tablespace is changed, and any
new transaction's undo data will go to the new tablespace, i.e. UNDOTBS2.
But the undo data for an already pending transaction (e.g. the one SCOTT initiated before the database
UNDO tablespace switch) is still in the old tablespace, with a status of PENDING OFFLINE. As long as it
is there, you cannot drop the old tablespace.

STEP -5
=======

-- The query below shows the undo segments in the UNDOTBS1 tablespace and their status.
Then let's see which users/sessions are running this pending transaction.

set lines 10000
column name format a10

SELECT a.name,b.status
FROM   v$rollname a,v$rollstat b
WHERE  a.usn = b.usn
AND    a.name IN (
SELECT segment_name
FROM dba_segments
WHERE tablespace_name = 'UNDOTBS1'
);

NAME       STATUS
---------- ---------------
_SYSSMU8$  PENDING OFFLINE

column username format a6

SELECT a.name,b.status , d.username , d.sid , d.serial#
FROM   v$rollname a,v$rollstat b, v$transaction c , v$session d
WHERE  a.usn = b.usn
AND    a.usn = c.xidusn
AND    c.ses_addr = d.saddr
AND    a.name IN (
SELECT segment_name
FROM dba_segments
WHERE tablespace_name = 'UNDOTBS1'
);

NAME       STATUS          USERNA        SID    SERIAL#
---------- --------------- ------ ---------- ----------
_SYSSMU8$  PENDING OFFLINE SCOTT         147          4

So this is SCOTT, with SID=147 and SERIAL#=4. Now that we know the user, we can go to him/her and
request that the transaction be ended gracefully, i.e. with a ROLLBACK or COMMIT. However,
if this is not possible (say the user initiated the transaction and left for annual leave 🙂
and trust me, this happens), you may go ahead and kill the session to release the undo
segments in the UNDOTBS1 tablespace.

SQL> alter system kill session '147,4';

System altered.

SELECT a.name,b.status , d.username , d.sid , d.serial#
FROM   v$rollname a,v$rollstat b, v$transaction c , v$session d
WHERE  a.usn = b.usn
AND    a.usn = c.xidusn
AND    c.ses_addr = d.saddr
AND    a.name IN (
SELECT segment_name
FROM dba_segments
WHERE tablespace_name = 'UNDOTBS1'
);

no rows selected

As we can see, once the session is killed we no longer see any occupied undo segments in the UNDOTBS1 tablespace.
Let's drop UNDOTBS1.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

If undo data is being retained, you still won't be able to drop the tablespace, because the expired undo is kept for the duration of UNDO_RETENTION.
Let the UNDO_RETENTION time pass and then try to drop the tablespace again. In my case it is 900 seconds, i.e. 15 minutes.
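Rather than waiting blindly, you can re-run the segment-status query from STEP -5 until the old tablespace is clear; once it returns no rows (or only OFFLINE segments), the drop should succeed:

```sql
-- Any row still in PENDING OFFLINE status means the retention
-- window has not yet passed.
SELECT a.name, b.status
FROM   v$rollname a, v$rollstat b
WHERE  a.usn = b.usn
AND    a.name IN (SELECT segment_name
                  FROM   dba_segments
                  WHERE  tablespace_name = 'UNDOTBS1');
```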

-- After 15 minutes.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.


Oracle best practice: Primary and Standby archive crosscheck

Primary:

STEP -1
========

LOGIN TO THE PRIMARY SERVER

$ su - orapr1
Password:

STEP -2
=======

GET THE SEQUENCE MAX FROM V$LOG_HISTORY

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
76968

SQL> alter system switch logfile;

System altered.

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
76969

SQL> exit
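Still on the primary, it can also be worth confirming that the archive destinations (including the standby destination) are healthy; a hedged sketch using the standard view:

```sql
-- A non-VALID status or a non-blank error for the standby destination
-- points to a log shipping problem.
SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  status <> 'INACTIVE';
```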

=========================================

ON STANDBY SERVER:

$ ps -ef|grep pmon
oratst  2978     1   0   Sep 08 ?         147:34 ora_pmon_amantst
oracle  3039     1   0   Sep 08 ?         137:34 ora_pmon_airman
e460269 16109 16104   0 18:54:44 pts/1       0:00 grep pmon

$ su - oracle

Password:
mesg: cannot change mode

$ sqlplus

SQL*Plus: Release 10.2.0.3.0 - Production on Thu May 17 18:55:10 2012

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Enter user-name: /as sysdba

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8548

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8549

SQL> select * from v$archive_gap;
SQL> select sequence#, archived, applied, status from v$archived_log;
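These two queries are the actual crosscheck: v$archive_gap should return no rows, and the applied column of v$archived_log shows how far the standby has caught up. To list only what has been received but not yet applied, a sketch:

```sql
-- Received-but-not-applied logs on the standby; an empty result here,
-- together with matching max(sequence#) on both sites, means the
-- standby is in sync with the primary.
SELECT sequence#, archived, applied
FROM   v$archived_log
WHERE  applied = 'NO'
ORDER  BY sequence#;
```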

Oracle exp and imp: oracle9i to 11g

Step - 1
==========
Set the character set on the source server:
===========================================

In the source database:
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
Do the actual export (exp parfile=exp.par)
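The parfile itself is not shown in the original; a minimal exp.par sketch that simply mirrors the command-line options used in Step 2 (all values are examples):

```
file=NAS30.dmp
log=NAS30.log
owner=NAS30
buffer=100000000
grants=y
rows=y
statistics=none
direct=y
```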

Step - 2
==========
Exporting the data from the source database:
============================================

exp system/system123 file=NAS30.dmp log=NAS30.log owner=NAS30 buffer=100000000 grants=y rows=y statistics=none direct=y

Step - 3
==========
Extract the tablespace and user creation scripts from the source database, with the help of the scripts below:
==============================================================================================================

For tablespace:
===============

select 'create tablespace ' || df.tablespace_name || chr(10)
|| ' datafile ''' || df.file_name || ''' size ' || df.bytes
|| decode(autoextensible,'N',null, chr(10) || ' autoextend on maxsize '
|| maxbytes)
|| chr(10)
|| 'default storage ( initial ' || initial_extent
|| decode (next_extent, null, null, ' next ' || next_extent )
|| ' minextents ' || min_extents
|| ' maxextents ' ||  decode(max_extents,'2147483645','unlimited',max_extents)
|| ') ;'
from dba_data_files df, dba_tablespaces t
where df.tablespace_name=t.tablespace_name
/

For the Users:
==============

set pagesize 0
set escape on
select 'create user ' || U.username || ' identified ' ||
DECODE(password,
NULL, 'EXTERNALLY',
' by values ' || '''' || password || ''''
)
|| chr(10) ||
'default tablespace ' || default_tablespace || chr(10) ||
'temporary tablespace ' || temporary_tablespace || chr(10) ||
' profile ' || profile || chr(10) ||
'quota ' ||
decode ( Q.max_bytes, -1, 'UNLIMITED', NULL, 'UNLIMITED', Q.max_bytes) ||
' on ' || default_tablespace ||
decode (account_status,'LOCKED', ' account lock',
'EXPIRED', ' password expire',
'EXPIRED \& LOCKED', ' account lock password expire',
null)
||
';'
from dba_users U, dba_ts_quotas Q
-- Comment this clause out to include system & default users
where U.username not in ('SYS','SYSTEM')
and U.username=Q.username(+) and U.default_tablespace=Q.tablespace_name(+)
;
set pagesize 100
set escape off

Step - 4
==========

FTP all the dumps and the tablespace/user scripts to the destination server:
============================================================================

FTP:-
======
1.    Copy the file to tmp location.

#cp filename /tmp

2.    Change that file permission in the tmp location

#chmod 755 filename

3.    Open the command prompt.

In Run, type CMD
>ftp <source/destination IP address>
>username
>password
>cd /tmp (file location)
>ls -lrt (to see all the files)
>lcd "location" (location is where you want to copy the file)
>binary (binary mode transfer)
>get filename (if it is the source IP address)
>put filename (if it is the destination IP address)
If you want to copy many files, use mget/mput with a wildcard (e.g. mget *.dmp).

Step - 5
==========

1)    Install the Oracle 11g binaries on the destination server
=================================================

2)    Create the database on the installed 11g home
=====================================

3)    Run the scripts for creating the tablespaces and users
================================================

Step - 6
=========
Set the character set on the destination server:
================================================

In the destination database:
export NLS_LANG=AMERICAN_AMERICA.UTF8
Do the actual import (imp parfile=imp.par)

Step - 7
=========

Importing the dump file into the destination database:
======================================================

imp nas30/nas30 file=NAS30.dmp log=imp_nas30.log fromuser=nas30 touser=imp_test buffer=100000000 grants=y ignore=y rows=y commit=y statistics=none

Oracle Security Alert for CVE-2012-1675 “TNS Listener Poison Attack”

Oracle Critical Patch Updates (CPU)

The Oracle database server has a separate network listener process
that usually operates on TCP port 1521. Database instances register
with this listener, and the listener forwards client requests
on to the actual database system that handles the requested
database instance. Since version 8i, these listener
processes accept remote registrations, and such a registration can
even be made against an existing database instance. The active listener
interprets this as a new Oracle Real Application Clusters (RAC) node and uses
the new registration for load balancing. In other words: every second
database connection will be routed via the attacker's registered endpoint.

This security hole is particularly serious "because it allows remote
and unauthenticated attackers to redirect the database's network
traffic on the database server to an arbitrary server and then
intercept it. All they need to know is the Oracle SID
or Oracle service name."

Affected Products and Versions:

Oracle Database 11g Release 2, versions 11.2.0.2, 11.2.0.3
Oracle Database 11g Release 1, version 11.1.0.7
Oracle Database 10g Release 2, versions 10.2.0.3, 10.2.0.4, 10.2.0.5

Immediate solution for a non-clustered environment:

dynamic_registration_<listener> = off

For Example:

Step 1
======
LSNRCTL> show dynamic_registration

Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=XS2.WORLD))

LISTENER parameter "dynamic_registration" set to ON

The command completed successfully

Step 2
======

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = ORCL.hostname)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
      (SID_NAME = hostname)
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle

INBOUND_CONNECT_TIMEOUT_LISTENER = 120

DYNAMIC_REGISTRATION_LISTENER = off
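As an aside: where dynamic registration cannot be switched off, Oracle's guidance for this alert also covers restricting instance registration to the local machine (the COST / secure-registration approach). A sketch of that alternative in listener.ora, assuming a listener named LISTENER:

```
# Alternative hardening: accept instance registration only over local IPC,
# so a remote attacker cannot register a rogue service with the listener.
SECURE_REGISTER_LISTENER = (IPC)
```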

Conclusion:

The attack is on the listener itself, so if we want to prevent this attack,
we need to secure the listener, irrespective of its location.

Note: this is mandatory if we expose our listener to an unsecured or public network (e.g. the internet).

As for listeners running on our internal network:
the internal network already needs to be compromised in order for the attack to occur.

Optimizing the Oracle 11g Result Cache

The Oracle 11g Result Set Cache stores entire result sets in shared memory.
If a SQL query is executed and its result set is in the cache, then almost the entire overhead of the
SQL execution is avoided: this includes parse time, logical reads, physical reads and any cache contention
overhead (latches, for instance) that might normally be incurred.
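A statement opts in with the RESULT_CACHE hint (unless RESULT_CACHE_MODE is set to FORCE); a sketch against a hypothetical SALES table:

```sql
-- SALES is a hypothetical table; the hint asks Oracle to serve repeat
-- executions of this aggregate straight from the result cache.
SELECT /*+ RESULT_CACHE */ region, SUM(amount) AS total_amount
FROM   sales
GROUP  BY region;
```

On the second and subsequent executions the plan shows a RESULT CACHE operation and logical reads drop to almost nothing.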

Adverse effects:

1) Multiple SQLs that have overlapping data will store that data redundantly in the cache. So the result set that contains
all customers from 'X' will duplicate some of the data in the cached result set for all customers from 'Y'.
Therefore, the result set cache is not always as memory efficient as the buffer cache.

2) Any change to a dependent object (i.e. to any table referenced in the query) will invalidate the entire cached result set.
So the result set cache is most suitable for tables that are read only or nearly read only.

3) Really big result sets will either be too big to fit in the result set cache, or will
force most of the existing entries in the cache out.

4) Rapid concurrent creation of result sets in the cache will result in latch contention.

Note: the result set cache is part of the shared pool. By default it is sized at only 1% of the shared pool,
so it is usually pretty small. I increased the size of my result set cache to 10MB using the RESULT_CACHE_MAX_SIZE parameter.
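To see whether the cache is actually being used, the statistics view and the supplied package help; a sketch:

```sql
-- Current result cache size, block usage and create/find counts
SELECT name, value FROM v$result_cache_statistics;

-- Detailed memory breakdown, printed via DBMS_OUTPUT
SET SERVEROUTPUT ON
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT
```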

************** Not suitable for all SQLs ************************

Result set caching makes sense only if the following are true:

1) The result set is small
2) The SQL statement is expensive
3) The SQL statement does not experience high rates of concurrent execution
4) The SQL does not generate a large number of result sets
5) The SQL is against relatively static tables

Conclusion:
The result set cache best suits small result sets from expensive queries on tables that are infrequently updated.
Applying the result set cache indiscriminately to all SQLs is likely to be ineffective and can lead to significant latch contention.
The application team has to drill down into the SQLs and provide more information before going for a specific implementation.