Manual Upgrade from 12.1 to 12.2 and all the little annoyances that go along with said upgrade….

Starting point….

Complete Checklist for Manual Upgrades to Non-CDB Oracle Database 12c Release 2 (12.2) (Doc ID 2173141.1)

The above Oracle document is what I started with….this post covers the highlights and the lowlights.

The original plan was to upgrade to 18c, but a compile issue forced a slight adjustment to this goal.  Bug 28793062 : PLSQL PACKAGE WITH USER DEFINED TYPE IS THROWING PLS-00801: [UNEXPECTED FRAGILE EXTERNAL REFERENCE.] IN 18C

So back to the original task…upgrading from 12.1 to 12.2 using the following software: Oracle Database 12c Release 2 (12.2) for Linux x86-64


Step-Ordered Approach….see other blog posts for more explanation. The basic premise is that Oracle software is backwards compatible for the upgrade/downgrade process, so you can upgrade certain components before others. For example, the listeners for all of the databases are actually running out of the 18c ORACLE_HOME, and the RMAN catalog database is 18c, as is the catalog version (actually 18.3).

So…major steps whenever a new upgrade cycle is started.

  • Install ORACLE software in a new ORACLE_HOME – these are known as the binaries. Download and install the latest RU/RUR (see list above for what was done at this writing).
  • Migrate all listeners – listener.ora, ORACLE wallets, sqlnet.ora, tnsnames.ora to the new ORACLE_HOME
  • Migrate all clients to the new ORACLE_HOME
  • Upgrade non-production databases and accessory databases (RMAN catalog) to 18c; upgrade the RMAN catalog schema to the latest version, as it works with previous database versions.

Verify backups

SELECT * FROM v$backup WHERE status != 'NOT ACTIVE';
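Beyond checking that no datafile is stuck in hot-backup mode, it is worth confirming that a restorable backup actually exists – a sketch using standard RMAN commands (your retention and catalog configuration may differ):

```sql
RMAN> list backup of database summary;
RMAN> restore database validate;
```

The VALIDATE reads the backup pieces without restoring anything, so it is safe to run on a live system.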



Execute Preupgrade script from source home against the database

$ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/preupgrade.jar FILE TEXT DIR /directory

Execute fixup scripts as indicated below:

Before the upgrade, log into the database and execute the preupgrade fixups.
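The fixup scripts generated by preupgrade.jar land in the output directory given to DIR above, under the tool's default names; running the pre-upgrade set looks like this (directory placeholder kept from the command above):

```sql
SQL> @/directory/preupgrade_fixups.sql
```

The matching postupgrade_fixups.sql is run after the upgrade completes.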


Copy over the pfile/spfile to the new ORACLE_HOME

SQL> shutdown immediate

Edit /etc/oratab to point to the new ORACLE_HOME

logout, login

SQL> startup upgrade
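Once the instance is up in upgrade mode, the upgrade itself is driven by the parallel upgrade utility from the new 12.2 home (the same step this post later applies to the logical standby):

```shell
$ $ORACLE_HOME/bin/dbupgrade
```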


Post Upgrade Status Tool

$ sqlplus "/as sysdba"
SQL> @utlu122s.sql






Performance Issue – specific to our Vendor software, your mileage may vary.

alter system set optimizer_features_enable='' scope=both;

Migrated all accounts from 10g passwords to the stronger hash found in this version.
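DBA_USERS exposes a PASSWORD_VERSIONS column showing which hash versions each account still carries; a sketch for finding leftover 10G verifiers:

```sql
SQL> select username, password_versions
     from dba_users
     where password_versions like '%10G%';
```

Accounts listed here still need a password reset (after adjusting SQLNET.ALLOWED_LOGON_VERSION_SERVER as appropriate) to pick up the stronger hash.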

Upgraded Clean Address to 5.0.1

Upgraded APEX to

Upgraded ORDS to (it is my understanding that APEX is no longer required for 18.x versions of ORDS, will be removing it in the future).

SQLNET.ORA changes ….to resolve these spurious errors in the alert.log  TNS-12599: TNS:cryptographic checksum mismatch



grant execute on g$_nls to nlsuser;

grant select on sys.all_users to bansecr with grant option;
grant select on sys.all_tab_comments to baninst1 with grant option;

'WARNING: too many parse errors' in the 12.2 Alert.log (Doc ID 2320935.1)

This was due to the LB ping to a package ….we adjusted the LB as follows:

The load balancer health check now just checks /ords/touchnet/ for a 302. That should be sufficient for knowing if it is up or not.


Left to FIX:

Also…..our vendor-specific upgrade process known as ESM has slowed down due to an Oracle bug. I just use the workaround for now.

Dictionary Views Are Very Slow on DBA_TAB_COLUMNS Using CBQT (Doc ID 2454152.1)

sqlplus sys as sysdba

exec dbms_stats.delete_table_stats(ownname=>'SYS',tabname=>'OBJ$');

This is a temporary workaround until the Oracle merge patch is released for this version.


Our environment also includes standbys – physical and logical. Look here for more information on how to manage an upgrade with DG.

The procedure I follow has been standard through several upgrade cycles.


Turn off the physical standby before starting the production upgrade. Once the production upgrade is complete, bring up the standby in the upgraded ORACLE_HOME (I always install the software binaries in a new ORACLE_HOME months in advance). All of the changes are migrated automatically to the standby database as part of the REDO APPLY process.
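On the standby side, bringing it up in the new home and resuming apply is roughly the following (a sketch; Broker-managed setups would use DGMGRL instead):

```sql
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;
```

Redo apply then runs the upgrade's redo through the standby, so no separate upgrade step is needed there.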


Turn off the logical standby during the production upgrade. After the production upgrade is finished, upgrade the logical standby using the same steps as production: startup upgrade, $ORACLE_HOME/bin/dbupgrade….etc.








Posted in Uncategorized | Leave a comment

Moving an Entire Infrastructure to the Cloud – AWS & Docker

Come to the Educause 2017 Conference in Philadelphia, Pennsylvania to see this presentation.

The following is our abstract for this session:

“Moving the entire SUU infrastructure into the cloud with elasticity to scale up or out on demand. Purchasing AWS Reserved Instance for 3 years at  the cost of on-demand AWS resources. Docker + AWS allows for a dense ‘containerized’ environment that is scriptable and allows for quick deployment on an as-needed basis. Even using a single AWS region, there are multiple availability zones to distribute resources for fault tolerance. Since moving to AWS our hardware performance is faster than we ever could afford on local hardware. Docker also provides a environment that works well with AWS minimizing the hardware needed with a smaller memory footprint. XE works well with Docker to spin up additional apps and/or server instances on an as needed basis. Items that will be discussed as they pertain to our AWS implementation – Load Balancers, AWS EC2 instances types vs Performance, On-Demand vs Reserved Instance Costs, AWS Images vs. Custom, Banner/RedHat Licensing, WHOIS, C and Cobol Compiles, Linux Upgrade, SSH logins, Ellucian Banner, Firewall, JMeter, SQLNET Encryption, CAS and IP addressing. Oracle-specifics will be provided: Support, Installations, Moving Large databases to the Cloud, Mixed Hardware Environment and Licensing. What happened during our trial period? Major issues/lessons learned. How did we innovate? Integration with Slack to stop/start AWS instances, All IT staff login via SSH with an interim server, SSH key infrastructure. This retool allowed us to reduce the number of servers, save electricity, reduce load in our local server room and prove our disaster recovery capability. Latency that appeared during testing lead us to move the entire infrastructure at the same time. We are in the process of moving a final few components to AWS such as CAS, Evision MAPS and Degreeworks. Extensive Polling along with a Hardware Survey will be distributed to Participants “


“This presentation leaves copyright of the content to the presenter. Unless otherwise noted in the materials, uploaded content carries the Creative Commons Attribution-NonCommercial-ShareAlike license, which grants usage to the general public with the stipulated criteria.”

Posted in Uncategorized | Leave a comment

JVM Controller in FMW 11.x won’t start

The version of the motif library that this FMW release needs is not available on OEL7 or RedHat7.


The workaround is to create a symbolic link between the new motif library and the old one.

ln -s /usr/lib64/
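The library names above were lost to formatting. Assuming the usual case on OEL7/RedHat7 – FMW 11.x linked against the old while the OS ships the newer – the workaround would look like this; verify the exact names on your system:

```shell
# assumption: FMW 11.x expects, the OS provides
ln -s /usr/lib64/ /usr/lib64/
```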

After this, the Reports components will work.

Posted in Uncategorized | Leave a comment

Archive Logs in $ORACLE_HOME/dbs

This has been happening over the years but I couldn’t figure out why….finally found this article on My Oracle Support (MOS):  Setup Local archiving on standby to Prevent Logs sent to $ORACLE_HOME/dbs (on both GRID and SQLPLUS) (Doc ID 1271095.1)

I previously thought this was due to a misconfigured rman backup configuration/implementation.

Just a note here on using this document…

If you only set two archiving destinations on a standby your database may not start in the following situation.  The following sentence is a quote from the article.

“Check for at least one log_archive_dest_<n>, should point to a available location and the VALID_FOR parameter should be either VALID_FOR=(ALL_LOGFILES,ALL_ROLES) or (STANDBY_LOGFILES,STANDBY_ROLE).”

When I set two archiving locations, the database dumped core on restart, not mounting while complaining it had no destination to archive to:



Changing log_archive_dest_1 to be VALID_FOR all roles fixed the issue.
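For example (the LOCATION path here is hypothetical; the part that matters is the VALID_FOR clause from the quoted article):

```sql
SQL> alter system set log_archive_dest_1=
     'LOCATION=/u01/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' scope=both;
```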


I don’t know if this is working as it was designed…or a bug.  Something unique about my environment is that the FLASH_RECOVERY_AREA is different on the primary and the standby.


Posted in 11g, DATA GUARD, MOS, my oracle support, oracle, Physical Standby, RMAN, standby, Uncategorized | Leave a comment

IOUG Collaborate 2016


Posted in Uncategorized | Leave a comment

Surviving A Month of Dead Connection Detection Zombie Land!


August 11th – Switched over using a physical standby on new hardware for our ERP software in about 15 minutes. This was a manual SQLPLUS command-line method because of issues with using Data Guard to complete this task without hanging or connection-related issues (that never show up until switchover time). We are ecstatic at this point because we also implemented huge pages on Linux RH 64-bit, resized the SGA, encrypted filesystems, the DB_ULTRA_SAFE parameter, a 10GB NIC and ASMM all at the same time. Life couldn't be better. This is Oracle…and all SQLNET traffic is encrypted using Native Network Encryption….keep all of this in mind as you follow along on this horror story!

August 15th – Network guys updated Cisco switches and routers with software/firmware upgrades with planned downtime. Life seems normal up to this point.

August 16th – As the DBA I scan the alert logs every 15 minutes for any of the following keywords: ORA and/or WARNING

I start getting complaints from end users that our ERP is slow, freezing, etc. At this point we suspect the switchover to new hardware. But we had made so many changes – which one could it be? I see a 10% increase in waits related to the redo logs (log file parallel write). So I moved the redo logs to an unencrypted filesystem area….it made no difference, so I moved them back. I could reset DB_ULTRA_SAFE to improve performance, but that requires a database cycle.

By the way when mentioning encrypted native filesystems…this is Oracle’s answer to a Support ticket asking about compatibility.

“This is a 3th party issue, we have our own solution which would be TDE tablespace encryption,
for any 3th party solution to properly work, it must be completely transparent to oracle,
the normal read / write OS calls oracle does must be redirected to the decrypt / encrypt code, it
is possible asynch_io can no longer work and you also may need to set parameter disk_asynch_io = false,
otherwise it is entirely up to the 3th party product being tested and certified to run with oracle
by the 3th party vendor.”

As the university started in full work mode for our FALL semester…performance issues worsened. So I started looking in the alert logs for other errors I wasn't capturing….now I start seeing a LOT of the 12170 errors….so I modify my script to send those by email when they occur by adding the keyword FATAL.

Errors started to come in groups. I had seen the occasional ORA-3136, which I knew to ignore as an error related to logging in.
WARNING: inbound connection timed out (ORA-3136)
Fatal NI connect error 12170.

August 21 – Sept 5 – This is where I dread coming to work every day. I am spending 10 hours or so trying to figure out what is wrong, my ulcer also kicks into high gear.

Oracle Enterprise Manager graphs are looking unusual…..huge NETWORK waits, especially for jobs that connect to other databases using database links. See the following for how it appeared to me. BAD, BAD, BAD…..for everyone. So now the guessing goes into full swing – and I mean GUESSING, because we don't have much to go on except lots of spurious ORA errors. By the way, this is an Oracle Forms app….so we don't have expensive tools to trace sessions all the way back to the database. There is some functionality for tracing, but it only indicated the hanging/freezing sessions were gone from the database after a point in time – but what disconnected them?


I surmised the freezing that people experienced with this FORMS app was due to the 30 network retries we have set in the FORMS configuration. We have used this parameter for many years; it attempts a reconnection up to 30 times before giving up. So…this is the freezing, as it takes time to retry 30 times. People end up closing their browser, and that generates the 12170 errors.

I noticed some patterns – these 12170 errors come all at once, in batches, through the alert logs. I finally figured out that the smallest error number is the important one – 110.

Fatal NI connect error 12170.

TNS for Linux: Version – Production
Oracle Bequeath NT Protocol Adapter for Linux: Version – Production
TCP/IP NT Protocol Adapter for Linux: Version – Production
Time: 29-AUG-2014 18:14:10
Tracing not turned on.
Tns error struct:
ns main err code: 12535

TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505

TNS-00505: Operation timed out
nt secondary err code: 110
nt OS err code: 0
Client address: (ADDRESS=

See MOS Note

Alert Log Errors: 12170 TNS-12535/TNS-00505: Operation Timed Out (Doc ID 1628949.1)

We are pretty sure this is network/OS-related, but how do we convince the network guys something is wrong? I am digging through MOS, finding everything I can on tracing and the like. I start tracing on the database server, generating huge amounts of logs (50 GB at a time), trying to find error numbers that I haven't seen. I see a lot of the following, and if you search through MOS you can find information on those errors.

[16-SEP-2014 18:11:01:606] nsfull_pkt_rcv: error exit
[16-SEP-2014 18:11:01:606] nioqer: entry
[16-SEP-2014 18:11:01:606] nioqer:  incoming err = 12151
[16-SEP-2014 18:11:01:606] nioqce: entry
[16-SEP-2014 18:11:01:606] nioqce: exit
[16-SEP-2014 18:11:01:606] nioqer:  returning err = 3113

Finally I find a document that mentions the 12151 and 3113 errors are really just spurious and not the real cause of our problems. Tracing on the database side didn't really help….basically we determined the sessions were gone at that point, and the traces just verified this. Oracle didn't know what happened to them, so the lack of information should be speaking volumes at this point.

Talking with the network guys throughout all of this….they cannot find anything wrong.

We request them to turn off sqlnet packet inspection as per this MOS Doc:

Troubleshooting guide for ORA-12592 / TNS-12592: TNS: bad packet (Doc ID 373431.1)

Still the problems persist….what is wrong?  We are so desperate at this point we start second-guessing everything that was done during the switchover. And this is where we start heading down the road of too many mods! Guess what….the following list shows what DIDN’T HELP.

1. Switched back to the 1GB NIC

2. Switched hardware load balancers

3. Upgraded the database to because of an Oracle bug

10096945 Waits using DBLINKS & nested loops

We had to install two more patches on top of that (plus the CPU) to get rid of some issues associated with that version which produced ORA-904 errors…turning off query rewrite fixed one of the problems, and we flushed the shared pool.

17956707 ORA-904 executing SQL over a database link 19/May/2014
17551261 ORA-904 “from$_subquery$_003”.<column_name> with query rewrite 21/Feb/2014

4. Moved to listener

5. Installed patch to fix ORA-904 as a result of
Reinstalled recreatectxsyssyncrn.sql – bug in
6. Installed/ran OSWatcher on the database server. How does one even start to understand what is produced especially for the network stats.

7.  Added USE_NS_PROBES_FOR_DCD=TRUE in sqlnet.ora (this reverts to a 11g type of Dead Connection Detection)
8. Enabled a job to restart services on a different application that was losing its connections as well.
9. Removed one of the three Oracle Forms/Reports Servers from the load balancer….plan was to wipe it, reinstall to a fully patched version for redeployment.

10. Ran RDA against the forms server, using it for a contact on this issue. Lots of disconnects showing up in the logs.

11. Was it a recent JAVA desktop update…., was it the Java version on Weblogic?

12. Results of traces on the database
Lots of 12151 and 3113 errors which are spurious in nature
ora-12547 in  server_45148.trc
MOS Notes 1104673.1 , 1591874.1 & 1300824.1, 1531223.1,  461053.1

13. Recompiled all of the forms, rewrote bad application code

14. Reconfigured/rebooted tweaked all settings on the LOAD BALANCER

15.  OK, this was the BAD thing we did….we modified the Linux operating system TCP keepalive parameters on the database host. We hadn't ever done this before….we didn't know what we were doing. We upped it to two hours…and then upped it again to over 8 hours, assuming that was giving a session exclusive access for that time period.

About this time….the network guys did realize the Network Intrusion System was listening in on the connections between the servers….not just the outside traffic, but the protected/firewalled interior traffic. Oops….a huge bottleneck, because it was inadequately sized. So they reconfigured it.

Life seemed OK….it was better in some respects. Some of our other applications that connected to the same database turned back to normal activity but the FORMS app was still choking.

Well, my wonderful boss finally decided to start SQLPLUS trace sessions from different vantage points to see what happened and to determine whether any kind of session was affected.

1. SQLPLUS from our desktops

2. SQLDEVELOPER from our desktops

3. SQLPLUS from a server in the same subnet

4. SQLPLUS from other servers in different subnet

5. SQLPLUS from the app server different subnet not on the load balancer

6. SQLPLUS from the app servers still on the load balancer

Connected to the database in question and started the wait…..we realized the disconnects seemed to happen somewhere between one and two hours. Isn't that the network? Not necessarily…..we were seeing an ORA-3135, which is a completely different error number than anything I had seen.

Finally I get an error number that helps with searching on MOS, and I start finding a lot of better information as it applies to Dead Connection Detection. Due to the tracing, I knew our DCD was in place and working!

[16-SEP-2014 15:24:00:384] niotns: Enabling CTO, value=180000 (milliseconds)
[16-SEP-2014 15:24:00:384] niotns: Enabling dead connection detection (10 min)
[16-SEP-2014 15:24:00:384] niotns:  listener bequeathed shadow coming to life…
[16-SEP-2014 15:24:00:384] nsinherit: entry
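The 10-minute dead connection detection interval shown in that trace corresponds to the server-side sqlnet.ora parameter (a sketch):

```text
# $ORACLE_HOME/network/admin/sqlnet.ora on the database server
SQLNET.EXPIRE_TIME = 10
```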

Ora-03135: Connection Lost Contact After Idle Time (Doc ID 1380261.1)
Troubleshooting ORA-3135/ORA-3136 Connection Timeouts Errors – Database Diagnostics (Doc ID 730066.1)
Troubleshooting ORA-3135 Connection Lost Contact (Doc ID 787354.1)

” Idle Connection Timeout

The most frequently occurring reason for this error is due to a Max Idle Time setting at the firewall.
If the client traverses the firewall to get to the server and is being terminated abruptly, this is likely the cause.

A relatively simple test to determine if this is a Firewall maximum idle time issue:

At the offending client, establish a SQL*Plus or OCI client connection to the server.

SQL*Plus username@TNS connect string

SQL>  <===Allow this client connection to sit idle for an hour.

Return after the hour is up and issue a simple query:

SQL>select * from dual;

If this connection is terminated with ANY error it is likely that your firewall will not allow a connection to remain idle for a lengthy period of time.

It is possible to trick the firewall with Dead Connection Detection packets in order to keep the connection alive. 

WHAT!…..I HAD DCD in place all along….it is still saying firewall, but read on, as there is a gotcha or caveat to DCD.

 DCD was never designed to be used as a “virtual traffic generator” as we are wanting to use it for here. This is merely a useful side-effect of the feature.
In fact, some later firewalls and updated firewall firmware may not see DCD packets as a valid traffic possibly because the packets that DCD sends are actually empty packets. Therefore, DCD may not work as expected and the firewall / switch may still terminate TCP sockets that are idle for the period monitored, even when DCD is enabled and working.
In such cases, the firewall timeout should be increased or users should not leave the application idle for longer than the idle time out configured on the firewall.”

At this point we are completely discouraged, as we believe there is no fix….we could implement PROFILES, setting an idle timeout on the database so people would have to log back in every so often….at least that would stop the freezing due to the 30 network retries.

Finally the last clue we needed to figure out how to get rid of the ZOMBIES…….as part of a MOS document.

“The firewalls inactivity timer can be disabled for the affected host.

The host OS keep alive setting (tcp_keep_alive) can be modified to be less than the firewall inactivity timeout. This will cause the OS to send a test packet to the client when the timeout is reached and the client will respond with an ACK. To all intents and purposes this is the same as turning off the firewall inactivity timer for this host.”

We had reset tcp_keep_alive higher than the firewall's default setting of 1 hour during all of the mods we tried…so we broke it while trying to fix the original connection issue that the IPS caused. Applications that only connected for a few seconds weren't affected, but all apps that required a sticky session for more than one hour were…
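On Linux, the fix amounts to keeping the OS keepalive timer below the firewall's one-hour idle timeout – a sketch with illustrative values (persist them in /etc/sysctl.conf on a real system):

```text
# probe idle TCP connections after 15 minutes, well under the 1-hour firewall timer
sysctl -w net.ipv4.tcp_keepalive_time=900
sysctl -w net.ipv4.tcp_keepalive_intvl=60
sysctl -w net.ipv4.tcp_keepalive_probes=5
```

Unlike Oracle's empty DCD probes, these OS keepalive packets carry data the firewall recognizes as traffic.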

The horror story is now over…but I still have an ulcer; hopefully that will heal in time. Now on to the database upgrade….as I see that DCD is handled completely differently in that one…so I expect to experience this ZOMBIE stuff again. But at least I have more knowledge than what I started with.

So the final conclusion is that after the network/switch/router upgrades that happened way back when this started, the network no longer recognizes Oracle's DCD packets (they are zero length), but it does recognize the OS keepalive packets (non-zero length).

Errors seen when firewall is blocking connections – rule-based
Fatal NI connect error 12514, connecting to:
Fatal NI connect error 12514, connecting to:
Fatal NI connect error 12514, connecting to:

Posted in Uncategorized | 3 Comments

Deinstalling an ORACLE_HOME in 11gR2 DB = MORE WORK!

This is a post about the danger of using the deinstall script (it is a perl script on UNIX machines) without first testing it on a non-production server. Why? It has the ability to remove important database components that may still be needed on that server – especially if you haven't implemented database component locations per the Optimal Flexible Architecture (OFA) method.

I typically start up the listener in the highest-version $ORACLE_HOME to service all databases for a particular server. Running the Oracle-provided deinstall on an old decommissioned $ORACLE_HOME removed all listeners in the production $ORACLE_HOME. During the run of the deinstall script there didn't seem to be any way around this task; I was unable to leave it blank or write in a wrong answer. So I created a bogus listener that was never going to be used, just for the script to remove. MORE WORK!

Specify all Single Instance listeners that are to be de-configured [LISTENER1,LISTENER2]: none

Invalid listener list [ none]. You can only specify a subset of the configured listeners.

At least one listener from the discovered listener list [LISTENER1, LISTENER2] is missing in the specified listener list [LISTENER1]. 
The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. 
If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]:

Basically it requests to remove all LISTENERS (they are automatically discovered during the run) and you have to choose at least one to remove.

At least it didn’t try to remove databases that weren’t in this ORACLE_HOME.

Specify the list of database names that are configured in this Oracle home []:

Apparently there are a lot of problems (i.e. bugs) with the deinstall scripts, so Oracle recommends not using them. The following note verifies that deinstall doesn't behave as it should! So very naughty. It recommends downloading and installing yet another utility to manage things. MORE WORK! The downloaded utility is specific to Oracle versions and operating systems, so now I have to keep more software on hand to accomplish what used to be a simple task using the ORACLE UNIVERSAL INSTALLER. While this type of script may be useful for certain test environments that depend on extensive scripting capabilities like the deinstall tool, I am wondering how to use it safely to remove software on a production server.

One workaround would be to create a new oraInventory for each $ORACLE_HOME, but then you would have to juggle multiple copies of /etc/oraInst.loc (on UNIX) to do maintenance tasks such as patching and upgrades. This would involve some local documentation on your part to keep it all straight. It would allow you to simply remove old $ORACLE_HOMEs manually, along with their associated inventories, when they are no longer needed – particularly because database releases are now all required to have their own $ORACLE_HOME. MORE WORK!

How To Deinstall/Uninstall Oracle Home In 11gR2 [ID 883743.1]

“De-installation from new OUI is desupported.

When you run the deinstall command, if the central inventory (oraInventory) contains no
other registered homes besides the home that you are deconfiguring and removing, then the deinstall command removes the following files and directory contents in the Oracle base directory of the Oracle Database installation owner:”

(In other words, if this is the last ORACLE_HOME it will also remove the following directories under ORACLE_BASE)….so using a single oraInventory/ORACLE_HOME and the deinstall script at the same time may be counterproductive. MORE WORK!


1) External de-install utility downloadable from OTN ***Recommended method***

It is advised to use the external De-install utility that is downloadable from OTN as currently there are some open bugs with the deinstall script.”

I downloaded the zip file, checked out the readme. It points you to the documentation on the standard use of deinstall. See the following output for how it was run:

>perl deinstall
Tool is being run outside the Oracle Home, -home needs to be set.
deinstall -home <Complete path of Oracle home>
 [ -silent ]
 [ -checkonly ]
 [ -local ]
 [ -paramfile <complete path of input parameter properties file> ]
 [ -params <name1=value[ name2=value name3=value ...]> ]
 [ -o <complete path of directory for saving files> ]
 [ -tmpdir <complete path of temporary directory to use> ]
 [ -help : Type -help to get more information on each of the above options. 
perl deinstall -home ORACLE_HOME_TO_BE_REMOVED  (simplest form of using this script)
###############CHECK OPERATION SUMMARY #######################
 Oracle Home selected for de-install is: /u01/app/oracle/product/11.2.0/dbhome_1
 Inventory Location where the Oracle home registered is: /u01/app/oraInventory
 Skipping Windows and .NET products configuration check
 Following Single Instance listener(s) will be de-configured: LISTENER1
 No Enterprise Manager configuration to be updated for any database(s)
 No Enterprise Manager ASM targets to update
 No Enterprise Manager listener targets to migrate
 Checking the config status for CCR
 Oracle Home exists with CCR directory, but CCR is not configured
 CCR check is finished
 Do you want to continue (y - yes, n - no)? [n]: y
########### CLEAN OPERATION START ########################
Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2012-05-08_11-03-05-AM.log
Updating Enterprise Manager ASM targets (if any)
 Updating Enterprise Manager listener targets (if any)
 Enterprise Manager Configuration Assistant END
 Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2012-05-08_11-03-51-AM.log
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2012-05-08_11-03-51-AM.log
De-configuring Single Instance listener(s): LISTENER1
De-configuring listener: LISTENER1
 Stopping listener: LISTENER1
 Listener stopped successfully.
 Deleting listener: LISTENER1
 Listener deleted successfully.
 Listener de-configured successfully.
De-configuring backup files...
 Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
Oracle Configuration Manager clean START
 OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean_2012-05-08_11-03-05-AM.log
 Oracle Configuration Manager clean END
 Removing Windows and .NET products configuration END
 Oracle Universal Installer clean START
Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node : Done
Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node : Done
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/oracle/agent12c/core/'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory '/tmp/deinstall2012-05-08_10-58-01AM' on node 'nodename'
Oracle install clean END
############ CLEAN OPERATION END #########################
########## CLEAN OPERATION SUMMARY #######################
 Following Single Instance listener(s) were de-configured successfully: LISTENER1
 Cleaning the config for CCR
 As CCR is not configured, so skipping the cleaning of CCR configuration
 CCR clean is finished
 Skipping Windows and .NET products configuration clean
 Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node.
 Successfully deleted directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node.
 Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.

Ok…I kinda left this post in nowhere land. What does a person do with the proliferation of ORACLE_HOMEs? Detaching from the oraInventory will allow you to safely remove the ORACLE_HOME binary files manually and reclaim space as you need more to install the next version! An easy, scriptable and reasonably safe task.

Deinstall by Detaching ORACLE_HOME


One of the easiest ways to remove an ORACLE_HOME that is no longer needed is to just detach it from the OraInventory. This is less disruptive and faster than running the Oracle-provided deinstall tool – some personal experiences/observations related to using this utility are mentioned after the code in this section. See the following MOS Document:  How To De-install Oracle Home Using runInstaller [ID 1070610.1]


./runInstaller -silent -detachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_2"



./runInstaller -silent -detachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_3"

Starting Oracle Universal Installer…


Checking swap space: must be greater than 500 MB.   Actual 35913 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

‘DetachHome’ was successful.
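With the home detached from the central inventory, reclaiming the disk space is just a manual removal. A guarded sketch (the path is the example home from above; the process check is a simple precaution of my own, not an official step):

```shell
# hypothetical decommissioned home, already detached from oraInventory
OLD_HOME=/u01/app/oracle/product/11.2.0/dbhome_2
# refuse to delete if any running process still references this home
if ps -eo args | grep -v grep | grep -q "$OLD_HOME"; then
  echo "processes still using $OLD_HOME - not removing"
else
  rm -rf "$OLD_HOME"
fi
```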




The following example is removing a client install:

./runInstaller -silent -debug -force \
FROM_LOCATION=/u03/jobsuser/patches/client/stage/products.xml \
UNIX_GROUP_NAME=jobsuser \
ORACLE_HOME=/u03/jobsuser/product/11.2/client_2 \
ORACLE_HOME_NAME="OraClient11g_Home2" \
ORACLE_BASE=/u03/jobsuser




For more information see the MOS Document: Master Note For Cloning Oracle Database Server ORACLE_HOME’s Using the Oracle Universal Installer (OUI) [ID 1154613.1] Another document outlining the changes for Online Patching: RDBMS Online Patching Aka Hot Patching [ID 761111.1]


Posted in Uncategorized | 6 Comments