Removing a Node from an Oracle RAC Cluster

Environment

Manually Back Up the OCR

Remove the Oracle Database

1. Delete the database instance

2. Verify the instance has been deleted

3. Remove the listener

4. Update the inventory on the node being deleted

5. Remove the Oracle software

6. Update the inventory on the remaining nodes

Remove the Grid Clusterware

1. Remove Grid services

2. Delete node rac03 from the cluster

3. Update the cluster inventory on the node being deleted

4. Remove the Grid software

5. Update the cluster inventory on the remaining nodes

6. Verify

This tutorial removes the node rac03 from the cluster.

Environment

The current environment is as follows:

  • The RAC cluster has three nodes, with hostnames rac01, rac02, and rac03
  • The database name is orcl
  • The database instance on rac03 is named orcl3

Manually Back Up the OCR

The OCR typically contains the following:

  • Node membership information
  • Mappings between database instances, nodes, and other components
  • ASM
  • Resource profiles (VIPs, services, and so on)
  • Service characteristics
  • Information about processes managed by Oracle Clusterware
  • Information about third-party applications controlled by CRS

# cd /u01/app/11.2.0/grid/bin/

# ./ocrconfig -manualbackup

rac01 2019/09/11 16:15:21 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20190911_161521.ocr

[root@rac01 bin]# ./ocrconfig -showbackup

rac02 2019/09/09 14:08:11 /u01/app/11.2.0/grid/cdata/rac-cluster/backup00.ocr

rac02 2019/09/09 14:08:11 /u01/app/11.2.0/grid/cdata/rac-cluster/day.ocr

rac02 2019/09/09 14:08:11 /u01/app/11.2.0/grid/cdata/rac-cluster/week.ocr

rac01 2019/09/11 16:15:21 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20190911_161521.ocr

Remove the Oracle Database

1. Delete the database instance

Run the DBCA tool on rac01 to delete the database instance:

# su - oracle

$ export DISPLAY=10.10.0.1:0.0

$ dbca
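If no X display is available, DBCA's documented silent mode can delete the instance non-interactively instead. A minimal sketch; the SYS password is a placeholder you must replace:

```shell
# Delete instance orcl3 on node rac03 without the GUI (run as oracle on rac01).
# <sys_password> is a placeholder, not a real credential.
dbca -silent -deleteInstance \
     -nodeList rac03 \
     -gdbName orcl \
     -instanceName orcl3 \
     -sysDBAUserName sys \
     -sysDBAPassword <sys_password>
```

Either path removes the orcl3 instance, its undo tablespace, and its redo threads from the database configuration.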

2. Verify the instance has been deleted

$ srvctl config database -d orcl

Database unique name: orcl

Database name: orcl

Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome_1

Oracle user: oracle

Spfile: +ASMDATA/orcl/spfileorcl.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: orcl

Database instances: orcl1,orcl2

Disk Groups: ASMDATA

Mount point paths:

Services:

Type: RAC

Database is administrator managed

$ srvctl status database -d orcl

Instance orcl1 is running on node rac01

Instance orcl2 is running on node rac02

3. Remove the listener

As the oracle user on rac01, check where the listener runs. In 11gR2 the listener runs from GRID_HOME by default:

$ srvctl config listener -a

Name: LISTENER

Network: 1, Owner: grid

Home:

/u01/app/11.2.0/grid on node(s) rac01,rac02,rac03

End points: TCP:1521

If the listener runs from ORACLE_HOME, you must run the commands below to disable and stop it. Although it runs from GRID_HOME here, running them anyway is still recommended.

Disable the listener:

$ srvctl disable listener -l LISTENER -n rac03

Stop the listener:

$ srvctl stop listener -l LISTENER -n rac03

Check the status:

$ srvctl status listener

Listener LISTENER is enabled

Listener LISTENER is running on node(s): rac02,rac01

4. Update the inventory on the node being deleted

Run the following commands on rac03, the node being deleted:

# su - oracle

$ cd $ORACLE_HOME/oui/bin

$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac03}" -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

Check the inventory file:

$ cd /u01/app/oraInventory/ContentsXML/

$ cat inventory.xml

<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac01"/>
      <NODE NAME="rac02"/>
      <NODE NAME="rac03"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0.4/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac03"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>

5. Remove the Oracle software

On the node being deleted, rac03, run the following commands as the oracle user.

If ORACLE_HOME is on shared storage, detach it instead:

$ ./runInstaller -detachHome ORACLE_HOME=Oracle_home_location

Here ORACLE_HOME is not shared, so run:

$ cd $ORACLE_HOME/deinstall

$ ./deinstall -local

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0.4/dbhome_1

Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database

Oracle Base selected for deinstall is: /u01/app/oracle

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid

The following nodes are part of this cluster: rac03

Checking for sufficient temp space availability on node(s) : 'rac03'

## [END] Install check configuration ##

Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2019-09-11_05-10-14-PM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2019-09-11_05-10-17-PM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2019-09-11_05-10-20-PM.log

Enterprise Manager Configuration Assistant END

Oracle Configuration Manager check START

OCM check log file location : /u01/app/oraInventory/logs//ocm_check6856.log

Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac03

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac03', and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0.4/dbhome_1

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)

No Enterprise Manager ASM targets to update

No Enterprise Manager listener targets to migrate

Checking the config status for CCR

Oracle Home exists with CCR directory, but CCR is not configured

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-09-11_05-10-08-PM.out'

Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-09-11_05-10-08-PM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2019-09-11_05-10-20-PM.log

Updating Enterprise Manager ASM targets (if any)

Updating Enterprise Manager listener targets (if any)

Enterprise Manager Configuration Assistant END

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2019-09-11_05-11-22-PM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2019-09-11_05-11-22-PM.log

De-configuring Local Net Service Names configuration file...

Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START

OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean6856.log

Oracle Configuration Manager clean END

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0.4/dbhome_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0.4/dbhome_1' on the local node : Done

Failed to delete the directory '/u01/app/oracle'. The directory is in use.

Delete directory '/u01/app/oracle' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2019-09-11_05-09-23PM' on node 'rac03'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################

Cleaning the config for CCR

As CCR is not configured, so skipping the cleaning of CCR configuration

CCR clean is finished

Successfully detached Oracle home '/u01/app/oracle/product/11.2.0.4/dbhome_1' from the central inventory on the local node.

Successfully deleted directory '/u01/app/oracle/product/11.2.0.4/dbhome_1' on the local node.

Failed to delete directory '/u01/app/oracle' on the local node.

Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

6. Update the inventory on the remaining nodes

Run on any retained node; here, on rac01:

# su - oracle

$ cd $ORACLE_HOME/oui/bin

$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac01,rac02}"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

At this point the contents of $ORACLE_HOME on rac03 have been deleted.

Remove the Grid Clusterware

1. Remove Grid services

As the grid user on rac01, check whether node rac03 is pinned:

# su - grid

$ olsnodes -t -s

rac01   Active   Unpinned

rac02   Active   Unpinned

rac03   Active   Unpinned

If a node shows as pinned, unpin it with:

crsctl unpin css -n rac03

Stop Enterprise Manager:

$ emctl stop dbconsole

Stop the CRS stack on node rac03; run as root on rac01:

# /u01/app/11.2.0/grid/bin/crsctl stop cluster -n rac03

CRS-2673: Attempting to stop 'ora.crsd' on 'rac03'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac03'

CRS-2673: Attempting to stop 'ora.ASMDATA.dg' on 'rac03'

CRS-2673: Attempting to stop 'ora.FRAVOL.dg' on 'rac03'

CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac03'

CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac03'

CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.rac03.vip' on 'rac03'

CRS-2677: Stop of 'ora.FRAVOL.dg' on 'rac03' succeeded

CRS-2677: Stop of 'ora.ASMDATA.dg' on 'rac03' succeeded

CRS-2677: Stop of 'ora.rac03.vip' on 'rac03' succeeded

CRS-2672: Attempting to start 'ora.rac03.vip' on 'rac02'

CRS-2676: Start of 'ora.rac03.vip' on 'rac02' succeeded

CRS-2677: Stop of 'ora.OCR.dg' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac03'

CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.ons' on 'rac03'

CRS-2677: Stop of 'ora.ons' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'rac03'

CRS-2677: Stop of 'ora.net1.network' on 'rac03' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac03' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac03'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac03'

CRS-2673: Attempting to stop 'ora.asm' on 'rac03'

CRS-2677: Stop of 'ora.evmd' on 'rac03' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac03' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac03'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac03'

CRS-2677: Stop of 'ora.cssd' on 'rac03' succeeded

$ olsnodes -t -s

rac01   Active     Unpinned

rac02   Active     Unpinned

rac03   Inactive   Unpinned

On node rac03, run the deconfig script as root:

# cd /u01/app/11.2.0/grid/crs/install

# ./rootcrs.pl -deconfig -force

Using configuration parameter file: ./crsconfig_params

Network exists: 1/10.10.0.0/255.255.255.0/eth0, type static

VIP exists: /rac01-vip/10.10.0.70/10.10.0.0/255.255.255.0/eth0, hosting node rac01

VIP exists: /rac02-vip/10.10.0.71/10.10.0.0/255.255.255.0/eth0, hosting node rac02

VIP exists: /rac03-vip/10.10.0.72/10.10.0.0/255.255.255.0/eth0, hosting node rac03

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac03'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac03'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac03'

CRS-2673: Attempting to stop 'ora.ASMDATA.dg' on 'rac03'

CRS-2673: Attempting to stop 'ora.FRAVOL.dg' on 'rac03'

CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac03'

CRS-2677: Stop of 'ora.ASMDATA.dg' on 'rac03' succeeded

CRS-2677: Stop of 'ora.FRAVOL.dg' on 'rac03' succeeded

CRS-2677: Stop of 'ora.OCR.dg' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac03'

CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac03' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'rac03'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac03'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac03'

CRS-2673: Attempting to stop 'ora.asm' on 'rac03'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac03'

CRS-2677: Stop of 'ora.crf' on 'rac03' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac03' succeeded

CRS-2677: Stop of 'ora.evmd' on 'rac03' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac03'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac03' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac03'

CRS-2677: Stop of 'ora.cssd' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac03'

CRS-2677: Stop of 'ora.gipcd' on 'rac03' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac03'

CRS-2677: Stop of 'ora.gpnpd' on 'rac03' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac03' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Removing Trace File Analyzer

Successfully deconfigured Oracle clusterware stack on this node

If you are deleting the last remaining node in the cluster (that is, removing all nodes), add the -lastnode option:

# ./rootcrs.pl -deconfig -force -lastnode

Once the command above completes, the Grid clusterware configuration on this node has been removed.

2. Delete node rac03 from the cluster

As the grid user on rac01, check the node states:

# su - grid

[grid@rac01 ~]$ olsnodes -s -t

rac01   Active     Unpinned

rac02   Active     Unpinned

rac03   Inactive   Unpinned

Delete rac03 from one of the running nodes; here, run the following as root on rac01:

# /u01/app/11.2.0/grid/bin/crsctl delete node -n rac03

CRS-4661: Node rac03 successfully deleted.

# su - grid

$ olsnodes -t -s

rac01   Active   Unpinned

rac02   Active   Unpinned

3. Update the cluster inventory on the node being deleted

On rac03, update the cluster node list as the grid user:

# su - grid

$ cd $ORACLE_HOME/oui/bin

$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac03}" CRS=TRUE -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

4. Remove the Grid software

On rac03, run the following as the grid user to remove GRID_HOME:

# su - grid

$ cd $ORACLE_HOME/deinstall

$ ./deinstall -local

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2019-09-12_09-54-33AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /u01/app/grid

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home

The following nodes are part of this cluster: rac03

Checking for sufficient temp space availability on node(s) : 'rac03'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2019-09-12_09-54-33AM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "rac03"[rac03-vip]

>

Press Enter.

The following information can be collected by running "/sbin/ifconfig -a" on node "rac03"

Enter the IP netmask of Virtual IP "10.10.0.72" on node "rac03"[255.255.255.0]

>

Press Enter.

Enter the network interface name on which the virtual IP address "10.10.0.72" is active

>

Press Enter.

Enter an address or the name of the virtual IP[]

>

Press Enter.

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2019-09-12_09-54-33AM/logs/netdc_check2019-09-12_09-57-18-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:    (press Enter)

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2019-09-12_09-54-33AM/logs/asmcadc_check2019-09-12_09-57-21-AM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is:

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac03

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac03', and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/11.2.0/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1

Option -local will not modify any ASM configuration.

Do you want to continue (y - yes, n - no)? [n]: y    (enter y to continue)

A log of this session will be written to: '/tmp/deinstall2019-09-12_09-54-33AM/logs/deinstall_deconfig2019-09-12_09-54-49-AM.out'

Any error messages from this session will be written to: '/tmp/deinstall2019-09-12_09-54-33AM/logs/deinstall_deconfig2019-09-12_09-54-49-AM.err'

######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2019-09-12_09-54-33AM/logs/asmcadc_clean2019-09-12_09-57-25-AM.log

ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2019-09-12_09-54-33AM/logs/netdc_clean2019-09-12_09-57-25-AM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1

De-configuring listener: LISTENER

Stopping listener on node "rac03": LISTENER

Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1

Stopping listener on node "rac03": LISTENER_SCAN1

Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

De-configuring Naming Methods configuration file...

Naming Methods configuration file de-configured successfully.

De-configuring backup files...

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac03".    (Open a new terminal window, run the command below as root, and press Enter here once it completes.)

/tmp/deinstall2019-09-12_09-54-33AM/perl/bin/perl -I/tmp/deinstall2019-09-12_09-54-33AM/perl/lib -I/tmp/deinstall2019-09-12_09-54-33AM/crs/install /tmp/deinstall2019-09-12_09-54-33AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-09-12_09-54-33AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

Remove the directory: /tmp/deinstall2019-09-12_09-54-33AM on node:

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2019-09-12_09-54-33AM' on node 'rac03'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################

Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1

Oracle Clusterware is stopped and successfully de-configured on node "rac03"

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.

Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.

Successfully deleted directory '/u01/app/oraInventory' on the local node.

Successfully deleted directory '/u01/app/grid' on the local node.

Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac03' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac03' at the end of the session.

Run 'rm -rf /etc/oratab' as root on node(s) 'rac03' at the end of the session.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

During the script run, press Enter or enter y at the prompts as instructed, and run the indicated script as root in a separate window:

# /tmp/deinstall2019-09-12_09-54-33AM/perl/bin/perl -I/tmp/deinstall2019-09-12_09-54-33AM/perl/lib -I/tmp/deinstall2019-09-12_09-54-33AM/crs/install /tmp/deinstall2019-09-12_09-54-33AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-09-12_09-54-33AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2019-09-12_09-54-33AM/response/deinstall_Ora11g_gridinfrahome1.rsp

****Unable to retrieve Oracle Clusterware home.

Start Oracle Clusterware stack and try again.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

################################################################

# You must kill processes or reboot the system to properly #

# cleanup the processes started by Oracle clusterware #

################################################################

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall

error: package cvuqdisk is not installed

Successfully deconfigured Oracle clusterware stack on this node

Remove the remaining files and directories on rac03:

# rm -rf /etc/oraInst.loc

# rm -rf /opt/ORCLfmap/

# rm -rf /etc/oratab

5. Update the cluster inventory on the remaining nodes

Update the cluster node list on any remaining node; here, on rac01:

# su - grid

$ cd $ORACLE_HOME/oui/bin

$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac01,rac02}" CRS=TRUE -silent

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

6. Verify

Check that the node list contains only rac01 and rac02:

# cat /u01/app/oraInventory/ContentsXML/inventory.xml
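A quick way to see just the registered node names is to grep the NODE entries out of the file. A self-contained sketch against a small sample that mimics the inventory.xml HOME/NODE structure (the sample content and path are illustrative, not the live inventory):

```shell
# Build a sample file with the same HOME/NODE_LIST/NODE shape as inventory.xml
cat > /tmp/inventory_sample.xml <<'EOF'
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac01"/>
      <NODE NAME="rac02"/>
   </NODE_LIST>
</HOME>
EOF

# Print only the node names registered for the home
grep -o 'NODE NAME="[^"]*"' /tmp/inventory_sample.xml | cut -d'"' -f2
```

Run the same grep/cut pipeline against the real /u01/app/oraInventory/ContentsXML/inventory.xml; after a successful removal it should list only rac01 and rac02.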

Check that node rac03 has been removed successfully:

# su - grid

$ cluvfy stage -post nodedel -n rac03

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Node removal check passed

Post-check for node removal was successful.

Note: If the removal fails, re-add rac03 to the cluster and then remove it again. Usually only the Grid component needs to be re-added.
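Re-adding the Grid component follows the reverse path. A hedged sketch using the documented 11gR2 addNode.sh syntax, run as the grid user from an existing node such as rac01 (the hostnames and VIP name come from this environment; your GRID_HOME path may differ):

```shell
# Pre-check the candidate node from an existing node (run as grid on rac01)
cluvfy stage -pre nodeadd -n rac03 -verbose

# Extend the Grid home to rac03
cd /u01/app/11.2.0/grid/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac03}" \
             "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac03-vip}"

# When prompted, run root.sh on rac03 as root, then verify the addition
cluvfy stage -post nodeadd -n rac03
```

Once rac03 is back in the cluster, the removal steps in this tutorial can be repeated from the top.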

References:

https://shijieqin.github.io/2017/12/05/RAC%E5%88%A0%E9%99%A4%E8%8A%82%E7%82%B9/

http://www.findcopypaste.com/11g/deleting-a-node-from-11gr2-rac/

https://docs.oracle.com/cd/E11882_01/rac.112/e41959/adddelclusterware.htm#CWADD90995

