Environment overview
Manually back up the OCR
Remove the Oracle database
1. Delete the database instance
2. Verify the instance was deleted
3. Delete the Listener
4. Update the inventory on the node being removed
5. Remove the Oracle software
6. Update the inventory on the remaining nodes
Remove the Grid clusterware stack
1. Deconfigure the Grid stack
2. Delete node rac03 from the cluster
3. Update the cluster inventory on the node being removed
4. Remove the Grid software
5. Update the cluster inventory on the remaining nodes
6. Verify
This tutorial removes the cluster node rac03.
Environment overview
The current environment:
- The RAC cluster has three nodes, with hostnames rac01, rac02, and rac03
- The database name is orcl
- The database instance on rac03 is named orcl3
Manually back up the OCR
The OCR typically contains:
- Node membership information
- Mappings between database instances, nodes, and other resources
- ASM configuration
- Resource profiles (VIPs, services, and so on)
- Service characteristics
- Information about the processes managed by Oracle Clusterware
- Information about third-party applications controlled by CRS
# cd /u01/app/11.2.0/grid/bin/
# ./ocrconfig -manualbackup
rac01 2019/09/11 16:15:21 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20190911_161521.ocr
[root@rac01 bin]# ./ocrconfig -showbackup
rac02 2019/09/09 14:08:11 /u01/app/11.2.0/grid/cdata/rac-cluster/backup00.ocr
rac02 2019/09/09 14:08:11 /u01/app/11.2.0/grid/cdata/rac-cluster/day.ocr
rac02 2019/09/09 14:08:11 /u01/app/11.2.0/grid/cdata/rac-cluster/week.ocr
rac01 2019/09/11 16:15:21 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20190911_161521.ocr
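Before making any destructive change, it can help to record exactly which backup file you would restore from. The `-showbackup` listing above is easy to parse for the newest manual backup path; a minimal sketch, where `latest_manual_ocr_backup` is a hypothetical helper and the inlined sample mirrors the listing shown above:

```shell
# Print the path of the newest manual OCR backup from `ocrconfig -showbackup`
# output read on stdin (columns: node, date, time, path).
latest_manual_ocr_backup() {
  # manual backups are named backup_<timestamp>.ocr; sort by date/time
  # and keep the path of the most recent one
  awk '/backup_[0-9]/ {print $2, $3, $4}' | sort | tail -n 1 | awk '{print $3}'
}

# Sample listing mirroring the output above:
latest_manual_ocr_backup <<'EOF'
rac02 2019/09/09 14:08:11 /u01/app/11.2.0/grid/cdata/rac-cluster/backup00.ocr
rac01 2019/09/11 16:15:21 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20190911_161521.ocr
EOF
```

In a live session you would pipe the real command into the helper instead of the here-document.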
Remove the Oracle database
1. Delete the database instance
On rac01, run the DBCA tool to delete the orcl3 database instance:
# su - oracle
$ export DISPLAY=10.10.0.1:0.0
$ dbca
2. Verify the instance was deleted
$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome_1
Oracle user: oracle
Spfile: +ASMDATA/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: ASMDATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
$ srvctl status database -d orcl
Instance orcl1 is running on node rac01
Instance orcl2 is running on node rac02
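Beyond eyeballing the output above, the check can be scripted so that a leftover orcl3 is caught reliably. A small sketch; `instance_absent` is a hypothetical helper, and the piped lines mirror the `srvctl status database` output shown above:

```shell
# Succeed only if the named instance no longer appears in
# `srvctl status database -d orcl` output read on stdin.
instance_absent() {
  ! grep -q "Instance $1 is" -
}

# Sample mirroring the status output above:
printf '%s\n' \
  'Instance orcl1 is running on node rac01' \
  'Instance orcl2 is running on node rac02' |
  instance_absent orcl3 && echo 'orcl3 removed'
```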
3. Delete the Listener
As the oracle user on rac01, check where the listener runs. In 11gR2 the listener runs from the GRID_HOME by default:
$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home:
/u01/app/11.2.0/grid on node(s) rac01,rac02,rac03
End points: TCP:1521
If the listener runs from the ORACLE_HOME, you must disable and stop it with the commands below. Here it runs from the GRID_HOME, but running them anyway is still recommended.
Disable the listener:
$ srvctl disable listener -l LISTENER -n rac03
Stop the listener:
$ srvctl stop listener -l LISTENER -n rac03
Check the status:
$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac02,rac01
4. Update the inventory on the node being removed
Run the following on rac03, the node being removed:
# su - oracle
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac03}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Inspect the inventory file; after the update, the Oracle database home entry should list only rac03 in its node list:
$ cd /u01/app/oraInventory/ContentsXML/
$ cat inventory.xml
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac01"/>
      <NODE NAME="rac02"/>
      <NODE NAME="rac03"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0.4/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac03"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>
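The node lists in inventory.xml can also be checked without reading the raw XML. A sketch assuming the standard OUI `<NODE NAME="..."/>` format; the scratch file under /tmp is only for illustration (the real file is /u01/app/oraInventory/ContentsXML/inventory.xml):

```shell
# Print the distinct node names registered in an OUI inventory.xml.
nodes_in_inventory() {
  grep -o 'NODE NAME="[^"]*"' "$1" | cut -d'"' -f2 | sort -u
}

# Illustration against a scratch copy:
cat > /tmp/inventory_sample.xml <<'EOF'
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0.4/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac03"/>
   </NODE_LIST>
</HOME>
EOF
nodes_in_inventory /tmp/inventory_sample.xml
```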
5. Remove the Oracle software
As the oracle user on rac03, the node being removed:
If the ORACLE_HOME is shared, detach it instead of deinstalling:
$ ./runInstaller -detachHome ORACLE_HOME=Oracle_home_location
Here the home is not shared, so run the deinstall tool:
$ cd $ORACLE_HOME/deinstall
$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0.4/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: rac03
Checking for sufficient temp space availability on node(s) : 'rac03'
## [END] Install check configuration ##
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2019-09-11_05-10-14-PM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2019-09-11_05-10-17-PM.log
Database Check Configuration END
Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2019-09-11_05-10-20-PM.log
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check6856.log
Oracle Configuration Manager check END
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac03
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac03', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0.4/dbhome_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-09-11_05-10-08-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-09-11_05-10-08-PM.err'
######################## CLEAN OPERATION START ########################
Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2019-09-11_05-10-20-PM.log
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2019-09-11_05-11-22-PM.log
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2019-09-11_05-11-22-PM.log
De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean6856.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/oracle/product/11.2.0.4/dbhome_1' from the central inventory on the local node : Done
Delete directory '/u01/app/oracle/product/11.2.0.4/dbhome_1' on the local node : Done
Delete directory '/u01/app/oracle' on the local node : Failed <<<<
Failed to delete the directory '/u01/app/oracle'. The directory is in use.
Oracle Universal Installer cleanup completed with errors.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2019-09-11_05-09-23PM' on node 'rac03'
## [END] Oracle install clean ##
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0.4/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0.4/dbhome_1' on the local node.
Failed to delete directory '/u01/app/oracle' on the local node.
Oracle Universal Installer cleanup completed with errors.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
6. Update the inventory on the remaining nodes
Run the following on any remaining node; here, on rac01:
# su - oracle
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac01,rac02}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
At this point, the contents of $ORACLE_HOME on rac03 have been deleted.
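A quick sanity check on rac03 can confirm this; a sketch, using the same ORACLE_HOME path as the rest of this tutorial:

```shell
# On rac03, the database home directory should no longer exist.
DB_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
if [ ! -d "$DB_HOME" ]; then
  echo 'dbhome removed'
else
  echo "dbhome still present at $DB_HOME"
fi
```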
Remove the Grid clusterware stack
1. Deconfigure the Grid stack
As the grid user on rac01, check whether node rac03 is pinned:
# su - grid
$ olsnodes -t -s
rac01   Active  Unpinned
rac02   Active  Unpinned
rac03   Active  Unpinned
If a node shows as pinned, unpin it with:
crsctl unpin css -n rac03
Stop Enterprise Manager Database Control:
$ emctl stop dbconsole
Stop the CRS stack on rac03. As root on rac01, run:
# /u01/app/11.2.0/grid/bin/crsctl stop cluster -n rac03
CRS-2673: Attempting to stop 'ora.crsd' on 'rac03'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac03'
CRS-2673: Attempting to stop 'ora.ASMDATA.dg' on 'rac03'
CRS-2673: Attempting to stop 'ora.FRAVOL.dg' on 'rac03'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac03'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac03'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.rac03.vip' on 'rac03'
CRS-2677: Stop of 'ora.FRAVOL.dg' on 'rac03' succeeded
CRS-2677: Stop of 'ora.ASMDATA.dg' on 'rac03' succeeded
CRS-2677: Stop of 'ora.rac03.vip' on 'rac03' succeeded
CRS-2672: Attempting to start 'ora.rac03.vip' on 'rac02'
CRS-2676: Start of 'ora.rac03.vip' on 'rac02' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac03'
CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac03'
CRS-2677: Stop of 'ora.ons' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac03'
CRS-2677: Stop of 'ora.net1.network' on 'rac03' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac03' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac03'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac03'
CRS-2673: Attempting to stop 'ora.asm' on 'rac03'
CRS-2677: Stop of 'ora.evmd' on 'rac03' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac03' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac03'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac03'
CRS-2677: Stop of 'ora.cssd' on 'rac03' succeeded
$ olsnodes -t -s
rac01   Active  Unpinned
rac02   Active  Unpinned
rac03   Inactive        Unpinned
As root on rac03, run the deconfig script:
# cd /u01/app/11.2.0/grid/crs/install
# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/10.10.0.0/255.255.255.0/eth0, type static
VIP exists: /rac01-vip/10.10.0.70/10.10.0.0/255.255.255.0/eth0, hosting node rac01
VIP exists: /rac02-vip/10.10.0.71/10.10.0.0/255.255.255.0/eth0, hosting node rac02
VIP exists: /rac03-vip/10.10.0.72/10.10.0.0/255.255.255.0/eth0, hosting node rac03
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac03'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac03'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac03'
CRS-2673: Attempting to stop 'ora.ASMDATA.dg' on 'rac03'
CRS-2673: Attempting to stop 'ora.FRAVOL.dg' on 'rac03'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac03'
CRS-2677: Stop of 'ora.ASMDATA.dg' on 'rac03' succeeded
CRS-2677: Stop of 'ora.FRAVOL.dg' on 'rac03' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac03'
CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac03' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac03'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac03'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac03'
CRS-2673: Attempting to stop 'ora.asm' on 'rac03'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac03'
CRS-2677: Stop of 'ora.crf' on 'rac03' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac03' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac03' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac03'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac03' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac03'
CRS-2677: Stop of 'ora.cssd' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac03'
CRS-2677: Stop of 'ora.gipcd' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac03'
CRS-2677: Stop of 'ora.gpnpd' on 'rac03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
If you are deleting the last remaining node of the cluster (that is, removing all nodes), add the -lastnode option:
# ./rootcrs.pl -deconfig -force -lastnode
After the command above completes, the clusterware configuration on rac03 has been removed; the Grid software itself is deleted in step 4 below.
2. Delete node rac03 from the cluster
As the grid user on rac01, check the node states:
# su - grid
[grid@rac01 ~]$ olsnodes -s -t
rac01   Active  Unpinned
rac02   Active  Unpinned
rac03   Inactive        Unpinned
Delete rac03 from one of the surviving nodes; here, as root on rac01:
# /u01/app/11.2.0/grid/bin/crsctl delete node -n rac03
CRS-4661: Node rac03 successfully deleted.
# su - grid
$ olsnodes -t -s
rac01   Active  Unpinned
rac02   Active  Unpinned
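The two columns of `olsnodes -t -s` can be checked in one scripted pass: the deleted node should be gone and every survivor should be Active. A sketch; `node_removed_and_healthy` is a hypothetical helper, and the sample lines mirror the output above:

```shell
# Exit 0 only if the named node is absent from `olsnodes -t -s` output
# (read on stdin) and every remaining node reports Active.
node_removed_and_healthy() {
  awk -v gone="$1" '
    $1 == gone     { missing = 1 }   # deleted node still listed
    $2 != "Active" { bad = 1 }       # a survivor is not Active
    END { exit (missing || bad) }'
}

# Sample mirroring the output above:
printf 'rac01 Active Unpinned\nrac02 Active Unpinned\n' |
  node_removed_and_healthy rac03 && echo 'rac03 fully removed'
```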
3. Update the cluster inventory on the node being removed
As the grid user on rac03, update the cluster node list:
# su - grid
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac03}" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
4. Remove the Grid software
As the grid user on rac03, run the following to remove the GRID_HOME:
# su - grid
$ cd $ORACLE_HOME/deinstall
$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2019-09-12_09-54-33AM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac03
Checking for sufficient temp space availability on node(s) : 'rac03'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2019-09-12_09-54-33AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac03"[rac03-vip]
>
(press Enter)
The following information can be collected by running "/sbin/ifconfig -a" on node "rac03"
Enter the IP netmask of Virtual IP "10.10.0.72" on node "rac03"[255.255.255.0]
>
(press Enter)
Enter the network interface name on which the virtual IP address "10.10.0.72" is active
>
(press Enter)
Enter an address or the name of the virtual IP[]
>
(press Enter)
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2019-09-12_09-54-33AM/logs/netdc_check2019-09-12_09-57-18-AM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]: (press Enter)
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2019-09-12_09-54-33AM/logs/asmcadc_check2019-09-12_09-57-21-AM.log
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac03
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac03', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y (type y to continue)
A log of this session will be written to: '/tmp/deinstall2019-09-12_09-54-33AM/logs/deinstall_deconfig2019-09-12_09-54-49-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2019-09-12_09-54-33AM/logs/deinstall_deconfig2019-09-12_09-54-49-AM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2019-09-12_09-54-33AM/logs/asmcadc_clean2019-09-12_09-57-25-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2019-09-12_09-54-33AM/logs/netdc_clean2019-09-12_09-57-25-AM.log
De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1
De-configuring listener: LISTENER
Stopping listener on node "rac03": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN1
Stopping listener on node "rac03": LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac03". (Open a new terminal window, run the command below as root, and press Enter here once it finishes.)
/tmp/deinstall2019-09-12_09-54-33AM/perl/bin/perl -I/tmp/deinstall2019-09-12_09-54-33AM/perl/lib -I/tmp/deinstall2019-09-12_09-54-33AM/crs/install /tmp/deinstall2019-09-12_09-54-33AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-09-12_09-54-33AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Press Enter after you finish running the above commands
Remove the directory: /tmp/deinstall2019-09-12_09-54-33AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
Delete directory '/u01/app/oraInventory' on the local node : Done
Delete directory '/u01/app/grid' on the local node : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2019-09-12_09-54-33AM' on node 'rac03'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "rac03"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac03' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac03' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac03' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
During the run, press Enter or type y at the prompts as shown above, and in a separate window run the prompted script as root:
# /tmp/deinstall2019-09-12_09-54-33AM/perl/bin/perl -I/tmp/deinstall2019-09-12_09-54-33AM/perl/lib -I/tmp/deinstall2019-09-12_09-54-33AM/crs/install /tmp/deinstall2019-09-12_09-54-33AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-09-12_09-54-33AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2019-09-12_09-54-33AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware #
################################################################
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Remove the remaining files on rac03 as root:
# rm -rf /etc/oraInst.loc
# rm -rf /opt/ORCLfmap/
# rm -rf /etc/oratab
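The three removals above can be wrapped in a small guarded loop so that a rerun is harmless; a sketch, to be run as root on rac03, using the paths from the deinstall summary:

```shell
# Remove the leftover files the deinstall summary listed,
# skipping any that are already gone so the script is safe to re-run.
for f in /etc/oraInst.loc /opt/ORCLfmap /etc/oratab; do
  if [ -e "$f" ]; then
    rm -rf "$f"
    echo "removed $f"
  fi
done
echo 'leftover cleanup done'
```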
5. Update the cluster inventory on the remaining nodes
Update the cluster node list on any remaining node; here, on rac01:
# su - grid
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac01,rac02}" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
6. Verify
Check that the node inventory now lists only rac01 and rac02:
# cat /u01/app/oraInventory/ContentsXML/inventory.xml
Verify that node rac03 was removed successfully:
# su - grid
$ cluvfy stage -post nodedel -n rac03
Performing post-checks for node removal
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Node removal check passed
Post-check for node removal was successful.
Note: if the removal fails, add rac03 back to the cluster and then delete it again. Usually re-adding only the Grid layer is enough.
References:
https://shijieqin.github.io/2017/12/05/RAC%E5%88%A0%E9%99%A4%E8%8A%82%E7%82%B9/
http://www.findcopypaste.com/11g/deleting-a-node-from-11gr2-rac/
https://docs.oracle.com/cd/E11882_01/rac.112/e41959/adddelclusterware.htm#CWADD90995