Oracle® Grid Infrastructure Installation Guide
11g Release 2 (11.2) for Solaris Operating System

Part Number E10816-03
4 Installing Oracle Grid Infrastructure for a Cluster

This chapter describes the procedures for installing Oracle grid infrastructure for a cluster. Oracle grid infrastructure consists of Oracle Clusterware and Automatic Storage Management. If you plan afterward to install Oracle Database with Oracle Real Application Clusters (Oracle RAC), then this is phase one of a two-phase installation.


4.1 Preparing to Install Oracle Grid Infrastructure with OUI

Before you install Oracle grid infrastructure with the installer, use the following checklist to ensure that you have all the information you will need during installation, and to ensure that you have completed all tasks that must be done before starting your installation. Check off each task in the following list as you complete it, and write down the information needed, so that you can provide it during installation.

4.2 Installing Grid Infrastructure

This section describes how to use the installer to install Oracle grid infrastructure.

4.2.1 Running OUI to Install Grid Infrastructure

Complete the following steps to install grid infrastructure (Oracle Clusterware and Automatic Storage Management) on your cluster. At any time during installation, if you have a question about what you are being asked to do, click the Help button on the OUI page.

  1. Change to the /Disk1 directory on the installation media, or to the directory where you downloaded the installation binaries, and run the runInstaller command. For example:

    $ cd /home/grid/oracle_sw/Disk1
    $ ./runInstaller
    
  2. Select Typical or Advanced installation.

  3. Provide information or run scripts as root when prompted by OUI. If root.sh fails on any of the nodes, then fix the problem, follow the steps in Section 6.4, "Deconfiguring Oracle Clusterware Without Removing Binaries," rerun root.sh on that node, and continue.

    Note:

    If you encounter an error when you run a fixup script, then you may need to delete the projects that the fixup script created for the installation user before you run the script again. See "projadd: Duplicate project name "user.grid"" in Appendix A, "Troubleshooting the Oracle Grid Infrastructure Installation Process."

    If you need assistance during installation, click Help. Click Details to see the log file.

    Note:

    You must run the root.sh script on the first node and wait for it to finish. If your cluster has four or more nodes, then root.sh can be run concurrently on all nodes but the first and last. As with the first node, the root.sh script on the last node must be run separately.
  4. After you run root.sh on all the nodes, OUI runs Net Configuration Assistant (netca) and Cluster Verification Utility. These programs run without user intervention.

  5. Automatic Storage Management Configuration Assistant (asmca) configures Oracle ASM during the installation.
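The root.sh ordering rule described in the note in step 3 can be sketched in shell. This is an illustration only: run_root_sh is a hypothetical stand-in for executing root.sh on a node (for example, over ssh), and the node names are assumptions.

```shell
#!/bin/sh
# Sketch of the root.sh ordering rule: first node alone, middle nodes
# concurrently, last node alone. run_root_sh is a hypothetical stand-in
# for running root.sh on the named node.
run_root_sh() {
  echo "root.sh completed on $1"
}

set -- node1 node2 node3 node4      # all cluster nodes, in order (assumed names)
FIRST=$1; shift
MIDDLE=""
while [ $# -gt 1 ]; do MIDDLE="$MIDDLE $1"; shift; done
LAST=$1

run_root_sh "$FIRST"                # first node: run alone and wait for it
for n in $MIDDLE; do
  run_root_sh "$n" &                # middle nodes: may run concurrently
done
wait                                # wait for all middle nodes to finish
run_root_sh "$LAST"                 # last node: run alone, after the others
```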

When you have verified that your Oracle grid infrastructure installation has completed successfully, you can either use it to maintain high availability for other applications, or you can install an Oracle database.

If you intend to install Oracle Database 11g release 2 (11.2) with Oracle RAC, then refer to Oracle Real Application Clusters Installation Guide for Solaris Operating System.

See Also:

Oracle Real Application Clusters Administration and Deployment Guide for information about using cloning and node addition procedures, and Oracle Clusterware Administration and Deployment Guide for cloning Oracle grid infrastructure

4.2.2 Installing Grid Infrastructure Using a Cluster Configuration File

During installation of grid infrastructure, you are given the option either of providing cluster configuration information manually, or of using a cluster configuration file. A cluster configuration file is a text file that you can create before starting OUI, which provides OUI with information about the cluster name and node names that it requires to configure the cluster.

Oracle suggests that you consider using a cluster configuration file if you intend to perform repeated installations on a test cluster, or if you intend to perform an installation on many nodes.

To create a cluster configuration file manually:

  1. On the installation media, navigate to the directory /response.

  2. Using a text editor, open the response file crs_install.rsp.

  3. Follow the directions in that file for creating a cluster configuration file.

4.3 Installing Grid Infrastructure Using a Software-Only Installation

Note:

Oracle recommends that only advanced users perform the software-only installation, because this method provides no validation of the installation and requires manual postinstallation steps to enable the grid infrastructure software.

A software-only installation consists of installing Oracle grid infrastructure for a cluster on one node, then running the installer on each of the other nodes that you want as cluster member nodes, and then configuring the software and joining those nodes to the cluster.

To perform a software-only installation:

4.3.1 Installing the Software Binaries

  1. Start the runInstaller command from the relevant directory on the Oracle Database 11g release 2 (11.2) installation media or download directory. For example:

    $ cd /home/grid/oracle_sw/Disk1
    $ ./runInstaller
    
  2. Complete a software-only installation of Oracle grid infrastructure on the first node.

  3. When the software has been installed, run the orainstRoot.sh script when prompted.

  4. The root.sh script output provides information about how to proceed, depending on the configuration you plan to complete in this installation. Make note of this information. If root.sh fails on any of the nodes, then fix the problem, follow the steps in Section 6.4, "Deconfiguring Oracle Clusterware Without Removing Binaries," rerun root.sh on that node, and continue.

    However, ignore the instruction in that output to run the roothas.pl script. Do not run this script until you have completed the relink, installed the software on the other nodes, and completed the other required cluster configuration steps.

  5. To relink Oracle Clusterware with the Oracle RAC option enabled, run commands similar to the following (in this example, the Grid home is /u01/app/grid/11.2.0):

    $ cd /u01/app/grid/11.2.0
    $ setenv ORACLE_HOME `pwd`
    $ cd rdbms/lib
    $ make -f ins_rdbms.mk rac_on ioracle
    
  6. On each remaining node, verify that the cluster node meets installation requirements using the command runcluvfy.sh stage -pre crsinst. Ensure that you have completed all storage and server preinstallation requirements.

  7. Use Oracle Universal Installer as described in steps 1 through 4 to install the Oracle grid infrastructure software on every remaining node that you want to include in the cluster, and complete a software-only installation of Oracle grid infrastructure on every node.

  8. If required, relink the Oracle RAC binaries as described in step 5 on every node where you installed the Oracle grid infrastructure software.
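As an illustration of the verification in step 6, the following sketch builds the Cluster Verification Utility command line for a preinstallation check. The node names are assumptions; run the resulting command from the directory containing the staged installation software.

```shell
# Hedged sketch: compose the step-6 preinstallation check command.
# node1,node2 are assumed node names; -n and -verbose are standard
# Cluster Verification Utility options.
NODE_LIST="node1,node2"
CVU_CMD="./runcluvfy.sh stage -pre crsinst -n $NODE_LIST -verbose"

# On a real system you would now run $CVU_CMD from the staging directory;
# here we only display it.
echo "$CVU_CMD"
```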

4.3.2 Configuring the Software Binaries

When you install or copy Oracle grid infrastructure software on any node, you can defer configuration for a later time. This section provides the procedure for completing configuration after the software is installed or copied on nodes.

To configure and activate a software-only Oracle grid infrastructure for a cluster installation, complete the following tasks:

  1. Using a text editor, modify the template file Grid_home/crs/install/crsconfig_params to create a parameter file for the installer to use to configure the cluster. For example:

    ORACLE_OWNER=grid
    ORA_DBA_GROUP=oinstall
    ORA_ASM_GROUP=asm
    LANGUAGE_ID='AMERICAN_AMERICA.WE8ISO8859P1'
    ORACLE_HOME=/u01/crs
    ORACLE_BASE=/u01/crsbase
    OCR_LOCATIONS=/u02/stor1/ocr,/u03/stor2/ocr
    CLUSTER_NAME=example_cluster
    HOST_NAME_LIST=node1,node2
    NODE_NAME_LIST=node1,node2
    VOTING_DISKS=/u02/stor1/vdsk,/u03/stor2/vdsk,/u04/stor3/vdsk
    CRS_STORAGE_OPTION=2
    CRS_NODEVIPS='node1-vip/255.255.252.0/eth0,node2-vip/255.255.252.0/eth0'
    NODELIST=node1,node2
    NETWORKS="eth0"/192.0.2.64:public,"eth1"/192.0.2.65:cluster_interconnect
    SCAN_NAME=example-scan.domain
    SCAN_PORT=1522
    
  2. On all nodes, place the crsconfig_params file in the path Grid_home/crs/install/crsconfig_params, where Grid_home is the path to the Oracle grid infrastructure home for a cluster. For example:

    $ cp crsconfig_params /u01/app/11.2.0/grid/crs/install/crsconfig_params
    
  3. After configuring the crsconfig_params file, log in as root, and run the script Grid_home/crs/install/rootcrs.pl, using the following syntax:

    Grid_home/perl/bin/perl -IGrid_home/perl/lib -IGrid_home/crs/install Grid_home/crs/install/rootcrs.pl

    For example:

    # /u01/app/grid/11.2.0/perl/bin/perl -I/u01/app/grid/11.2.0/perl/lib \
    -I/u01/app/grid/11.2.0/crs/install /u01/app/grid/11.2.0/crs/install/rootcrs.pl
    
  4. Using the information you noted from the root.sh script output in Section 4.3.1, "Installing the Software Binaries," run the command Grid_home/crs/install/roothas.pl or Grid_home/crs/install/rootcrs.pl as required. For example:

    # cd /u01/app/grid/11.2.0/crs/install
    # perl rootcrs.pl
    

    Use Grid_home/crs/install/roothas.pl to configure Oracle Grid Infrastructure for a standalone server. Use Grid_home/crs/install/rootcrs.pl to configure Oracle Grid Infrastructure for a cluster.

    Note:

    Oracle grid infrastructure can be used for standalone servers and for clusters. However, if you first configure Oracle grid infrastructure for a standalone server, and then decide you want to configure Oracle grid infrastructure for a cluster, then you must relink the Oracle software before you run rootcrs.pl to configure Oracle grid infrastructure for a cluster. The Install Grid Infrastructure Software Only installation option does not assume a cluster configuration, and therefore does not automatically link the Oracle RAC option.
  5. Change directory to Grid_home/oui/bin, where Grid_home is the path of the Grid Infrastructure home on each cluster member node.

  6. Enter the following command syntax, where Grid_home is the path of the Grid Infrastructure home on each cluster member node, and node_list is a comma-delimited list of nodes on which you want the software enabled:

    runInstaller -updateNodeList ORACLE_HOME=Grid_home -defaultHomeName "CLUSTER_NODES={node_list}" CRS=TRUE

    For example:

    $ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid -defaultHomeName
    "CLUSTER_NODES={node_list}" CRS=TRUE
    

    To enable the Oracle Clusterware installation on the local node only, enter the following command, where Grid_home is the Grid home on the local node, and node_list is a comma-delimited list of nodes on which you want the software enabled:

    runInstaller -updateNodeList ORACLE_HOME=Grid_home -defaultHomeName "CLUSTER_NODES={node_list}" CRS=TRUE -local

    For example:

    $ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid -defaultHomeName
    "CLUSTER_NODES={node_list}" CRS=TRUE -local
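The local-node command must be repeated on each cluster member node. The following sketch only prints the command to run on each node; the Grid home path and node names are assumptions.

```shell
# Illustrative sketch: show, for each member node, the command to run in
# Grid_home/oui/bin on that node. The Grid home path and the node names
# node1 and node2 are assumptions.
GRID_HOME=/u01/app/11.2.0/grid
NODE_LIST="node1,node2"
for node in node1 node2; do
  CMD="./runInstaller -updateNodeList ORACLE_HOME=${GRID_HOME} \"CLUSTER_NODES={${NODE_LIST}}\" CRS=TRUE -local"
  echo "run on ${node} in ${GRID_HOME}/oui/bin: ${CMD}"
done
```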
    

4.4 Confirming Oracle Clusterware Function

After installation, log in as root, and use the following command syntax on each node to confirm that Oracle Clusterware is installed and running correctly:

crsctl check crs

For example:

$ crsctl check crs
 
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
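A minimal sketch of scripting this check, assuming the sample output shown above; on a live node you would set CRS_OUT from the output of crsctl check crs instead.

```shell
# Sketch: count the "is online" lines reported by crsctl check crs.
# CRS_OUT below is the assumed sample output; on a real node use:
#   CRS_OUT=$(crsctl check crs)
CRS_OUT="CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online"

ONLINE=$(printf '%s\n' "$CRS_OUT" | grep -c "is online")
if [ "$ONLINE" -eq 4 ]; then
  echo "Oracle Clusterware healthy on this node"
fi
```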

Caution:

After installation is complete, do not manually remove, and do not allow cron jobs to remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files while Oracle Clusterware is running. If you remove these files, then Oracle Clusterware can encounter intermittent hangs, and you will encounter the error CRS-0184: Cannot communicate with the CRS daemon.

4.5 Confirming Oracle ASM Function for Oracle Clusterware Files

If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation is running:

srvctl status asm

For example:

$ srvctl status asm
ASM is running on node1,node2

Oracle ASM is running only if it is needed for Oracle Clusterware files. If you have not installed the OCR and voting disk files on Oracle ASM, then the Oracle ASM instance should be down.

Note:

To manage Oracle ASM or Oracle Net 11g release 2 (11.2) or later installations, use the srvctl binary in the Oracle grid infrastructure home for a cluster (Grid home). If you have Oracle Real Application Clusters or Oracle Database installed, then you cannot use the srvctl binary in the database home to manage Oracle ASM or Oracle Net.
