This article describes the "hands-off" installation of Oracle Database 21c RAC on Oracle Linux 7 using VirtualBox and Vagrant with no additional shared disk devices.
◉ TL;DR
If you are comfortable with VirtualBox, Vagrant and RAC you might want to jump straight to the GitHub repository and use the basic instructions here.
◉ Introduction
One of the biggest obstacles preventing people from setting up test RAC environments is the requirement for shared storage. In a production environment, shared storage is often provided by a SAN or high-end NAS device, but both of these options are very expensive when all you want to do is get some experience installing and using RAC. A cheaper alternative is to use virtualization to fake the shared storage.
Using VirtualBox you can run multiple Virtual Machines (VMs) on a single server, allowing you to run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks, overcoming the obstacle of expensive shared storage.
◉ Required Software
◉ Clone Repository
Pick an area on your PC file system to act as the base for this git repository and issue the following command.
git clone https://github.com/oraclebase/vagrant.git
Copy the Oracle software under the "..../software/" directory. From the "rac/ol7_21" subdirectory, the structure should look like this.
$ tree
.
+--- config
| +--- install.env
| +--- vagrant.yml
+--- dns
| +--- scripts
| | +--- root_setup.sh
| | +--- setup.sh
| +--- Vagrantfile
+--- node1
| +--- scripts
| | +--- oracle_create_database.sh
| | +--- oracle_db_software_installation.sh
| | +--- oracle_grid_software_config.sh
| | +--- oracle_grid_software_installation.sh
| | +--- oracle_user_environment_setup.sh
| | +--- root_setup.sh
| | +--- setup.sh
| +--- Vagrantfile
+--- node2
| +--- scripts
| | +--- oracle_user_environment_setup.sh
| | +--- root_setup.sh
| | +--- setup.sh
| +--- Vagrantfile
+--- README.md
+--- shared_scripts
| +--- configure_chrony.sh
| +--- configure_hostname.sh
| +--- configure_hosts_base.sh
| +--- configure_hosts_scan.sh
| +--- configure_shared_disks.sh
| +--- install_os_packages.sh
| +--- prepare_u01_disk.sh
+--- software
| +--- LINUX.X64_213000_db_home.zip
| +--- LINUX.X64_213000_grid_home.zip
| +--- put_software_here.txt
$
When you clone the repository on Windows, it is important that you maintain the line terminators. All ".sh" scripts are run inside the Linux VMs, so they need UNIX-style line terminators. If your Git client is set to convert all files to Windows-style line terminators on a clone/pull, you will run into problems when those scripts are called from Linux.
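One way to guard against this, sketched below, is to pin LF endings for the shell scripts with a ".gitattributes" entry. This is an illustrative suggestion, not part of the repository.

```shell
# Pin UNIX (LF) line endings for all .sh files via .gitattributes, so a
# Windows Git client never converts them to CRLF on checkout.
echo "*.sh text eol=lf" >> .gitattributes

# Alternative (not run here): disable conversion globally before cloning.
#   git config --global core.autocrlf false
```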
◉ Amend File Paths
The "config" directory contains an "install.env" file and a "vagrant.yml" file. Between them, these two files contain all the configuration used for this build. You can alter the configuration of the build here, but make sure the two files stay consistent with each other.
At minimum you will have to amend the following paths in the "vagrant.yml" file, providing suitable paths for the shared disks on your PC.
asm_crs_disk_1: /u05/VirtualBox/shared/ol7_21_rac/asm_crs_disk_1.vdi
asm_crs_disk_2: /u05/VirtualBox/shared/ol7_21_rac/asm_crs_disk_2.vdi
asm_crs_disk_3: /u05/VirtualBox/shared/ol7_21_rac/asm_crs_disk_3.vdi
asm_crs_disk_size: 2
asm_data_disk_1: /u05/VirtualBox/shared/ol7_21_rac/asm_data_disk_1.vdi
asm_data_disk_size: 40
asm_reco_disk_1: /u05/VirtualBox/shared/ol7_21_rac/asm_reco_disk_1.vdi
asm_reco_disk_size: 20
For example, if you were working on a Windows PC, you might create a path called "C:\VirtualBox\shared\ol7_21_rac" and use the following settings.
asm_crs_disk_1: C:\VirtualBox\shared\ol7_21_rac\asm_crs_disk_1.vdi
asm_crs_disk_2: C:\VirtualBox\shared\ol7_21_rac\asm_crs_disk_2.vdi
asm_crs_disk_3: C:\VirtualBox\shared\ol7_21_rac\asm_crs_disk_3.vdi
asm_crs_disk_size: 2
asm_data_disk_1: C:\VirtualBox\shared\ol7_21_rac\asm_data_disk_1.vdi
asm_data_disk_size: 40
asm_reco_disk_1: C:\VirtualBox\shared\ol7_21_rac\asm_reco_disk_1.vdi
asm_reco_disk_size: 20
If you don't alter these paths on Windows, the shared disks will get written to "C:\u05\VirtualBox\shared\ol7_21_rac".
◉ Build the RAC
The following commands will leave you with a functioning RAC installation.
Start the DNS server.
cd dns
vagrant up
Start the second node of the cluster. This must be running before you start the first node.
cd ../node2
vagrant up
Ignore the "default: Host key verification failed." message at the end. That's fine.
Start the first node of the cluster. This will perform all of the installation operations. Depending on the spec of the host system, this could take a long time. On one of my servers it took about 3.5 hours to complete.
cd ../node1
vagrant up
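If you prefer a single command, the three build steps above can be wrapped in a small helper script. This is a sketch: "build_rac.sh" is a hypothetical name, not a file shipped in the repository.

```shell
# build_rac.sh - hypothetical wrapper that runs the three vagrant builds
# in the required order: DNS first, then node2, with node1 last.
cat > build_rac.sh <<'EOF'
#!/bin/bash
set -e                     # stop on the first failed build
(cd dns && vagrant up)
(cd node2 && vagrant up)   # node2 must be up before node1 starts
(cd node1 && vagrant up)   # node1 performs all the installation work
EOF
chmod +x build_rac.sh
```

Run it from the "rac/ol7_21" directory, since the paths are relative to it.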
◉ Turn Off RAC
Perform the following to turn off the RAC cleanly.
cd node2
vagrant halt
cd ../node1
vagrant halt
cd ../dns
vagrant halt
◉ Remove Whole RAC
The following commands will destroy all VMs and the associated files, so you can run the process again.
cd node2
vagrant destroy -f
cd ../node1
vagrant destroy -f
cd ../dns
vagrant destroy -f
◉ Description of the Build
From here on we will describe the sections in the build process. Remember, all parameters come from the two files described above: "config/install.env" and "config/vagrant.yml".
◉ DNS Server Build
The DNS server build is really simple. You can see an example of the vagrant output I received here. This VM took about 5 minutes to build.
The Vagrantfile contains the definition of the VirtualBox VM that will be built, using the parameters from the vagrant.yml file. The default values produce a VM with the following characteristics.
- OS : Oracle Linux 7
- VM Name : ol7_21_dns
- CPUs : 1
- RAM : 1G
- NIC 1 : NAT Network
- NIC 2 : 192.168.56.100 - An IP on the "public" network used by the cluster nodes.
- Disk 1 : The box disk with the operating system pre-installed.
The last stage of the VM build is to run the setup.sh script, which simply runs the root_setup.sh script.
The root_setup.sh script does the following.
- Sources the install.env file to make sure all environment variables for the build are present.
- Gets the latest Yum repository file.
- Adds entries to the "/etc/hosts" file in the VM for the public, private and virtual IP addresses of each RAC node, as well as the SCAN IP addresses. These are built using environment variables.
- Installs and enables Dnsmasq. This presents the contents of the "/etc/hosts" file as a DNS server, which is a much simpler alternative to using BIND for the DNS setup. It's not going to be perfect for every DNS solution, but it is really good for this.
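The hosts-file step can be sketched as follows. The variable names and values here are illustrative assumptions; the real entries are built from the variables in install.env and written to "/etc/hosts", which Dnsmasq then serves.

```shell
# Sketch of building a hosts entry from environment variables.
# NODE1_PUBLIC_IP and NODE1_HOSTNAME are hypothetical names; install.env
# defines the real variables. We write to a local example file here,
# where the real root_setup.sh script targets /etc/hosts.
NODE1_PUBLIC_IP=192.168.56.101
NODE1_HOSTNAME=ol7-21-rac1
HOSTS_FILE=./hosts.example

echo "${NODE1_PUBLIC_IP}  ${NODE1_HOSTNAME}.localdomain  ${NODE1_HOSTNAME}" >> "${HOSTS_FILE}"
```

Because Dnsmasq answers queries from "/etc/hosts" by default, every name added this way immediately becomes resolvable by the RAC nodes.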
Once the vagrant up command completes you will be left with a functioning DNS server ready for use with your RAC.
If you have any problems with the DNS build, don't continue with the RAC nodes until you fix them.
◉ RAC Node 2 Build
The basic setup of the OS is very similar for both RAC nodes, but node 2 doesn't perform any installation actions, so we need it in place before starting node 1. You can see an example of the vagrant output I received here. This VM took about 5-10 minutes to build.
The Vagrantfile contains the definition of the VirtualBox VM that will be built, using the parameters from the vagrant.yml file. The default values produce a VM with the following characteristics.
- OS : Oracle Linux 7
- VM Name : ol7_21_rac2
- CPUs : 4
- RAM : 6144M
- NIC 1 : NAT Network
- NIC 2 : 192.168.56.102 - An IP on the "public" network used by the cluster nodes.
- NIC 3 : 192.168.1.102 - An IP on the "private" network used by the cluster nodes.
- Disk 1 : The box disk with the operating system pre-installed.
- Disk 2 : A disk which will be local to the VM, used for the "/u01" mount point.
- Disk 3 : CRS disk 1.
- Disk 4 : CRS disk 2.
- Disk 5 : CRS disk 3.
- Disk 6 : DATA disk.
- Disk 7 : RECO disk.
The last stage of the VM build is to run the setup.sh script, which simply runs the root_setup.sh script.
The root_setup.sh script does the following.
- Sources the install.env file to make sure all environment variables for the build are present.
- Runs prepare_u01_disk.sh to partition the "/dev/sdb" local disk, create a file system on the resulting "/dev/sdb1" partition, add an entry to "/etc/fstab" so it mounts automatically in future, and mount it for the first time.
- Runs install_os_packages.sh to configure the Yum repositories and install the OS packages needed for the RAC node. This includes the "oracle-database-preinstall-21c" package, which does some of the OS setup for us. It also includes all packages listed in the documentation, even though some are already installed by the preinstall package. Some extra packages are included just because they are needed by the automation of the setup.
- Sets the passwords for the "root" and "oracle" users. This is pretty stupid, but we need it to make the whole process silent.
- Runs configure_hosts_base.sh to add the public, private and virtual addresses to the "/etc/hosts" file. We don't need the SCAN addresses as they are provided by the DNS. We could use the DNS to provide the local IP addresses too, but I prefer to have these here also.
- Configures "/etc/resolv.conf" to point to the DNS server VM.
- Runs configure_chrony.sh to configure Chrony, so we don't get NTP errors during the installation.
- Runs configure_shared_disks.sh to configure the shared disks used by ASM. If the ASM disks aren't already partitioned, it partitions them. It then configures UDEV, so the disks are mounted correctly in this VM after each restart.
- Runs oracle_user_environment_setup.sh to create the environment scripts for the "oracle" user, create the GRID_HOME and DB_HOME directories, and partially prepare password-less authentication (user equivalence) for the "oracle" user. This process will be completed by node 1.
- Sources the "/home/oracle/scripts/setEnv.sh" script created by the previous script.
- Runs configure_hostname.sh to set the host name for this VM.
- Partially prepares password-less authentication (user equivalence) for the "root" user. This process will be completed by node 1. This is not necessary for RAC to function, but it is necessary for the automation of this installation.
Once the vagrant up command completes you will be left with a prepared RAC node 2.
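The "/u01" disk preparation described above can be sketched as follows. The device name, file system type and entry layout are assumptions based on the description; the real prepare_u01_disk.sh may differ in detail.

```shell
# Sketch of preparing the /u01 disk. DEVICE and MOUNT mirror the values
# described in the text; the destructive steps are shown as comments only.
DEVICE=/dev/sdb1
MOUNT=/u01

# Partition and format (not run here):
#   parted /dev/sdb --script mklabel msdos mkpart primary 0% 100%
#   mkfs.xfs /dev/sdb1

# The fstab entry that makes the mount persistent would look like this.
# We append to a local example file; the real script edits /etc/fstab.
echo "${DEVICE}  ${MOUNT}  xfs  defaults  0 0" >> ./fstab.example
```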
If you have any problems with the node 2 build, don't continue with the node 1 build until you fix them.
◉ RAC Node 1 Build
The basic setup of the OS is very similar for both RAC nodes, but unlike node 2, the node 1 setup also includes the software installation and configuration actions. Remember, the DNS and node 2 VMs should be running before starting this node. You can see an example of the vagrant output I received here. This VM took about 1 hour to build.
The Vagrantfile contains the definition of the VirtualBox VM that will be built, using the parameters from the vagrant.yml file. The default values produce a VM with the following characteristics.
- OS : Oracle Linux 7
- VM Name : ol7_21_rac1
- CPUs : 4
- RAM : 7168M
- NIC 1 : NAT Network
- NIC 2 : 192.168.56.101 - An IP on the "public" network used by the cluster nodes.
- NIC 3 : 192.168.1.101 - An IP on the "private" network used by the cluster nodes.
- Disk 1 : The box disk with the operating system pre-installed.
- Disk 2 : A disk which will be local to the VM, used for the "/u01" mount point.
- Disk 3 : CRS disk 1.
- Disk 4 : CRS disk 2.
- Disk 5 : CRS disk 3.
- Disk 6 : DATA disk.
- Disk 7 : RECO disk.
The last stage of the VM build is to run the setup.sh script, which simply runs the root_setup.sh script.
The root_setup.sh script does all of the same actions as the node 2 build described above, but it also includes the installation and configuration steps. Rather than repeat the explanations of the common steps we will just focus on the differences here.
- In addition to repeating the password-less authentication (user equivalence) steps from node 2 for the "oracle" and "root" users, the node 1 root_setup.sh script uses sshpass and ssh-copy-id to complete the password-less authentication.
- Installs the "cvuqdisk" package on node 1 and node 2.
- Runs oracle_grid_software_installation.sh to perform a silent installation of the Grid Infrastructure software.
- Runs "orainstRoot.sh" on node 1, then node 2.
- Runs "root.sh" on node 1, then node 2.
- Runs oracle_grid_software_config.sh to perform a silent configuration of the Grid Infrastructure, then shows the cluster status.
- Runs oracle_db_software_installation.sh to perform a silent installation of the database software.
- Runs "root.sh" on node 1, then node 2.
- Runs oracle_create_database.sh to perform a silent creation of a database using DBCA. Once complete, it displays the status of the installed services.
Once the vagrant up command completes you will be left with a fully configured and running two-node RAC.
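The user-equivalence completion step mentioned above can be sketched as a small function. This is a sketch only: the host names, the password variable and the exact flags are assumptions; the real script's details may differ.

```shell
# Sketch of completing password-less SSH (user equivalence) for a user
# across both RAC nodes, using sshpass and ssh-copy-id as described in
# the text. Host names here are hypothetical.
setup_user_equivalence () {
  local user=$1 password=$2
  local host
  for host in ol7-21-rac1 ol7-21-rac2; do
    # Push this machine's public key to the remote user's authorized_keys,
    # supplying the password non-interactively so the build stays silent.
    sshpass -p "${password}" \
      ssh-copy-id -o StrictHostKeyChecking=no "${user}@${host}"
  done
}

# Usage (not run here): setup_user_equivalence oracle "${ORACLE_PASSWORD}"
```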
Source: oracle.com