Wednesday, September 9, 2009

Networking Requirements for Oracle Grid Infrastructure

With the release of Oracle 11gR2, Oracle has introduced a whole new concept: the Grid Infrastructure. One of the most eye-catching changes is a new set of network infrastructure requirements. It is no longer possible (well, maybe with some hacking and tricking) to just have a couple of Network Interface Cards and a bunch of IP addresses configured through the hosts file in /etc. I will try to summarize the requirements that Oracle has defined, and how to implement them:

Network Hardware Requirements

Oracle states the following requirements:

  • Each node must have at least two network adapters or network interface cards (NICs): one for the public network interface, and one for the private network interface (the interconnect). If you use multiple NICs for the public network or for the private network, the recommendation is to use NIC bonding. Public and private bonds should be kept separate.
  • The interface names Oracle will use for the network adapters for each network must be the same on all nodes. In other words: the public network adapter should, for example, be called eth0 on all nodes, and the private adapter eth1 on all nodes.
  • The public network adapter must support TCP/IP.
  • The private network adapter must support the User Datagram Protocol (UDP). High-speed network adapters and switches that support TCP/IP (minimum requirement: 1 Gigabit Ethernet) are required for the private network.
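Because the interface names must match on every node, a quick sanity check before installation can save trouble. The following sketch checks that the expected NIC names exist on the local node; the names eth0/eth1 are assumptions, so substitute your own public and private interface names and run it on each node:

```shell
#!/bin/sh
# Sketch: verify that the expected NIC names exist on this node.
# eth0/eth1 are assumptions; adjust to your environment. Run on every
# node and compare the output: it must match across all nodes.
PUBLIC_IF=eth0
PRIVATE_IF=eth1

has_iface() {
  # An interface is present if the kernel exposes it under /sys/class/net
  [ -d "/sys/class/net/$1" ]
}

for ifname in "$PUBLIC_IF" "$PRIVATE_IF"; do
  if has_iface "$ifname"; then
    echo "OK: $ifname present"
  else
    echo "MISSING: $ifname" >&2
  fi
done
```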

IP Address Requirements
Before starting the installation, you must have at least two interfaces configured on each node: one for the private IP address and one for the public IP address.

Reading the installation manual for Grid Infrastructure (of which, by the way, this article is a summary), it seems a DNS server and a DHCP server are now required to set up Oracle Grid Infrastructure.

You can configure IP addresses with one of the following options:

Oracle Grid Naming Service (GNS)
With GNS, you use a static public node address, while the VIP addresses provided by Oracle Clusterware are allocated dynamically (through DHCP). Host names are resolved within the cluster by a multicast domain name server, which is configured as part of Oracle Clusterware. If you plan to use GNS, then you are required to have a DHCP service running on the public network of the cluster, able to provide an IP address for each node's virtual IP, plus three IP addresses used by the Single Client Access Name (SCAN) for the cluster.

Fixed IP addresses, assigned on a DNS server for each node.
In other words, you can no longer trick around by defining the hosts in the /etc/hosts file. DNS is a requirement, as is DHCP. Well, that may not be the case; I will still have to test this...

IP Address Requirements with Grid Naming Service
If you enable Grid Naming Service (GNS), name resolution for any node in the cluster is delegated to the GNS server, listening on the GNS virtual IP address. You have to define this address in the DNS domain before you start the installation. The DNS must be configured to delegate resolution requests for cluster names (any names in the subdomain delegated to the cluster) to the GNS. When a request is made, GNS processes it and responds with the appropriate addresses for the name requested.
In order to use GNS, the DNS administrator must set up delegation to direct DNS resolution of an entire subdomain to the cluster. This should be done before you install Oracle Grid Infrastructure! If you enable GNS, then you must also have a DHCP service on the public network that allows the cluster to dynamically allocate the virtual IP addresses as required.

IP Address Requirements for Manual Configuration
If you choose not to enable GNS, then the public and virtual IP addresses for each node must be static IP addresses, and they must not be in use before you start the installation. Public and virtual IP addresses must be on the same subnet, but that should be no news, since that has been the case since the days VIPs were introduced (Oracle 10g).
The cluster must have the following IP addresses:

  • A public IP address for each node
  • A virtual IP address for each node
  • A single client access name (SCAN), configured on the domain name server (DNS) for round-robin resolution to at least one address (three addresses are recommended).
The single client access name (SCAN) is a name used to provide service access for clients to the cluster. One of the major benefits of the SCAN is that it enables you to add or remove nodes from the cluster without the need to reconfigure clients. This is because the SCAN is associated with the cluster as a whole, rather than with a particular node. Another benefit is location independence for the databases: client configuration does not have to depend on which nodes are running a particular database. Clients can, however, continue to access the cluster as in previous releases. The recommendation is to use the SCAN.
The SCAN addresses must be on the same subnet as the virtual IP addresses and public IP addresses. For high availability and scalability, you should configure the SCAN to use round-robin resolution to three addresses. The name for the SCAN cannot begin with a numeral. For installation to succeed, the SCAN must resolve to at least one address.
It is best not to use the hosts file to configure SCAN VIP addresses. Instead, use DNS resolution for the SCAN VIPs. If you use the hosts file to resolve the SCAN, then you will only be able to resolve it to one IP address, so you will have only one SCAN address. It could possibly be a workaround for lab circumstances, but again, I have not tested it this way yet.
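One way to see the difference is to count how many addresses the SCAN name actually resolves to: a DNS round-robin SCAN should yield three, while a hosts-file entry yields one. A minimal sketch, assuming a glibc system with getent (the SCAN name is hypothetical):

```shell
#!/bin/sh
# Sketch: count the distinct IPv4 addresses a name resolves to.
# A DNS-configured SCAN should report 3; a hosts-file SCAN reports 1.
addr_count() {
  getent ahostsv4 "$1" | awk '{print $1}' | sort -u | wc -l
}

# Example with a hypothetical SCAN name:
#   addr_count
addr_count localhost
```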

DNS Configuration for Domain Delegation to Grid Naming Service
If you plan to use GNS, then before Grid Infrastructure installation, you must configure your DNS server to forward requests for the cluster subdomain to GNS.
In order to establish this, you must use so-called delegation. This is how you configure it: in the DNS, create an address record for the GNS virtual IP address (the address you provide must be routable). Then create an NS record for the subdomain you want to delegate, pointing it to the GNS virtual IP address name.
You must also add the DNS servers to the resolv.conf file on the nodes in the cluster.
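As a sketch, assuming BIND-style zone files and a hypothetical parent domain example.com with delegated subdomain, the two records might look like this:

```
; Hypothetical records in the example.com zone
; (all names and addresses below are illustrative)
; 1) Address record for the GNS virtual IP -- must be routable:
cluster-gns.example.com.      IN A    192.0.2.155
; 2) NS record delegating the cluster subdomain to GNS:     IN NS   cluster-gns.example.com.
```

On each cluster node, /etc/resolv.conf would then list the corporate DNS server, for example `nameserver 192.0.2.1` (also hypothetical).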

Make sure that the nis entry in /etc/nsswitch.conf is at the end of the search list. For example:
hosts: files dns nis

Grid Naming Service Configuration Example
When nodes are added to the cluster, the DHCP server can provide addresses for these nodes dynamically. These addresses are then registered automatically in GNS, and GNS provides delegated resolution within the subdomain to cluster node addresses registered with GNS.
Because allocation and configuration of addresses is performed automatically with GNS, no additional configuration is required. Oracle Clusterware provides dynamic network configuration as nodes are added to or removed from the cluster. The following example should make things a bit clearer:
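A sketch of what such a setup might look like for a two-node cluster named "cluster"; all host names, domain names, and addresses below are hypothetical, chosen to match the subnets used in this article:

```
# Defined statically in the corporate DNS:
cluster-gns.example.com    192.0.2.155    # GNS VIP; subdomain
                                          # delegated to it
# Allocated via DHCP and registered dynamically in GNS:    192.0.2.101    # node 1 public    192.0.2.102    # node 2 public    192.0.2.104    # node 1 VIP    192.0.2.105    # node 2 VIP    192.0.2.110    # SCAN address 1    192.0.2.111    # SCAN address 2    192.0.2.112    # SCAN address 3
# Private interconnect (not registered in DNS or GNS):
node1-priv                 192.168.0.1
node2-priv                 192.168.0.2
```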

The example reflects a two-node cluster with GNS configured. The cluster name is "cluster"; 192.0.2 in the IP addresses represents the public network of the cluster, and 192.168.0 represents the private IP address subnet.

Manual IP Address Configuration Example
If you decide not to use GNS, then you must configure public, virtual, and private IP addresses before installation. Also, check that the default gateway is reachable.
For example, with a two-node cluster where each node has one public and one private interface, and you have defined a SCAN domain address on your DNS that resolves to one of three IP addresses, you might have the following configuration for your network interfaces:
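A sketch of such a layout; the node names and addresses are hypothetical, chosen to match the subnets used elsewhere in this article:

```
# In DNS (all static):
node1         192.0.2.101    # node 1 public
node2         192.0.2.102    # node 2 public
node1-vip     192.0.2.103    # node 1 VIP, same subnet as public
node2-vip     192.0.2.104    # node 2 VIP
cluster-scan  192.0.2.110    # SCAN, round robin over three addresses
cluster-scan  192.0.2.111
cluster-scan  192.0.2.112
# Optionally in /etc/hosts (name resolution for the interconnect only):
192.168.0.1   node1-priv
192.168.0.2   node2-priv
```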

You do not need to provide a private name for the interconnect. If you want name resolution for the interconnect, then you can configure private IP names in the hosts file or in the DNS. However, Oracle Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example), using the subnet defined as the private subnet.
The addresses to which the SCAN resolves are assigned by Oracle Clusterware, so they are not fixed to a particular node. To enable VIP failover, the configuration described above places the SCAN addresses and the public and VIP addresses of both nodes on the same subnet, 192.0.2.

Network Interface Configuration Options
An important thing to be aware of: if you use NAS storage for RAC and this storage is connected through an Ethernet network, you must have a third network interface for the NAS I/O; otherwise, you can face serious performance issues.

Enable Name Service Cache Daemon
You should enable nscd (Name Service Cache Daemon) to prevent network failures with RAC databases using Network Attached Storage or NFS mounts.
To check the current configuration, use: chkconfig --list nscd
To change the configuration, enter the following command (as root):
# chkconfig --level 35 nscd on
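If you want to check the chkconfig output from a script, a small helper can test whether a given runlevel is "on". This is a sketch that assumes the SysV-init output format used on RHEL/OEL-style systems of that era; the nscd line shown is illustrative:

```shell
#!/bin/sh
# Sketch: succeed if the given runlevel is "on" in a `chkconfig --list`
# output line.
runlevel_on() {
  # $1 = chkconfig output line, $2 = runlevel number
  case " $1 " in
    *" $2:on "*) return 0 ;;
    *)           return 1 ;;
  esac
}

# Usage sketch (the line below is illustrative, not live output):
line="nscd  0:off 1:off 2:off 3:on 4:on 5:on 6:off"
runlevel_on "$line" 3 && echo "nscd starts in runlevel 3"
```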

If you meet the requirements of either of the above methods (GNS or manual configuration), you are safe to install Oracle Grid Infrastructure.

Source: Oracle 11gR2 Grid Infrastructure Installation Guide


  1. images are not clickable :/

  2. I know, I am working on it.
    My previous post had clickable images and, as I found out, used the same upload method, but somehow these are not...

  3. Pictorial IP explanation between GNS and manual setup is helpful.


  4. First: I'm talking about a GNS setup (DHCP + DNS on public network).

    Second: it's clear to me that the public virtual IP for each node (that's needed by the cluster infrastructure) gets picked up by DHCP and gets resolved via GNS. But the Oracle documentation does not clearly explain whether the public physical IP address for each node must be set as a static IP (obviously on the same subnet as the VIP, GNS, and SCAN) or whether it may be assigned via DHCP as well.

    Any clue?

    Ciao, Dino.
    -- dAm2K

  5. What if you have three networks: a private one for the interconnect, a public one for the VIPs and connectivity from application servers, and a management network for DHCP, DNS, backups, SSH/maintenance, monitoring, etc. (the management network being the default gateway)?

    Do you know if it is possible for GNS to get DHCP addresses and DNS resolution from a different network than the public network, or to define multiple public interfaces but distinguish between the interface for VIPs and the interface for management? 11gR1 and 10gR2 had a property that could be set to specify the interface to use for VIPs, regardless of the default gateway.

  6. It sure is possible to acquire DNS servers and DHCP addresses from servers in a different subnet.
    For DHCP you should use BOOTP relay agents, which are normally provided by switches and routers.

  7. @Dino Ciuffetti: The whole idea behind this new Grid Plug 'n Play is that it doesn't matter how the servers are configured network-wise. In other words, you don't need static IPs any longer.

  8. peter schlaeger, May 3, 2010 at 6:43 PM

    Dear Arnoud

    I still have problems setting up the network for DNS (I am just an Oracle DBA). Can you publish your DNS and DHCP configuration files, plus the resolv.conf, ifcfg-eth0, and ifcfg-eth1 files, or send them to me by mail?


  9. Hi Peter,
    I removed my testing environment a long time ago. Therefore, unfortunately, I cannot share this information anymore. However, please do check out Martin Bach's weblog. He has an incredible 5-part weblog series on 11.2 RAC installation, including the DHCP and DNS part.

    The above link is part 2 in the series. It might help you on the issues you have.