How to enable DNFS and set hugepages

Configuring HugePages for Oracle on Linux (x86-64)


The steps in this section are for configuring HugePages on a 64-bit Oracle Linux system running one or more Oracle Database instances.

To configure HugePages:

  1. Verify that the soft and hard memlock values (in kilobytes) configured in /etc/security/limits.conf are slightly smaller than the amount of installed memory. For example, if the system has 64GB of RAM, the values shown here would be appropriate:

     soft memlock 60397977
     hard memlock 60397977

  2. Log in as the Oracle account owner (usually oracle) and use the following command to verify the value of memlock:

    $ ulimit -l
    60397977

  3. If your system is running Oracle Database 11g or later, disable AMM by setting the values of both of the initialization parameters memory_target and memory_max_target to 0.

    If you start the Oracle Database instances with a server parameter file, which is the default if you created the database with the Database Configuration Assistant (DBCA), enter the following commands at the SQL prompt:

    SQL> alter system set memory_target=0;
    System altered.
    SQL> alter system set memory_max_target=0;
    System altered.

    If you start the Oracle Database instances with a text initialization parameter file, manually edit the file so that it contains the following entries:

    memory_target = 0
    memory_max_target = 0

  4. Verify that all the Oracle Database instances are running (including any Automatic Storage Management (ASM) instances) as they would run on the production system.

  5. Create the file hugepages_settings.sh with the following content (taken from the My Oracle Support (MOS) note 401749.1).

    #!/bin/bash
    #
    # hugepages_settings.sh
    #
    # Linux bash script to compute values for the
    # recommended HugePages/HugeTLB configuration
    #
    # Note: This script does the calculation for all shared
    # memory segments available when the script is run, whether
    # or not they are Oracle RDBMS shared memory segments.
    # Check for the kernel version
    KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
    # Find out the HugePage size
    HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}`
    # Start from 1 page to be on the safe side and guarantee at least 1 free HugePage
    NUM_PG=1
    # Cumulative number of pages required to handle the running shared memory segments
    for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"`
    do
       MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
       if [ $MIN_PG -gt 0 ]; then
          NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
       fi
    done
    # Finish with results
    case $KERN in
       '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
              echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
       '2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
        *) echo "Unrecognized kernel version $KERN. Exiting." ;;
    esac
    # End  

  6. Make the file executable, and run it to calculate the recommended value for the vm.nr_hugepages kernel parameter.

    $ chmod u+x ./hugepages_settings.sh
    $ ./hugepages_settings.sh
    .
    .
    .
    Recommended setting: vm.nr_hugepages = 22960

  7. As root, edit the file /etc/sysctl.conf and set the value of the vm.nr_hugepages parameter to the recommended value.

    vm.nr_hugepages = 22960

  8. Stop all the database instances and reboot the system.
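The memlock value in step 1 tracks installed RAM. The calculation can be sketched as follows, assuming a 90% margin (the factor and helper name are ours for illustration, not an Oracle rule):

```shell
# Hypothetical helper: given total memory in kB, print a memlock value
# slightly below it (90% here -- an assumed margin, not an Oracle rule).
calc_memlock() {
  echo $(( $1 * 90 / 100 ))
}

# On a live system, feed it MemTotal from /proc/meminfo:
#   calc_memlock "$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)"
calc_memlock 67108864    # 64 GB in kB; prints 60397977
```

This reproduces the 60397977 figure used in the example for a 64GB system.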

After rebooting the system, verify that the database instances (including any ASM instances) have started, and use the following command to display the state of the huge pages.

# grep ^Huge /proc/meminfo
HugePages_Total:   22960
HugePages_Free:     2056
HugePages_Rsvd:     2016
HugePages_Surp:        0
Hugepagesize:       2048 kB

The value of HugePages_Free should be smaller than that of HugePages_Total, and the value of HugePages_Rsvd should be greater than zero. As the database instances allocate pages dynamically and proactively as required, the sum of the HugePages_Free and HugePages_Rsvd values is likely to be smaller than the total SGA size.
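To see how much HugePages memory is actually backing the SGAs, the counters above can be combined as (HugePages_Total - HugePages_Free + HugePages_Rsvd) * Hugepagesize. A small sketch (the helper name is ours):

```shell
# Report HugePages memory in use (kB), computed from a meminfo-style file as
# (Total - Free + Rsvd) * Hugepagesize. Helper name is illustrative only.
hugepages_in_use_kb() {
  awk '/^HugePages_Total:/ {t=$2}
       /^HugePages_Free:/  {f=$2}
       /^HugePages_Rsvd:/  {r=$2}
       /^Hugepagesize:/    {sz=$2}
       END {print (t - f + r) * sz}' "$1"
}

hugepages_in_use_kb /proc/meminfo
```

With the sample counters shown above this works out to (22960 - 2056 + 2016) * 2048 = 46940160 kB, roughly 45 GB.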

If you subsequently change the amount of system memory, add or remove any database instances, or change the size of the SGA for a database instance, use hugepages_settings.sh to recalculate the value of vm.nr_hugepages, readjust the setting in /etc/sysctl.conf, and reboot the system.

In short, the steps to follow are:


First of all, shut down the database and listener.

The following steps are required:

chown root:root $ORACLE_HOME/bin/oradism

chmod 4755 $ORACLE_HOME/bin/oradism

cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk dnfs_on

The following steps configure HugePages:

$ cat /etc/security/limits.conf | grep oracle | grep memlock

oracle soft memlock 6293504

oracle hard memlock 6293504

The value needs to be tweaked to 839066.

A database bounce (restart) is then required.
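After the chown/chmod steps above, $ORACLE_HOME/bin/oradism should be setuid root (mode 4755). The mode bits can be checked with stat; shown here against a scratch file, since changing ownership to root requires root privileges:

```shell
# Verify setuid mode bits. On a real system, run stat against
# $ORACLE_HOME/bin/oradism instead of the scratch file used here.
f=$(mktemp)
chmod 4755 "$f"
stat -c '%a' "$f"    # prints 4755
rm -f "$f"
```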



Enabling and Disabling Direct NFS Client Control of NFS

By default, Direct NFS Client is installed in a disabled state with single-instance Oracle Database installations. Before enabling it, you must configure an oranfstab file. Use the following commands to enable or disable Direct NFS Client Oracle Disk Manager control of NFS.

Enabling Direct NFS Client Control of NFS

  1. Change the directory to $ORACLE_HOME/rdbms/lib.

  2. Enter the following command:


make -f ins_rdbms.mk dnfs_on

Disabling Direct NFS Client Control of NFS

  1. Log in as the Oracle software installation owner, and disable Direct NFS Client using the following commands:

     cd $ORACLE_HOME/rdbms/lib
     make -f ins_rdbms.mk dnfs_off

  2. Remove the oranfstab file.


If you remove an NFS path that an Oracle Database is using, then you must restart the database for the change to take effect.


Verifying the use of Oracle Direct NFS client

1) If dNFS is enabled, the alert.log will show the following entry when the database is started.

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0

2) Query the dNFS server information from v$dnfs_servers view inside the database.

SQL> select svrname, dirname, mntport, nfsport, wtmax, rtmax from v$dnfs_servers;

SVRNAME            DIRNAME                 MNTPORT    NFSPORT      WTMAX      RTMAX
------------------ -------------------- ---------- ---------- ---------- ----------
fb-dnfs-test-02    /rman01                    2049       2049     524288      524288

Even though dNFS is enabled, Oracle only mounts the volume/filesystem and opens the files when they are accessed. If no data files are accessed, then the above view will return no rows.
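The alert-log check in step 1 can be scripted. A sketch, assuming the usual ADR trace-directory layout (the path and helper name are our assumptions):

```shell
# Return success if the alert log contains the Direct NFS ODM banner.
dnfs_enabled_in_log() {
  grep -q 'Oracle Direct NFS ODM Library' "$1"
}

# Typical 11g+ ADR location (adjust ORACLE_BASE and the SID for your system):
#   dnfs_enabled_in_log "$ORACLE_BASE/diag/rdbms/orcl/orcl/trace/alert_orcl.log" \
#     && echo "dNFS is active"
```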

How to troubleshoot oranfstab issues

Make sure the svrname column in v$dnfs_servers shows the server name from the oranfstab file. If it does not, there is generally a misconfiguration in the oranfstab file. Look for any dangling entries and clean them up.

For example, in the following output the SVRNAME is not correct: it should be the server name from the oranfstab for all the exports under that server, but instead we see the IP addresses of the paths. This means the oranfstab is not configured correctly.

SQL> SELECT svrname, dirname, mntport, nfsport, wtmax, rtmax FROM v$dnfs_servers;

Server                    Export Name                                                         MNTPORT    NFSPORT      WTMAX      RTMAX  
------------------------- ---------------------------------------------------------------- ---------- ---------- ---------- ----------  
192.168.1.100            /orcldata01                                                             2049       2049     524288     524288  
192.168.1.101            /orcldata02                                                             2049       2049     524288     524288 
192.168.1.102            /redo01                                                                2049       2049     524288     524288  

Looking at the oranfstab file, we see the following. 

server: cd-dnfs-01
local: 192.168.1.50
path: 192.168.1.100
local: 192.168.1.50
path: 192.168.1.101
local: 192.168.1.50
path: 192.168.1.102
local: 192.168.1.50
nfs_version: nfsv3
export: /orcldata01 mount: /u02
export: /orcldata02 mount: /u03
export: /redo01    mount: /u04

After rearranging the entries, removing the dangling local entry from the oranfstab file, and restarting the database, the view shows the expected output.

server: cd-dnfs-01
local: 192.168.1.50 path: 192.168.1.100
local: 192.168.1.50 path: 192.168.1.101
local: 192.168.1.50 path: 192.168.1.102

nfs_version: nfsv3
export: /orcldata01 mount: /u02
export: /orcldata02 mount: /u03
export: /redo01    mount: /u04
SQL> SELECT svrname, dirname, mntport, nfsport, wtmax, rtmax FROM v$dnfs_servers;

Server                    Export Name                                                         MNTPORT    NFSPORT      WTMAX      RTMAX 
------------------------- ---------------------------------------------------------------- ---------- ---------- ---------- ---------- 
cd-dnfs-01                /orcldata01                                                             2049       2049     524288     524288  
cd-dnfs-01                /orcldata02                                                             2049       2049     524288     524288
cd-dnfs-01                /redo01                                                                2049       2049     524288     524288
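A dangling entry like the one above can be caught mechanically: in a well-formed oranfstab, every local: should be paired with a path:. A rough lint sketch (the helper name and the pairing heuristic are ours; it does not cover every valid oranfstab layout):

```shell
# Count "local:" and "path:" entries in an oranfstab; a mismatch usually
# means a dangling entry. Heuristic only -- not a full oranfstab parser.
check_oranfstab() {
  awk '/^local:/ {l++}
       /local:.*path:/ {p++; next}   # combined "local: ... path: ..." line
       /^path:/ {p++}
       END {if (l == p) print "OK"; else print "MISMATCH: " l " local vs " p " path"}' "$1"
}
```

Run against the broken file shown earlier it reports "MISMATCH: 4 local vs 3 path"; against the corrected file it prints "OK".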
