How to create passwordless SSH connections to a server.
Step 1: Create authentication SSH keys on the source host.
ssh-keygen -t rsa
The key pair is generated in the ~/.ssh directory as id_rsa (private key) and id_rsa.pub (public key).
Press Enter at the passphrase prompt to use the key without a passphrase.
Step 2: Upload the SSH public key to the target host.
ssh-copy-id user@hostname
Step 3: Test passwordless SSH login from the source to the target.
ssh user@servername
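The three steps above can be sketched as a single script. TARGET is a placeholder for your own user@hostname (the copy and test steps are skipped if it is not set), and the key is only generated if one does not already exist:

```shell
#!/bin/sh
# Passwordless SSH setup sketch: generate a key, push it to the
# target, and verify the login. TARGET is a placeholder value.
TARGET="${TARGET:-}"

# Step 1: create an RSA key pair without a passphrase (-N "")
# if one does not already exist.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
if [ ! -f "$HOME/.ssh/id_rsa" ]; then
    ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa" -q
fi

# Steps 2 and 3: push the public key to the target, then confirm
# that login no longer prompts for a password.
if [ -n "$TARGET" ]; then
    ssh-copy-id "$TARGET"
    ssh "$TARGET" true && echo "passwordless login to $TARGET OK"
fi
```

Set TARGET and run the script once from the source host; after the first run, the ssh test in step 3 should return without any password prompt.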
For Cluster Setup
=================
Verify SSH Software is Installed
The supported version of SSH for Linux distributions is OpenSSH.
OpenSSH should be included in the Linux distribution minimal installation.
To confirm that the SSH packages are installed, run the following command on each node:
[root@node1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep ssh
openssh-askpass-4.3p2-36.el5 (x86_64)
openssh-clients-4.3p2-36.el5 (x86_64)
openssh-4.3p2-36.el5 (x86_64)
openssh-server-4.3p2-36.el5 (x86_64)
If you do not see a list of SSH packages, then install them for your Linux distribution. For example, to install from the distribution media:
[root@node1 ~]# mount -r /dev/cdrom /media/cdrom
[root@node1 ~]# cd /media/cdrom/Server
[root@node1 ~]# rpm -Uvh openssh-*
Checking Existing SSH Configuration on the System
To determine if SSH is installed and running, enter the following command:
[sid@node1 ~]$ pgrep sshd
2535
19852
If SSH is running, then the response to this command is a list of process ID number(s).
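The two verifications above can be combined into one check. Here `rpm -q` is a shorter form of the `--queryformat` query (and is skipped on non-RPM systems), while `pgrep` confirms the daemon is running:

```shell
#!/bin/sh
# Check that the OpenSSH server package is installed and that
# the sshd daemon is running.
if command -v rpm >/dev/null 2>&1; then
    if rpm -q openssh-server >/dev/null 2>&1; then
        echo "openssh-server package installed"
    else
        echo "openssh-server package NOT installed"
    fi
fi

if pgrep sshd >/dev/null 2>&1; then
    echo "sshd is running (PIDs: $(pgrep sshd | tr '\n' ' '))"
else
    echo "sshd is not running"
fi
```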
You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol,
while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA.
Configuring Passwordless SSH on Cluster Nodes
To configure passwordless SSH, you must first create RSA or DSA keys on each cluster node,
and then copy all the keys generated on all cluster node members into an authorized keys file
that is identical on each node. Note that the SSH files must be readable only by root and by the
software installation user (grid, oracle), as SSH ignores a private key file if it is accessible by others.
In the examples that follow, the DSA key is used.
You must configure passwordless SSH separately for each Oracle software installation owner that you intend
to use for installation (grid, oracle).
To configure passwordless SSH, complete the following:
Create SSH Directory, and Create SSH Keys On Each Node
Complete the following steps on each node:
Log in as the software owner (in this example, the grid user).
[root@node1 ~]# su - grid
[grid@node1 ~]$ mkdir ~/.ssh
[grid@node1 ~]$ chmod 700 ~/.ssh
[grid@node1 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): [Enter]
Enter passphrase (empty for no passphrase): [Enter]
Enter same passphrase again: [Enter]
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is: 7b:e9:e8:47:29:37:ea:10:10:c6:b6:7d:d2:73:e9:03
This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file.
Never distribute the private key to anyone not authorized to perform Oracle software installations.
Repeat these steps on each remaining node that you intend to make a member of the cluster (in this example, node2).
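The ssh-keygen dialogue above can also be run non-interactively, which is convenient when repeating it across nodes: `-N ""` supplies the empty passphrase and `-f` names the key file, so no prompts appear. KEYDIR is a scratch directory for this sketch only; on a real node the files would go in the software owner's ~/.ssh. Note that the article uses `-t dsa`, which matches the EL5-era OpenSSH shown here; recent OpenSSH releases have removed DSA support, so this sketch uses RSA:

```shell
#!/bin/sh
# Non-interactive key generation, equivalent to the interactive
# ssh-keygen dialogue above. KEYDIR is a scratch directory for
# this demonstration; on a real node it would be ~/.ssh.
KEYDIR="${TMPDIR:-/tmp}/grid_ssh_demo"
mkdir -p "$KEYDIR"
chmod 700 "$KEYDIR"

# Remove any leftover demo keys so ssh-keygen does not prompt
# to overwrite, then generate a fresh pair with no passphrase.
rm -f "$KEYDIR/id_rsa" "$KEYDIR/id_rsa.pub"
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q
ls -l "$KEYDIR"
```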
Add All Keys to a Common authorized_keys File
Now that each node contains a public and private DSA key, you need to create an authorized key file
(authorized_keys) on one of the nodes. The authorized key file is simply a single file that contains a
copy of every node's DSA public key. Once the authorized key file contains all of the public keys,
it is distributed to all other nodes in the cluster.
The user's ~/.ssh/authorized_keys file on every node must contain the contents from all of the ~/.ssh/id_dsa.pub
files that you generated on all cluster nodes.
Complete the following steps on one of the nodes in the cluster to create and then distribute the authorized key file.
From node1 (the local node), determine whether the authorized key file ~/.ssh/authorized_keys already exists in the .ssh
directory of the owner's home directory. In most cases it will not, so create it:
[grid@node1 ~]$ touch ~/.ssh/authorized_keys
[grid@node1 ~]$ ls -l ~/.ssh
total 8
-rw-r--r-- 1 grid oinstall   0 Nov 12 12:34 authorized_keys
-rw------- 1 grid oinstall 668 Nov 12 09:24 id_dsa
-rw-r--r-- 1 grid oinstall 603 Nov 12 09:24 id_dsa.pub
In the .ssh directory, you should see the id_dsa.pub keys that you have created, and the blank file authorized_keys.
On the local node (node1), use ssh to append the content of the ~/.ssh/id_dsa.pub public key
from each node in the cluster to the authorized key file just created (~/.ssh/authorized_keys).
All of this is done from node1, and you will be prompted for the user account password of each node accessed.
The following example is run from node1 and assumes a two-node cluster, with nodes node1 and node2:
[grid@node1 ~]$ ssh node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'node1 (192.168.1.151)' can't be established.
RSA key fingerprint is 2f:0d:2c:da:9f:d4:3d:2e:ea:e9:98:20:2c:b9:e8:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.1.151' (RSA) to the list of known hosts.
grid@node1's password: xxxxx
[grid@node1 ~]$ ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'node2 (192.168.1.152)' can't be established.
RSA key fingerprint is 97:ab:db:26:f6:01:20:cc:e0:63:d0:d1:73:7e:c2:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.1.152' (RSA) to the list of known hosts.
grid@node2's password: xxxxx
The first time you use SSH to connect to a node from a particular system, you will see a message similar to the following:
The authenticity of host 'node1 (192.168.1.151)' can't be established.
RSA key fingerprint is 2f:0d:2c:da:9f:d4:3d:2e:ea:e9:98:20:2c:b9:e8:f5.
Are you sure you want to continue connecting (yes/no)? yes
Enter yes at the prompt to continue. The public hostname will then be added to the
known_hosts file in the ~/.ssh directory and you will not see this message again when you connect from this system to the same node.
At this point, we have the DSA public key from every node in the cluster in the authorized key file (~/.ssh/authorized_keys)
on node1:
[grid@node1 ~]$ ls -l ~/.ssh
total 16
-rw-r--r-- 1 grid oinstall 1206 Nov 12 12:45 authorized_keys
-rw------- 1 grid oinstall  668 Nov 12 09:24 id_dsa
-rw-r--r-- 1 grid oinstall  603 Nov 12 09:24 id_dsa.pub
-rw-r--r-- 1 grid oinstall  808 Nov 12 12:45 known_hosts
We now need to copy it to the remaining nodes in the cluster. In our two-node cluster example, the only remaining node is node2.
Use the scp command to copy the authorized key file to all remaining nodes in the cluster:
[grid@node1 ~]$ scp ~/.ssh/authorized_keys node2:.ssh/authorized_keys
grid@node2's password: xxxxx
authorized_keys    100% 1206   1.2KB/s   00:00
Change the permissions of the authorized key file on each node in the cluster by logging in to that node and running the following:
[grid@node1 ~]$ chmod 600 ~/.ssh/authorized_keys
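The concatenate-and-restrict logic above can be illustrated locally. In this sketch, two echo lines stand in for the id_dsa.pub contents that would be gathered over ssh from node1 and node2, and an assumed scratch directory stands in for ~/.ssh:

```shell
#!/bin/sh
# Local illustration of building the common authorized_keys file:
# append every node's public key into one file, then lock down
# its permissions, just as the ssh ... cat >> commands above do.
WORK="${TMPDIR:-/tmp}/authkeys_demo"
mkdir -p "$WORK"

# Placeholder public keys, standing in for each node's id_dsa.pub.
echo "ssh-dss AAAAB3...key1 grid@node1" > "$WORK/keys_from_node1"
echo "ssh-dss AAAAB3...key2 grid@node2" > "$WORK/keys_from_node2"

# Same pattern as: ssh nodeN cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
cat "$WORK/keys_from_node1" "$WORK/keys_from_node2" > "$WORK/authorized_keys"
chmod 600 "$WORK/authorized_keys"
```

After this runs, the authorized_keys file contains one line per node, which is exactly the state the real file must reach on every cluster member.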
Enable SSH User Equivalency on Cluster Nodes
After you have copied the authorized_keys file that contains all public keys to each node in the cluster,
complete the steps in this section to ensure passwordless SSH connectivity between all cluster member nodes is
configured correctly.
[root@node1 ~]# su - grid
If SSH is configured correctly, you will be able to use the ssh and scp commands without being prompted
for a password or pass phrase from the terminal session:
[grid@node1 ~]$ ssh node1 "date;hostname"
Fri Nov 13 09:46:56 EST 2009
node1
[grid@node1 ~]$ ssh node2 "date;hostname"
Fri Nov 13 09:47:34 EST 2009
node2
Perform the same actions from the remaining nodes in the cluster (node2)
to ensure that they too can access all other nodes without being prompted for a password or passphrase, and get
added to the known_hosts file:
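The checks above can be wrapped in a small function for scripted verification; `-o BatchMode=yes` makes ssh fail immediately rather than prompt, so a scripted check can never hang waiting for a password. The node names passed in are whatever your cluster uses (node1 and node2 in this article):

```shell
#!/bin/sh
# Verify SSH user equivalency against each cluster node in turn.
# Returns nonzero as soon as one node still requires a password
# (or is unreachable).
check_equivalency() {
    for node in "$@"; do
        if ssh -o BatchMode=yes "$node" "date; hostname"; then
            echo "$node: user equivalency OK"
        else
            echo "$node: still prompting (or unreachable)" >&2
            return 1
        fi
    done
}

# Run from each node in turn, for example:
# check_equivalency node1 node2
```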
[grid@node1 ~]$ export DISPLAY=melody:0
[grid@node1 ~]$ ssh node2 hostname
Warning: No xauth data; using fake authentication data for X11 forwarding.
node2
Note that having X11 forwarding enabled can cause the installation to fail. To correct this problem,
create a user-level SSH client configuration file for the software owner's OS account that disables X11 forwarding:
Using a text editor, edit or create the file ~/.ssh/config
Make sure that the ForwardX11 attribute is set to no. For example, insert the following into the ~/.ssh/config file:
Host *
  ForwardX11 no
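Creating that file can be scripted as follows; run it as the software owner (grid or oracle). The chmod is included because SSH expects the client config to be private to the user:

```shell
#!/bin/sh
# Create (or extend) the user-level SSH client config described
# above, disabling X11 forwarding for all hosts, and give it the
# private permissions SSH expects.
mkdir -p "$HOME/.ssh"
cat >> "$HOME/.ssh/config" <<'EOF'
Host *
  ForwardX11 no
EOF
chmod 600 "$HOME/.ssh/config"
```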