Building a Cluster on Ubuntu


Purpose: This page explains how to build a Beowulf-type computer cluster using the Linux operating system. A Beowulf cluster is "a cluster of identical (or nearly identical) computers linked together in such a way as to permit parallel processing".

Hardware:

The steps are as follows:


The first step was to do a fresh install of Ubuntu Server 11.04 on two blank machines. As part of the install process, you are prompted to create at least one user for the system, and this user will have administrative access via the sudo command. The root account is locked out by default, so everything will be done with this default user account. Keep in mind that Ubuntu Server is a command-line-only system, so all of this will be done from a text prompt. You also need identical user accounts on all machines for this to work.

I am using NFS to provide a common directory to all machines, which also means I only need to configure and install the MPICH2 software in one location. I am also using a router with DHCP to configure the network interfaces. If you use a switch or crossover cables, you will probably need to configure the network interfaces manually.
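If you do go the switch or crossover route, a minimal static configuration sketch for /etc/network/interfaces on Ubuntu 11.04 (assuming the interface is eth0 and the 192.168.1.x addresses used in the example below) would be:

auto eth0
iface eth0 inet static
    address 192.168.1.105
    netmask 255.255.255.0

Give each slave its own address, then apply the change with sudo /etc/init.d/networking restart.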

There are some things that need to be set up on each machine.

/etc/hosts

Open the /etc/hosts file on each machine and add the IP address and hostname of every machine. Also take out the line which associates 127.0.0.1 with the machine's hostname, but leave the one which says 127.0.0.1 localhost. This ensures that the loopback address only refers to the local machine.

Ex.:

127.0.0.1 localhost
192.168.1.105 master_node
192.168.1.103 slave_node

There may be other lines in this file, but these must be there for the cluster to function. Replace these IP addresses with your own.

Also modify the /etc/hosts.allow file and add the line ALL : ALL

With those steps out of the way, we can move on.

To configure the machines that will act as slave nodes:

Install package nfs-common
Install package openssh-server
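On Ubuntu both can be installed in one apt-get call, for example:

sudo apt-get install nfs-common openssh-server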


If these were already installed, Ubuntu will simply tell you they are already the newest version. Running the install commands just to be sure doesn't hurt anything.

Once nfs and the ssh server are installed, configuration of the slave nodes can proceed.

The first thing to do is set up an nfs mount in the /etc/fstab file. I am using the home directory of the default user on my main node as an nfs share, which is one reason why user accounts need to be identical across all machines. This way, every machine has the exact same home directory.

Change to the /etc directory and modify the fstab file.

You want to append a line like this to the end of the file for the NFS directory to be mounted:

master_node:/home/mpi /home/mpi nfs user,exec


That sets up the slave nodes to receive the directory exported by the master node.
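The share will then be mounted automatically at boot. To mount it immediately without rebooting (once the export on the master is in place, as described below), you can run:

sudo mount /home/mpi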

Change back to your home directory.

Now set up a public key for passwordless ssh logins. The reason for passwordless ssh is to be able to use all the nodes without having to log in to each one every time you run MPICH2. The running of the cluster should be automatic across all nodes.

Running ssh-keygen -t dsa will generate a private/public key pair.

When prompted for a passphrase, hit enter to leave it blank.

Change to the .ssh directory and do the following:

cat id_dsa.pub >> authorized_keys

Now configure openssh by modifying the /etc/ssh/sshd_config file.

Change to the /etc/ssh directory and open the sshd_config file. Make the following changes to these lines:

RSAAuthentication yes
PubkeyAuthentication yes

Uncomment this line: AuthorizedKeysFile %h/.ssh/authorized_keys

Uncomment this line and change yes to no: PasswordAuthentication yes

Set the UsePAM line to no. Also set StrictModes to no.
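Taken together, the relevant lines of sshd_config should end up reading roughly like this:

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys
PasswordAuthentication no
UsePAM no
StrictModes no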

Issue sudo /etc/init.d/ssh restart to restart the ssh server.
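Once everything is in place, a quick way to verify the passwordless setup from the master node is something like:

ssh slave_node hostname

If the keys are set up correctly, this prints the slave's hostname without asking for a password.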

If you do this on each machine you intend to use as a slave node, they should be all set.


Configuring the master node.

Install the NFS server on the master node and configure the export:

Install package nfs-kernel-server
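That is again a single apt-get call:

sudo apt-get install nfs-kernel-server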


Add this line to the /etc/exports file:

/home/mpi *(rw,insecure,sync)

Run sudo exportfs -r to export the directory to all slave nodes.
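To confirm the export is visible, you can run showmount (part of the nfs-common package) from a slave node:

showmount -e master_node

It should list /home/mpi as an exported directory.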

Installing MPICH2

Make sure you install the package build-essential on the main node, otherwise you will have no build tools or compilers.

Download MPICH2

http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads

I used stable version 1.4.1p1

Unpack the tar file.
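Assuming the download is a gzipped tarball named mpich2-1.4.1p1.tar.gz, unpacking it looks like:

tar xzf mpich2-1.4.1p1.tar.gz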

There are actually two different ways to build MPICH2. The typical ./configure, make, sudo make install works just fine, but the MPICH2 docs recommend this:

./configure 2>&1 | tee c.txt
make 2>&1 | tee m.txt
make install 2>&1 | tee mi.txt

It makes no difference to the software, but it will if you have trouble building MPICH2 and seek support from the main site. Running the commands this way generates the files c.txt, m.txt, and mi.txt, which the developers will expect you to have in order to determine the problem.

Make a new directory in your home directory for the MPICH2 install. I used mpich2.
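With the shared home directory used here, that amounts to:

mkdir /home/mpi/mpich2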

Change into the mpich2-1.4.1p1 directory and run ./configure with the options you need.

For my system the command was

./configure --prefix=/home/mpi/mpich2 --disable-f77 --disable-fc

The --disable-fc and --disable-f77 options were used since I don't have Fortran compilers installed. The --prefix option tells the install where to place the program files. Other options can be found by viewing the MPICH2 README file.


Once the configuration is done, simply do make followed by sudo make install, or use the alternative above if you wish.

Everything should now be in place.

The last steps in setting everything up are to put the mpich2 bin directory on the path so that it can be found by the system:

export PATH=/home/mpi/mpich2/bin:$PATH
export LD_LIBRARY_PATH=/home/mpi/mpich2/lib:$LD_LIBRARY_PATH

Note that sudo echo /home/mpi/mpich2/bin >> /etc/environment will fail, because the redirection is performed by your non-root shell rather than by sudo. Use tee instead:

echo /home/mpi/mpich2/bin | sudo tee -a /etc/environment

Keep in mind that /etc/environment is a list of NAME=value assignments, so you may prefer to add the directory to the existing PATH= line in that file instead.

Everything should now be installed and ready to go.

To test this, use the following commands:

which mpirun
which mpiexec
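If the path setup worked, both should resolve inside the install prefix chosen earlier, along these lines:

/home/mpi/mpich2/bin/mpirun
/home/mpi/mpich2/bin/mpiexec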

Very last thing to do is set up a hosts file for the cluster in the user directory.

The file should be named hosts and should be set up as follows:

One line for each machine in the network, listed by hostname.

Ex:

master_node
slave_node

To test the cluster, run the example program cpi

mpiexec -f hosts -n 2 ./mpich2-1.4.1p1/examples/cpi

-f hosts tells mpich2 which host file, and thus which machines, to use. -n is the number of processes to run; this is usually equal to the number of machines available, but doesn't have to be.

If all has gone well, there should be a listing of each process and where it ran followed by the program output.
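With two machines the output should look something like this (the exact digits and timing will vary):

Process 0 of 2 is on master_node
Process 1 of 2 is on slave_node
pi is approximately 3.1415926544231318, Error is 0.0000000008333387
wall clock time = 0.001462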

If not, let me know and I can help you work it out.

These steps should work as-is on any recent version of Ubuntu and probably most other Debian-based distributions. Other distributions will differ in some details, but I can provide advice for many distributions.