MPI Cluster
The Beowulf cluster article on Wikipedia ("Beowulf cluster", Wikipedia, 28 February 2011) describes a Beowulf as a cluster of normally identical, commodity-grade computers networked into a small local area network, running open source software that allows processing to be shared among them. Toolkits such as OSCAR[1] and the NPACI Rocks toolkit[2] can automate much of the deployment of such a cluster, and a general clustering tutorial is also available.[3] In this article, however, we build a small Beowulf cluster by hand.
What's a Beowulf Cluster?
A true Beowulf is a cluster of computers interconnected by a network, with the following characteristics:
- The nodes are dedicated to the Beowulf cluster.
- The network on which the nodes reside is dedicated to the Beowulf cluster.
- The nodes are Mass Market Commercial-Off-The-Shelf (M2COTS) computers.
- The network is also a COTS entity.
- The nodes all run open source software.
- The resulting cluster is used for High Performance Computing (HPC).
Building the physical cluster
The cluster consists of the following hardware parts:
- Network
- Server / Head / Master Node (common names for the same machine)
- Compute Nodes
- Gateway
All nodes (including the master node) run the following software:
- GNU/Linux OS
- Ubuntu Desktop/Server Edition (if the node is dedicated, use the Server Edition)
- Network File System (NFS)
- Secure Shell (SSH)
- Message Passing Interface (MPI)
- MPICH2, a high-performance and widely portable implementation of the MPI standard (it implements both MPI-1 and MPI-2)
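MPICH2 itself can be installed on all nodes from the Ubuntu package archive; the package name mpich2 is the one used in Ubuntu releases of this tutorial's era, so verify it for your release:
$ sudo apt-get install mpich2
NFS and SSH are installed in the sections that follow.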
Configuring the Nodes
Add the nodes to the hosts file
Edit the hosts file (sudo nano /etc/hosts) as shown below, and remember that you need to do this on all nodes:
127.0.0.1 localhost
192.168.1.6 node0
192.168.1.7 node1
192.168.1.8 node2
192.168.1.9 node3
Not like this:
127.0.0.1 localhost
127.0.1.1 node0
192.168.1.7 node1
192.168.1.8 node2
192.168.1.9 node3
And certainly not like this:
127.0.0.1 localhost
127.0.1.1 node0
192.168.1.6 node0
192.168.1.7 node1
192.168.1.8 node2
192.168.1.9 node3
If node0 resolves to the loopback address 127.0.1.1 on the master node itself, services running there may bind to or advertise the loopback address, which the other nodes cannot reach; each hostname should map only to the node's real IP address.
Let's test the connection:
$ ping node0
PING node0 (192.168.1.6) 56(84) bytes of data.
64 bytes from node0 (192.168.1.6): icmp_req=1 ttl=64 time=0.032 ms
64 bytes from node0 (192.168.1.6): icmp_req=2 ttl=64 time=0.033 ms
64 bytes from node0 (192.168.1.6): icmp_req=3 ttl=64 time=0.027 ms
Defining a user for running MPI jobs
Create the user like this:
$ sudo adduser mpiuser --uid 999
You may use a different user ID, as long as it is the same on all nodes: NFS identifies file owners by their numeric user ID, so a mismatch would break ownership of the shared files. Enter a password for the user when prompted. It's recommended to use the same password on all nodes so you have to remember just one password.
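You can quickly verify that the user ID is identical on every node with the standard id command:
$ id -u mpiuser
999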
Set up passwordless SSH for communication between nodes
First install the SSH server on all nodes:
$ sudo apt-get install ssh
Generate an SSH key for the MPI user on all nodes:
$ su mpiuser
$ ssh-keygen -t rsa
When asked for a passphrase, leave it empty (hence passwordless SSH).
Run the following commands on the master node as user "mpiuser":
mpiuser@node0:~$ ssh-copy-id node1
mpiuser@node0:~$ ssh-copy-id node2
mpiuser@node0:~$ ssh-copy-id node3
You should now be able to log in to the compute nodes from the master node without having to enter a password:
mpiuser@node0:~$ ssh node1
mpiuser@node1:~$ echo $HOSTNAME
node1
You should now be logged in on node1 via SSH. Make sure you're able to login to the other nodes as well.
Install and set up the Network File System
To install NFS, run the following command on all nodes:
$ sudo apt-get install nfs-kernel-server
Once installed, create a directory which will be shared with all nodes (we'll use /mirror in this tutorial). Run the following command on all nodes to create the directory:
$ sudo mkdir /mirror
Make sure this directory is owned by the MPI user, so that all MPI users can access it (again, run this command on all nodes):
$ sudo chown mpiuser:mpiuser /mirror
Now we share the contents of the /mirror directory on the master node with all other nodes. For this, the file /etc/exports on the master node needs to be edited. Add the following line to this file:
/mirror *(rw,sync,no_subtree_check)
You can read the man page to learn more about the exports file (man exports). Then restart the NFS server to load the new configuration:
node0:~$ sudo /etc/init.d/nfs-kernel-server restart
The /mirror directory should now be shared through NFS. All data files and programs that will be used for running an MPI job must be placed in this directory on the master node. The other nodes will then be able to access these files through NFS.
If a firewall is enabled (Ubuntu ships with UFW, a tool for managing the firewall), it will block clients that try to access an NFS directory, so you need to add a rule that allows access from your subnet. If the IP addresses in your network start with 192.168.1, then 192.168.1.0/24 is the subnet. Run the following command on the master node to allow incoming access from that subnet:
node0:~$ sudo ufw allow from 192.168.1.0/24
Replace 192.168.1.0/24 with the subnet of your own network.
You can then manually mount the /mirror directory on the compute nodes. Run the following commands to test this:
node1:~$ sudo mount node0:/mirror /mirror
node2:~$ sudo mount node0:/mirror /mirror
node3:~$ sudo mount node0:/mirror /mirror
But it's easier to have the /mirror directory mounted automatically when the nodes boot. For this, the file /etc/fstab needs to be edited. Add the following line to the fstab file of all compute nodes:
node0:/mirror /mirror nfs defaults 0 0
Again, read the man page of fstab if you want to know the details (man fstab). List the contents of the /mirror directory on each compute node to check whether you have access to the data on the master node:
$ ls /mirror
If this lists the files from the /mirror directory on the master node, then it's working.
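If you don't want to reboot the compute nodes after editing fstab, you can mount everything listed there right away with mount -a (run this on each compute node):
node1:~$ sudo mount -a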
Setting up MPD
MPD is the MPICH2 daemon, which takes care of starting and running jobs on the nodes. Before we can start any MPI jobs, we need to configure MPD. Two files need to be created in the home directory of the MPI user on the master node. Make sure you're logged in as the MPI user:
mpiuser@node0:~$ touch ~/mpd.hosts
mpiuser@node0:~$ touch ~/.mpd.conf
Add the names of all nodes to the mpd.hosts file, one per line:
node0
node1
node2
node3
Because node0 was added to this file, the master node is also used as a compute node.
The configuration file .mpd.conf needs to be accessible to the MPI user only (in fact, MPD refuses to work if you don't do this):
mpiuser@node0:~$ chmod 600 ~/.mpd.conf
Then add a line with a secret passphrase to the configuration file (you can use a random password generator to generate a random string of characters for the passphrase):
secretword=random_text_here
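Replace random_text_here with an actual passphrase. As one option, you could generate a random string with OpenSSL and paste the result into the file:
$ openssl rand -base64 16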
All nodes need to have a .mpd.conf file in the home folder of mpiuser with the same passphrase. We can use scp to securely copy this file from the master node to the other nodes while preserving the file permissions:
mpiuser@node0:~$ scp -p .mpd.conf node1:/home/mpiuser/
mpiuser@node0:~$ scp -p .mpd.conf node2:/home/mpiuser/
mpiuser@node0:~$ scp -p .mpd.conf node3:/home/mpiuser/
Normally, the above scp commands would ask you for a password, but since we set up passwordless SSH, they won't.
The nodes should now be configured correctly. Run the following command on the master node to start the mpd daemon on all nodes:
mpiuser@node0:~$ mpdboot -n 4
Replace 4 by the total number of nodes in your cluster; it should match the number of hosts listed in mpd.hosts, which here includes the master node. If this was successful, all nodes should now be running the mpd daemon. Run the following command to check if all nodes entered the ring (and are thus running the mpd daemon):
mpiuser@node0:~$ mpdtrace -l
This command should display a list of all nodes that entered the ring. Nodes listed here are running the mpd daemon and are ready to accept MPI jobs. This means that your cluster is now set up and ready to rock!
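When you later want to take the ring down again (for example, after changing the configuration), you can stop the mpd daemons on all nodes from the master node with mpdallexit:
mpiuser@node0:~$ mpdallexit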
Running jobs on the cluster
Once you've used the mpdtrace command to verify that the nodes have joined the ring, it's time to actually run some MPI jobs.
Running MPICH2 example applications on the cluster
The MPICH2 package comes with a few example applications that you can run on your cluster. To obtain these examples, download the MPICH2 source package from the MPICH website and extract the archive to a directory. The directory to which you extracted the MPICH2 package should contain an "examples" directory with the source code of the example applications. You need to compile these yourself.
$ cd mpich2-1.3.2p1/
$ ./configure
$ make
$ cd examples/
The example application cpi is compiled by default, so you can find the executable in the "examples" directory. Optionally you can build the other examples as well:
$ make hellow
$ make pmandel
...
Once compiled, place the executables of the examples somewhere inside the /mirror directory on the master node. It's common practice to place executables in a "bin" directory, so create the directory /mirror/bin and place the executables in this directory. The executables should now be available on all nodes.
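For example, assuming the examples were built in ~/mpich2-1.3.2p1/examples (adjust the path to wherever you extracted the source):
mpiuser@node0:~$ mkdir /mirror/bin
mpiuser@node0:~$ cp ~/mpich2-1.3.2p1/examples/cpi /mirror/bin/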
We're going to run an MPI job using the example application cpi. Make sure you're logged in as the MPI user on the master node:
$ su mpiuser
And run the job like this:
mpiuser@node0:~$ mpiexec -n 4 /mirror/bin/cpi
Replace 4 by the number of processes you want the job to run with; mpd distributes the processes over the nodes in the ring. It's important to use the absolute path to the executable in the above command, because only then does mpd know where to look for the executable on the compute nodes. The absolute path must therefore be valid on all nodes. Since /mirror is shared through NFS, all nodes have access to this path and the files within it.
The example application cpi is useful for testing because it shows which node each sub-process runs on and how long the job took. It is not useful for measuring performance, however, because it is a tiny application that takes only a few milliseconds to run. If you look at the source, you'll see that a 25-digit reference value of pi is hard coded into the program: cpi estimates pi by numerical integration and uses the hard-coded value only to report the error of its own estimate.
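If you want an example that genuinely computes pi, the following is a minimal sketch of an MPI pi estimator in C, written in the same spirit as cpi: each process integrates 4/(1+x^2) over a strided share of the interval [0,1], and rank 0 collects the partial sums with MPI_Reduce. This is an illustrative sketch (the file name pi.c is just an example), not MPICH2's own cpi.c.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    const long n = 1000000;  /* number of integration intervals */
    double h, sum = 0.0, pi = 0.0;
    long i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Midpoint rule: the integral of 4/(1+x^2) from 0 to 1 equals pi.
       Each process handles intervals i = rank, rank+size, rank+2*size, ... */
    h = 1.0 / (double)n;
    for (i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    sum *= h;

    /* Sum the partial results of all processes into pi on rank 0. */
    MPI_Reduce(&sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}

Compile it with MPICH2's compiler wrapper and run it from the shared directory:
mpiuser@node0:~$ mpicc pi.c -o /mirror/bin/pi
mpiuser@node0:~$ mpiexec -n 4 /mirror/bin/pi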
References
1. OpenClusterGroup. OSCAR.
2. Philip M. Papadopoulos, Mason J. Katz, and Greg Bruno. NPACI Rocks: Tools and Techniques for Easily Deploying Manageable Linux Clusters. Cluster 2001: IEEE International Conference on Cluster Computing, October 2001.
3. Supercomputing Facility for Bioinformatics & Computational Biology, IIT Delhi. Clustering Tutorial.
4. Ubuntu Wiki. Setting Up an MPICH2 Cluster in Ubuntu.
5. Linux.com. Building a Beowulf Cluster in just 13 steps.