This tutorial provides step-by-step instructions on how to deploy a pool of Grid Appliances configured with MPICH2, an open-source implementation of the MPI (Message Passing Interface) standard. The Grid Appliance/MPICH2 virtual clusters can be used as a basis for running a variety of MPI tutorials.
su griduser
cd ~/examples/mpi
./setup.sh -m32
/mnt/local/mpich2/bin/mpicc -m32 -o HelloWorld HelloWorld.c
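For reference, HelloWorld.c is a standard MPI "Hello World" program. The sketch below is an illustrative version only, assuming the example prints one line per rank in the format shown in the job output further down; the file shipped in ~/examples/mpi may differ in its details:

    /* HelloWorld.c - minimal MPI "Hello World" (illustrative sketch;
     * the actual file in ~/examples/mpi may differ). */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        printf("Processor %d of %d: Hello World!\n", rank, size);

        MPI_Finalize();                         /* shut down the MPI runtime */
        return 0;
    }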
Before proceeding, check that your machine is connected to the public pool by running the ``condor_status`` command. You may need to wait a few minutes before the machine becomes fully operational.
./mpi_submit.py -n 2 HelloWorld
If all goes well, you will see output similar to the listing below; if an error occurs and the MPI ring fails to start, retry the command above.
griduser@C111197176:~/examples/mpi$ ./mpi_submit.py -n 2 HelloWorld
serv ipop ip = 5.111.197.176
submit condor with file tmpo21uIeMh/submit_mpi_vanilla_o21uIeMh
Submitting job(s).
Logging submit event(s).
1 job(s) submitted to cluster 5.
Waiting for 1 workers to response .... finished
host list:
['C079184183', '1', 'cndrusr1', '45556', '/var/lib/condor/execute/dir_2824']
MPD trace:
C111197176_60625 (5.111.197.176)
C079184183_43364 (5.79.184.183)
Processor 0 of 2: Hello World!
Processor 1 of 2: Hello World!
If you would like to re-create your own MPI virtual cluster outside FutureGrid (e.g., on a local cluster or on student desktops), the overall steps are: