Linux User Quick Start


Where is OpenM++

It is recommended to start with the desktop version of openM++.

You need the cluster version of openM++ to run a model on multiple computers in your network, in a cloud, or in an HPC cluster environment. OpenM++ uses MPI to run models on multiple computers. Please check the Model Run: How to Run the Model page for more details.

Run on a Linux computer

  • download and unpack openM++, e.g.:
wget http://sourceforge.net/projects/ompp/files/2018_02_05/openmpp_centos_20180205.tar.gz
tar xzf openmpp_centos_20180205.tar.gz
  • run the modelOne model with a single subsample on the local machine:
cd openmpp_centos_20180205/models/bin/
./modelOne
2017-06-06 19:24:53.0747 modelOne
2017-06-06 19:24:53.0755 One-time initialization
2017-06-06 19:24:53.0763 Run: 105
2017-06-06 19:24:53.0763 Reading Parameters
2017-06-06 19:24:53.0764 Running Simulation
2017-06-06 19:24:53.0765 Writing Output Tables
2017-06-06 19:24:53.0790 Done.
  • run the modelOne model with 16 subsamples and 4 threads:
./modelOne -OpenM.Subvalues 16 -OpenM.Threads 4
2017-06-06 19:25:38.0721 modelOne
2017-06-06 19:25:38.0728 One-time initialization
2017-06-06 19:25:38.0735 Run: 106
2017-06-06 19:25:38.0735 Reading Parameters
........................
2017-06-06 19:25:38.0906 Done.
  • run other models (e.g. NewCaseBased, NewTimeBased, RiskPaths):
./NewCaseBased -OpenM.Subvalues 32 -OpenM.Threads 4
  • run modelOne to compute the modeling task "taskOne":
./modelOne -OpenM.Subvalues 16 -OpenM.Threads 4 -OpenM.TaskName taskOne
2017-06-06 19:27:08.0401 modelOne
2017-06-06 19:27:08.0413 One-time initialization
2017-06-06 19:27:08.0421 Run: 107
2017-06-06 19:27:08.0421 Reading Parameters
........................
2017-06-06 19:27:08.0593 Run: 108
2017-06-06 19:27:08.0593 Reading Parameters
........................
2017-06-06 19:27:08.0704 Writing Output Tables
2017-06-06 19:27:08.0812 Done.
  • if a previous model run failed, for example due to a power outage, it can be "restarted":
./modelOne -OpenM.RestartRunId 1234

The output may vary depending on the stage at which the previous modelOne run failed, but it should be similar to the above.
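If the run id is unknown, it can be found in the model run log file or in the model database. Below is a minimal sketch, assuming the default SQLite database modelOne.sqlite in the current directory and the standard openM++ run_lst table:

# list model runs and their status (assumes default SQLite db and run_lst table)
sqlite3 modelOne.sqlite 'SELECT run_id, run_name, status FROM run_lst;'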

Run on multiple computers over network, in HPC cluster or cloud

  • make sure you have MPI and the g++ >= 4.8 run-time installed. For example, on RedHat (CentOS) you may need to load MPI with the following command:
module load mpi/openmpi-x86_64
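To verify the prerequisites, you can check the compiler and MPI run-time versions (exact output depends on your distribution):
g++ --version
mpiexec --version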
  • download and unpack the cluster version of openM++, e.g.:
wget http://sourceforge.net/projects/ompp/files/2018_02_05/mpi/openmpp_centos_mpi_20180205.tar.gz
tar xzf openmpp_centos_mpi_20180205.tar.gz

Please notice that the name of the cluster version archive contains _mpi_, e.g. openmpp_centos_mpi_20180205.tar.gz.

  • run the modelOne model with a single subsample on the local machine:
cd openmpp_centos_mpi_20180205/models/bin/
./modelOne_mpi
2017-06-06 19:30:52.0684 One-time initialization
2017-06-06 19:30:52.0690 Run: 105
2017-06-06 19:30:52.0690 Reading Parameters
2017-06-06 19:30:52.0691 Running Simulation
2017-06-06 19:30:52.0691 Writing Output Tables
2017-06-06 19:30:52.0716 Done.

Note: RedHat 7.3 has a bug which causes an error like: ...The /dev/hfi1_0 device failed to appear after 15.0 seconds: Connection timed out... According to RedHat, it is going to be fixed in 7.4.

  • run two instances of modelOne to compute 16 subsamples using 4 threads each:
mpiexec -n 2 modelOne_mpi -OpenM.Subvalues 16 -OpenM.Threads 4
2017-06-06 19:43:01.0486 modelOne
2017-06-06 19:43:01.0487 modelOne
2017-06-06 19:43:01.0742 Parallel run of 2 modeling processes, 4 thread(s) each
2017-06-06 19:43:01.0742 One-time initialization
2017-06-06 19:43:01.0742 One-time initialization
2017-06-06 19:43:01.0750 Run: 106
2017-06-06 19:43:01.0750 Reading Parameters
2017-06-06 19:43:01.0750 Run: 106
2017-06-06 19:43:01.0750 Reading Parameters
..........
2017-06-06 19:43:01.0800 Writing Output Tables
2017-06-06 19:43:01.0878 Done.
2017-06-06 19:43:01.0880 Done.
  • run other models (e.g. NewCaseBased, NewTimeBased, RiskPaths):
mpiexec -n 8 NewCaseBased_mpi -OpenM.Subvalues 64 -OpenM.Threads 4

It is recommended to install SLURM or Torque to simplify management of your computational resources, rather than using mpiexec directly as above. It is also possible to use Google Cloud, Amazon AWS, or Microsoft Azure, where compute nodes are available on demand.
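For example, under SLURM a model run can be submitted as a batch job. Below is a minimal sketch; the script name run-model.sh, the node counts, and the paths are assumptions and will differ on your cluster:

#!/bin/bash
#SBATCH --job-name=modelOne
#SBATCH --nodes=2               # two modeling processes, one per node
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4       # matches -OpenM.Threads 4

# hypothetical paths: adjust for your cluster
cd $HOME/openmpp_centos_mpi_20180205/models/bin
mpiexec modelOne_mpi -OpenM.Subvalues 16 -OpenM.Threads 4

Submit it with: sbatch run-model.sh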
