1. Download the XMaaS Controller and Compute images here.
  2. Create one bootable CD or USB key (instructions) containing the Controller image and as many bootable CDs/USBs containing Compute images as needed. If you do not have extra computers for XMaaS, you can also create multiple virtual machines and attach XMaaS images to them.

    Linux: Use UNetbootin to write the image to your USB key. It also allows you to set up persistence on the USB key, or you can set up persistence later using the instructions in step 4 below.
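
    If you prefer the command line, the image can also be written directly with dd (a minimal sketch, assuming the downloaded file is named xmaas-controller.iso and your USB key appears as /dev/sdX; double-check the device name, as dd will overwrite it):

    sudo dd if=xmaas-controller.iso of=/dev/sdX bs=4M && sync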

    Windows: One way to create a bootable USB key on Windows is to use the USB installer provided at pendrivelinux.com, which you can download here.

  3. Start the XMaaS Controller node. If you created a bootable medium containing the XMaaS Controller image, insert that medium into the dedicated computer and boot from the medium. If you created a virtual machine for the Controller node, attach the XMaaS Controller image to it and start the virtual machine.
  4. Set up persistence. Create a dedicated ext3 partition labeled casper-rw. For instructions click here.

    To set up persistence in your virtual machine (the same instructions apply to a physical machine as well), run

    sudo fdisk /dev/sda
    where /dev/sda is the hard drive on which you want to create the new partition.

    In the initial menu, press n to create a new partition. If you agree with the default choices, which create a single partition spanning the whole hard drive, keep pressing ENTER until you are returned to the initial menu.

    To write the new partition table to the hard drive, press w.

    All we have to do now is to format and label the new partition. Run

    sudo mkfs.ext3 -L casper-rw /dev/sda1
    where /dev/sda1 is the newly created partition.
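
    To double-check that the partition was created and labeled correctly, you can run

    sudo blkid /dev/sda1

    and confirm that the output shows LABEL="casper-rw" and TYPE="ext3".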

  5. Reboot the Controller node. On reboot the node will be bootstrapped into a Juju environment. You can verify this by running
    watch juju status
    and waiting; bootstrapping a node takes a bit of time. On successful bootstrap you should see something like this:
      environment: "null"
      machines:
        "0":
          agent-state: started
          agent-version: 1.17.0.1
          dns-name: 10.0.0.1
          instance-id: 'manual:'
          series: precise
          hardware: arch=amd64 cpu-cores=2 mem=1995M
      services: {}
    		
  6. Start your Compute nodes. If you created bootable media containing the XMaaS Compute image, insert them into the dedicated computers and make sure each computer boots from its medium.
  7. Set up persistence on the Compute nodes. Create a dedicated ext3 partition labeled casper-rw on each machine. For instructions click here.

    To set up persistence in your virtual machine (the same instructions apply to a physical machine as well), run

    sudo fdisk /dev/sda
    where /dev/sda is the hard drive on which you want to create the new partition.

    In the initial menu, press n to create a new partition. If you agree with the default choices, which create a single partition spanning the whole hard drive, keep pressing ENTER until you are returned to the initial menu.

    To write the new partition table to the hard drive, press w.

    All we have to do now is to format and label the new partition. Run

    sudo mkfs.ext3 -L casper-rw /dev/sda1
    where /dev/sda1 is the newly created partition.
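
    If you are setting up many Compute nodes, the interactive fdisk session can be replaced with a short non-interactive sequence (a sketch, assuming /dev/sda holds no data you want to keep, since this repartitions the whole drive):

    sudo parted -s /dev/sda mklabel msdos mkpart primary ext3 1MiB 100%
    sudo mkfs.ext3 -L casper-rw /dev/sda1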

  8. Reboot the Compute nodes. On reboot each machine will automatically add itself to the Juju environment. You can verify that all nodes were correctly added to the Juju environment by running
    juju status
  9. Deploy OpenFOAM to your XMaaS Juju environment. We will deploy one unit of the openfoam-controller Juju charm to the Controller node and multiple units of the openfoam Juju charm to the Compute nodes. So let's dig in. Run
    juju deploy cs:~alesstimec/openfoam-controller --to 0
    to deploy the openfoam-controller to node 0, which is the Controller node.

    Then we deploy the openfoam charm to the first Compute node by running

    juju deploy cs:~alesstimec/openfoam --to 1
    And to deploy the openfoam charm to the remaining Compute nodes, run
    juju add-unit openfoam --to <N>
    where <N> is the number of the machine we deploy the charm to.
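
    If you have several Compute nodes, the add-unit commands can also be issued in a single shell loop (a sketch, assuming the Compute nodes were registered as machines 2 through 4 in juju status):

    for N in 2 3 4; do juju add-unit openfoam --to $N; done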

    Finally, we connect the two services, enabling the openfoam-controller to utilize all openfoam machines:

    juju add-relation openfoam-controller openfoam
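
    You can run
    watch juju status
    once more and wait until all openfoam and openfoam-controller units report agent-state: started before moving on to the next step.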

  10. Run an OpenFOAM test case on the deployed XMaaS OpenFOAM infrastructure. Download the test case we prepared (click here for download), go to http://<CONTROLLER NODE IP>/openfoam/cgi-bin/upload.pl (instructions on how to find the Controller node IP are below), click Choose file, browse to the downloaded file, and click Submit. Then sit back and watch your XMaaS HPC cluster work its magic and present you with a download link for the result.
    To find the IP address of the Controller node, open a terminal on the Controller node and run
    ifconfig eth0 | grep "inet addr"
    which will output the IP address of the Controller node.
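
    On systems where ifconfig is not available, the same information can be obtained with

    ip addr show eth0

    by reading the address listed after inet.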