SWQU Workshop Installation Instructions

Windows WSL Ubuntu 22.04


Prerequisites:

  • MUST HAVE ACCESS TO INTERNET
  • MUST BE LOGGED IN TO AN ADMIN ACCOUNT

WARNING:

If you have an operating system installed in a VirtualBox virtual machine, enabling WSL may destroy that VM. Make sure you back up your files if you use that software.

Install WSL Ubuntu 22.04:

Open a PowerShell window and enter:

wsl --install Ubuntu-22.04

You should be prompted to create a username and password for the new account.
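
To confirm that the distribution installed correctly (an optional check), you can list your WSL distributions from PowerShell:

wsl --list --verbose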

Sometimes it will be useful to access files inside your Linux operating system from Windows (e.g. to transfer files, etc.).

To access the root directory of your WSL operating system(s) through the Windows Explorer, you can type \\wsl$ into the Windows Explorer.

This will lead to a directory containing subdirectories for each WSL Linux distribution you have installed. Each of these directories is the root directory of that Linux distribution.
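
You can also open the current WSL directory in Windows Explorer directly from the Ubuntu shell (WSL can launch Windows executables such as explorer.exe):

explorer.exe .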

It is very convenient to create a shortcut to the home/<username> directory of your Linux distribution:
- Navigate to \\wsl$/<distribution>/home/ in the Windows Explorer
- Right click the <username> directory
- Choose “show more options” → “create shortcut”
- You will be prompted to create the shortcut on your desktop.

Update System Packages:

In your WSL Ubuntu shell, enter the following commands (one at a time):

sudo apt update
sudo apt upgrade
sudo apt update
sudo apt upgrade

Install System Packages:

sudo apt install bash make build-essential gfortran tar openmpi-bin libopenmpi-dev git libhdf5-dev libhdf5-openmpi-dev python3-pip hdf5-tools ffmpeg htop libblas-dev liblapack-dev qtbase5-dev xterm x11-apps

Install Python Packages:

pip3 install numpy matplotlib h5py scipy zenodo_get pandas astropy sunpy
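
To verify that the Python packages import correctly (an optional check covering the packages installed above; zenodo_get is a command-line tool and is omitted from the import test):

python3 -c "import numpy, matplotlib, h5py, scipy, pandas, astropy, sunpy; print('Python packages OK')"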

Set Up the SWQU Installation


Create the main installation folder:

mkdir swqu_workshop
mkdir swqu_workshop/cpu
cd swqu_workshop/cpu

Create a startup script:

touch load_swqu_cpu
chmod +x load_swqu_cpu

Set number of CPU threads

Edit the startup file load_swqu_cpu using a text editor (such as nano or vim) and insert this line:

export OMP_NUM_THREADS=[NUMBER_OF_CPU_THREADS]

You can use tools like lscpu and/or htop to determine [NUMBER_OF_CPU_THREADS].
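
For example, nproc prints the number of CPU threads available to the shell, and lscpu shows the core/thread layout in more detail:

nproc
lscpu | grep -E '^CPU\(s\)|Thread|Core'

If nproc reports 8, the line would read export OMP_NUM_THREADS=8.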

Source the startup script with the command:

. load_swqu_cpu
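
To confirm that the setting took effect in your current shell:

echo $OMP_NUM_THREADS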

Install Open-source Flux Transport (OFT)


Create the OFT installation folder:

mkdir oft
cd oft

Clone the git repository:

git clone --recursive https://github.com/predsci/oft.git/ .

Enter the HipFT directory and build HipFT:

cd hipft
./build_examples/build_gcc_ubuntu20.04_cpu.sh

Test the HipFT installation:

cd testsuite
./run_test_suite.sh -mpicall='mpirun -bind-to socket -np'
cd ..

Return to the swqu_workshop/cpu folder:

cd ../../

Install Solar Wind Generator (SWiG)


Create the SWiG installation folder:

mkdir swig
cd swig

Clone the git repository:

git clone --recursive https://github.com/predsci/swig.git/ .

Enter the MapFL directory and build MapFL:

cd mapfl
./build_examples/build_cpu_multithread_gcc_ubuntu20.04.sh

Test the MapFL installation:

cd example
../bin/mapfl mapfl_get_simple_open_field_map.in

If the run is successful, there should be a file called ofm.h5 in the directory.

Enter the POT3D directory and build POT3D:

cd ../../pot3d
./build_examples/build_cpu_mpi-only_gcc_ubuntu20.04.sh

Test the POT3D installation:

./validate.sh

The validation script will indicate if the run has passed.


Add the following lines to the load_swqu_cpu startup script:

export PATH="<DIR>/swqu_workshop/cpu/oft/bin:$PATH"
export PATH="<DIR>/swqu_workshop/cpu/oft/hipft/bin:$PATH"
export PATH="<DIR>/swqu_workshop/cpu/swig:$PATH"
export PATH="<DIR>/swqu_workshop/cpu/swig/bin:$PATH"
export PATH="<DIR>/swqu_workshop/cpu/swig/pot3d/bin:$PATH"
export PATH="<DIR>/swqu_workshop/cpu/swig/pot3d/scripts:$PATH"

where <DIR> is the full path to the directory in which swqu_workshop was created.
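
For example, if swqu_workshop was created in your home directory (an assumption; run pwd in that directory to find your own path), the first two lines would be:

export PATH="$HOME/swqu_workshop/cpu/oft/bin:$PATH"
export PATH="$HOME/swqu_workshop/cpu/oft/hipft/bin:$PATH"

with the remaining lines following the same pattern.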




NVIDIA GPU Installation (ADVANCED)




For reference documentation on CUDA in WSL, see: https://docs.nvidia.com/cuda/wsl-user-guide/index.html

Create the GPU installation folder:

Assuming you are in the directory where you started at the top of these instructions:

mkdir swqu_workshop/gpu
cd swqu_workshop/gpu

Create a startup script:

touch load_swqu_gpu  
chmod +x load_swqu_gpu  

Install the NVIDIA HPC SDK compiler:

curl https://developer.download.nvidia.com/hpc-sdk/ubuntu/DEB-GPG-KEY-NVIDIA-HPC-SDK | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-hpcsdk-archive-keyring.gpg
echo 'deb [signed-by=/usr/share/keyrings/nvidia-hpcsdk-archive-keyring.gpg] https://developer.download.nvidia.com/hpc-sdk/ubuntu/amd64 /' | sudo tee /etc/apt/sources.list.d/nvhpc.list
sudo apt-get update -y
sudo apt-get install -y nvhpc-24-3

Edit the startup file load_swqu_gpu using a text editor (such as nano or vim) and insert these lines:

#!/bin/bash

version=24.3
NVARCH=`uname -s`_`uname -m`; export NVARCH
NVCOMPILERS=/opt/nvidia/hpc_sdk; export NVCOMPILERS
MANPATH=$MANPATH:$NVCOMPILERS/$NVARCH/$version/compilers/man; export MANPATH
PATH=$NVCOMPILERS/$NVARCH/$version/compilers/bin:$PATH; export PATH

export PATH=$NVCOMPILERS/$NVARCH/$version/comm_libs/mpi/bin:$PATH
export MANPATH=$MANPATH:$NVCOMPILERS/$NVARCH/$version/comm_libs/mpi/man
. /opt/nvidia/hpc_sdk/Linux_x86_64/${version}/comm_libs/12.3/hpcx/hpcx-2.17.1/hpcx-mt-init-ompi.sh
hpcx_load

# If there are any issues with MPI+CUDA, one can try using the old OpenMPI v3.
# Simply comment the above 4 lines and uncomment the following 2.  
# Then, re-compile the code and try running again.

#export PATH=$NVCOMPILERS/$NVARCH/$version/comm_libs/openmpi/openmpi-3.1.5/bin:$PATH
#export MANPATH=$MANPATH:$NVCOMPILERS/$NVARCH/$version/comm_libs/openmpi/openmpi-3.1.5/man

export CC="pgcc"
export CXX="pgc++"
export FC="pgfortran"
export F90="pgfortran"
export F77="pgfortran"
export CPP=cpp

export LD_LIBRARY_PATH=/usr/lib/wsl/lib:${LD_LIBRARY_PATH}

export OMP_NUM_THREADS=[NUMBER_OF_CPU_THREADS]

Source the startup script with the command:

. load_swqu_gpu

To verify the compiler installation, the following command should now work:

nvfortran --version
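
You can also confirm that the GPU is visible from inside WSL (this assumes the NVIDIA Windows driver is installed; see the CUDA WSL guide linked above):

nvidia-smi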

Compile the HDF5 library with the NVIDIA (NV) compilers:

The HDF5 library must be compiled with the same compiler as the codes, so we need to compile it from source:

mkdir hdf5
cd hdf5
wget https://docs.hdfgroup.org/archive/support/ftp/HDF5/prev-releases/hdf5-1.8/hdf5-1.8.21/src/hdf5-1.8.21.tar.gz
tar -xf hdf5-1.8.21.tar.gz
cd hdf5-1.8.21
export CFLAGS="$CFLAGS -fPIC"
./configure --enable-fortran --enable-production
make -j <NUM_CPU_THREADS>
make install
mv hdf5/* ../
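
After the mv step, the compiled library should sit under swqu_workshop/gpu/hdf5. A quick sanity check (assuming you are still inside the hdf5-1.8.21 source directory):

ls ../lib        # should list libhdf5_fortran and related libraries
ls ../include    # should list hdf5.mod and the HDF5 headers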

Add the following line to your load_swqu_gpu startup script:

export LD_LIBRARY_PATH=[HDF5_INSTALL_DIR]/lib:${LD_LIBRARY_PATH}

where [HDF5_INSTALL_DIR] is the full path to the directory where the NV HDF5 library is installed (swqu_workshop/gpu/hdf5).
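
For example, if swqu_workshop was created in your home directory (an assumption; adjust to your own path), the line would be:

export LD_LIBRARY_PATH=$HOME/swqu_workshop/gpu/hdf5/lib:${LD_LIBRARY_PATH}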

Return to the swqu_workshop/gpu folder and source the startup script:

cd ../..
. load_swqu_gpu


Install Open-source Flux Transport (OFT):


Create the OFT installation folder (you should currently be in swqu_workshop/gpu):

mkdir oft
cd oft

Clone the git repository:

git clone --recursive https://github.com/predsci/oft.git/ .

Enter the HipFT directory and build HipFT:

cd hipft

Copy the GPU build script: cp build_examples/build_nvidia_gpu.sh .

Open build_nvidia_gpu.sh and set the HDF5 lines to be:

HDF5_INCLUDE_DIR="[HDF5_INSTALL_DIR]/include"

HDF5_LIB_DIR="[HDF5_INSTALL_DIR]/lib"

Also, set the correct compute capability ("cc") value in the GPU lines (there are two). See the CUDA Wikipedia article for a list: https://en.wikipedia.org/wiki/CUDA
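
On recent drivers you can also query the compute capability directly (an optional check; older drivers may not support this query field):

nvidia-smi --query-gpu=compute_cap --format=csv,noheader

For example, a reported value of 8.6 corresponds to cc86 in the NVHPC -gpu=ccXX notation.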

Build HipFT:

./build_nvidia_gpu.sh

Test the HipFT installation:

cd testsuite
./run_test_suite.sh
cd ..

You can run the "nvidia-smi" command in another terminal to make sure it is computing on the GPU.
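
For example, to refresh the GPU utilization display every second while the test suite runs:

watch -n 1 nvidia-smi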

Return to the swqu_workshop/gpu folder:

cd ../../

Install Solar Wind Generator (SWiG):


Create the SWiG installation folder:

mkdir swig
cd swig

Clone the git repository:

git clone --recursive https://github.com/predsci/swig.git/ .

Enter the MapFL directory and build MapFL:

cd mapfl
Copy the NVIDIA CPU build script: cp build_examples/build_cpu_multithread_nvidia_ubuntu20.04.sh .

Open build_cpu_multithread_nvidia_ubuntu20.04.sh and set the HDF5 lines as done above.

./build_cpu_multithread_nvidia_ubuntu20.04.sh

Test the MapFL installation:

cd example
../bin/mapfl mapfl_get_simple_open_field_map.in

If the run is successful, there should be a file called ofm.h5 in the directory.

Enter the POT3D directory and build POT3D:

cd ../../pot3d

Copy the GPU build script: cp build_examples/build_gpu_nvidia_ubuntu20.04.sh .

Open build_gpu_nvidia_ubuntu20.04.sh and set the HDF5 lines as done above.

Set the correct compute capability ("cc") value in the GPU lines (there are two). See the CUDA Wikipedia article for a list: https://en.wikipedia.org/wiki/CUDA

Also, insert -L/usr/lib/wsl/lib into the HDF5_LIB_FLAGS line.

Build POT3D:

./build_gpu_nvidia_ubuntu20.04.sh

Test the POT3D installation:

./validate.sh

The validation script will indicate if the run has passed.

NOTE! It is possible that the second validation run will fail even if the first one passes.
If this happens, everything is still fine and POT3D should work for SWQU.


Add the following lines to the load_swqu_gpu startup script:

export PATH="<DIR>/swqu_workshop/gpu/oft/bin:$PATH"
export PATH="<DIR>/swqu_workshop/gpu/oft/hipft/bin:$PATH"
export PATH="<DIR>/swqu_workshop/gpu/swig:$PATH"
export PATH="<DIR>/swqu_workshop/gpu/swig/bin:$PATH"
export PATH="<DIR>/swqu_workshop/gpu/swig/pot3d/bin:$PATH"
export PATH="<DIR>/swqu_workshop/gpu/swig/pot3d/scripts:$PATH"

where <DIR> is the full path to the directory in which swqu_workshop was created.