Using BinAC¶
Login¶
There is one gateway that redirects to any of the login nodes in a load-balanced way. To log into a specific login node, use the ssh option `-p <port>` with the port number that corresponds to the desired login node.
| Hostname | Node type |
|---|---|
| | login to one of the three BinAC login nodes |
Host key fingerprints:

| Algorithm | Fingerprint (SHA256) |
|---|---|
| RSA | |
| ECDSA | |
| ED25519 | |
Your username for the cluster will be your ICP ID with an `st_` prefix. For example, if your ID is `ac123456`, then your BinAC username will be `st_ac123456`.
More details can be found in BinAC2/Login.
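As an example, a connection to the cluster could look like the following; the gateway hostname and port are placeholders here and need to be replaced with the values for the desired login node:

# replace <hostname> and <port> with the gateway hostname and the port of the desired login node
ssh -p <port> st_ac123456@<hostname>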
Building dependencies¶
Python¶
# last update: September 2025
module load compiler/gnu/14.2
# build CPython in a temporary directory (run from your home directory)
mkdir cpython-build
cd cpython-build
CLUSTER_PYTHON_VERSION=3.12.4
curl -L https://www.python.org/ftp/python/${CLUSTER_PYTHON_VERSION}/Python-${CLUSTER_PYTHON_VERSION}.tgz | tar xz
cd Python-${CLUSTER_PYTHON_VERSION}/
# install the interpreter into ~/bin/cpython-<version>
./configure --enable-optimizations --with-lto --prefix="${HOME}/bin/cpython-${CLUSTER_PYTHON_VERSION}"
make -j 4
make install
make clean
# remove the temporary build directory
cd "${HOME}"
rm -rf cpython-build
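To verify the installation, the freshly built interpreter can be invoked from its prefix (using the version chosen above):

# should report Python 3.12.4
"${HOME}/bin/cpython-${CLUSTER_PYTHON_VERSION}/bin/python3" --version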
CUDA¶
# last update: September 2025
module load devel/cuda/12.8
mkdir -p "${HOME}/bin/cuda-12.8/lib"
ln -s "${CUDA_HOME}/targets/x86_64-linux/lib/stubs/libcuda.so" "${HOME}/bin/cuda-12.8/lib/libcuda.so.1"
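The symlink exposes the CUDA driver stub under the name `libcuda.so.1`, which allows CUDA-enabled binaries to be linked and loaded on nodes where the NVIDIA driver is not installed, such as the login nodes. For this to work, the directory has to be visible to the dynamic linker; the ESPResSo build instructions below do this, while the job script does not set it, so the real driver library is used on the compute nodes:

# make the stub visible to the dynamic linker when building on login nodes
export LD_LIBRARY_PATH="${HOME}/bin/cuda-12.8/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"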
Building software¶
ESPResSo¶
Release 5.0-dev:
# last update: September 2025
module load compiler/gnu/14.2 \
mpi/openmpi/4.1-gnu-14.2 \
lib/boost/1.88.0-openmpi-4.1-gnu-14.2 \
lib/hdf5/1.12-gnu-14.2-openmpi-4.1 \
numlib/fftw/3.3.10-openmpi-4.1-gnu-14.2
module load devel/cuda/12.8
CLUSTER_PYTHON_VERSION=3.12.4
export PYTHON_ROOT="${HOME}/bin/cpython-${CLUSTER_PYTHON_VERSION}"
export LD_LIBRARY_PATH="${HOME}/bin/cuda-12.8/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PATH="${PYTHON_ROOT}/bin${PATH:+:$PATH}"
python3 -m venv "${HOME}/venv"
source "${HOME}/venv/bin/activate"
git clone --recursive --branch python --origin upstream \
https://github.com/espressomd/espresso.git espresso-5.0-dev
cd espresso-5.0-dev
python3 -m pip install -c "requirements.txt" cython setuptools numpy scipy vtk "cmake>=4.1"
# drop the NumPy component from the find_package(Python3) call
sed -i 's/Development NumPy/Development/' CMakeLists.txt
mkdir build
cd build
cp ../maintainer/configs/maxset.hpp myconfig.hpp
sed -i "/ADDITIONAL_CHECKS/d" myconfig.hpp
CC="${GNU_BIN_DIR}/gcc" CXX="${GNU_BIN_DIR}/g++" cmake .. \
-D CUDAToolkit_ROOT="${CUDA_HOME}" -D CMAKE_PREFIX_PATH="${FFTW_MPI_HOME}" \
-D Boost_DIR="${BOOST_HOME}/lib/cmake/Boost-1.88.0" \
-D CMAKE_BUILD_TYPE=Release \
-D ESPRESSO_BUILD_WITH_CUDA=ON -D CMAKE_CUDA_ARCHITECTURES="80;90" \
-D ESPRESSO_BUILD_WITH_CCACHE=OFF -D ESPRESSO_BUILD_WITH_WALBERLA=ON \
-D ESPRESSO_BUILD_WITH_SCAFACOS=OFF -D ESPRESSO_BUILD_WITH_HDF5=ON \
-D ESPRESSO_BUILD_WITH_FFTW=OFF \
-D ESPRESSO_BUILD_WITH_SHARED_MEMORY_PARALLELISM=OFF
make -j 4
# make the virtual environment's site-packages visible to the pypresso wrapper
sed -i 's|/src/python"|/src/python:$VIRTUAL_ENV/lib/python3.12/site-packages"|' ./pypresso
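A quick smoke test of the build, run from the build directory, is to import the module through the `pypresso` wrapper:

# should print the path of the freshly built espressomd package
./pypresso -c "import espressomd; print(espressomd.__file__)"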
Submitting jobs¶
To show which nodes are idle:
sinfo_t_idle
For an overview of all partitions and node states:
sinfo
Batch command:
sbatch job.sh
Job script:
#!/bin/bash
#SBATCH --partition=compute
#SBATCH --constraint=ib
#SBATCH --job-name=test
#SBATCH --ntasks=4
#SBATCH --ntasks-per-core=1
#SBATCH --time=00:10:00
#SBATCH --output %j.stdout
#SBATCH --error %j.stderr
# last update: September 2025
module load compiler/gnu/14.2 \
mpi/openmpi/4.1-gnu-14.2 \
lib/boost/1.88.0-openmpi-4.1-gnu-14.2 \
lib/hdf5/1.12-gnu-14.2-openmpi-4.1 \
numlib/fftw/3.3.10-openmpi-4.1-gnu-14.2
CLUSTER_PYTHON_VERSION=3.12.4
export PYTHON_ROOT="${HOME}/bin/cpython-${CLUSTER_PYTHON_VERSION}"
export PATH="${PYTHON_ROOT}/bin${PATH:+:$PATH}"
source "${HOME}/venv/bin/activate"
# paths below assume the job is submitted from the ESPResSo build directory
mpiexec --bind-to core --map-by core --report-bindings ./pypresso ../maintainer/benchmarks/lb.py
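After submission, standard Slurm commands can be used to monitor and cancel jobs:

# list your own pending and running jobs
squeue -u "$USER"
# cancel a job by its job ID
scancel <jobid>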
For communication-bound CPU jobs without GPU acceleration on partition `compute`, use `--constraint=ib` to get nodes with a fast interconnect, or `--constraint=eth` to get nodes with Ethernet. The desired partition needs to be specified via the `#SBATCH --partition` directive; without it, your job will not be allocated any resources.
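For example, the header of a CPU-only job on Ethernet nodes would contain:

#SBATCH --partition=compute
#SBATCH --constraint=eth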
BinAC has the following partitions available:
| Partition | Default Configuration | Limit |
|---|---|---|
| | ntasks=1, time=00:10:00, mem-per-cpu=1gb | time=14-00:00:00 |
| | ntasks=1, time=00:10:00, mem-per-cpu=1gb | time=30-00:00:00, nodes=10 |
| | ntasks=1, time=00:10:00, mem-per-cpu=1gb | time=14-00:00:00, gres/gpu:a100=4, gres/gpu:a30=8, gres/gpu:h200=4, MaxJobsPerUser:8 |
Source:
scontrol show partition
Refer to BinAC2/Slurm for more details on submitting job scripts on BinAC.