ANSYS Fluent on Rocky 8

During the system maintenance in January 2025, the OS was upgraded to Rocky Linux 8 and the software stack was updated as well. These changes caused compatibility issues with some versions of ANSYS Fluent. Please follow the steps below to resolve these issues and run this software on CARC's Discovery or Endeavour clusters.

  1. Install ANSYS Fluent locally, preferably in your /project directory. Please note that you need an active license key to use this software; CARC does not provide a license. However, you or your PI may reach out to your department's IT group (for instance, the Viterbi School of Engineering IT) to inquire about access.

    If you have already installed this software, there is no need to re-install it.

  2. You need to load these modules:

    module load usc 
    module load mesa
    export LD_PRELOAD=/apps/generic/gcc/13.3.0/lib64/libstdc++.so.6
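
    If the libstdc++ path above ever changes (for example, after a GCC module update), an LD_PRELOAD pointing at a missing file can break every subsequent command. A defensive variant of the export, a sketch using the same path as above:

```shell
# Sketch: guard the LD_PRELOAD export so a missing or renamed libstdc++
# fails loudly instead of silently. The path is the one from the module
# stack described above; adjust it if your cluster uses a different GCC.
PRELOAD_LIB=/apps/generic/gcc/13.3.0/lib64/libstdc++.so.6
if [ -e "$PRELOAD_LIB" ]; then
    export LD_PRELOAD="$PRELOAD_LIB"
else
    echo "warning: $PRELOAD_LIB not found; check 'module avail gcc'" >&2
fi
```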
    
  3. You also need to make some changes to resolve the compatibility issues:

  • In your project directory, where ANSYS Fluent is installed, create an empty folder for compatibility libraries. For example:

    mkdir -p /project/ttrojan_123/ansys/libfluent

  • Inside this folder, create a series of symbolic links:

    cd /project/ttrojan_123/ansys/libfluent
    ln -s /usr/lib64/libnsl.so.2 libnsl.so.1
    ln -s $OPENMPI_ROOT/lib/libmpi.so.40 libmpi.so.40
    ln -s $OPENMPI_ROOT/lib/libopen-pal.so.80 libopen-pal.so.40
    ln -s $OPENMPI_ROOT/lib/libprrte.so libopen-rte.so.40
    
  • Then, add this folder to your LD_LIBRARY_PATH. For example:

    export LD_LIBRARY_PATH=/project/ttrojan_123/ansys/libfluent:\
    $LD_LIBRARY_PATH
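
    A dangling symbolic link (for example, one created before `module load usc`, so $OPENMPI_ROOT was empty) will still appear in `ls` but will not load. The following sketch checks that each link resolves; `check_libfluent` is a helper name invented here, and the directory is the example path from above, so adjust it to your own:

```shell
# Sketch: verify each compatibility symlink resolves to a real library.
# check_libfluent is a hypothetical helper, not part of ANSYS Fluent.
check_libfluent() {
    dir="$1"
    status=0
    for lib in libnsl.so.1 libmpi.so.40 libopen-pal.so.40 libopen-rte.so.40; do
        # [ -e ] follows symlinks, so a dangling link reports as missing
        if [ -e "$dir/$lib" ]; then
            echo "ok: $lib"
        else
            echo "missing: $lib"
            status=1
        fi
    done
    return $status
}

check_libfluent /project/ttrojan_123/ansys/libfluent || echo "some links are missing or dangling"
```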
    
  4. Please refer to the Slurm file below as an example of submitting a multi-node ANSYS Fluent job:
#!/bin/sh -l
# FILENAME: ANSYS-Fluent-Test

#SBATCH --job-name="CFD-Fluent"
#SBATCH --export=ALL
#SBATCH -p epyc-64    #partition
#SBATCH -N 8          #number of nodes
#SBATCH -n 256        #total number of cpus
#SBATCH -t 48:00:00   #time-limit
#SBATCH --mem 120G    #memory per node
#SBATCH --exclusive   #if you need the entire node's resources

### BEGINNING OF EXECUTION ###
nprocs=$SLURM_NTASKS
 
### Commands to launch parallel ANSYS fluent
### Here we add the address to the compatibility layer

module load usc 
module load mesa
export LD_PRELOAD=/apps/generic/gcc/13.3.0/lib64/libstdc++.so.6
export LD_LIBRARY_PATH=/project/ttrojan_123/ansys/libfluent:$LD_LIBRARY_PATH

export ansys_fluent=/home1/username/ansys/v212/fluent/bin/fluent 

### Set resource limits
ulimit -s unlimited
ulimit -l unlimited

### Generate our own nodefile for Fluent.
MY_NODEFILE="hostlist-job$SLURM_JOB_ID"

if [ "$SLURM_PROCID" = "0" ]; then
    # Create a temporary file with the raw node list (one hostname per task)
    srun hostname -f > tmp_nodefile
    # Collapse repeated hostnames into Fluent's host:ncores format
    awk '{count[$0]++} END{for (node in count) print node":"count[node]}' tmp_nodefile > $MY_NODEFILE

    rm tmp_nodefile  # Clean up temporary file

    ### LAUNCHING ANSYS FLUENT ###
    $ansys_fluent 3ddp -t$nprocs -cnf=$MY_NODEFILE -mpi=openmpi -g -i auto.jou > output.out 2>&1

    rm $MY_NODEFILE  # Clean up
fi
  • Please make sure to modify the paths in the template above to reflect your own installation directories.
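
The awk one-liner in the template condenses srun's per-task host list into the host:ncores lines that Fluent's -cnf option expects. A standalone sketch of that transformation, with made-up hostnames standing in for srun output:

```shell
# Sketch of the nodefile transformation used in the job script above:
# srun prints one hostname per task; Fluent's -cnf file wants host:ncores.
printf 'node01\nnode01\nnode02\nnode02\nnode02\n' > tmp_nodefile
awk '{count[$0]++} END{for (node in count) print node":"count[node]}' tmp_nodefile | sort
# prints:
# node01:2
# node02:3
rm tmp_nodefile
```

(The `sort` here only makes the demonstration's output order deterministic; awk's END loop visits hosts in arbitrary order, which Fluent does not mind.)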

If you have any questions about the installation, licenses, or making the necessary changes for compatibility with the Rocky 8 OS, please send an email to carc-support@usc.edu.