Low CPU Efficiency reported with R doParallel


I have a question about CPU efficiency. I received the report below from one of my jobs, and I'm puzzled that it says the job had 0% CPU efficiency. My Slurm and R scripts are pasted further below.

The job timed out, but returned results (saved to file) as expected.

Any help fine-tuning the slurm or R code would be greatly appreciated.



Job info email:

Job ID: 11354345
Cluster: discovery
User/Group: mdonohue/mdonohue
State: TIMEOUT (exit code 0)
Nodes: 1
Cores per node: 16
CPU Utilized: 00:00:01
CPU Efficiency: 0.00% of 8-00:03:28 core-walltime
Job Wall-clock time: 12:00:13
Memory Utilized: 38.66 GB
Memory Efficiency: 141.37% of 27.34 GB


#!/bin/bash
#SBATCH --output=MCI.out
#SBATCH --job-name="MCI Simulations"
#SBATCH --mail-user=xxx@usc.edu
#SBATCH --mail-type=END,FAIL
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --mem-per-cpu=1750
#SBATCH --time=1:00:00
#SBATCH --account=mdonohue_396

module purge
module load gcc/8.3.0
module load openblas/0.3.8
module load r/4.0.0

cd /project/mdonohue_396/cLDA-NCS-PA-2/simulations/MCI

Rscript --no-save --no-restore mci-simulations2.R

R code snippet:

library(foreach)
library(doParallel)

registerDoParallel(cores = 16)
foreach(i = TODO) %dopar% { … }
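For reference, a minimal self-contained sketch of the doParallel pattern (the loop body here is a placeholder; the real indices and work come from mci-simulations2.R):

```r
library(foreach)
library(doParallel)

registerDoParallel(cores = 2)  # small worker count, just for illustration

# foreach returns the per-iteration results; .combine = c flattens them
res <- foreach(i = 1:4, .combine = c) %dopar% {
  i^2  # placeholder for the real simulation step
}

stopImplicitCluster()  # release the workers registered above
print(res)
```

Checking that a toy loop like this actually returns results (and that all cores show activity in `top`) is a quick way to confirm the parallel backend is working before submitting the full job.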

It looks like this job ran out of memory (38.66 GB used vs. the 27.34 GB requested, i.e. 16 × 1750 MB) and stalled until it hit the time limit, leaving the CPUs idle. See the output of `jobinfo 11354345`, for example. So try increasing the memory request to at least 40 GB.
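One way to do that, as a sketch (assuming you keep 16 CPUs and want roughly 40 GB total): raise --mem-per-cpu, since Slurm multiplies it by --cpus-per-task to get the job's total memory.

```
#SBATCH --cpus-per-task=16
#SBATCH --mem-per-cpu=2560   # 16 x 2560 MB = 40960 MB, about 40 GB total
```

Alternatively, --mem=40G requests the total per node directly instead of per CPU.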