A new command named nodeinfo is now available on the login nodes. It is a convenience script based on Slurm's sinfo command and provides an overview of partitions, nodes, and their current state.
$ nodeinfo -h
nodeinfo / Display node information by partition, CPU/GPU model, and state
Options:
  -p  Filter partitions
        * Default value shows all partitions
        * Multiple partitions can be specified with comma separation
  -s  Filter node states
        * Default value shows all nodes (in any state)
        * Common node state options include the following:
            alloc,down,drain,fail,idle,maint,mix
        * Multiple options can be specified with comma separation

Examples:
  nodeinfo
  nodeinfo -h
  nodeinfo -p epyc-64
  nodeinfo -s idle
  nodeinfo -s idle,mix
  nodeinfo -p largemem -s idle
  nodeinfo -p main,gpu -s idle,mix
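Under the hood, a script like this can be built directly on sinfo's partition and state filters together with its --format output specifiers. The following is only a minimal sketch of such a wrapper, not the installed nodeinfo script; it assumes the CPU model is published as a node feature (%f) and the GPU model as a generic resource (%G).

#!/usr/bin/env bash
# Illustrative sketch only; not the installed nodeinfo script.
# Maps -p/-s onto sinfo's --partition/--states filters and prints
# one line per (partition, feature, gres, state) group.

partition_args=""
state_args=""

while getopts "hp:s:" opt; do
  case "$opt" in
    p) partition_args="--partition=$OPTARG" ;;
    s) state_args="--states=$OPTARG" ;;
    h) echo "usage: nodeinfo [-p partitions] [-s states]"; exit 0 ;;
    *) exit 1 ;;
  esac
done

# sinfo format fields: %P partition, %f node features (CPU model here),
# %G gres (GPU model), %t state, %D node count, %c CPUs per node,
# %m memory in MB, %l time limit, %N node list
sinfo $partition_args $state_args \
      --format="%12P %14f %18G %8t %6D %5c %10m %12l %N"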
Example output on Discovery:
$ nodeinfo
---------------------------------------------------------------------------------------------------------------------
Partition CPU model GPU model State Nodes CPUs Memory(MB) Timelimit Nodelist
------------ ------------- ------------------ ------ ----- ---- ---------- ----------- ------------------------------
debug xeon-2640 gpu:k20:2(S:0-1) drain 1 16 63000 1:00:00 a02-26
debug xeon-2640v4 gpu:p100:2(S:0-1) drain 1 20 128000 1:00:00 e23-02
debug xeon-2650v2 (null) idle 4 16 63400 1:00:00 e05-[42,76,78,80]
debug xeon-2640v3 gpu:k40:2(S:0-1) idle 1 16 63400 1:00:00 e09-18
epyc-64 epyc-7513 (null) mix 23 64 256000 2-00:00:00 a01-[02,04-05],a02-[12-14,17-1
epyc-64 epyc-7542 (null) mix 18 64 256000 2-00:00:00 b22-[01-11,13,27-32]
epyc-64 epyc-7513 (null) alloc 38 64 256000 2-00:00:00 a01-[03,07-09,11-14,16-19],a02
epyc-64 epyc-7542 (null) alloc 14 64 256000 2-00:00:00 b22-[12,14-26]
main* xeon-2640v3 (null) drain 2 16 63400 2-00:00:00 e06-24,e10-12
main* xeon-4116 (null) mix 42 24 94000+ 2-00:00:00 d05-[09,11,13,27,29-31,33,40],
main* xeon-2640v4 (null) mix 65 20 63400+ 2-00:00:00 d17-[03,05-13,17-26,28-30,33-4
main* xeon-2640v3 (null) mix 32 16 63400 2-00:00:00 e06-[13-20],e11-[26-27,29,45,4
main* xeon-2640v3 gpu:k40:2(S:0-1) mix 11 16 63400 2-00:00:00 e07-[03-13]
main* xeon-2640v4 gpu:k40:2(S:0-1) mix 38 20 63400 2-00:00:00 e16-[01-04,06,09-24],e17-[01-0
main* xeon-4116 (null) alloc 38 24 94000+ 2-00:00:00 d05-[06-08,10,12,14-15,26,28,3
main* xeon-2640v4 (null) alloc 15 20 64000 2-00:00:00 d17-[04,14-16,27,31-32],d18-[0
main* xeon-2640v3 (null) alloc 50 16 63400 2-00:00:00 e06-[01-12,21-22],e13-[26,28-4
main* xeon-2640v3 gpu:k40:2(S:0-1) alloc 6 16 63400 2-00:00:00 e07-[01-02,14-16,18]
main* xeon-2640v4 gpu:k40:2(S:0-1) alloc 3 20 63400 2-00:00:00 e16-[05,07-08]
gpu epyc-7282 gpu:a40:2(S:0-1) mix 5 32 256000 2-00:00:00 a02-[01,06,20],a03-01,a04-06
gpu xeon-6130 gpu:v100:2(S:0-1) mix 7 32 191000 2-00:00:00 d11-[02-04],d13-[02-05]
gpu xeon-2640v4 gpu:p100:2(S:0-1) mix 17 20 128000 2-00:00:00 d23-10,e21-[01-16]
gpu epyc-7513 gpu:a100:2(S:0-7) alloc 12 64 256000 2-00:00:00 a01-[01,06,15,20],b01-[01,06,1
gpu epyc-7282 gpu:a40:2(S:0-1) idle 7 32 256000 2-00:00:00 a02-15,a03-[06,15,20],a04-[01,
gpu xeon-6130 gpu:v100:2(S:0-1) idle 22 32 191000 2-00:00:00 d13-[06-11],d14-[03-18]
gpu xeon-2640v4 gpu:p100:2(S:0-1) idle 21 20 128000 2-00:00:00 d23-[13-16],e22-[01-16],e23-01
oneweek xeon-2650v2 (null) mix 9 16 128000+ 7-00:00:00 e01-76,e02-[54,61,71-74,76,80]
oneweek xeon-2650v2 (null) alloc 26 16 128000+ 7-00:00:00 e01-[46,48,52,60,62,64],e02-[4
oneweek xeon-2650v2 (null) idle 12 16 128000+ 7-00:00:00 e02-[42-46,55-60,79]
largemem xeon-4850 (null) drain* 3 40 1031600 7-00:00:00 a16-[02-04]
largemem epyc-7513 (null) mix 2 64 1024000 7-00:00:00 a02-10,a03-10
largemem epyc-7513 (null) alloc 1 64 1024000 7-00:00:00 a01-10
largemem epyc-7513 (null) idle 1 64 1024000 7-00:00:00 a04-10
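The filtered output also combines well with standard command-line tools. For example, to check whether any of the A40 GPU nodes shown above are currently idle, the output can be piped through grep, which here simply matches the GPU model column:

$ nodeinfo -p gpu -s idle | grep a40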