Oops, that was a typo.
Even if I specify the correct node name (a11-02) and pass all the device IDs explicitly, only the GPUs of the most recent job are visible in the ssh session.
$ squeue -u $USER -S +i -o "%7i %8u %4P %30j %2t %10R %10b"
JOBID   USER     PART NAME     ST NODELIST(R TRES_PER_N
2747454 tnarayan isi  jobname1 R  a11-02     gpu:a40:5
2747461 tnarayan isi  jobname2 R  a11-02     gpu:a40:1
$ ssh a11-02 'nvidia-smi -i 0,1,2,3,4,5,6,7'
Thu Dec 16 18:38:12 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A40 Off | 00000000:A1:00.0 Off | 0 |
| 0% 64C P0 261W / 300W | 38258MiB / 45634MiB | 100% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 5674 C .../envs/rtg-py39/bin/python 38255MiB |
+-----------------------------------------------------------------------------+
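One possible explanation, assuming the cluster uses pam_slurm_adopt (not confirmed from the output above): that PAM module places an incoming ssh session into the cgroup of one of your running jobs, typically the most recently started one, so only that job's GPUs are visible regardless of the `-i` device list. A sketch of how to check which job adopted the session, by extracting the Slurm job id from the session's cgroup path:

```shell
# Hedged diagnostic sketch: if pam_slurm_adopt is active, the ssh session's
# cgroup path contains the adopting job's id. On the node, one could run:
#   ssh a11-02 'grep -o "job_[0-9]*" /proc/self/cgroup | sort -u'
# The grep pattern below extracts the job id from a cgroup path of the
# form Slurm uses (example path shown; ids here are illustrative):
echo "/slurm/uid_1234/job_2747461/step_extern" | grep -o "job_[0-9]*"
```

If that is the cause, attaching to a specific job with `srun --jobid=<id> --overlap nvidia-smi` (Slurm 20.11+) instead of plain ssh would show that job's GPUs.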