We have resolved the issue of degraded file system performance on CARC systems. File systems should be operating at normal speeds again. However, per-user job submission limits will remain reduced until further notice.
As a reminder, please avoid including large numbers of small file operations in your job submissions. Such operations include, but are not limited to:
- Opening and closing many files
- Reading from and writing to many files in one directory
- Searching through many files
These operations are referred to as file or metadata input/output (I/O), and they can easily overwhelm CARC's file systems, reducing speed and performance or crashing them altogether.
Some solutions for minimizing I/O include:
- Using the local /tmp directory for small-scale I/O (see the local-scratch sketch after this list)
- Using the local /dev/shm directory for large-scale I/O
- Using I/O libraries and file formats such as HDF5 or NetCDF, which read from and write to a single consolidated file instead of many separate files (see the HDF5 sketch below)
- Using a database, such as Redis (see the final sketch below)
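As a rough illustration of the first two suggestions, the Python sketch below stages intermediate files on local storage and copies only the final result back to the shared file system. The paths and file names are placeholders, not CARC-specific settings; you could substitute /dev/shm for /tmp if memory-backed storage suits your workload, and adjust the output location for your own job.

```python
import os
import shutil
import tempfile

# Create a private working directory on local storage (placeholder path).
# Swap "/tmp" for "/dev/shm" if memory-backed storage fits your workload.
scratch = tempfile.mkdtemp(dir="/tmp")

try:
    # Perform the many small writes against local storage instead of the
    # shared file system.
    for i in range(1000):
        with open(os.path.join(scratch, f"chunk_{i}.txt"), "w") as f:
            f.write(f"intermediate result {i}\n")

    # Combine the intermediate pieces into a single output file locally...
    combined = os.path.join(scratch, "combined.txt")
    with open(combined, "w") as out:
        for i in range(1000):
            with open(os.path.join(scratch, f"chunk_{i}.txt")) as f:
                out.write(f.read())

    # ...then copy just that one file back to the shared file system
    # (here, the directory the job was launched from).
    shutil.copy(combined, os.path.join(os.getcwd(), "combined.txt"))
finally:
    # Clean up local storage so it is free for the next job.
    shutil.rmtree(scratch)
```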
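The next sketch illustrates the HDF5 suggestion: many small results are stored as datasets inside one .h5 file, so the file system handles a single file rather than thousands. It assumes the h5py and NumPy packages are available in your environment; the dataset names and shapes are made up for the example.

```python
import h5py
import numpy as np

# Write many small results into one HDF5 file rather than one file per result.
with h5py.File("results.h5", "w") as f:
    for i in range(1000):
        data = np.random.rand(100)          # stand-in for a real computation
        f.create_dataset(f"run_{i:04d}", data=data)

# Read a single result back later without touching any other files.
with h5py.File("results.h5", "r") as f:
    first = f["run_0000"][:]
    print(first.shape)
```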
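For the database suggestion, a minimal sketch using the redis Python client might look like the following. It assumes the redis package is installed and that a Redis server is already running and reachable at a hypothetical host and port; the key names are illustrative only.

```python
import redis

# Connect to a Redis server (hypothetical host/port; adjust for your setup).
r = redis.Redis(host="localhost", port=6379, db=0)

# Store many small key/value results in the database instead of as many small files.
for i in range(1000):
    r.set(f"run:{i}:result", f"intermediate result {i}")

# Retrieve a single result later with one lookup, with no file I/O involved.
value = r.get("run:42:result")
print(value.decode())
```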
If you suspect that your code is I/O-intensive and you need help modifying it to use any of the solutions listed above, please reach out to CARC staff by emailing carc-support@usc.edu. You can also refer to our user guide on File Input/Output best practices for further information.
If you continue to experience degraded file system performance, please reach out to carc-support@usc.edu.