Parallel BeeGFS File System

Access: Students: no, Employees: yes, Faculties: yes, Student Unions: no

The caucluster is connected to a BeeGFS file system that provides fast work space for applications with high I/O demands. The file system is mounted at /work_beegfs on all login and compute nodes and is accessible via the environment variable $WORK. The BeeGFS file system has the following basic properties:

  • 350 TB usable space (managed by user quotas),
  • hosted on four file servers for parallel access,
  • separate metadata server hosting the metadata on fast SSDs,
  • access via InfiniBand.


Hints and rules

  • Space and number of files on $WORK will be managed via user quotas.
  • Initially, all users get a quota of 1 TB and 1 million "chunks", which corresponds to roughly 250k files (see also section 'quotas' below).
  • If you need more, limits can be increased on request, provided we are not running low on free space.
  • There will be no backup of data stored on the BeeGFS file system.
  • The file system is designed as fast work space for "hot" data - it is not intended as permanent storage for large amounts of "cold" data. Please move huge input or output files you do not need for active calculations/projects off the cluster work space - either to the tape library or to your local resources.
  • File system I/O is a 'cluster shared resource', meaning that no matter on which part of the cluster you are using it, it will affect everyone else. Therefore, transfer speeds can be much lower if someone else is doing lots of I/O at the same time as you.



The space on the BeeGFS file system is accounted and limited by user quotas. The default limit for all users is initially 1 TB, which can be increased upon request. The number of files is also limited, via the number of "chunks" you are allowed to create: by default, BeeGFS spreads each file over four different storage targets, so a single file usually exists as four "chunks" on the physical disks. Very small files may need fewer than four chunks, as the minimum chunk size is 1 MB; a file smaller than 3 MB therefore needs three chunks or fewer. The default limit of 1 million chunks thus normally allows about 250k files. Again, this can be increased upon request.
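The chunk rule above can be written as a small calculation. This is a sketch that follows the text's description (minimum chunk size 1 MB, at most four storage targets), not BeeGFS's actual striping logic:

```shell
# Estimate how many chunks a file of a given size (in whole MB) occupies,
# per the rule above: spread over at most 4 targets, each chunk >= 1 MB.
chunks_for_size_mb() {
    size_mb=$1
    if [ "$size_mb" -ge 4 ]; then
        echo 4            # large files use all four storage targets
    elif [ "$size_mb" -le 1 ]; then
        echo 1            # anything up to 1 MB fits in a single chunk
    else
        echo "$size_mb"   # 2 MB -> 2 chunks, 3 MB -> 3 chunks
    fi
}

chunks_for_size_mb 100   # prints 4
chunks_for_size_mb 2     # prints 2
```

With 1 million chunks and four chunks per typical file, this gives the ~250k-file figure quoted above.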

To check your currently used quota, use the command

beegfs-ctl --getquota --uid <username>
Note that the quota checks only run every few minutes, so you might overshoot your quota if you do lots of I/O while crossing your limit. For the same reason, it can take a few minutes before you are able to write again after you have reduced your usage below your quota.
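If you are over quota, standard tools can show where space and file counts accumulate. A sketch, assuming $WORK points at your BeeGFS work directory as described above (it falls back to the current directory so the snippet also runs elsewhere):

```shell
# Where does my quota go? Inspect the subdirectories of the work space.
top=${WORK:-.}

# Largest subdirectories (space quota)
du -sh "$top"/*/ 2>/dev/null | sort -rh | head -10

# Subdirectories with the most files (chunk quota)
for d in "$top"/*/; do
    [ -d "$d" ] || continue
    printf '%7d %s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn | head -10
```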


Tips for I/O and data handling

  • BeeGFS will not perform well if you need to access a huge number of small files, or if you repeatedly open and close files to read or write small amounts of data.
  • For optimal performance you should try to do I/O only to a few files and in large blocks (i.e., do not write a single result value to several different files each iteration, but try to collect results and write them to a single file).
  • If your workload is very I/O intensive, do not hesitate to check back with us to see if there are tuning options available to optimize the performance of your application.
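The batching advice above can be illustrated even at the shell level: appending inside a loop re-opens the output file on every iteration, while redirecting the whole loop opens it once and writes in larger blocks:

```shell
# Anti-pattern: results_slow.txt is opened and closed once per iteration
for i in $(seq 1 1000); do
    echo "result $i" >> results_slow.txt
done

# Better: the file is opened once; output is buffered into larger writes
for i in $(seq 1 1000); do
    echo "result $i"
done > results_fast.txt
```

The same principle applies inside your programs: collect results in memory and flush them to a single file in large writes instead of touching many files per iteration.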