Requesting Storage on SeaWulf
On the SeaWulf cluster, data storage is a finite resource. Because users' storage needs change over time, space is allotted as needed. Requests for storage beyond the default amounts are submitted through the ticketing system.
Summary
Location                     | Size         | Backed up? | Shareable? | Cleared?
/gpfs/home/<netid>           | 20 GB        | Yes        | No         | Never
/gpfs/scratch/<netid>        | 20 TB        | No         | No         | 30 days
/gpfs/projects/<your_group>* | up to 10 TB* | No         | Yes        | Per request*
*Project spaces are created upon request. The size of this directory and the duration of data storage are defined on creation.
Home Directory
Each user is given 20 GB of disk space for their home directory, which is accessible only to them. Permission changes to this directory and the files within it are automatically reverted. The home directory for user jsmith is /gpfs/home/jsmith. This space is backed up.
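To get a rough sense of how much of the 20 GB you are using, you can walk the directory and sum file sizes. This is only a sketch: the quota itself is enforced by the filesystem, which may account for block usage differently than a simple sum of apparent file sizes.

```python
# Rough check of home-directory usage against the 20 GB limit described above.
# The authoritative figure comes from the filesystem quota; treat this as an estimate.
import os

HOME = os.path.expanduser("~")   # e.g. /gpfs/home/jsmith
QUOTA_GB = 20                    # limit quoted on this page

total = 0
for root, dirs, files in os.walk(HOME):
    for name in files:
        path = os.path.join(root, name)
        try:
            total += os.path.getsize(path)
        except OSError:
            pass                 # skip broken symlinks or files that vanished mid-walk

print(f"~{total / 1e9:.1f} GB used of {QUOTA_GB} GB in {HOME}")
```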
Scratch Directory
In addition to the home directory, users have access to a scratch space. This file space is intended for jobs that produce a large amount of intermediate data. It is not intended for long-term storage; for this reason, files older than 30 days are automatically deleted. The scratch space for the user jsmith is /gpfs/scratch/jsmith. The limit is currently 20 TB and 10 million files per user. This space is NOT backed up.
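To anticipate what the 30-day cleanup might remove, you can list files in scratch whose modification time is more than 30 days old. The sketch below assumes the purge keys on modification time and that the USER environment variable holds your NetID; confirm the exact purge criteria with support if it matters for your data.

```python
# List files under scratch that are older than 30 days (by mtime),
# i.e. roughly what the automatic cleanup described above would remove.
import os
import time

SCRATCH = f"/gpfs/scratch/{os.environ['USER']}"   # e.g. /gpfs/scratch/jsmith
CUTOFF = time.time() - 30 * 24 * 3600             # 30 days ago

old_files, total_files = [], 0
for root, dirs, files in os.walk(SCRATCH):
    for name in files:
        path = os.path.join(root, name)
        total_files += 1
        try:
            if os.path.getmtime(path) < CUTOFF:
                old_files.append(path)
        except OSError:
            pass

print(f"{total_files} files in scratch (limit: 10 million)")
print(f"{len(old_files)} files older than 30 days, e.g.:")
for path in old_files[:20]:                        # print a sample
    print(" ", path)
```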
Project Space
A group's PI can request a collaborative storage space that is shared by all members of a project. The project space for the “Smith Project”, for example, can be found in /gpfs/projects/SmithGroup. The default project space allocation is 1 TB. This space is NOT backed up.
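Because the space is shared, it can be useful to see how much each member is storing, for example when deciding whether to request an increase. The following is an illustrative sketch using the example path above; it sums apparent file sizes per file owner, which may differ slightly from the filesystem's own accounting.

```python
# Tally project-space usage per file owner; the default allocation is 1 TB.
# Substitute your own group's directory for the example path below.
import os
import pwd
from collections import defaultdict

PROJECT = "/gpfs/projects/SmithGroup"

usage = defaultdict(int)
for root, dirs, files in os.walk(PROJECT):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.stat(path, follow_symlinks=False)
        except OSError:
            continue
        try:
            owner = pwd.getpwuid(st.st_uid).pw_name
        except KeyError:
            owner = str(st.st_uid)   # uid no longer in the password database
        usage[owner] += st.st_size

for owner, size in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(f"{owner:>12}  {size / 1e9:8.1f} GB")
print(f"{'total':>12}  {sum(usage.values()) / 1e9:8.1f} GB")
```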
The group's PI can request up to 10 TB of space through the ticketing system or in the initial project request by including the following in the description:
- The size of the data, or space requested
- How it will be used/processed (e.g. how often it will be accessed, bandwidth requirements, etc.)
- The duration of storage (Due to the cost of the high-performance enterprise storage system used on the cluster, we discourage use of it as an archive.)
- Confirmation that the user understands that data located in the project space is NOT backed up, and that backing up the data and ensuring its integrity are the sole responsibility of the user.
Requests for more than 10 TB will require more detail and, if granted, will likely be satisfied only for a limited period of time.