ODU ITS operates and manages a high-capacity mass storage system intended to store research data on a long-term basis. This page provides an overview of the system, as well as instructions on how to use the storage.
This system provides a total capacity of over 2 PB, shared among all researchers and HPC users at ODU. For HPC users, this storage provides long-term archival storage; it does not serve as the users' home directories.
Features:
Each user is allocated a default space quota of 1 TB on the mass storage. If additional space is required, please contact rcc@odu.edu with the new total disk space requested and the justification for the request. If a student or a postdoc makes the request, we reserve the right to discuss the request with their research advisor.
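For a rough idea of how much space you are currently using, you can total up your directory from a cluster login node with du (this is only an approximate, client-side check and can be slow for large directory trees):

du -sh /RC/home/$USER    # summarize the total size of your individual storage area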
If you encounter any issues accessing storage, please contact the ITS help desk at rcc@odu.edu.
The mass research storage is mounted on all HPC clusters operated by ODU (currently Wahab and Turing), and users can access it from the login nodes of both clusters. The research storage directories are mounted at:

/RC/home (individual user storage)
/RC/group (group project storage)

Each user has an individual directory on the storage system located at /RC/home/$USER, where $USER refers to their user ID (usually the MIDAS ID).
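For example, from a login node you can inspect and populate your individual area with ordinary shell commands; the directory and file names below are placeholders for illustration:

ls -l /RC/home/$USER                                # list the contents of your individual area
mkdir -p /RC/home/$USER/archive_2024                # create a folder for archived results (placeholder name)
cp results.tar.gz /RC/home/$USER/archive_2024/      # copy an archive from your current working directory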
A faculty member may request a group storage area, which serves as a location for files (data, code, etc.) that are shared among their group members. This storage will be located at /RC/group/$PROJECT_NAME, where $PROJECT_NAME is the name of the research group or project. Access to this group area is limited to the designated group members, as directed by the faculty member. By default, a group storage area also has a 1 TB quota unless additional space is requested. Please email us at rcc@odu.edu to request a group storage area, a storage increase, or the addition or removal of group members.
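As a quick check, a group member can confirm from a login node that the shared area is reachable. The group name below is a placeholder, and the groups command is only informative if access is managed through Unix group membership:

ls -ld /RC/group/$PROJECT_NAME    # confirm the directory exists and inspect its ownership and permissions
groups                            # list the Unix groups your account belongs to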
In HPC, heavy computations and data processing are performed on compute nodes. In general, the mass storage is not accessible from compute nodes, in order to conserve its limited data transfer bandwidth. For jobs that require data from /RC, one has to manually stage the necessary data from /RC to the relevant scratch partition before running the computation. Some potential workarounds:
Manual copying/moving on the login node. This is acceptable if the data size is relatively small (a few GB at most). For large datasets, this can adversely impact other users' experience on the login node, so please refrain from it whenever possible.
From a compute node, use an SSH key to perform an automated scp or rsync that pulls the necessary data from the login node to the scratch partition (see the sketch after this list). This has less impact on the login node's responsiveness, but can be slower due to in-flight encryption.
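Below is a minimal sketch of the second approach as part of a batch job. It assumes a Slurm-style job script, a passphrase-less SSH key already authorized on the login node, and placeholder names: wahab.hpc.odu.edu for the login host, /RC/home/$USER/my_dataset for the data to stage, and /scratch/$USER for the scratch partition. Adjust all of these to match your cluster and project.

#!/bin/bash
#SBATCH --job-name=stage-and-run
#SBATCH --ntasks=1

# Placeholder values: adjust the login host, source data, and scratch location.
LOGIN_HOST=wahab.hpc.odu.edu     # login node where /RC is mounted (assumed name)
SRC=/RC/home/$USER/my_dataset    # data on mass storage (placeholder path)
DEST=/scratch/$USER/my_dataset   # staging area on the scratch partition (placeholder path)

mkdir -p "$DEST"

# Pull the data from the login node to scratch over SSH; -a preserves
# permissions and timestamps, and rsync skips files that are already up to date.
rsync -a -e ssh "$USER@$LOGIN_HOST:$SRC/" "$DEST/"

# ... run the actual computation against $DEST here ...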
Access to the mass research storage is also available from users' workstations or laptops running the Windows operating system. This section provides instructions for accessing your individual and/or group data from a Windows computer.
Important: The mass research storage is only accessible from on-campus workstations or computers on the campus VPN. If your computer is located off-campus or on the campus Wi-Fi network, please log in to the ODU VPN before proceeding.
From an ITS-managed workstation on the ts.odu.edu domain, the research storage system is mapped to the R: drive. On the R: drive, users will see their individual user directory as the R:\home\MIDAS_ID subdirectory, and the shared group subdirectories can be accessed under R:\group. A user can only enter subdirectories to which they have access rights.
If you are not using an ITS-managed workstation, or if you are connecting over the campus VPN, the storage can be accessed by mapping a network drive to \\research1.ts.odu.edu\RC.
To complete this on a Windows workstation, follow these steps:
1. Open File Explorer (Win + E) and choose "Map network drive".
2. Enter \\research1.ts.odu.edu\RC for the folder to be mapped.
3. Enter odunet\MIDAS_ID (replace MIDAS_ID with your own MIDAS user ID) in the "User" box, and provide your MIDAS password.

The mass research storage resides on an Isilon storage system comprising multiple storage nodes, internally connected with a 40 Gbps InfiniBand fabric. As of 2022, this system provides over 2 PB of available storage, shared by all researchers. The entire system can serve researchers at an aggregate speed of up to 10 Gbps (the actual performance depends strongly on the network location of the user's device and the path to it).