By default, each user on Lawrencium is entitled to a 10 GB home directory, which receives regular backups; in addition, each Condo-using research group receives up to 200 GB of project space to hold research-specific application software shared among the group’s users. All users also have access to the large Lawrencium high-performance Lustre scratch filesystem for working with non-persistent data.
This Condo Storage service supplements the above offerings by letting researchers purchase approved storage disks for installation in our storage infrastructure. It is intended to provide a cost-effective, persistent storage solution for users or research groups that need to import and store large data sets over a long period of time to support their use of Lawrencium or dedicated research clusters. (Note that data stored in Condo Storage should be copied to scratch before actually carrying out your compute.) The table below compares the cost of various storage alternatives and shows the cost advantage of this service:
Cost comparison for 84 TB over 5 yrs

| Model/service | Details of cost | Total cost | Cost/TB/yr |
|---|---|---|---|
| UC Berkeley IST Performance tier | 84 TB × $1680/TB/yr × 5 yrs | $705,600 | $1680 |
| UC Berkeley IST Utility tier | 84 TB × $600/TB/yr × 5 yrs | $252,000 | $600 |
| UCLA Cloud Archive Storage Service (CASS)¹ | 84 TB × $119.12/TB/yr × 5 yrs | $50,030 | $119 |
| AWS Glacier² | 84 TB × $84/TB/yr × 5 yrs | $35,280 | $84 |
| Lawrencium Condominium Storage | $9349 one-time | $9349 | $22 |
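The per-TB-per-year figures in the table follow directly from dividing each total cost by the 84 TB capacity and the 5-year life cycle; a minimal sketch of that arithmetic (totals taken from the table above):

```python
# Recompute Cost/TB/yr from the table's total-cost column.
options = {
    "UC Berkeley IST Performance tier": 705_600,
    "UC Berkeley IST Utility tier": 252_000,
    "UCLA CASS": 50_030,
    "AWS Glacier": 35_280,
    "Lawrencium Condominium Storage": 9_349,
}
capacity_tb, lifetime_yr = 84, 5

for name, total_cost in options.items():
    per_tb_yr = total_cost / (capacity_tb * lifetime_yr)
    print(f"{name}: ${per_tb_yr:,.2f}/TB/yr")
```

For Condo Storage this works out to $9349 / (84 TB × 5 yr) ≈ $22/TB/yr, the figure shown in the last column.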
Similar in concept to the Condo Computing program, Science IT provides the storage infrastructure, consisting of a Hitachi 4060 HNAS storage server, and covers the cost of supporting the system. Researchers can purchase sets of Hitachi disks and have them hosted in the Hitachi storage infrastructure. This allows researchers to 1) pay a one-time upfront cost for the storage without having to incur ongoing support costs; and 2) pay only for the incremental cost of the storage shelf and disk drives, because the cost of the storage servers and disk controllers is covered by Science IT.
Storage shelves are purchased and maintained on a 5-year life cycle. At the end of 5 yrs, Condo Storage contributors who wish to continue participating in the program will be offered one or both of the following options: (1) purchase new storage; or (2) cover the cost of annual vendor hardware maintenance for their existing storage shelves. Which of these options are available will depend on when the contributor’s storage was purchased, and when the Science IT Program determines that server hardware for the storage infrastructure must be upgraded.
Please note that Condominium Storage is accessible via NFS only to Lawrencium and the dedicated research clusters within the Supercluster infrastructure. The HPC firewall prevents the NFS exporting of this storage to other Lab buildings or to any system outside of the HPC infrastructure. We encourage users to use Globus Online to transfer data in and out of Condominium Storage.
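A transfer into Condo Storage with the Globus CLI might look like the following sketch; the endpoint UUIDs and paths are placeholders, not real Lawrencium identifiers, so substitute the values for your own source endpoint and the cluster's data-transfer endpoint:

```shell
# Placeholders only -- replace with your actual endpoint UUIDs and paths.
SRC_EP="<your-source-endpoint-uuid>"
DST_EP="<lawrencium-dtn-endpoint-uuid>"

globus login                      # authenticate once per session

# Recursively transfer a dataset into a (hypothetical) Condo Storage path.
globus transfer --recursive \
  "$SRC_EP:/data/mydataset" \
  "$DST_EP:/global/home/groups/mygroup/condo/mydataset" \
  --label "condo-import"
```

The `globus transfer` command submits an asynchronous, checksummed transfer task, which is generally more robust for large data sets than `scp` or `rsync` over the WAN.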
Backups are the responsibility of the owner of the storage. Faculty can work with Science IT staff to obtain regular backups or to set up regular filesystem snapshots.
Users who wish to test the performance of Condo Storage can run their tests using their home or group directories, which use the same hardware. We request that users copy their data to /global/scratch to perform computation, especially computation with heavy I/O usage, rather than running it directly on Condo Storage.
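Staging a working set onto scratch before a heavy-I/O job can be as simple as the sketch below; the group and dataset paths are hypothetical examples, not actual Condo Storage mount points:

```shell
# Copy input data from Condo Storage (hypothetical path) to scratch.
rsync -a /global/home/groups/mygroup/condo/dataset/ \
         "/global/scratch/$USER/dataset/"

# Run the compute job against the scratch copy, e.g.:
#   cd /global/scratch/$USER/dataset && sbatch myjob.sh

# Afterwards, copy only results that must persist back to Condo Storage.
rsync -a "/global/scratch/$USER/results/" \
         /global/home/groups/mygroup/condo/results/
```

Running I/O-heavy jobs against the Lustre scratch copy keeps load off the NFS-mounted Condo Storage and typically performs much better.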
Interested faculty and PIs should contact us at firstname.lastname@example.org for further information. Science IT will assist with entering a storage shelf purchase requisition, in increments of 84 TB, and initiate the procurement. The entire process from start to production is anticipated to take 6 weeks.
The Condo Storage service is available only to Lab researchers who are Lawrencium Condo Computing Service contributors.
¹Cloud Archival Storage Service (https://idre.ucla.edu/cass). Included for price comparison only, as CASS would not be performant enough for many use cases. Purchases with federal funds incur a 26% overhead.
²Based upon current AWS pricing of $0.007/GB/month; does not include data egress charges. Included for price comparison only, as Glacier would not be performant enough for many use cases.