File shares
Provisioning file shares for your instances
File shares provide convenient, high-performance file storage for your instances. File shares selected on instance creation are automatically mounted to your instances as a directory under /mnt, and are readable and writable from multiple instances concurrently.
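For example, a file written to a share from one instance can be read from any other instance that has the same share attached. The share name shared-data below is only an illustration:
# On instance A: write a file to the attached file share
echo "hello from instance A" > /mnt/shared-data/hello.txt

# On instance B, with the same file share attached: read the file
cat /mnt/shared-data/hello.txt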
Provisioning file share storage
To provision file share storage, go to Storage, click + Create storage and select File share for the type field.
Select the region you would like to provision in.
Select the size for your disk. Currently, each file share has a maximum size of 15TB.
When creating a file share, you must provide a unique Name, which is used as the name of the folder where the share is automatically mounted inside your instance. For example, if you name your file share "training-model", it will be mounted at /mnt/training-model after your instance is launched.
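Continuing that example, once the instance has launched you can confirm the share is mounted at the expected path using standard Linux tooling:
# Confirm the file share from the example above is mounted and check its capacity
df -h /mnt/training-model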
File share storage is currently available in the following regions:
us-central1-b (NVIDIA H100)
us-central2-a (NVIDIA H100)
Attach storage to new reservations and spot bids
Provisioned storage is available for selection when creating a reservation or spot bid. You can attach as many storage volumes as needed from the same region. If you select a region where you have no provisioned storage, no storage options will appear.
Attaching new file shares to existing instances
You can attach new file shares to existing instances while they are in a Paused state. If the instance has already booted at least once, the new file shares will not be mounted automatically. To mount them in the standard format, use the following commands:
FILESHARE_NAME=<insert-name-here>
# Create the standard mount point and make it writable by all users
sudo mkdir /mnt/$FILESHARE_NAME
sudo chmod 777 /mnt/$FILESHARE_NAME
# Persist the virtiofs mount in /etc/fstab, then mount it
echo "$FILESHARE_NAME /mnt/$FILESHARE_NAME virtiofs defaults,nofail 0 1" | sudo tee -a /etc/fstab
sudo mount -a
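To double-check the result, the quick sketch below uses standard Linux utilities to verify that the new mount exists and is writable:
# Confirm the virtiofs mount exists at the expected path
findmnt -t virtiofs /mnt/$FILESHARE_NAME
# Confirm the mount is writable
touch /mnt/$FILESHARE_NAME/.write-test && rm /mnt/$FILESHARE_NAME/.write-test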
Accessing the file share from your instance
By default, attached file shares are automatically mounted at /mnt/<FILESHARE_NAME>. You can list all mounted file shares using this command:
$ mount | grep virtiofs
Please note /mnt/local is reserved for ephemeral storage, which is wiped out on instance preemption, relocation, or reboot.
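If there is data under /mnt/local that needs to survive a preemption, relocation, or reboot, copy it to a file share first. The paths below are only an example and assume rsync is installed:
# Copy ephemeral data to a persistent file share (example paths)
rsync -a /mnt/local/checkpoints/ /mnt/<FILESHARE_NAME>/checkpoints/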
Performance & benchmarks
Actual performance for your file share is likely to vary greatly depending on your workload and instance configuration; however, below are representative benchmarks for a single 8x H100 instance:
Sequential read throughput: 5600 MB/s
Random read IOPS: 60k
Sequential write throughput: 2250 MB/s
Random write IOPS: 45k
The numbers above are a snapshot meant for guidance. We are constantly making improvements to optimize performance. You can also run the benchmarks yourself from your instances using the following:
# Replace "filesharename" with the name of your file share
export TEST_DIR=/mnt/filesharename/
export OUTPUT_DIR=/tmp
# sequential write bw
fio --name=write_throughput --directory=$TEST_DIR --numjobs=8 \
--size=5G --time_based --runtime=1m --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=1M --iodepth=1 --rw=write \
--group_reporting=1 | tee "$OUTPUT_DIR/0-write_throughput.txt"
# randwrite iops
fio --name=write_iops --directory=$TEST_DIR --numjobs=8 --size=5G \
--time_based --runtime=1m --ramp_time=2s --ioengine=libaio --direct=1 \
--verify=0 --bs=4K --iodepth=64 --rw=randwrite --group_reporting=1 | tee "$OUTPUT_DIR/1-write_iops.txt"
# sequential read bw
fio --name=read_throughput --directory=$TEST_DIR --numjobs=8 \
--size=5G --time_based --runtime=1m --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=1M --iodepth=1 --rw=read \
--group_reporting=1 | tee "$OUTPUT_DIR/2-read_throughput.txt"
# randread iops
fio --name=read_iops --directory=$TEST_DIR --size=5G --numjobs=8 \
--time_based --runtime=1m --ramp_time=2s --ioengine=libaio --direct=1 \
--verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1 | tee "$OUTPUT_DIR/3-read_iops.txt"
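Once the runs complete, one quick way to pull the headline bandwidth and IOPS figures out of the saved reports (the exact summary lines vary by fio version):
# Print the aggregate bandwidth and IOPS lines from each fio report
grep -iE 'iops=|bw=' "$OUTPUT_DIR"/[0-3]-*.txt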
Quotas
Each project has a total storage capacity quota that aggregates usage across all regions. Contact your account team via Slack or email [email protected] to increase your quota.
Resizing file share volumes
If you need to resize your file share volume, please contact your account team via Slack or email [email protected].