7.6. Using SSD Drives¶
Virtuozzo Storage supports SSD drives formatted with the ext4 filesystem; mounting them with TRIM support enabled is optional, because the Virtuozzo Storage usage scenario does not generate TRIM commands. In addition, modern drives such as the Intel SSD DC S3700 do not need TRIM at all.
Along with using SSD drives for storing data chunks, Virtuozzo Storage supports using such drives for write journaling. You can attach an SSD drive to a chunk server and configure the drive to store a write journal. By doing so, you can boost the performance of write operations in the cluster by a factor of two or more.
For example, if you have a 100 GB SSD and four chunk servers on four 1 TB HDDs, divide the SSD space as follows:
- 20 GB reserved for checksums and emergency needs, and also to prevent the SSD from filling up completely (because its performance would then degrade),
- 80 GB for write journals, i.e. 20 GB per HDD/chunk server.
Checksums require 4 bytes of space for each 4 KB page (a ratio of roughly 1:1000). For example, 4 TB of storage will require 4 GB of space for checksums.
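This arithmetic can be sketched in a few lines of illustrative Python (not part of Virtuozzo Storage; the helper names and unit constants are invented here for clarity):

```python
# Checksum overhead: 4 bytes per 4 KB page, i.e. 1/1024 of the data size.
CHECKSUM_BYTES_PER_PAGE = 4
PAGE_BYTES = 4 * 1024

def checksum_space(data_bytes):
    """Space needed for checksums covering `data_bytes` of storage."""
    return data_bytes * CHECKSUM_BYTES_PER_PAGE // PAGE_BYTES

TB = 1024**4
GB = 1024**3

# 4 TB of storage needs 4 GB for checksums.
print(checksum_space(4 * TB) // GB)    # 4

# The 100 GB SSD split from the example above:
ssd_gb, hdd_count = 100, 4
reserved_gb = 20                       # checksums + emergency headroom
journal_gb = ssd_gb - reserved_gb      # 80 GB for write journals
per_cs_gb = journal_gb // hdd_count    # 20 GB per HDD/chunk server
print(journal_gb, per_cs_gb)           # 80 20
```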
To understand how much space should be allocated for write journals in a specific cluster configuration, run the vstorage advise-configuration command. Using cluster parameters as input data, the command outputs suggestions on how to optimize cluster performance, set up the host, and mount the cluster via /etc/fstab (see the following sections for examples).
In general, optimizing for write journals means that about 70% of the SSD space should be used for the journals themselves. Some space should also be reserved for checksums and the like (the amount to reserve is also suggested by the vstorage advise-configuration command).
Finally, the table below will help you understand how many SSDs you will need for your cluster.
| SSD Type | Number of SSDs |
|---|---|
| Intel SSD 320 Series, Intel SSD 710 Series, Kingston SSDNow E enterprise series, or other SATA 3Gbps SSD models providing 150-200 MB/s of sequential write of random data | 1 SSD per up to 3 HDDs |
| Intel SSD DC S3700 Series, Samsung SM1625 enterprise series, or other SATA 6Gbps SSD models providing at least 300 MB/s of sequential write of random data | 1 SSD per up to 5-6 HDDs |
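Read as a simple "1 SSD per up to N HDDs" ratio, the table above can be applied programmatically. The following is an illustrative sketch (the class names and the treatment of the column as a plain ratio are assumptions made here, not part of the product):

```python
import math

# "1 SSD per up to N HDDs", taken from the table above.
HDDS_PER_SSD = {
    "sata_3gbps_150_200mbps": 3,   # Intel SSD 320/710, Kingston SSDNow E class
    "sata_6gbps_300mbps": 6,       # Intel SSD DC S3700, Samsung SM1625 class
}

def ssds_needed(hdd_count, ssd_class):
    """Minimum number of SSDs for the given number of HDDs."""
    return math.ceil(hdd_count / HDDS_PER_SSD[ssd_class])

print(ssds_needed(10, "sata_3gbps_150_200mbps"))  # 4
print(ssds_needed(10, "sata_6gbps_300mbps"))      # 2
```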
The following sections provide detailed information on configuring SSD drives for write journaling and data caching.
- Not all solid-state drives obey flush semantics and commit data in accordance with the protocol. This may result in arbitrary data loss or corruption in case of a power failure. Always check your SSDs with the vstorage-hwflush-check tool (for more information, see Checking Data Flushing).
- We recommend using Intel SSD DC S3700 drives. However, you can also use Samsung SM1625, Intel SSD 710, Kingston SSDNow E or any other SSD drive with support for data protection on power loss. Some of the names of this technology are: Enhanced Power Loss Data Protection (Intel), Cache Power Protection (Samsung), Power-Failure Support (Kingston), Complete Power Fail Protection (OCZ). For more information, see SSD Drives Ignore Disk Flushes.
- The minimal recommended SSD size is 30 GB.
7.6.1. Configuring SSD Drives for Write Journaling¶
Using SSD drives for write journaling can help you reduce write latencies, thus improving the overall cluster performance.
To determine how much SSD space you will need for CS journals, use the
vstorage advise-configuration command with the
-w option. For example:
# vstorage -c stor1 advise-configuration -w --cs /vstorage/stor1-cs1 --cs /vstorage/stor1-cs2 --cs /vstorage/stor1-cs3 --cs /vstorage/stor1-cs4 --ssd /vstorage/stor1-ssd -m /vstorage/stor1
You have the following setup:
CS on /vstorage/stor1-cs4 -- Total disk space 1007.3GB
CS on /vstorage/stor1-cs3 -- Total disk space 1007.3GB
CS on /vstorage/stor1-cs2 -- Total disk space 1007.3GB
CS on /vstorage/stor1-cs1 -- Total disk space 1007.3GB
SSD on /vstorage/stor1-ssd -- Total disk space 251.8GB
Proposed server configuration optimized for writes:
- 155.9GB (61%) for CS journals, 29.1GB (11%) reserved (including 3.9GB checksums for 3.9TB of data)
- CS journal sizes:
  38.9GB for /vstorage/stor1-cs4 at /vstorage/stor1-ssd
  38.9GB for /vstorage/stor1-cs3 at /vstorage/stor1-ssd
  38.9GB for /vstorage/stor1-cs2 at /vstorage/stor1-ssd
  38.9GB for /vstorage/stor1-cs1 at /vstorage/stor1-ssd
How to setup the node:
vstorage -c stor1 make-cs -r /vstorage/stor1-cs4/cs -j /vstorage/stor1-ssd/cs4-stor1-journal -s 39914
vstorage -c stor1 make-cs -r /vstorage/stor1-cs3/cs -j /vstorage/stor1-ssd/cs3-stor1-journal -s 39914
vstorage -c stor1 make-cs -r /vstorage/stor1-cs2/cs -j /vstorage/stor1-ssd/cs2-stor1-journal -s 39914
vstorage -c stor1 make-cs -r /vstorage/stor1-cs1/cs -j /vstorage/stor1-ssd/cs1-stor1-journal -s 39914
vstorage-mount -c stor1 /vstorage/stor1 -C /vstorage/stor1-ssd/vstorage-stor1-cache -R 68424
Mount option for automatic cluster mount from /etc/fstab:
vstorage://stor1 /vstorage/stor1 fuse.vstorage cache=/vstorage/stor1-ssd/vstorage-stor1-cache,cachesize=68424 0 0
In this example, the suggestion is to allocate 61% of SSD space for CS journals to achieve optimal cluster performance.
- If you have multiple chunk servers on a single host, create a separate SSD journal for each CS, making sure that the SSD has enough space for all CS journals. To modify the size of existing CS journals, use the vstorage configure-cs command (see Adding, Destroying, and Configuring Chunk Server Journals in Live Virtuozzo Storage Clusters).
- When deciding on a journal size without using the vstorage advise-configuration command, make sure there is 1 GB of SSD space per each 1 TB of HDD space for checksums.
7.6.1.1. Setting Up a Chunk Server with a Journal on SSD¶
To set up a chunk server that stores a journal on an SSD drive, do the following:
Log in, as root or as a user with root privileges, to the Node that you want to act as a chunk server. The Node must have at least one hard disk drive (HDD) and one solid-state drive (SSD).
Download and install the vstorage-chunk-server RPM package. It is available in the Virtuozzo remote repository (this repository is automatically configured for your system when you install Virtuozzo) and can be installed with this command:
# yum install vstorage-chunk-server
Make sure that cluster discovery is configured for the server. For details, see Configuring Cluster Discovery.
Authenticate the server in the cluster, if it is not yet authenticated:
# vstorage -c stor1 auth-node
If required, prepare the SSD as described in Preparing Disks for Virtuozzo Storage.
Create the chunk server configuration, repository, and the journal, for example:
# vstorage -c stor1 make-cs -r /vstorage/stor1-cs -j /ssd/stor1/cs1 -s 30720
This command:
- Creates the /vstorage/stor1-cs directory on your computer’s hard disk drive and configures it for storing data chunks.
- Configures your computer as a chunk server and joins it to the stor1 Virtuozzo Storage cluster.
- Creates the journal in the /ssd/stor1/cs1 directory on the SSD drive and allocates 30 GB of disk space to this journal.
When choosing a directory for the journal and deciding on its size, allocate the required space for the journal and make sure there is 1 GB of SSD space per each 1 TB of HDD space for checksums.
Start the chunk server management service vstorage-csd and configure it to start automatically on boot:
# systemctl start vstorage-csd.target
# systemctl enable vstorage-csd.target
7.6.1.2. Adding, Destroying, and Configuring Chunk Server Journals in Live Virtuozzo Storage Clusters¶
To obtain CS repository paths, use the
vstorage list-services -C command.
Adding Chunk Server Journals
To add a new journal to a chunk server, use the
vstorage configure-cs -a -s command. For example, to add a 2048MB journal to the chunk server CS#1 and place it in a directory on a mounted SSD drive:
# vstorage -c stor1 configure-cs -r /vstorage/stor1-cs1/data -a /ssd/stor1-cs1-journal -s 2048
Destroying Chunk Server Journals
To destroy a chunk server journal, use the
vstorage configure-cs -d command. For example:
# vstorage -c stor1 configure-cs -r /vstorage/stor1-cs1/data -d
Moving Chunk Server Journals
To change the chunk server journal directory, do the following using the commands above:
- Destroy the existing journal.
- Add a new journal with the required size at the required location.
Resizing Chunk Server Journals
To resize a chunk server journal, use the
vstorage configure-cs -s command. For example, to resize a CS journal to 4096MB:
# vstorage -c stor_1 configure-cs -r /vstorage/stor_1-cs1/data -s 4096
7.6.1.3. Disabling Checksumming¶
Checksumming provides better reliability and integrity of all data in the cluster. When checksumming is enabled, Virtuozzo Storage generates checksums each time data in the cluster is modified. When this data is then read, the checksum is computed once more and compared with the existing value.
By default, data checksumming is automatically enabled for newly created chunk servers that use journaling. If necessary, you can disable this functionality using the
-S option when you set up a chunk server, for example:
# vstorage -c stor1 make-cs -r /vstorage/stor1-cs -j /ssd/stor1/cs1 -s 30720 -S
7.6.1.4. Configuring Data Scrubbing¶
Data scrubbing is the process of checking data chunks for durability and verifying their contents for readability and correctness. By default, Virtuozzo Storage is set to examine two data chunks per minute on each chunk server in the cluster. If necessary, you can configure this number using the
vstorage utility, for example:
# vstorage -c stor1 set-config mds.wd.verify_chunks=3
This command sets the number of chunks to be examined on each chunk server in the
stor1 cluster to 3.
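To get a feel for how the verification rate translates into the duration of a full scrubbing pass, here is a back-of-the-envelope helper (an illustrative sketch; it assumes the per-minute rate applies uniformly, which is a simplification):

```python
def full_scrub_hours(chunks_on_cs, verify_chunks_per_minute=2):
    """Rough time for one chunk server to examine all of its chunks."""
    minutes = chunks_on_cs / verify_chunks_per_minute
    return minutes / 60

# At the default rate of 2 chunks/minute, 8640 chunks take about 3 days.
print(full_scrub_hours(8640))     # 72.0
# Raising the rate to 3 (as in the command above) shortens the pass.
print(full_scrub_hours(8640, 3))  # 48.0
```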
7.6.2. Configuring SSD Drives for Data Caching¶
Another way to improve overall cluster performance is to create a local cache on a client’s SSD drive. Once you create the cache, all cluster data accessed two or more times will be placed in that cache.
The table below lists the main features specific to a local cache:
| Feature | Description |
|---|---|
| Quick access time | Data in the local cache can be accessed much faster (up to 10 times or more) than the same data stored on chunk servers in the cluster. |
| No network bandwidth consumption | Cluster network bandwidth is not consumed because the data is accessed locally. |
| Special boot cache | The local cache uses a special boot cache to store small amounts of data on file openings. This significantly speeds up the process of starting virtual machines and Containers. |
| Cache survivability | The local cache is persistent and can survive a graceful system shutdown; however, it is dropped when the system crashes. |
| Sequential access filtering | Only randomly accessed data is cached. Data backup applications may generate a huge amount of sequential IO; preventing such IO from being cached helps avoid stressing the cache. |
To determine how much SSD space you will need for the cache, use the
vstorage advise-configuration command with the
-r option. For example:
# vstorage -c stor1 advise-configuration -r --cs /vstorage/stor1-cs1 --cs /vstorage/stor1-cs2 --cs /vstorage/stor1-cs3 --cs /vstorage/stor1-cs4 --ssd /vstorage/stor1-ssd -m /vstorage/stor1
You have the following setup:
CS on /vstorage/stor1-cs1 -- Total disk space 1007.3GB
CS on /vstorage/stor1-cs2 -- Total disk space 1007.3GB
CS on /vstorage/stor1-cs3 -- Total disk space 1007.3GB
CS on /vstorage/stor1-cs4 -- Total disk space 1007.3GB
SSD on /vstorage/stor1-ssd -- Total disk space 251.8GB
Proposed server configuration optimized for reads:
- 66.8GB (26%) for CS journals, 29.1GB (11%) reserved (including 3.9GB checksums for 3.9TB of data)
- CS journal sizes:
  16.7GB for /vstorage/stor1-cs4 at /vstorage/stor1-ssd
  16.7GB for /vstorage/stor1-cs3 at /vstorage/stor1-ssd
  16.7GB for /vstorage/stor1-cs2 at /vstorage/stor1-ssd
  16.7GB for /vstorage/stor1-cs1 at /vstorage/stor1-ssd
How to setup the node:
vstorage -c stor1 make-cs -r /vstorage/stor1-cs4/cs -j /vstorage/stor1-ssd/cs4-stor1-journal -s 17106
vstorage -c stor1 make-cs -r /vstorage/stor1-cs3/cs -j /vstorage/stor1-ssd/cs3-stor1-journal -s 17106
vstorage -c stor1 make-cs -r /vstorage/stor1-cs2/cs -j /vstorage/stor1-ssd/cs2-stor1-journal -s 17106
vstorage -c stor1 make-cs -r /vstorage/stor1-cs1/cs -j /vstorage/stor1-ssd/cs1-stor1-journal -s 17106
vstorage-mount -c stor1 /vstorage/stor1 -C /vstorage/stor1-ssd/vstorage-stor1-cache -R 159658
Mount option for automatic cluster mount from /etc/fstab:
vstorage://stor1 /vstorage/stor1 fuse.vstorage cache=/vstorage/stor1-ssd/vstorage-stor1-cache,cachesize=159658 0 0
In this example, the suggestion is to allocate about 62% of the SSD space (159658MB, roughly 155.9GB) for the cache to achieve optimal cluster performance.
7.6.2.1. Creating a Local Cache¶
Unlike the directories used in most Virtuozzo Storage configuration steps, the local cache on an SSD is a file. Make sure you supply correct paths to the vstorage-mount -C command and the cache parameter in the corresponding /etc/fstab entry.
You create a local cache when mounting a Virtuozzo Storage cluster to a client. This process includes two steps:
- If required, preparing the SSD as described in Preparing Disks for Virtuozzo Storage.
- Using the
vstorage-mountcommand to mount the cluster and create the cache.
For example, to make a 64 GB local cache for the
stor1 cluster and store it in the file
/mnt/ssd/vstorage-cache-for-cluster-stor1, you can execute the following command:
# vstorage-mount -c stor1 /vstorage/stor1 -C /mnt/ssd/vstorage-cache-for-cluster-stor1 -R 64000
If you do not specify the cache size,
vstorage-mount will automatically calculate it using the following formula:
SSD_free_space - 10 GB - SSD_total_space/10
So if the total size of your SSD drive is 100 GB and it has 80 GB of free space, the command will create the local cache with the size of 60 GB.
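The default-size calculation can be sketched as follows (illustrative Python mirroring the formula above and the 1 GB minimum noted in this section; the function name is invented here):

```python
def default_cache_size_gb(ssd_total_gb, ssd_free_gb):
    """vstorage-mount default: free space minus 10 GB minus 10% of total."""
    size = ssd_free_gb - 10 - ssd_total_gb / 10
    # The local cache is not created if the result is less than 1 GB.
    return size if size >= 1 else 0

# 100 GB SSD with 80 GB free -> a 60 GB cache, as in the example above.
print(default_cache_size_gb(100, 80))  # 60.0
```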
- The local cache is not created if the resulting cache size is less than 1 GB.
- If you also plan to configure the SSD drive for write journaling, first create the journal to reserve disk space for it and then create a local cache. For more information, see Configuring SSD Drives for Write Journaling.
Configuring Automatic Cache Creation
You can automate the procedure of creating a local cache so that it is automatically created each time you start the client. To do this, add the information about the cache to the
/etc/fstab file on the client.
For example, to (1) have an automatically created cache with the name of
vstorage-cache-for-cluster-stor1 and size of 64 GB, (2) store it in the
/mnt/ssd directory on the client, and (3) disable checksumming for data in the local cache, specify the following parameters in
/etc/fstab and separate them by commas:
- cache=<path>. Sets the full path to the local cache file.
- cachesize=<size>. Specifies the size of the local cache, in megabytes.
- cachechksum=n. Disables checksumming for your data; by default, data checksumming is enabled.
Once you set these parameters, your
fstab file should look like the following:
vstorage://stor1 /vstorage/stor1 fuse.vstorage cache=/mnt/ssd/vstorage-cache-for-cluster-stor1,cachesize=64000,cachechksum=n 0 0
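Assembling such an entry programmatically is straightforward. A sketch follows (the helper function is hypothetical; the option names come from the parameter list above):

```python
def vstorage_fstab_entry(cluster, mountpoint, cache_path, cachesize_mb,
                         checksum=True):
    """Build an /etc/fstab line for auto-mounting a cluster with a local cache."""
    opts = [f"cache={cache_path}", f"cachesize={cachesize_mb}"]
    if not checksum:
        opts.append("cachechksum=n")  # checksumming is on by default
    return f"vstorage://{cluster} {mountpoint} fuse.vstorage {','.join(opts)} 0 0"

print(vstorage_fstab_entry(
    "stor1", "/vstorage/stor1",
    "/mnt/ssd/vstorage-cache-for-cluster-stor1", 64000, checksum=False))
```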
For more information on options you can use to set up and configure a local cache, see the
vstorage-mount man pages.
Disabling Cache Checksumming
To provide better reliability and integrity of your data, the
vstorage-mount command automatically enables checksumming for the data in the local cache. If necessary, you can disable data checksumming by passing the -S option to vstorage-mount:
# vstorage-mount -c stor1 /vstorage/stor1 -C /mnt/ssd/vstorage-cache-for-cluster-stor1 -R 64000 -S
Querying Cache Information
To check whether the cache for a mounted cluster is active and view its current parameters, you can use this command:
# cat /vstorage/stor1/.vstorage.info/read_cache_info
path : /mnt/ssd/vstorage-cache-for-cluster-stor1
main size (Mb) : 56000
boot size (Mb) : 8000
block size (Kb) : 64
checksum : enabled
If the cache does not exist, the command output is empty. Otherwise, the command prints:
- path to the cache file,
- size of the main and boot caches,
- block size,
- checksum status.
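Since the file uses a simple key : value layout, it is easy to consume from scripts. An illustrative parser (not part of the product tooling):

```python
def parse_read_cache_info(text):
    """Parse the key : value lines of read_cache_info into a dict."""
    info = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            info[key.strip()] = value.strip()
    return info

sample = """\
path : /mnt/ssd/vstorage-cache-for-cluster-stor1
main size (Mb) : 56000
boot size (Mb) : 8000
block size (Kb) : 64
checksum : enabled"""

cache = parse_read_cache_info(sample)
print(cache["main size (Mb)"], cache["checksum"])  # 56000 enabled
```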