Creating backup storage in a public cloud
With Backup Gateway, you can have Acronis Cyber Protect Cloud or Acronis Cyber Protect store backups in a number of public clouds and on-premises object storage solutions:
- Amazon S3
- IBM Cloud
- Alibaba Cloud
- IIJ
- Cleversafe
- Cloudian
- Microsoft Azure
- Swift object storage
- Softlayer (Swift)
- Google Cloud Platform
- Wasabi
- Other solutions using S3
However, compared to the local storage cluster, storing backup data in a public cloud increases the latency of all I/O requests to backups and reduces performance. For this reason, it is recommended to use the local storage cluster as the storage backend.
Backups are cold data with a specific access pattern: the data is not accessed frequently but is expected to be available immediately when accessed. For this use case, it is cost-efficient to choose storage classes intended for long-term storage with infrequently accessed data. The recommended storage classes include the following:
- Infrequent Access for Amazon S3
- Cool Blob Storage for Microsoft Azure
- Nearline and Coldline storage for Google Cloud Platform
Archive storage classes like Amazon S3 Glacier, Azure Archive Blob, or Google Archive cannot be used for backup because they do not provide instant access to data. High access latency (several hours) makes it technically impossible to browse archives, restore data quickly, or create incremental backups. Even though archive storage is usually very cost-efficient, keep in mind that there are a number of different cost factors: the total cost of public cloud storage consists of payments for storing data, operations, traffic, data retrieval, early deletion, and so on. For example, an archive storage service can charge six months' worth of storage payments for a single data recall operation. If the stored data is accessed more frequently than expected, these added costs can significantly increase the total cost of data storage. To avoid low data retrieval rates and to cut expenses, we recommend using Acronis Cyber Protect Cloud for storing backup data.
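To make this trade-off concrete, here is a rough back-of-the-envelope calculation. All prices are hypothetical (real rates vary by provider, region, and object size); the "six months of storage per recall" fee mirrors the example above:

```shell
# Hypothetical prices in milli-dollars ($0.001) per GB per month; real rates vary.
SIZE_GB=1000        # amount of backup data
ARCHIVE_RATE=2      # archive tier: $0.002/GB-month (example figure)
IA_RATE=12          # infrequent-access tier: $0.012/GB-month (example figure)
MONTHS=6
RECALLS=6           # number of recall operations over the period

# Plain storage cost over six months on each tier
archive_store=$((SIZE_GB * ARCHIVE_RATE * MONTHS))
ia_store=$((SIZE_GB * IA_RATE * MONTHS))

# Each archive recall is billed as six extra months of storage
archive_total=$((archive_store + RECALLS * SIZE_GB * ARCHIVE_RATE * 6))

echo "archive + ${RECALLS} recalls: \$$((archive_total / 1000))"
echo "infrequent access:            \$$((ia_store / 1000))"
```

With these example figures, six recalls alone make the archive tier ($84) more expensive than the infrequent-access tier ($72), even though its per-GB storage price is six times lower.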
Limitations
- Redundancy by replication is not supported for backup storage.
- With external backup destinations, redundancy has to be provided by the external storage. Backup storage does not provide data redundancy or perform data deduplication itself.
Prerequisites
- A clear understanding of the Storage policies concept.
- The storage cluster has at least one disk with the Storage role.
- The destination storage has enough space for both existing and new backups.
- Ensure that each node joining the backup storage cluster has TCP port 44445 open for outgoing Internet connections, as well as for incoming connections from Acronis backup software.
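On nodes managed with firewalld, opening the port might look like the following sketch. The zone name and the connectivity check are assumptions, not part of the product's setup procedure; adjust them to your environment:

```shell
# Allow incoming connections on TCP port 44445 (assumes firewalld;
# "public" is an example zone -- use the zone bound to your backup network).
firewall-cmd --zone=public --add-port=44445/tcp --permanent
firewall-cmd --reload

# Optional reachability check from a backup agent machine
# (assumes nc is available, e.g. from the nmap-ncat package).
nc -zv backupstorage.example.com 44445
```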
To select a public cloud as the backup destination
Admin panel
- On the Infrastructure > Networks screen, make sure that the Backup (ABGW) private and Backup (ABGW) public traffic types are added to the networks you intend to use.
- Open the Storage services > Backup storage screen, and then click Create backup storage.
- On the Backup destination step, select Public cloud.
- On the Nodes step, select nodes to add to the backup storage cluster, and then click Next.
- On the Public cloud step, specify information relevant for your public cloud provider:
- Select a public cloud provider. If your provider is S3 compatible but not in the list, try AuthV2 compatible (S3) or AuthV4 compatible (S3).
- Depending on the provider, specify Region, Authentication (keystone) URL, or Endpoint URL.
- In the case of Swift object storage, specify the authentication protocol version and attributes required by it.
- Specify user credentials. In the case of Google Cloud, select a JSON file with keys to upload.
- Specify the folder (bucket, container) to store backups in. The folder must be writable.
- Click Next.
- On the Storage policy step, select the desired tier, failure domain, and data redundancy mode for the local storage. Then, click Next.
- On the DNS step, do one of the following:
- Select Register now, and then specify an external DNS name for the backup storage (for example, backupstorage.example.com). Backup agents will use this DNS name and TCP port 44445 to upload backup data.
- Configure your DNS server according to the example suggested in the admin panel.
- Each time you change the network configuration of nodes in the backup storage cluster, adjust the DNS records accordingly.
- Select Register later to add registrations for your backup storage later or to configure it as the secondary cluster for geo-replication.
For complex environments, HAProxy might be used to build a scalable and redundant load balancing platform, which can be easily moved or migrated and is independent from Virtuozzo Hybrid Infrastructure. For more information, refer to https://kb.acronis.com/content/64787.
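The DNS records suggested in the admin panel typically map one external DNS name to the public addresses of all nodes in the backup storage cluster, so that agents are balanced across nodes in round-robin fashion. A hypothetical BIND-style zone fragment (the addresses are placeholders) could look like:

```
; Round-robin A records for the backup storage cluster.
; Short TTL so that record changes propagate quickly after
; a node's network configuration changes.
backupstorage.example.com.  300  IN  A  203.0.113.10
backupstorage.example.com.  300  IN  A  203.0.113.11
backupstorage.example.com.  300  IN  A  203.0.113.12
```

Remember to add or remove records whenever nodes join or leave the cluster, as noted above.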
- If you selected Register now, specify the following information for your Acronis product on the Acronis account step:
- The URL of the cloud management portal (for example, https://cloud.acronis.com/) or the hostname/IP address and port of the local management server (for example, http://192.168.1.2:9877).
- The credentials of a partner account in the cloud or of an organization administrator on the local management server. Note that the account must be converted to a service account in the Acronis Cyber Protect Cloud management portal. You can do this on the Company management screen in the Users section.
- On the Summary step, review the configuration, and then click Create.
After creating the backup storage, you can increase its storage capacity at any time by adding space to the public cloud storage.
Command-line interface
Use the following command:
vinfra service backup cluster deploy-standalone --nodes <nodes> --name <name> --address <address> [--location <location>] --username <username> --account-server <account-server> --tier {0,1,2,3} --encoding <M>+<N> --failure-domain {0,1,2,3,4} --storage-type {s3,swift,azure,google} [--s3-flavor <flavor>] [--s3-region <region>] [--s3-bucket <bucket>] [--s3-endpoint <endpoint>] [--s3-access-key-id <access-key-id>] [--s3-secret-key-id <secret-key-id>] [--s3-cert-verify <cert-verify>] [--swift-auth-url <auth-url>] [--swift-auth-version <auth-version>] [--swift-user-name <user-name>] [--swift-api-key <api-key>] [--swift-domain <domain>] [--swift-domain-id <domain-id>] [--swift-tenant <tenant>] [--swift-tenant-id <tenant-id>] [--swift-tenant-domain <tenant-domain>] [--swift-tenant-domain-id <tenant-domain-id>] [--swift-trust-id <trust-id>] [--swift-region <region>] [--swift-internal <internal>] [--swift-container <container>] [--swift-cert-verify <cert-verify>] [--azure-endpoint <endpoint>] [--azure-container <container>] [--azure-account-name <account-name>] [--azure-account-key <account-key>] [--google-bucket <bucket>] [--google-credentials <credentials>] [--stdin]
--nodes <nodes>
- A comma-separated list of node hostnames or IDs
--name <name>
- Backup registration name.
--address <address>
- Backup registration address.
--location <location>
- Backup registration location.
--username <username>
- Credentials of a partner account in the cloud or of an organization administrator on the local management server.
--account-server <account-server>
- URL of the cloud management portal or the hostname/IP address and port of the local management server.
--tier {0,1,2,3}
- Storage tier
--encoding <M>+<N>
- Storage erasure encoding mapping in the format <M>+<N>, where M is the number of data blocks and N is the number of parity blocks
--failure-domain {0,1,2,3,4}
- Storage failure domain
--storage-type {local,nfs,s3,swift,azure,google}
- Storage type
--stdin
- Use for setting the registration password from stdin.
Storage parameters for the s3 storage type:
--s3-flavor <flavor>
- Flavor name (optional)
--s3-region <region>
- Set region for Amazon S3 (optional)
--s3-bucket <bucket>
- Bucket name
--s3-endpoint <endpoint>
- Endpoint URL
--s3-access-key-id <access-key-id>
- Access key ID
--s3-secret-key-id <secret-key-id>
- Secret key ID
--s3-cert-verify <cert-verify>
- Allow a self-signed certificate of the S3 endpoint (optional)
Storage parameters for the swift storage type:
--swift-auth-url <auth-url>
- Authentication (keystone) URL
--swift-auth-version <auth-version>
- Authentication protocol version (optional)
--swift-user-name <user-name>
- User name
--swift-api-key <api-key>
- API key (password)
--swift-domain <domain>
- Domain name (optional)
--swift-domain-id <domain-id>
- Domain ID (optional)
--swift-tenant <tenant>
- Tenant name (optional)
--swift-tenant-id <tenant-id>
- Tenant ID (optional)
--swift-tenant-domain <tenant-domain>
- Tenant domain name (optional)
--swift-tenant-domain-id <tenant-domain-id>
- Tenant domain ID (optional)
--swift-trust-id <trust-id>
- Trust ID (optional)
--swift-region <region>
- Region name (optional)
--swift-container <container>
- Container name (optional)
--swift-cert-verify <cert-verify>
- Allow a self-signed certificate of the Swift endpoint (true or false) (optional)
Storage parameters for the azure storage type:
--azure-endpoint <endpoint>
- Endpoint URL
--azure-container <container>
- Container name
--azure-account-name <account-name>
- Account name
--azure-account-key <account-key>
- Account key
Storage parameters for the google storage type:
--google-bucket <bucket>
- Google bucket name
--google-credentials <credentials>
- Path to the file with Google credentials
For example, to create the backup cluster from three nodes on the S3 storage, run:
# vinfra service backup cluster deploy-standalone --nodes node001,node002,node003 --name registration1 \
--address backupstorage.example.com --storage-type s3 --tier 0 --encoding 1+2 --failure-domain host --s3-bucket mybucket \
--s3-endpoint s3.amazonaws.com --s3-access-key-id e302a06df8adbe9fAIF1 --s3-secret-key-id x1gXquRH<…> \
--s3-cert-verify true --username account@example.com --account-server https://cloud.acronis.com/ --stdin
This command also specifies the registration name and address, tier, failure domain, registration account and server, as well as the required S3 parameters.
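For comparison, a deployment to Microsoft Azure would use the azure parameter group instead of the S3 one. The following sketch reuses the flags documented above; the endpoint, container, and account values are hypothetical placeholders:

```
# vinfra service backup cluster deploy-standalone --nodes node001,node002,node003 --name registration1 \
--address backupstorage.example.com --storage-type azure --tier 0 --encoding 1+2 --failure-domain host \
--azure-endpoint https://mystorageaccount.blob.core.windows.net --azure-container backups \
--azure-account-name mystorageaccount --azure-account-key <account-key> \
--username account@example.com --account-server https://cloud.acronis.com/ --stdin
```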
You can view the backup storage details in the vinfra service backup cluster show output:
# vinfra service backup cluster show
+-----------------+---------------------------------------------+
| Field           | Value                                       |
+-----------------+---------------------------------------------+
| dc_uid          | 966ac53e-a92c-11ec-be79-fa163ea9f01a        |
| deployment_mode | - standalone                                |
| geo_replication |                                             |
| hosts           | - hostname: node001.vstoragedomain          |
|                 |   id: 24a953ce-b50e-40c2-bf44-0668aafb421d  |
|                 |   systemd: active                           |
|                 | - hostname: node002.vstoragedomain          |
|                 |   id: c1de8940-c38a-d7ae-41b5-bdd35581a906  |
|                 |   systemd: active                           |
|                 | - hostname: node003.vstoragedomain          |
|                 |   id: 2307dc2c-a954-70a2-3673-8a8f832bd46a  |
|                 |   systemd: active                           |
| registrations   | - account_server: https://cloud.acronis.com |
|                 |   address: backupstorage.example.com        |
|                 |   expires: '2025-03-20T15:20:59+00:00'      |
|                 |   id: be526718-d9f8-4f2c-9bd3-04a987f7e4c4  |
|                 |   name: registration1                       |
|                 |   type: ABC                                 |
|                 |   username: account@example.com             |
| status          | deployed                                    |
| storage_params  |                                             |
| storage_type    | local                                       |
| upstreams       | []                                          |
+-----------------+---------------------------------------------+