Creating backup storage in a public cloud

With Backup Gateway, you can have Acronis Cyber Protect Cloud or Acronis Cyber Protect store backups in a number of public clouds and on-premises object storage solutions:

  • Amazon S3
  • IBM Cloud
  • Alibaba Cloud
  • IIJ
  • Cleversafe
  • Cloudian
  • Microsoft Azure
  • Swift object storage
  • Softlayer (Swift)
  • Google Cloud Platform
  • Wasabi
  • Other solutions using S3

However, compared to the local storage cluster, storing backup data in a public cloud increases the latency of all I/O requests to backups and reduces performance. For this reason, it is recommended to use the local storage cluster as the storage backend.

Backups are cold data with a specific access pattern: the data is not accessed frequently but is expected to be available immediately when accessed. For this use case, it is cost-efficient to choose storage classes intended for long-term storage with infrequently accessed data. The recommended storage classes include the following:

  • Infrequent Access for Amazon S3
  • Cool Blob Storage for Microsoft Azure
  • Nearline and Coldline storage for Google Cloud Platform
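
If you create the destination bucket yourself, you can usually set the storage class at bucket creation time. The following is a minimal illustration for Google Cloud Storage, assuming the gsutil tool is installed and using a hypothetical bucket name and location:

# gsutil mb -c nearline -l us-central1 gs://example-backup-bucket

For Amazon S3, by contrast, the storage class is assigned per object or through a bucket lifecycle rule rather than as a bucket default.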

Archive storage classes, such as Amazon S3 Glacier, Azure Archive Blob, or Google Archive, cannot be used for backup because they do not provide instant access to data. Their high access latency (several hours) makes it technically impossible to browse archives, restore data quickly, or create incremental backups. Even though archive storage is usually very cost-efficient, keep in mind that there are a number of different cost factors: the total cost of public cloud storage consists of payments for storing data, operations, traffic, data retrieval, early deletion, and so on. For example, an archive storage service can charge six months’ worth of storage for a single data recall operation. If the stored data is expected to be accessed more frequently, these added costs significantly increase the total cost of data storage. To avoid slow data retrieval and to cut expenses, we recommend using Acronis Cyber Cloud for storing backup data.

Limitations

  • Redundancy by replication is not supported for backup storage.

Prerequisites

  • A clear understanding of the Storage policies concept.
  • The storage cluster has at least one disk with the Storage role.
  • The destination storage has enough space for both existing and new backups.
  • Each node that will join the backup storage cluster has TCP port 44445 open for outgoing Internet connections, as well as for incoming connections from Acronis backup software (see the connectivity check sketched after this list).
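
A minimal way to check this, assuming the nodes use firewalld, is to list the ports allowed by the local firewall on each node:

# firewall-cmd --list-ports

After the backup storage is created, you can also verify reachability from a machine running the Acronis backup agent with the nc (netcat) utility; the DNS name here is illustrative and is assigned later on the DNS step:

# nc -zv backupstorage.example.com 44445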

To select a public cloud as the backup destination

Admin panel

  1. On the Infrastructure > Networks screen, make sure that the Backup (ABGW) private and Backup (ABGW) public traffic types are added to the networks you intend to use.
  2. Open the Storage services > Backup storage screen, and then click Create backup storage.
  3. On the Backup destination step, select Public cloud.
  4. On the Nodes step, select nodes to add to the backup storage cluster, and then click Next.
  5. On the Public cloud step, specify information relevant for your public cloud provider:

    1. Select a public cloud provider. If your provider is S3 compatible but not in the list, try AuthV2 compatible (S3) or AuthV4 compatible (S3).
    2. Depending on the provider, specify Region, Authentication (keystone) URL, or Endpoint URL.
    3. In the case of Swift object storage, specify the authentication protocol version and the attributes it requires.
    4. Specify user credentials. In the case of Google Cloud, select a JSON file with keys to upload.
    5. Specify the folder (bucket, container) to store backups in. The folder must be writable.
    6. Click Next.

  6. On the Storage policy step, select the desired tier, failure domain, and data redundancy mode. Then, click Next.

  7. On the DNS step, specify an external DNS name for the backup storage, for example, backupstorage.example.com. Backup agents will use this DNS name and TCP port 44445 to upload backup data. Then, click Next.

    • Configure your DNS server according to the example suggested in the admin panel (an illustrative record set is shown after this step).
    • Each time you change the network configuration of nodes in the backup storage cluster, adjust the DNS records accordingly.

    For complex environments, HAProxy can be used to build a scalable and redundant load-balancing platform, which can be easily moved or migrated and is independent of Virtuozzo Hybrid Infrastructure. For more information, refer to https://kb.acronis.com/content/64787.
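
    For reference, here is a minimal illustration of such DNS records in a BIND-style zone file, assuming three nodes in the backup storage cluster with hypothetical public IP addresses:

    backupstorage.example.com.    IN    A    203.0.113.10
    backupstorage.example.com.    IN    A    203.0.113.11
    backupstorage.example.com.    IN    A    203.0.113.12

    You can then verify the resolution, for example, with:

    # dig +short backupstorage.example.com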

  8. On the Acronis account step, specify the following information for your Acronis product:

    • The URL of the cloud management portal (for example, https://cloud.acronis.com/) or the hostname/IP address and port of the local management server (for example, http://192.168.1.2:9877)
    • The credentials of a partner account in the cloud or of an organization administrator on the local management server

  9. On the Summary step, review the configuration, and then click Create.

After creating the backup storage, you can increase its storage capacity at any time by adding space to the public cloud storage.

Command-line interface

Use the following command:

vinfra service backup cluster create --nodes <nodes> --domain <domain>
                                     --reg-account <reg-account>
                                     --reg-server <reg-server>
                                     --tier {0,1,2,3} --encoding <M>+<N> 
                                     --failure-domain {0,1,2,3,4}
                                     --storage-type {s3,swift,azure,google}
                                     [--s3-flavor <flavor>]
                                     [--s3-region <region>]
                                     [--s3-bucket <bucket>]
                                     [--s3-endpoint <endpoint>]
                                     [--s3-access-key-id <access-key-id>]
                                     [--s3-secret-key-id <secret-key-id>]
                                     [--s3-cert-verify <cert-verify>]
                                     [--swift-auth-url <auth-url>]
                                     [--swift-auth-version <auth-version>]
                                     [--swift-user-name <user-name>]
                                     [--swift-api-key <api-key>]
                                     [--swift-domain <domain>]
                                     [--swift-domain-id <domain-id>]
                                     [--swift-tenant <tenant>]
                                     [--swift-tenant-id <tenant-id>]
                                     [--swift-tenant-domain <tenant-domain>]
                                     [--swift-tenant-domain-id <tenant-domain-id>]
                                     [--swift-trust-id <trust-id>]
                                     [--swift-region <region>]
                                     [--swift-internal <internal>]
                                     [--swift-container <container>]
                                     [--swift-cert-verify <cert-verify>]
                                     [--azure-endpoint <endpoint>]
                                     [--azure-container <container>]
                                     [--azure-account-name <account-name>]
                                     [--azure-account-key <account-key>]
                                     [--google-bucket <bucket>]
                                     [--google-credentials <credentials>] [--stdin]
--nodes <nodes>
A comma-separated list of node hostnames or IDs
--domain <domain>
Domain name for the backup cluster
--reg-account <reg-account>
Partner account in the cloud or organization administrator account on the local management server
--reg-server <reg-server>
URL of the cloud management portal or the hostname/IP address and port of the local management server
--tier {0,1,2,3}
Storage tier
--encoding <M>+<N>

Storage erasure coding scheme in the <M>+<N> format, where:

  • M: number of data blocks
  • N: number of parity blocks

For example, 3+2 encoding stores data as three data blocks plus two parity blocks and tolerates the loss of any two of them.
--failure-domain {0,1,2,3,4}
Storage failure domain
--storage-type {s3,swift,azure,google}
Storage type
--stdin
Read the registration account password from standard input.

Storage parameters for the s3 storage type:

--s3-flavor <flavor> (optional)
Flavor name
--s3-region <region> (optional)
Region (for Amazon S3)
--s3-bucket <bucket>
Bucket name
--s3-endpoint <endpoint>
Endpoint URL
--s3-access-key-id <access-key-id>
Access key ID
--s3-secret-key-id <secret-key-id>
Secret key ID
--s3-cert-verify <cert-verify> (optional)
Allow self-signed certificate of the S3 endpoint (true or false)

Storage parameters for the swift storage type:

--swift-auth-url <auth-url>
Authentication (keystone) URL
--swift-auth-version <auth-version> (optional)
Authentication protocol version
--swift-user-name <user-name>
User name
--swift-api-key <api-key>
API key (password)
--swift-domain <domain> (optional)
Domain name
--swift-domain-id <domain-id> (optional)
Domain ID
--swift-tenant <tenant> (optional)
Tenant name
--swift-tenant-id <tenant-id> (optional)
Tenant ID
--swift-tenant-domain <tenant-domain> (optional)
Tenant domain name
--swift-tenant-domain-id <tenant-domain-id> (optional)
Tenant domain ID
--swift-trust-id <trust-id> (optional)
Trust ID
--swift-region <region> (optional)
Region name
--swift-container <container> (optional)
Container name
--swift-cert-verify <cert-verify> (optional)
Allow self-signed certificate of the Swift endpoint (true or false)

Storage parameters for the azure storage type:

--azure-endpoint <endpoint>
Endpoint URL
--azure-container <container>
Container name
--azure-account-name <account-name>
Account name
--azure-account-key <account-key>
Account key

Storage parameters for the google storage type:

--google-bucket <bucket>
Google bucket name
--google-credentials <credentials>
Path to the file with Google credentials

For example, to create a backup cluster from three nodes with Amazon S3 as the storage backend, run:

# vinfra service backup cluster create --nodes node001,node002,node003 \
--storage-type s3 --domain dns.example.com \
--tier 0 --encoding 1+2 --failure-domain host --s3-bucket mybucket --s3-endpoint s3.amazonaws.com \
--s3-access-key-id e302a06df8adbe9fAIF1 --s3-secret-key-id x1gXquRHQXuyiUJQoQMoAohA2TkYHer20o8tfPX7 \
--s3-cert-verify true --reg-account account@example.com --reg-server https://cloud.acronis.com/ --stdin

This command also specifies the domain name, tier, failure domain, registration account and server, as well as the required S3 parameters.
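
Because --stdin is specified, the registration account password is read from standard input instead of an interactive prompt. For example, assuming the password is kept in a root-only file (the file path is hypothetical), you can redirect it into the same command:

# vinfra service backup cluster create <options as above> --stdin < /root/.abgw_reg_password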

You can view the backup storage details in the vinfra service backup cluster show output:

# vinfra service backup cluster show
+----------------+---------------------------------------------------------+
| Field          | Value                                                   |
+----------------+---------------------------------------------------------+
| abgw_address   | dns.example.com                                         |
| account_server | https://cloud.acronis.com                               |
| dc_uid         | 44893a40296ecd9ae64567297a5b2b07-1577203369             |
| migration      | dns: null                                               |
|                | ips: []                                                 |
|                | running: false                                          |
|                | time_left: 0.0                                          |
| reg_type       | abc                                                     |
| storage_params | access_key_id: e302a06df8adbe9fAIF1                     |
|                | bucket: mybucket                                        |
|                | cert_verify: true                                       |
|                | endpoint: s3.amazonaws.com                              |
|                | flavour: null                                           |
|                | region: null                                            |
|                | secret_key_id: x1gXquRHQXuyiUJQoQMoAohA2TkYHer20o8tfPX7 |
| storage_type   | s3                                                      |
+----------------+---------------------------------------------------------+