How to migrate Google Workspace backup data to a new location


In some cases, you may want to move your Google Workspace (formerly G Suite) backups to a different location, especially when the current backup storage is nearly full.

CubeBackup doesn’t limit the storage type when you migrate your backup data. You can use the same type of storage as before (for example, a network server or NAS) or you can migrate to a different storage medium (for example, from a local disk to Amazon S3, or from Amazon S3 to a NAS).

Note:
Depending on the amount of data being backed up, data migration can take a considerable amount of time. Please note that the CubeBackup console cannot be accessed during data migration, and no new backups will take place until all data has been migrated to the new location.
If it is absolutely necessary to perform a backup, the data migration must be cancelled and started again from the beginning after the backup is finished.

This migration must be done from the command line on the backup server. Please follow the migration instructions appropriate for your operating system:

For Windows users:

  • Log in to the backup server.
  • Open Windows PowerShell or a Windows Command Prompt using the Administrator account.

    Data migration requires Administrator privileges. Please right-click Windows PowerShell or the Windows Command Prompt and select Run as administrator.

  • Run the migration command to see a brief introduction to this command.

    cbackup migration

    Tip: On Windows operating systems, the cbackup file is located in “C:\Program Files\CubeBackup4\bin”.
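
    For example, in a PowerShell window opened as administrator (in a Command Prompt you can omit the leading .\):

    cd "C:\Program Files\CubeBackup4\bin"
    .\cbackup migration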

SYNOPSIS

Here is an overview of the usage of the cbackup migration command:

  • To migrate current backup data to a new local directory:

    cbackup migration [-dataIndexPath=<new Index Path>] local <local destination path> 
  • To migrate current backup data to network storage:

    cbackup migration [-dataIndexPath=<new Index Path>] wins <windows network destination path> <username> [password]
  • To migrate current backup data to an Amazon S3 bucket:

    cbackup migration [-dataIndexPath=<new Index Path>] s3 <s3 destination bucket> <access key id> <secret key id> [storage class] 

    storage class (optional):
    CubeBackup will select Standard-IA as the storage class of your data by default. If you’d like to choose a different storage class, please specify it in the command using the following keywords:
        STANDARD for Standard
        STANDARD_IA for Standard-IA
        ONEZONE_IA for One Zone-IA
        INTELLIGENT_TIERING for Intelligent-Tiering
    This setting will apply to all future backups.

  • To migrate current backup data to a Google Cloud Storage bucket:

    cbackup migration [-dataIndexPath=<new Index Path>] google <google destination bucket> [storage class] 

    storage class (optional):
    CubeBackup will select Coldline as the storage class of your data by default. If you’d like to choose a different storage class, please specify it in the command using the following keywords:
        STANDARD for Standard
        NEARLINE for Nearline
        COLDLINE for Coldline
        ARCHIVE for Archive
    This setting will apply to all future backups.

  • To migrate current backup data to an Azure Blob Storage Container:

    cbackup migration [-dataIndexPath=<new Index Path>] azure <storage account> <access key> <container> [access tier] 

    access tier (optional):
    CubeBackup will select Cool as the access tier of your data by default. If you’d like to choose a different access tier, please specify it in the command using the following keywords:
        Hot for Hot
        Cool for Cool
    This setting will apply to all future backups.

  • To migrate current backup data to S3-compatible storage (Wasabi / Backblaze B2):

    cbackup migration [-dataIndexPath=<new Index Path>] s3c <endpoint> <bucket> <access key id> <secret access key>

OPTIONS:

  • -dataIndexPath: Specify a new local directory as the data index path.
    If this option is not specified, CubeBackup will keep the data index in its original path. In most cases, there is no need to specify a new path for the data index.

IMPORTANT:

Due to a design flaw in the Windows operating system, if you are migrating backup data from or to a Windows network location, the migration process must be run under the Local System account. Otherwise, it won’t succeed.

To run the cbackup migration command under the Local System account:

  • Download PSTools from Sysinternals.
  • Use the following command:

    psexec -i -s cmd.exe 

    -i is for interactive and -s is for system account.

  • When the command completes, a new CMD shell running under the Local System account will be launched.

  • In this CMD shell window, change the current directory to “c:\Program Files\CubeBackup4\bin”, then run the cbackup migration command using the above syntax.
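
    Tip: To verify that the new CMD shell is running under the Local System account, you can run the built-in whoami command; the output should be nt authority\system:

    whoami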

EXAMPLES

  • To migrate the current backup data from a NAS to a new directory “c:\gs-backup” on a local disk, specifying a new data-index path “d:\backupIndex”:

    First execute the following command:

    psexec -i -s cmd.exe

    Then in the new CMD shell, change the current directory to “c:\Program Files\CubeBackup4\bin”, and run:

    cbackup migration -dataIndexPath=d:\backupIndex local c:\gs-backup 


  • To migrate the current backup data to a NAS location \\SynologyNAS\gs-backup:

    First execute the following command:

    psexec -i -s cmd.exe

    Then in the new CMD shell, change the current directory to “c:\Program Files\CubeBackup4\bin”, and run:

    cbackup migration wins \\SynologyNAS\gs-backup  Synology/admin  mypassword123

    Where Synology/admin and mypassword123 are the username and password used to access the Synology NAS.


  • To migrate the current backup data to an Amazon S3 bucket “cb-bucket” using the storage class “One Zone-IA”, an Amazon IAM access key id AKIA498…96YU3Q and a secret access key AGrtQW…x45nLK:

    cbackup migration s3 cb-bucket AKIA498...96YU3Q AGrtQW...x45nLK ONEZONE_IA   


  • To migrate the current backup data to a Google Cloud Storage bucket “cb-bucket” using the default storage class “Coldline”:

    cbackup migration google cb-bucket   

    Please note: The Google Cloud Storage bucket must be in the same project that was created for CubeBackup during the initial configuration.


  • To migrate the current backup data to an Azure Blob Storage container “cubecontainer” using the default access tier “Cool”, an Azure storage account name cubeadmin and an access key AGrtQ…x45nLK:

    cbackup migration azure cubeadmin AGrtQ...x45nLK cubecontainer   


  • To migrate the current backup data to a Wasabi bucket “cube-bucket” in the us-east-1 region using a Wasabi access key id ZD2LD…9ZGSN and a secret access key azE7Jm…2JSwF4W5:

    cbackup migration s3c s3.wasabisys.com cube-bucket ZD2LD...9ZGSN azE7Jm...2JSwF4W5   


  • To migrate the current backup data to a Backblaze B2 bucket “cubebackuptest” in the us-west-000 region using a Backblaze access key id 000cd3e71…0000004 and a secret access key K000I…Xn9YH4:

    cbackup migration s3c https://s3.us-west-000.backblazeb2.com cubebackuptest 000cd3e71...0000004 K000I...Xn9YH4


Please Note:

  • If something goes wrong and the data migration fails, all backup data and settings are still safe and remain unchanged in the original location.

  • After a successful data migration, subsequent backups will automatically use the new backup location.

  • A log file for the migration can be found at “C:\Program Files\CubeBackup4\log\migration.log”.

  • After a successful migration, the original backup data remains untouched in its original location for safety reasons. You may manually remove this data after you have confirmed that CubeBackup is functioning well in its new backup location.

  • If you filled in an email address under “Notification receiver” when you started the migration, a notification email will be sent to you when the migration is complete.

  • The data migration may take a very long time. For a large amount of backup data, it may take days or even weeks to complete. However, CubeBackup records each migrated file in the log file, so if you want to verify whether the data migration is still running, you can open the migration log file (C:\Program Files\CubeBackup4\log\migration.log) and check whether new lines are continually being added, for example by following the log in PowerShell as shown below.
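
    For example, to follow the migration log in real time, you can run the following in a PowerShell window (assuming the default installation path):

    Get-Content "C:\Program Files\CubeBackup4\log\migration.log" -Wait -Tail 20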

For Linux users:

  • Log in to the backup server using SSH.
  • Run the cbackup migration command to get a brief introduction of this command.

    sudo cbackup migration

    TIP: On Linux operating systems, the cbackup file is located in “/opt/cubebackup/bin”.

  • The data migration can take a considerably long time, so using the Linux screen command is recommended to keep the migration process running without maintaining an active shell session (see the example below).
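
    For example, a simple screen workflow might look like this (the session name “migration” and the local destination path are only examples):

    screen -S migration                            # start a named screen session
    sudo cbackup migration local /var/gs-backup    # run the migration inside the session
    # press Ctrl-A, then D to detach; the migration keeps running
    screen -r migration                            # reattach later to check progress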

SYNOPSIS

Here is an overview of the usage of the cbackup migration command:

  • To migrate current backup data to a new local directory:

    sudo cbackup migration [-dataIndexPath=<new Index Path>] local <local destination path> 
  • To migrate current backup data to network storage:

    sudo cbackup migration [-dataIndexPath=<new Index Path>] nas <mounted network storage path>
  • To migrate current backup data to an Amazon S3 bucket:

    sudo cbackup migration [-dataIndexPath=<new Index Path>] s3 <s3 destination bucket> <access key id> <secret key id> [storage class]

    storage class (optional):
    CubeBackup will select Standard-IA as the storage class of your data by default. If you’d like to choose a different storage class, please specify it in the command using the following keywords:
        STANDARD for Standard
        STANDARD_IA for Standard-IA
        ONEZONE_IA for One Zone-IA
        INTELLIGENT_TIERING for Intelligent-Tiering
    This setting will apply to all future backups.

  • To migrate current backup data to a Google Cloud Storage bucket:

    sudo cbackup migration [-dataIndexPath=<new Index Path>] google <google destination bucket> [storage class]

    storage class (optional):
    CubeBackup will select Coldline as the storage class of your data by default. If you’d like to choose a different storage class, please specify it in the command using the following keywords:
        STANDARD for Standard
        NEARLINE for Nearline
        COLDLINE for Coldline
        ARCHIVE for Archive
    This setting will apply to all your future backups.

  • To migrate current backup data to an Azure Blob Storage Container:

    sudo cbackup migration [-dataIndexPath=<new Index Path>] azure <storage account> <access key> <container> [access tier]

    access tier (optional):
    CubeBackup will select Cool as the access tier of your data by default. If you’d like to choose a different access tier, please specify it in the command using the following keywords:
        Hot for Hot
        Cool for Cool
    This setting will apply to all future backups.

  • To migrate current backup data to S3-compatible storage (Wasabi / Backblaze B2):

    sudo cbackup migration [-dataIndexPath=<new Index Path>] s3c <endpoint> <bucket> <access key id> <secret access key>

OPTIONS:

  • -dataIndexPath: Specify a new local directory as the data index path.
    If this option is not specified, CubeBackup will keep the data index in its original path. In most cases, there is no need to specify a new path for the data index.

EXAMPLES

  • To migrate the current backup data to a new directory “/var/gs-backup” on a local disk:

    sudo cbackup migration local  /var/gs-backup 


  • To migrate the current backup data to the mounted network path “/mnt/synologyNAS”, specifying a new data-index path “/var/backupIndex” (the network share must already be mounted on the backup server; see the note after these examples):

    sudo cbackup migration -dataIndexPath=/var/backupIndex  nas /mnt/synologyNAS


  • To migrate the current backup data to an Amazon S3 bucket “cb-bucket” using the storage class “One Zone-IA”, an Amazon IAM access key id AKIA498…96YU3Q and a secret access key AGrtQW…x45nLK:

    sudo cbackup migration s3 cb-bucket AKIA498...96YU3Q AGrtQW...x45nLK ONEZONE_IA   


  • To migrate the current backup data to a Google Cloud Storage bucket “cb-bucket” using the default storage class “Coldline”:

    sudo cbackup migration google cb-bucket   

    Please note: The Google Cloud Storage bucket must be in the same project that was created for CubeBackup during the initial configuration.


  • To migrate the current backup data to an Azure Blob Storage container “cubecontainer” using the default access tier “Cool”, an Azure storage account name cubeadmin and an access key AGrtQ…x45nLK:

    sudo cbackup migration azure cubeadmin AGrtQ...x45nLK cubecontainer   


  • To migrate the current backup data to a Wasabi bucket “cube-bucket” in the us-east-1 region using a Wasabi access key id ZD2LD…9ZGSN and a secret access key azE7Jm…2JSwF4W5:

    sudo cbackup migration s3c s3.wasabisys.com cube-bucket ZD2LD...9ZGSN azE7Jm...2JSwF4W5   


  • To migrate the current backup data to a Backblaze B2 bucket “cubebackuptest” in the us-west-000 region using a Backblaze access key id 000cd3e71…0000004 and a secret access key K000I…Xn9YH4:

    sudo cbackup migration s3c https://s3.us-west-000.backblazeb2.com cubebackuptest 000cd3e71...0000004 K000I...Xn9YH4
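
  • Note on the nas target: the network share must already be mounted on the backup server before running the migration command. As a rough sketch, an SMB/CIFS share could be mounted like this (the share path, mount point and credentials are only examples, and the cifs-utils package must be installed):

    sudo mkdir -p /mnt/synologyNAS
    sudo mount -t cifs //SynologyNAS/gs-backup /mnt/synologyNAS -o username=admin,password=mypassword123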

Please Note:

  • If something goes wrong and the data migration fails, all backup data and settings are still safe and remain unchanged in the original location.

  • After a successful data migration, subsequent backups will automatically use the new backup location.

  • A log file for the migration can be found at “/opt/cubebackup/log/migration.log”.

  • After a successful migration, the original backup data remains untouched in its original location for safety reasons. You may manually remove this data after you have confirmed that CubeBackup is functioning well in its new backup location.

  • If you filled in an email address under “Notification receiver” when you started the migration, a notification email will be sent to you when the migration is complete.

  • The data migration may take a very long time. For a large amount of backup data, it may take days or even weeks to complete. However, CubeBackup records each migrated file in the log file, so if you want to verify whether the data migration is still running, you can follow the migration log file (/opt/cubebackup/log/migration.log) to see whether new lines are continually being added:

    sudo tail -f /opt/cubebackup/log/migration.log

For Docker users:

CubeBackup data migration in a Docker container is almost identical to data migration on Linux, so you can simply follow the Linux instructions above. However, here are a few things to keep in mind:

  • Logging in to the container is required to execute the migration command:

    sudo docker exec -it <container_name> /bin/bash
  • After the data migration, the volume mounted to the old backup path is no longer used, and you can no longer access the backup data directly from the host through the old volume.

  • The Docker container needs to be restarted after the migration:

    sudo docker restart <container_name>
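
  • To check the migration progress from the host, you can follow the migration log inside the container (this assumes the default installation path /opt/cubebackup inside the container):

    sudo docker exec -it <container_name> tail -f /opt/cubebackup/log/migration.log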