How to migrate G Suite backup data to a new location?


In some cases, you may want to move your G Suite backups to a different location, especially when the current backup storage is nearly full.

CubeBackup places no restrictions on where you migrate your backup data. You can keep the same type of storage as before (for example, a network server or NAS), or migrate to a different storage medium (for example, from a local disk to Amazon S3, or from Amazon S3 to a NAS).

This migration must be done from the command line on the backup server.

For Windows users:

  • Log in to the backup server.
  • Open Windows PowerShell or a Windows Command Prompt.
  • Run the migration command without arguments to see a brief overview of its usage:
cbackup migration

Tip: On Windows operating systems, the cbackup executable is located in “C:\Program Files\CubeBackup4\bin”.

SYNOPSIS

Here is an overview of the usage of the cbackup migration command:

  • Migrate current backup data to a new local directory:
cbackup migration [-dataIndexPath=<new Index Path>] local <local destination path> 
  • Migrate current backup data to an Amazon S3 bucket:
cbackup migration [-dataIndexPath=<new Index Path>] s3 <s3 destination bucket> <access key id> <secret access key> 
  • Migrate current backup data to network storage:
cbackup migration [-dataIndexPath=<new Index Path>] wins <windows network destination path> <username> [password]

OPTIONS:

  • -dataIndexPath: Specify a new local directory as the data index path.
    If this option is not specified, CubeBackup will keep the data index in its original path. In most cases, there is no need to specify a new path for the data index.

IMPORTANT:

Due to a limitation of the Windows operating system, if you are migrating backup data from or to a Windows network location, the migration process must be run under the Local System account; otherwise, it will fail.

To run the cbackup migration command under the Local System account:

  • Download PSTools from Sysinternals.
  • Use the following command:
psexec -i -s cmd.exe 

The -i flag runs the process interactively, and -s runs it under the Local System account.

  • A new CMD shell will open.
  • In this CMD shell window, run the cbackup migration command using the syntax above.

EXAMPLES

  • Migrate the current backup data from a NAS to a new directory “c:\gs-backup” on a local disk:

Execute this command first:

psexec -i -s cmd.exe

Then in the new CMD shell, run:

cbackup migration local c:\gs-backup 


  • Migrate the current backup data from a local disk to an Amazon S3 bucket “cb-bucket” using an Amazon IAM access key id and secret access key:
cbackup migration s3 cb-bucket AKIA498UP4VB96YU3Q AGrtQWrbIETxf14ETfiux45nLK


  • Migrate the current backup data from Amazon S3 to a directory named “c:\gs-backup” on a local disk, specifying a new data-index path “d:\backupIndex”:
cbackup migration -dataIndexPath=d:\backupIndex local c:\gs-backup


  • Migrate the current backup data on local storage to the NAS location \\SynologyNAS\gs-backup:

Execute this command first:

psexec -i -s cmd.exe

Then in the new CMD shell, run:

cbackup migration wins \\SynologyNAS\gs-backup Synology/admin mypassword123

Here, Synology/admin and mypassword123 are the username and password used to access the Synology NAS.

Please Note:

  • If something goes wrong and the data migration fails, all backup data and settings are still safe and remain unchanged in the original location.

  • After a successful data migration, subsequent backups will automatically use the new backup location.

  • A log file for the migration can be found at “C:\Program Files\CubeBackup4\log\migration.log”.

For Linux users:

  • Log in to the backup server using SSH.
  • Run the cbackup migration command to get a brief introduction of this command.
cbackup migration

TIP: On Linux operating systems, the cbackup file is located in “/opt/cubebackup/bin”.
The data migration can take a considerable amount of time, so using the Linux screen command is recommended to keep the migration process running without maintaining an active shell session.
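As a sketch, a minimal screen workflow might look like this (the session name cb-migration and the destination path are placeholders):

```shell
# Start a named screen session so the migration survives an SSH disconnect.
screen -S cb-migration

# Inside the session, run the migration, for example:
#   /opt/cubebackup/bin/cbackup migration local /var/gs-backup

# Detach with Ctrl-A then D; the migration keeps running in the background.
# Reattach later to check on progress:
screen -r cb-migration
```

If the session has finished by the time you reattach, screen simply reports that no matching session exists.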

SYNOPSIS

Here is an overview of the usage of the cbackup migration command:

  • Migrate current backup data to a new local directory:
cbackup migration [-dataIndexPath=<new Index Path>] local <local destination path> 
  • Migrate current backup data to an Amazon S3 bucket:
cbackup migration [-dataIndexPath=<new Index Path>] s3 <s3 destination bucket> <access key id> <secret access key> 
  • Migrate current backup data to network storage:
cbackup migration [-dataIndexPath=<new Index Path>] nas <mounted network storage path>

OPTIONS:

  • -dataIndexPath: Specify a new local directory as the data index path.
    If this option is not specified, CubeBackup will keep the data index in its original path. In most cases, there is no need to specify a new path for the data index.

EXAMPLES

  • Migrate the current backup data to a new directory “/var/gs-backup” on a local disk:
cbackup migration local /var/gs-backup
  • Migrate the current backup data to an Amazon S3 bucket “cb-bucket” using an Amazon IAM access key id and secret access key:
cbackup migration s3 cb-bucket AKIA498UP4VB96YU3Q AGrtQWrbIETxf14ETfiux45nLK
  • Migrate the current backup data to the mounted network path “/mnt/synologyNAS”, specifying a new data-index path “/var/backupIndex”:
cbackup migration -dataIndexPath=/var/backupIndex  nas /mnt/synologyNAS
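Note that the network path must already be mounted on the Linux backup server before running the nas migration. As an illustration, mounting a CIFS share from a Synology NAS might look like this (the hostname, share name, mount point, and credentials are placeholders):

```shell
# Create a mount point and mount the NAS share via CIFS.
sudo mkdir -p /mnt/synologyNAS
sudo mount -t cifs //SynologyNAS/gs-backup /mnt/synologyNAS \
    -o username=admin,password=mypassword123

# The mounted path can then be used as the migration destination:
#   cbackup migration nas /mnt/synologyNAS
```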

Please Note:

  • If something goes wrong and the data migration fails, all backup data and settings are still safe and remain unchanged in the original location.
  • After a successful data migration, subsequent backups will automatically use the new backup location.
  • A log file for the migration can be found at “/opt/cubebackup/log/migration.log”.

For Docker users:

CubeBackup data migration in a Docker container is almost identical to data migration on Linux, so you can follow the Linux instructions above. However, here are a few things to keep in mind:

  • Logging in to the container is required to execute the migration command:
sudo docker exec -it <container_name> /bin/bash
  • After the data migration, the volume mounted at the old backup path is no longer used, and you can no longer access the backup data directly from the host through the old volume.

  • The Docker container needs to be restarted after the migration:
sudo docker restart <container_name>
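Putting these steps together, a migration session for a container named cubebackup (the container name and destination path are examples only) might look like:

```shell
# Open a shell inside the running CubeBackup container.
sudo docker exec -it cubebackup /bin/bash

# Inside the container, run the migration, for example:
#   /opt/cubebackup/bin/cbackup migration local /cubebackup/new-data
#   exit

# Back on the host, restart the container so it uses the new location.
sudo docker restart cubebackup
```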