Upgrade or Migrate to the Latest Version of Cloud Exchange
Important
Please contact your SE/AM via ps-scoping@netskope.com to avail of the CE Professional Services SKUs for CE installation, deployment, configuration, and upgrade/migration:
NK-PS-CE-BASE
NK-PS-ADDON-CEM
Follow these upgrade or migration steps to get the latest version of Cloud Exchange.
Use the upgrade method if the host machine/OS is unchanged and you have GitHub and Docker Hub connectivity to pull the latest upgrade package.
If the host machine is changing, use the migration method. Currently, if you want to use CE as a VM, only migration is supported, since you will be deploying the new host machines via OVA/AMI/VHDX. If you are moving from standalone to HA, also use the migration method, because you will need to set up multiple host machines per the HA requirements.
Prerequisites
- Before initiating an upgrade or migration, ensure your instance meets the system requirements for Cloud Exchange.
- Disable all Source plugins in all modules, then wait until all in-queue tasks have finished before proceeding with the migration. Use the steps below to identify in-queue tasks (a CLI alternative is sketched after these prerequisites).
- After disabling the Source plugins in all modules, wait for 20-30 minutes.
- Apply filter on logs:
- Go to the Logging section.
- Click on Filter Query.
- In the Filters input, enter the following:
message like "Ingested " || message like "Stored " || message like "task(s)" || message like "Completed storing"
- Click on Load.
- Apply the filter to view relevant logs.
- Monitor Logs: Once the logging filter is applied, monitor for new logs. When no new logs appear under this filter, you can proceed to the next step.
- Ensure you have the maintenance password readily available, as it will be needed to complete certain steps in this process.
Important
If the maintenance password is lost, the data cannot be retained.
Note
RabbitMQ data migration is not supported due to the change of queue type from Classic to Quorum queues.
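As a CLI alternative to the log-filter check above, you can grep the container logs and list the RabbitMQ queues directly. This is a minimal sketch: the core and rabbitmq-stats service names are assumptions about your docker-compose.yml, and on a RHEL-based OS substitute podman-compose for docker compose. Empty output from both commands suggests the in-queue tasks have drained.
$ sudo docker compose logs --since 30m core | grep -E 'Ingested |Stored |task\(s\)|Completed storing'   # 'core' service name is an assumption
$ sudo docker compose exec rabbitmq-stats rabbitmqctl list_queues name messages   # 'rabbitmq-stats' service name is an assumption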
Upgrade to 5.1.0
Upgrade to 5.1.0 Standalone
From 3.x or 4.x or 5.0.x Standalone
You should update all your Netskope tenants with a V2 token that has access to all the dataexport endpoints before proceeding with the upgrade.
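To sanity-check that a tenant's V2 token can reach a dataexport endpoint before upgrading, here is a hedged sketch using curl; the tenant hostname, event type, and index name are placeholders, and the query parameters are assumptions based on the public v2 API shape. A 200 response indicates the token has access.
$ curl -s -o /dev/null -w '%{http_code}\n' -H 'Netskope-Api-Token: <v2-token>' 'https://<tenant>.goskope.com/api/v2/events/dataexport/events/alert?index=ce_check&operation=head'   # index name 'ce_check' is illustrative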
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- Go to the existing ta_cloud_exchange directory with the docker-compose.yml file. Stop the Cloud Exchange containers.
$ sudo ./stop
If the output of the ./stop command is ./stop: No such file or directory, execute the following command.
$ sudo docker compose down -v
- If you have made any local changes to the docker-compose.yml file, reset them (you might need sudo).
$ sudo git reset --hard
- Pull the latest changes.
$ sudo git pull
- Execute the setup script.
$ sudo python3 ./setup
- Launch Cloud Exchange.
$ sudo ./start
- Close all of your Cloud Exchange browser instances and log in again in Incognito mode, or clear the browser cache before logging in.
The Cloud Exchange UI is now accessible with the system’s IP: http(s)://<ip>.
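To confirm the UI is actually serving before you log in, here is a quick reachability sketch; -k skips certificate validation, which assumes the default self-signed certificate.
$ curl -k -I https://<ip>   # expect an HTTP response header block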
Upgrade to 5.1.0 HA
From 3.x or 4.x or 5.0.x Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- Stop the standalone deployment and copy the data.
$ sudo ./stop
- Copy the data/mongo-data/data/ directory from the source machine to the destination machine. The new path should be the location where you installed ta_cloud_exchange on the target machine.
$ sudo scp -r data/mongo-data/data/ username@ip_address:<new-machine-path>/data/mongo-data/
Make sure <username> is the user that cloned the ta_cloud_exchange repo on the target machine.
- Copy the data/custom_plugins directory from the source machine to the destination machine if custom plugins are being used. The new path for the machine should be the location where you installed ta_cloud_exchange on the target machine.
$ sudo scp -r data/custom_plugins username@ip_address:<new-machine-path>/data/custom_plugins
Make sure <username> is the user that cloned the ta_cloud_exchange repo on the target machine.
- To transition copied data into the new HA configuration, refer to the HA deployment guide for instructions on adding the necessary HA parameters and initializing the cluster. This process facilitates the migration of MongoDB data into the replica set. Additionally, it involves the importation of RabbitMQ messages into the new HA machine, with subsequent integration of other nodes into the cluster.
- If custom plugins and/or custom repos are being used, make sure you copy those to the designated shared location following the execution of the setup script. This step will be included in the installation steps.
Following the successful completion of the HA migration, there may be a brief delay during the first few minutes. This is attributable to MongoDB's replication process, which distributes data across all nodes in the system (a replica-set status check is sketched at the end of this subsection).
- Run the setup script on the primary node first and update the IP list. Keep the existing IPs in the same order and add the new IP address at the end.
$ sudo python3 ./setup
- If custom plugins are being used in your previous standalone setup, it is necessary to transfer the data of these custom plugins to the shared drive by executing the following command after the setup has been completed successfully.
$ sudo cp -r data/custom_plugins/* <shared_drive>/custom_plugins/
- Run the setup script in the remaining machines to add the connection info.
$ sudo python3 ./setup --location /path/to/mounted/directory
- Run the start script on the primary node first, and then run it on the remaining machines. Finally, run the start script on the new node.
$ sudo ./start
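Because of the replication delay noted above, you can confirm the replica set has caught up before relying on the new cluster. This is a minimal sketch run from the primary node; the mongodb-primary service name and the mongosh client are assumptions about the compose file, and authentication flags may be required in your environment.
$ sudo docker compose exec mongodb-primary mongosh --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'   # service name is an assumption; all members should report PRIMARY or SECONDARY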
From 5.0.0 HA
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- To ensure all containers stop properly, stop the secondary and third nodes before stopping the primary node with this command.
$ sudo ./stop
- Pull the latest changes in all the nodes from GitHub with this command.
$ sudo git pull
If any issues arise during the git pull command, reset the changes using:
$ sudo git reset --hard
- Run the setup script in the primary node first using this command.
$ sudo python3 ./setup
Next, run this command in the secondary and third nodes.
$ sudo python3 ./setup --location <shared_drive>
- Start the containers, beginning with the primary node followed by the other nodes, using:
$ sudo ./start
Wait for a few minutes following the execution of the start script on the primary node.
Migrate to 5.1.0
Migrate to 5.1.0 Containerized or CE as VM using script (Standalone)
This script migrates Cloud Exchange v4.2.0 or v5.0.1 (Containerized or CE as a VM) to v5.1.0 Standalone. It must be executed on the destination (new) machine.
Prerequisites for this script
- When migrating from a containerized machine, you must have the root password of the old machine where CE is installed.
- When migrating from CE as a VM machine, the script will work with the password of the cteadmin user.
- In order to run the migrate_ce script, you must have the latest Cloud Exchange repository cloned.
- For a containerized deployment, use this command to clone the latest Cloud Exchange repository.
$ sudo git clone https://github.com/netskopeoss/ta_cloud_exchange
- For CE as a VM, deploy the latest OVA/AMI/GCP/Azure image.
Steps
- On the destination machine, go to the directory where the latest Cloud Exchange is deployed and run this command:
$ sudo ./migrate_ce
- Next, it will prompt for the old machine's username, the machine IP, and the old machine's password (or the file path of the PEM file, if PEM-based authentication is used).
- After a successful connection to the old machine, it runs these steps automatically to perform the migration (an illustrative session is shown at the end of these steps):
- The stop script.
- It zips the existing data on the old machine and transfers it to the new machine.
- After a successful transfer, it unzips the file into the desired folder on the new machine.
- It then runs the setup script and asks for the inputs needed to bring up Cloud Exchange v5.1.
- Finally, it runs the start script.
- To check the status of the containers, run this command:
$ sudo docker ps
(For Podman, use $ sudo podman ps.)
- Now open the UI of the newly migrated Cloud Exchange v5.1 in a web browser. Log in with your old CE UI credentials to check the configurations and data, and ensure everything is working properly.
- Run the following command after the UI is accessible.
- If you are using CE as VM standalone:
$ git -C /opt/cloudexchange/cloudexchange/data/repos/Default clean -df
- If you are using Containerized CE (both standalone and HA), run the command below; use podman-compose instead of docker compose on a RHEL-based OS.
$ sudo docker compose exec -w /opt/netskope/repos/Default core git clean -df
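For orientation, here is an illustrative migrate_ce session referenced from the steps above; the prompt wording and every value shown are hypothetical, and the script's actual prompts may differ.
$ sudo ./migrate_ce
Old machine username: cteadmin        # hypothetical value
Old machine IP: 192.0.2.10            # hypothetical documentation address
Password (or PEM file path): /home/cteadmin/old-ce.pem   # hypothetical path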
Migrate to 5.1.0 Containerized (Standalone – Manual workflow)
From 3.x or 4.x Containerized Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- SSH into the current standalone instance.
- Go to the existing ta_cloud_exchange directory with the docker-compose.yml file.
- Stop the CE containers.
$ sudo ./stop
- If the output of the ./stop command is ./stop: No such file or directory, execute the following command:
$ sudo docker compose down -v
- Grab the MAINTENANCE_PASSWORD from the standalone setup. This will be required during the installation of the new CE as VM setup (as the value for Maintenance Password).
If the maintenance password is lost, the data cannot be retained.
- Create a zip file for the Mongo data.
$ cd ta_cloud_exchange/data
$ sudo zip -r ce_backup.zip mongo-data/
- Add custom plugins to the backup zip. This step is applicable only if you’re using custom plugins.
$ sudo zip -r ce_backup.zip custom_plugins
- Copy the backup to the new instance using this scp command (an optional transfer-verification sketch appears at the end of this procedure).
$ sudo scp ce_backup.zip cteadmin@<ip-of-vm>:<ta_cloud_directory>/data
To transfer the data to an AMI or Azure image-based CE as a VM, use this command.
$ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:<ta_cloud_directory>/data
- SSH into the new CE as VM instance.
$ ssh cteadmin@<ip-of-vm>
- Go to the data directory of the Cloud Exchange installation.
$ cd <ta_cloud_exchange-dir>/data
- Extract the data inside this directory and go back to the working directory.
$ sudo unzip ce_backup.zip
$ cd ..
- Make sure Cloud Exchange is not started yet. If it was started, stop it using this command.
$ sudo ./stop
- Execute the setup script and follow the steps.
$ sudo ./setup
Make sure you enter the same maintenance password that you noted before migration.
- Launch Cloud Exchange.
$ sudo ./start
Check the configured plugins after migration; you may need to reconfigure some plugins due to new plugin changes.
The UI is now accessible with the system’s IP: https://<ip>.
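Optionally, to confirm the backup transferred intact (referenced from the copy step above), here is a sketch using standard coreutils: run the first command on the source machine, the second on the destination, and compare the hashes.
$ sha256sum ce_backup.zip   # on the source machine
$ sha256sum <ta_cloud_directory>/data/ce_backup.zip   # on the destination; the hash should match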
From 5.0.x Containerized Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- SSH into the current standalone instance.
- Go to the existing ta_cloud_exchange directory with the docker-compose.yml file.
- Stop the CE containers.
$ sudo ./stop
- If the output of the ./stop command is ./stop: No such file or directory, execute the following command:
$ sudo docker compose down -v
- Create a zip file for Mongo data.
$ cd ta_cloud_exchange/data
$ sudo zip -r ce_backup.zip mongo-data/
- Add custom plugins to the backup zip. This step is applicable only if you’re using custom plugins.
$ sudo zip -r ce_backup.zip custom_plugins
- Copy the backup to the new CE instance using this scp command.
$ sudo scp ce_backup.zip cteadmin@<ip-of-vm>:<ta_cloud_directory>/data
To transfer the data to an AMI or Azure image based CE as a VM, use this command:
$ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:<ta_cloud_directory>/data
- SSH into the new CE as VM instance.
$ ssh cteadmin@<ip-of-vm>
- Go to the data directory of the Cloud Exchange installation.
$ cd <ta_cloud_directory>/data
- Extract the data inside this directory and go back to the working directory.
$ sudo unzip ce_backup.zip
$ cd ..
- Make sure Cloud Exchange is not started yet. If it was started, stop it using this command.
$ sudo ./stop
- Execute the setup script and follow the steps.
$ sudo ./setup
Make sure you enter the same maintenance password that you noted before migration.
- Launch Cloud Exchange.
$ sudo ./start
Check the configured plugins after migration; you may need to reconfigure some plugins due to new plugin changes.
The UI is now accessible with the system’s IP: https://<ip>.
From CE as VM 5.0.0 Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- SSH into the current CE as VM instance.
- Go to the existing Cloud Exchange directory.
$ cd /opt/cloudexchange/cloudexchange
- Stop the CE containers.
$ sudo ./stop
- If the output of the ./stop command is ./stop: No such file or directory, use this command:
$ sudo docker compose down -v
- Create a zip file for Mongo data after going to the data directory.
$ cd data
$ sudo zip -r ce_backup.zip mongo-data/
- Add custom plugins to the backup zip. This step is applicable only if you're using custom plugins.
$ sudo zip -r ce_backup.zip custom_plugins
- Copy the backup to the new CE as a VM instance using the scp command (make sure the destination VM has your public key added).
$ sudo scp ce_backup.zip cteadmin@<ip-of-new-vm>:<ta_cloud_directory>/data
To transfer the data to an AMI based CE as a VM, use this command.
$ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:<ta_cloud_directory>/data
- SSH into the new CE as a VM instance.
$ ssh -i <private-key-of-new-vm> cteadmin@<ip-of-new-vm>
- Go to the data directory of the Cloud Exchange installation.
$ cd <ta_cloud_directory>/data
- Extract the data into this folder, and then go back to the original working directory.
$ sudo unzip ce_backup.zip
$ cd ..
- Make sure Cloud Exchange is not started yet. If it was started, stop it using this command.
$ sudo ./stop
- Run the setup script and follow the steps.
$ sudo ./setup
Make sure you enter the same maintenance password that you noted down from the old CE as VM setup.
- Launch Cloud Exchange.
$ sudo ./start
Check the configured plugins after migration. You may need to reconfigure some plugins due to new plugin changes.
The UI is now accessible with the Machine IP address: https://<ip>.
From CE as VM 5.0.1 Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- SSH into the current CE as VM instance.
- Go to the existing Cloud Exchange directory.
$ cd /opt/cloudexchange/cloudexchange
- Stop the CE containers.
$ sudo ./stop
- If the output of the ./stop command is ./stop: No such file or directory, use this command:
$ sudo docker compose down -v
- Create a zip file for Mongo data after going to the data directory.
$ cd data
$ sudo zip -r ce_backup.zip mongo-data/ repos/ plugins/
- Add custom plugins to the backup zip. This step is applicable only if you're using custom plugins.
$ sudo zip -r ce_backup.zip custom_plugins
- Copy the backup to the new CE instance using the scp command (make sure the destination VM has your public key added).
$ sudo scp ce_backup.zip cteadmin@<ip-of-new-vm>:<ta_cloud_directory>/data
To transfer the data to an AMI based CE as a VM, use this command:
$ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:<ta_cloud_directory>/data
- SSH into the new CE as a VM instance.
$ ssh -i <private-key-of-new-vm> cteadmin@<ip-of-new-vm>
- Go to the data directory of the Cloud Exchange installation.
$ cd <ta_cloud_directory>/data
- Extract the data into this folder, and then go back to the original working directory.
$ sudo unzip ce_backup.zip
$ cd ..
- Make sure Cloud Exchange is not started yet. If it was started, stop it using this command.
$ sudo ./stop
- Run the setup script and follow the steps.
$ sudo ./setup
Make sure you enter the same maintenance password that you noted down from the old CE as VM setup.
- Launch Cloud Exchange.
$ sudo ./start
After the UI is accessible, run the following command. If you are using Containerized CE (both standalone and HA), use podman-compose instead of docker compose on a RHEL-based OS.
$ sudo docker compose exec -w /opt/netskope/repos/Default core git clean -df
Check the configured plugins after migration. You may need to reconfigure some plugins due to new plugin changes.
The UI is now accessible with the Machine IP address: https://<ip>.
Migrate to CE as VM 5.1.0 Standalone (Manual workflow)
From 3.x or 4.x Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- SSH into the current standalone instance.
- Go to the existing ta_cloud_exchange directory with the docker-compose.yml file.
- Stop the CE containers.
$ sudo ./stop
- If the output of the ./stop command is ./stop: No such file or directory, execute the following command:
$ sudo docker compose down -v
- Create a zip file for Mongo data.
$ cd ta_cloud_exchange/data
$ sudo zip -r ce_backup.zip mongo-data/
- Add custom plugins to the backup zip. This step is applicable only if you’re using custom plugins.
$ sudo zip -r ce_backup.zip custom_plugins
- Copy the backup to the new CE as VM instance using this scp command.
$ sudo scp ce_backup.zip cteadmin@<ip-of-vm>:/opt/cloudexchange/cloudexchange/data
To transfer the data to an AMI or Azure image-based CE as a VM, use this command.
$ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:/opt/cloudexchange/cloudexchange/data
- SSH into the new CE as VM instance.
$ ssh cteadmin@<ip-of-vm>
- Go to the data directory of the Cloud Exchange installation.
$ cd /opt/cloudexchange/cloudexchange/data
- Extract the data inside this directory and go back to the working directory.
$ sudo unzip ce_backup.zip
$ cd ..
- Make sure Cloud Exchange is not started yet. If it was started, stop it using this command.
$ sudo ./stop
- Execute the setup script and follow the steps.
$ sudo ./setup
Make sure you enter the same maintenance password that you noted before migration.
- Launch Cloud Exchange.
$ sudo ./start
Check the configured plugins after migration; you may need to reconfigure some plugins due to new plugin changes.
The UI is now accessible with the system’s IP: https://<ip>.
From CE as VM 5.0.0 Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- SSH into the current CE as VM instance.
- Go to the existing Cloud Exchange directory.
$ cd /opt/cloudexchange/cloudexchange
- Stop the CE containers.
$ sudo ./stop
- If the output of the ./stop command is ./stop: No such file or directory, use this command:
$ sudo docker compose down -v
- Create a zip file for Mongo data after going to the data directory.
$ cd data
$ sudo zip -r ce_backup.zip mongo-data/
- Add custom plugins to the backup zip. This step is applicable only if you're using custom plugins.
$ sudo zip -r ce_backup.zip custom_plugins
- Copy the backup to the new CE as a VM instance using the scp command (make sure the destination VM has your public key added).
$ sudo scp ce_backup.zip cteadmin@<ip-of-new-vm>:/opt/cloudexchange/cloudexchange/data
To transfer the data to an AMI based CE as a VM, use this command.
$ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:/opt/cloudexchange/cloudexchange/data
- SSH into the new CE as a VM instance.
$ ssh -i <private-key-of-new-vm> cteadmin@<ip-of-new-vm>
- Go to the data directory of the Cloud Exchange installation.
$ cd /opt/cloudexchange/cloudexchange/data
- Extract the data into this folder, and then go back to the original working directory.
$ sudo unzip ce_backup.zip
$ cd ..
- Make sure Cloud Exchange is not started yet. If it was started, stop it using this command.
$ sudo ./stop
- Run the setup script and follow the steps.
$ sudo ./setup
Make sure you enter the same maintenance password that you noted down from the old CE as VM setup.
- Launch Cloud Exchange.
$ sudo ./start
Check the configured plugins after migration. You may need to reconfigure some plugins due to new plugin changes.
The UI is now accessible with the Machine IP address: https://<ip>.
From CE as VM 5.0.1 Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- SSH into the current CE as VM instance.
- Go to the existing Cloud Exchange directory.
$ cd /opt/cloudexchange/cloudexchange
- Stop the CE containers.
$ sudo ./stop
- If the output of the ./stop command is ./stop: No such file or directory, use this command:
$ sudo docker compose down -v
- Create a zip file for Mongo data after going to the data directory.
$ cd data
$ sudo zip -r ce_backup.zip mongo-data/ repos/ plugins/
- Add custom plugins to the backup zip. This step is applicable only if you're using custom plugins.
$ sudo zip -r ce_backup.zip custom_plugins
- Copy the backup to the new CE as a VM instance using the scp command (make sure the destination VM has your public key added).
$ sudo scp ce_backup.zip cteadmin@<ip-of-new-vm>:/opt/cloudexchange/cloudexchange/data
To transfer the data to an AMI based CE as a VM, use this command.
$ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:/opt/cloudexchange/cloudexchange/data
- SSH into the new CE as a VM instance.
$ ssh -i <private-key-of-new-vm> cteadmin@<ip-of-new-vm>
- Go to the data directory of the Cloud Exchange installation.
$ cd /opt/cloudexchange/cloudexchange/data
- Extract the data into this folder, and then go back to the original working directory.
$ sudo unzip ce_backup.zip
$ cd ..
- Make sure Cloud Exchange is not started yet. If it was started, stop it using this command.
$ sudo ./stop
- Run the setup script and follow the steps.
$ sudo ./setup
Make sure you enter the same maintenance password that you noted down from the old CE as VM setup.
- Launch Cloud Exchange.
$ sudo ./start
After the UI is accessible, run this command:
$ git -C /opt/cloudexchange/cloudexchange/data/repos/Default clean -df
Check the configured plugins after migration. You may need to reconfigure some plugins due to new plugin changes.
The UI is now accessible with the Machine IP address: https://<ip>.
Migrate to CE as VM 5.1.0 HA
From CE as VM 5.0.0 Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- In order to transfer the data from the older CE as VM to the latest HA-based CE as VM, you must copy the data from the old CE as VM and transfer it to the latest CE as VM primary node.
- Stop the containers on the current CE as VM machine.
$ cd /opt/cloudexchange/cloudexchange
$ sudo ./stop
- Copy the data from the current CE as VM. Create a zip file for the Mongo data after going to the data directory.
$ cd /opt/cloudexchange/cloudexchange/data
$ sudo zip -r ce_backup.zip mongo-data/ repos/ plugins/
- Add custom plugins to the backup zip. This step is applicable only if you're using custom plugins.
$ sudo zip -r ce_backup.zip custom_plugins/
- In order for the migration to be successful, you need to transfer data from the current CE as VM to the primary node of the CE as VM, which has the latest version of Cloud Exchange.
$ sudo scp ce_backup.zip cteadmin@<ip-of-vm>:/opt/cloudexchange/cloudexchange/data
To transfer the data to an AMI based CE as a VM, use this command:
$ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:/opt/cloudexchange/cloudexchange/data
- Go to the primary node of the CE as VM machine which contains the latest version of Cloud Exchange.
$ cd /opt/cloudexchange/cloudexchange/data
- Extract the data from the zip file into the appropriate locations, and then navigate back to the original working directory (a spot-check sketch appears at the end of this subsection).
$ sudo unzip ce_backup.zip "mongo-data/*" -d .
$ sudo unzip ce_backup.zip "plugins/*" "repos/*" -d <shared_drive>
$ cd ..
- Run the setup script in the primary node first using this command.
$ sudo python3 ./setup
Make sure you enter the same maintenance password that you noted from the old CE as VM setup.
Then run this command in the secondary and third nodes.
$ sudo python3 ./setup --location <shared_drive>
- To migrate the Mongo data, run the following command one time only on the primary node.
$ sudo ./restore_ha_backup
- Start the containers, beginning with the Primary node followed by the other nodes, using this command.
$ sudo ./start
- Run this command after the UI is accessible:
$ git -C <shared_drive>/repos/Default clean -df
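After the restore, you can spot-check that the extraction steps above landed in the expected locations; a minimal sketch with paths taken from the steps above.
$ ls /opt/cloudexchange/cloudexchange/data/mongo-data   # should contain the restored Mongo files
$ ls <shared_drive>/plugins <shared_drive>/repos   # should list the extracted plugins and repos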
From CE as VM 5.0.x HA
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- In order to transfer the data from the current HA-based CE as VM to the latest HA-based CE as VM, you must copy the data from the current primary node and transfer it to the new primary node.
- To ensure all containers stop properly, stop the secondary and third nodes before stopping the primary node using this command.
$ cd /opt/cloudexchange/cloudexchange
$ sudo ./stop
- Copy the data from the primary node. Create a zip file for Mongo data after going to the data directory.
$ cd /opt/cloudexchange/cloudexchange/data
$ sudo zip -r ce_backup.zip mongo-data/
- Go to the shared drive to copy the plugins and repos folders.
$ cd <shared_drive>
$ sudo zip -r <ce_backup_zip_location>/ce_backup.zip repos/ plugins/
- Add custom plugins to the backup zip. This step is applicable only if you're using custom plugins.
$ sudo zip -r <ce_backup_zip_location>/ce_backup.zip custom_plugins/
- In order for the migration to be successful, you need to transfer data from the current machine primary node to the primary node of the new machine, which has the latest version of Cloud Exchange.
$ sudo scp ce_backup.zip cteadmin@<ip-of-vm>:/opt/cloudexchange/cloudexchange/data
- To transfer the data to an AMI based CE as a VM, use this command:
$ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:/opt/cloudexchange/cloudexchange/data
- Go to the new primary node that contains the latest version of Cloud Exchange.
$ cd /opt/cloudexchange/cloudexchange/data
- Extract the data from the zip file into the appropriate locations, and then navigate back to the original working directory.
$ sudo unzip ce_backup.zip "mongo-data/*" -d .
$ sudo unzip ce_backup.zip "plugins/*" "repos/*" -d <shared_drive>
$ cd ..
- Run the setup script in the primary node first using this command.
$ sudo ./setup
Make sure you enter the same maintenance password that you noted from the old CE as VM setup.
Next, run this command in the secondary and third nodes.
$ sudo ./setup --location <shared_drive>
- To migrate the Mongo data, use this command one time only on the primary node.
$ sudo ./restore_ha_backup
- Start the containers, beginning with the Primary node followed by the other nodes, using this command.
$ sudo ./start
- Run this command after the UI is accessible:
$ git -C <shared_drive>/repos/Default clean -df
Wait for a few minutes following the execution of the start script on the primary node.
Migrate to Containerized 5.1.0 HA
From 5.0.x Standalone
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- In order to transfer the data from the older standalone deployment to the latest container-based HA, you must copy the data from the old standalone machine and transfer it to the new HA primary node.
- Stop the containers on the current standalone machine.
$ cd <ta_cloud_exchange_directory_path>
$ sudo ./stop
- Copy the data from the current standalone machine. Create a zip file for the Mongo data after going to the data directory.
$ cd <ta_cloud_exchange_directory_path>/data
$ sudo zip -r ce_backup.zip mongo-data/ repos/ plugins/
- Add custom plugins to the backup zip. This step is applicable only if you're using custom plugins.
$ sudo zip -r ce_backup.zip custom_plugins
- In order for the migration to be successful, you need to transfer data from the current standalone to the primary node of the HA, which has the latest version of Cloud Exchange.
$ sudo scp ce_backup.zip <username>@<ip-of-vm>:<ta_cloud_exchange_directory_path>/data
To transfer the data to a cloud-based HA deployment, use this command:
$ sudo scp -i <private_key> ce_backup.zip <username>@<public_ip>:<ta_cloud_exchange_directory_path>/data
- Go to the primary node of the HA machine which contains the latest version of Cloud Exchange.
$ cd <ta_cloud_exchange_directory_path>/data
- Extract the data from the zip file into the appropriate locations, and then navigate back to the original working directory.
$ sudo unzip ce_backup.zip "mongo-data/*" -d .
$ sudo unzip ce_backup.zip "plugins/*" "repos/*" -d <shared_drive>
$ cd ..
- Run the setup script in the primary node first using this command.
$ sudo python3 ./setup
Make sure you enter the same maintenance password that you noted from the old standalone setup.
Then run this command in the secondary and third nodes.
$ sudo python3 ./setup --location <shared_drive>
- To migrate the Mongo data, run the following command one time only on the primary node.
$ sudo ./restore_ha_backup
- Start the containers, beginning with the Primary node followed by the other nodes, using this command.
$ sudo ./start
- Run the following command once the UI is accessible. If you are using Containerized CE (both standalone and HA), use podman-compose instead of docker compose on a RHEL-based OS.
$ sudo docker compose exec -w /opt/netskope/repos/Default core git clean -df
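For reference, the Podman substitution described above would look like this; whether your podman-compose version supports the same exec flags (notably -w) is an assumption worth verifying.
$ sudo podman-compose exec -w /opt/netskope/repos/Default core git clean -df   # flag support is an assumption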
From 5.0.x Containerized HA
- Before proceeding, ensure that all prerequisites have been met. Verifying these requirements in advance is essential to avoid potential issues during the process.
- In order to transfer the data from the current container-based HA to the latest container-based HA, you must copy the data from the current HA primary node and transfer it to the new containerized HA primary node.
- To ensure all containers stop properly, stop the secondary and third nodes before stopping the primary node using this command.
$ cd <ta_cloud_exchange_directory_path>
$ sudo ./stop
- Copy the data from the primary node. Create a zip file for Mongo data after going to the data directory.
$ cd <ta_cloud_exchange_directory_path>/data
$ sudo zip -r ce_backup.zip mongo-data/
- Go to the shared drive to copy the plugins and repos folders.
$ cd <shared_drive>
$ sudo zip -r <ce_backup_zip_location>/ce_backup.zip repos/ plugins/
- Add custom plugins to the backup zip. This step is applicable only if you're using custom plugins.
$ sudo zip -r <ce_backup_zip_location>/ce_backup.zip custom_plugins/
- In order for the migration to be successful, you need to transfer data from the current machine primary node to the primary node of the new machine, which has the latest version of Cloud Exchange.
$ sudo scp ce_backup.zip <username>@<ip-of-vm>:<ta_cloud_exchange_directory_path>/data
- To transfer the data to a cloud-based HA deployment, use this command:
$ sudo scp -i <private_key> ce_backup.zip <username>@<public_ip>:<ta_cloud_exchange_directory_path>/data
- Go to the new primary node that contains the latest version of Cloud Exchange.
$ cd <ta_cloud_exchange_directory_path>/data
- Extract the data from the zip file into the appropriate locations, and then navigate back to the original working directory.
$ sudo unzip ce_backup.zip "mongo-data/*" -d .
$ sudo unzip ce_backup.zip "plugins/*" "repos/*" -d <shared_drive>
$ cd ..
- Run the setup script in the primary node first using this command.
$ sudo ./setup
Make sure you enter the same maintenance password that you noted from the old HA setup.
Next, run this command in the secondary and third nodes.
$ sudo ./setup --location <shared_drive>
- To migrate the Mongo data, use this command one time only on the primary node.
$ sudo ./restore_ha_backup
- Start the containers, beginning with the Primary node followed by the other nodes, using this command.
$ sudo ./start
- Run this command after the UI is accessible. If you are using Containerized CE (both standalone and HA), use podman-compose instead of docker compose on a RHEL-based OS.
$ sudo docker compose exec -w /opt/netskope/repos/Default core git clean -df
Wait for a few minutes following the execution of the start script on the primary node.
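To confirm every node came up after the staggered start, here is a hedged sketch that checks container status on each node over SSH; the IP placeholders and the cteadmin username are assumptions (use the user for your deployment), passwordless sudo is assumed, and on a RHEL-based OS substitute podman ps.
$ for host in <primary-ip> <secondary-ip> <third-ip>; do ssh cteadmin@"$host" 'sudo docker ps --format "{{.Names}}: {{.Status}}"'; done   # all CE containers should report Up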