Upgrade or Migrate to the Latest Version of Cloud Exchange

Follow these upgrade or migration steps to get the latest version of Cloud Exchange.

Use the upgrade method if the host machine/OS is unchanged and you have GitHub and Docker Hub connectivity to download the latest upgrade package.

If the host machine is changing, use the migration method. Currently, if you want to use CE as VM, only migration is supported, because you will be deploying the new host machines via OVA/AMI/VHDX. If you are moving from standalone to HA, you should also use the migration method, because you will need to set up multiple host machines per the HA requirements.
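
To confirm GitHub and Docker Hub connectivity before an upgrade, a quick check like the one below can help (a sketch only; the exact registry endpoints your deployment pulls images from may differ). A 200 response from GitHub and any HTTP status (for example, 401) from the registry indicate that the host can reach both services.

$ curl -s -o /dev/null -w "%{http_code}\n" https://github.com
$ curl -s -o /dev/null -w "%{http_code}\n" https://index.docker.io/v2/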

Upgrade

Upgrade to 5.0.1 Standalone

From 3.x or 4.x or 5.0.0 Standalone

You should update all your Netskope tenants with a V2 token that has access to all the dataexport endpoints before proceeding with the upgrade.

  1. Disable all Netskope Source Plugins in all modules, then wait for 2-3 minutes before proceeding with the upgrade.
  2. Go to the existing ta_cloud_exchange directory with the docker-compose.yml file. Stop the Cloud Exchange containers.
    $ sudo ./stop

    If the output of the ./stop command is ./stop: No such file or directory, execute the following command.

    $ sudo docker-compose down -v
  3. If you have made any local changes to the docker-compose.yml file, reset them with the following command.
    $ sudo git reset --hard
  4. Pull the latest changes.
    $ sudo git pull
  5. Execute the setup script.
    $ sudo python3 ./setup
  6. Launch Cloud Exchange.
    $ sudo ./start
  7. Close all of your Cloud Exchange browser instances and log in again in Incognito mode, or clear the browser cache before logging in.

The Cloud Exchange UI is now accessible at the system’s IP: http(s)://<ip>.
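
If the UI is not reachable, you can verify that the containers came up; a minimal check is shown below (use podman-compose instead of docker-compose on Podman based hosts).

$ sudo docker-compose ps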

Upgrade to 5.0.1 HA

From 3.x or 4.x or 5.0.0 Standalone
  1. Disable all Netskope Source Plugins in all modules, then wait until all in-queue tasks finish before proceeding with the upgrade. Use the steps below to identify in-queue tasks. Recommended disk space utilization should be around 200 MB, and queue size should be below 50.
    • To check disk space utilization for in-queue tasks.
      $ du -sh data/rabbitmq/data
    • To check the number of in-queue tasks, run the appropriate command from the installation directory for the cloudexchange queue.
      • Docker based OS
        $ docker-compose exec rabbitmq-stats rabbitmqctl list_queues
      • Podman based OS
        $ podman-compose exec rabbitmq-stats rabbitmqctl list_queues
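    • To see queue names alongside their message counts (so you can confirm the cloudexchange queue holds fewer than 50 messages), you can optionally pass the columns explicitly; an illustrative variant of the commands above (use podman-compose on a Podman based OS).
      $ docker-compose exec rabbitmq-stats rabbitmqctl list_queues name messages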
  2. Stop the standalone deployment and copy the data.
    $ sudo ./stop
  3. This step is only applicable if you are on CE version 4.2.0 or above.
    To convert the RabbitMQ data as per the new specification, source the environment file and then start the RabbitMQ container using the following command. If you are using Podman, use podman run instead of docker run.

    $ . ./.env
    $ export RABBIT_NEW_DIR_NAME=<ip-or-hostname-of-primary-node>
    • If you are using IP addresses to configure the HA, use the IP address of the primary machine. For example:
      $ export RABBIT_NEW_DIR_NAME=10.10.10.10
    • If you are using hostnames to configure the HA, use the hostname of the primary machine. For example:
      $ export RABBIT_NEW_DIR_NAME=subdomain.hostname.local
    $ sudo docker run --rm \
        -v ./data/rabbitmq/custom.conf:/etc/rabbitmq/conf.d/custom.conf:z \
        -v ./data/rabbitmq/data:/var/lib/rabbitmq/mnesia:z \
        -u 1001:1001 \
        -e RABBITMQ_NODENAME=rabbit@rabbitmq-stats \
        -e RABBITMQ_DEFAULT_USER=user \
        -e RABBITMQ_DEFAULT_PASS=${MAINTENANCE_PASSWORD} \
        index.docker.io/rabbitmq:3.12.10-management \
        /bin/bash -c "echo ${MAINTENANCE_PASSWORD_ESCAPED} > /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie && rabbitmqctl rename_cluster_node rabbit@rabbitmq-stats rabbit@${RABBIT_NEW_DIR_NAME} && mv /var/lib/rabbitmq/mnesia/rabbit\@rabbitmq-stats /var/lib/rabbitmq/mnesia/rabbit\@${RABBIT_NEW_DIR_NAME} && mv /var/lib/rabbitmq/mnesia/rabbit\@rabbitmq-stats-feature_flags /var/lib/rabbitmq/mnesia/rabbit\@${RABBIT_NEW_DIR_NAME}-feature_flags && mv /var/lib/rabbitmq/mnesia/rabbit\@rabbitmq-stats-plugins-expand /var/lib/rabbitmq/mnesia/rabbit\@${RABBIT_NEW_DIR_NAME}-plugins-expand && mv /var/lib/rabbitmq/mnesia/rabbit\@rabbitmq-stats-rename /var/lib/rabbitmq/mnesia/rabbit\@${RABBIT_NEW_DIR_NAME}-rename"
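
    As an optional sanity check (not part of the documented procedure), you can list the RabbitMQ data directory afterwards; the directories should now be named after the new node, for example rabbit@10.10.10.10.

    $ ls data/rabbitmq/data/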

  4. Copy the data/rabbitmq/data/ directory from the source machine to the destination machine. <new-machine-path> should be the location where ta_cloud_exchange is installed on the target machine.
    $ sudo scp -r data/rabbitmq/data/* username@ip_address:<new-machine-path>/data/rabbitmq/data/

    Make sure the ta_cloud_exchange repo has already been cloned on the destination machine by <username>.
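
    If the data/rabbitmq/data/ directory does not exist yet on the destination machine, create it first (a hedged prerequisite; <new-machine-path>, username, and ip_address are the same placeholders used in the scp command above).

    $ ssh username@ip_address "mkdir -p <new-machine-path>/data/rabbitmq/data"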

  5. Copy the data/mongo-data/data/ directory from the source machine to the destination machine. <new-machine-path> should be the location where ta_cloud_exchange is installed on the target machine.
    $ sudo scp -r data/mongo-data/data/ username@ip_address:<new-machine-path>/data/mongo-data/

    Make sure the ta_cloud_exchange repo has already been cloned on the destination machine by <username>.

  6. Copy the data/custom_plugins directory from the source machine to the destination machine if custom plugins are being used. <new-machine-path> should be the location where ta_cloud_exchange is installed on the target machine.
    $ sudo scp -r data/custom_plugins username@ip_address:<new-machine-path>/data/custom_plugins

    Make sure the ta_cloud_exchange repo has already been cloned on the destination machine by <username>.

  7. To transition the copied data into the new HA configuration, refer to the HA deployment guide for instructions on adding the necessary HA parameters and initializing the cluster. This process migrates the MongoDB data into the replica set, imports the RabbitMQ messages into the new HA machine, and then joins the remaining nodes to the cluster.
  8. If custom plugins and/or custom repos are being used, make sure you copy them to the designated shared location after the setup script has run. This step will be included in the installation steps.
    After the HA migration completes successfully, there may be a brief delay during the first few minutes while MongoDB replicates data across all nodes in the system.
  9. Run the setup script on the primary node first and update the IP list. Keep the existing IPs in the same order and add the new IP address at the end.
    $ sudo python3 ./setup
  10. If custom plugins were used in your previous standalone setup, transfer their data to the shared drive by running the following command after the setup has completed successfully.
    $ sudo cp -r data/custom_plugins/* <shared_drive>/custom_plugins/
  11. Run the setup script on the remaining machines to add the connection info.
    $ sudo python3 ./setup --location /path/to/mounted/directory
  12. Run the start script on the primary node first, then run the start script on the remaining machines. Finally, run the start script on the new node.
    $ sudo ./start
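
After all the nodes are up, you can optionally confirm that they have joined the RabbitMQ cluster with a check like this (run it from the installation directory; use podman-compose on a Podman based OS).

$ docker-compose exec rabbitmq-stats rabbitmqctl cluster_status
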
From 5.0.0 HA
  1. Disable all Netskope Source Plugins in all modules, then wait until all in-queue tasks finish before proceeding with the upgrade. Use the steps below to identify in-queue tasks. Recommended disk space utilization should be around 200 MB, and queue size should be below 50.
    • To check disk space utilization for in-queue tasks.
      $ du -sh data/rabbitmq/data
    • To check the number of in-queue tasks, run this command from the installation directory for the cloudexchange queue.
      $ docker-compose exec rabbitmq-stats rabbitmqctl list_queues
  2. To ensure all containers stop properly, stop the secondary and third nodes before stopping the primary node with this command.
    $ sudo ./stop
  3. Pull the latest changes on all the nodes from GitHub with this command.
    $ sudo git pull

    If any issues arise during the git pull command, reset the changes using:

    $ sudo git reset --hard
  4. Run the setup script on the primary node first using this command.
    $ sudo python3 ./setup

    Next, run this command on the secondary and third nodes.

    $ sudo python3 ./setup --location <shared_drive>
  5. Start the containers, beginning with the primary node followed by the other nodes, using this command.
    $ sudo ./start

    Wait for a few minutes following the execution of the start script on the primary node.

Migrate

Migrate to CE as VM 5.0.1 Standalone

From 3.x or 4.x Standalone
  1. Disable all Netskope Source Plugins in all modules, then wait for 2-3 minutes before proceeding with the migration.
  2. SSH into the current standalone instance.
  3. Go to the existing ta_cloud_exchange directory with the docker-compose.yml file.
  4. Stop the CE containers.
    $ sudo ./stop
  5. If the output of the ./stop command is ./stop: No such file or directory, execute the following command:
    $ sudo docker-compose down -v
  6. Get the MAINTENANCE_PASSWORD from the standalone setup. This will be required during the installation of the new CE as VM setup (as the value for Maintenance Password).
    If the maintenance password is lost, the data cannot be retained.
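    If your existing deployment has a .env file in the ta_cloud_exchange directory that contains MAINTENANCE_PASSWORD (as later releases do), you can retrieve the value with a command like this.
    $ sudo cat .env | grep 'MAINTENANCE_PASSWORD='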
  7. Create a zip file for Mongo and RabbitMQ data.
    $ cd ta_cloud_exchange/data 
    $ sudo zip -r ce_backup.zip mongo-data/ rabbitmq/data
  8. Add custom plugins to the backup zip. This step is applicable only if you’re using custom plugins.
    $ sudo zip -r ce_backup.zip custom_plugins
  9. Copy the backup to the new CE as VM instance using this scp command.
    $ sudo scp ce_backup.zip cteadmin@<ip-of-vm>:/opt/cloudexchange/cloudexchange/data

    To transfer the data to an AMI or Azure image based CE as a VM, use this command.

    $ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:/opt/cloudexchange/cloudexchange/data
  10. SSH into the new CE as VM instance.
    $ ssh cteadmin@<ip-of-vm>
  11. Go to the Cloud Exchange directory.
    $ cd /opt/cloudexchange/cloudexchange/data
  12. Extract the data inside this directory and go back to the working directory.
    $ sudo unzip ce_backup.zip 
    $ cd ..
  13. Make sure Cloud Exchange is not started yet. If it was started, stop it using this command.
    $ sudo ./stop
  14. Execute the setup script and follow the steps.
    $ sudo ./setup

    Make sure you enter the same maintenance password that you noted before migration.

  15. Launch Cloud Exchange.
    $ sudo ./start

Check the configured plugins after migration; you may need to reconfigure some plugins due to new plugin changes.
The UI is now accessible at the system’s IP: https://<ip>.
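
If the UI does not load, a quick reachability check from any machine that can reach the instance can confirm the web service is responding (a minimal sketch; replace <ip> with the instance's IP address, and -k skips certificate validation in case a self-signed certificate is in use).

$ curl -k -s -o /dev/null -w "%{http_code}\n" https://<ip>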

From CE as VM 5.0.0 Standalone
  1. Disable all Netskope Source Plugins in all modules, then wait for 2-3 minutes before proceeding with the migration.
  2. SSH into the current CE as VM instance.
  3. Go to the existing Cloud Exchange directory.
    $ cd /opt/cloudexchange/cloudexchange
  4. Get the MAINTENANCE_PASSWORD from the standalone setup. This will be required during the installation of the new CE as VM setup (as the value for Maintenance Password).
    $ sudo cat .env | grep 'MAINTENANCE_PASSWORD='

    If the maintenance password is lost, the data cannot be retained.

  5. Stop the CE containers.
    $ sudo ./stop
  6. If the output of the ./stop command is ./stop: No such file or directory, use this command:
    $ sudo docker-compose down -v
  7. Create a zip file for Mongo and RabbitMQ data after going to the data directory.
    $ cd data 
    $ sudo zip -r ce_backup.zip mongo-data/ rabbitmq/data repos/ plugins/
  8. Add custom plugins to the backup zip. This step is applicable only if you’re using custom plugins.
    $ sudo zip -r ce_backup.zip custom_plugins
  9. Copy the backup to the new CE as VM instance using the scp command (make sure SSH key access to the new VM is configured).
    $ sudo scp ce_backup.zip cteadmin@<ip-of-new-vm>:/opt/cloudexchange/cloudexchange/data

    To transfer the data to an AMI based CE as a VM, use this command.

    $ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:/opt/cloudexchange/cloudexchange/data
  10. SSH into the new CE as VM instance.
    $ ssh -i <private-key-of-new-vm> cteadmin@<ip-of-new-vm>
  11. Go to the Cloud Exchange directory.
    $ cd /opt/cloudexchange/cloudexchange/data
  12. Extract the data into this folder, and then go back to the original working directory.
    $ sudo unzip ce_backup.zip 
    $ cd ..
  13. Make sure Cloud Exchange is not started yet. If it was started, stop it by using this command.
    $ sudo ./stop
  14. Execute the setup script and follow the steps.
    $ sudo ./setup

    Make sure you enter the same maintenance password that you noted down from the old CE as VM setup.

  15. Launch Cloud Exchange.
    $ sudo ./start

Check the configured plugins after migration. You may need to reconfigure some plugins due to new plugin changes.
The UI is now accessible at the machine’s IP address: https://<ip>.

Migrate to CE as VM 5.0.1 HA

From CE as VM 5.0.0 Standalone
  1. Disable all Netskope Source Plugins in all modules, then wait until all in-queue tasks finish before proceeding with the migration. Use the steps below to identify in-queue tasks. Recommended disk space utilization should be around 200 MB, and queue size should be below 50.
    • To check disk space utilization for in-queue tasks.
      $ du -sh data/rabbitmq/data
    • To check the number of in-queue tasks, run this command from the installation directory for the cloudexchange queue.
      $ docker-compose exec rabbitmq-stats rabbitmqctl list_queues
  2. To move the data from the older CE as VM to the latest HA-based CE as VM, copy the data from the current CE as VM and transfer it to the primary node of the latest CE as VM.
  3. Stop the containers on the current CE as VM machine.
    $ cd /opt/cloudexchange/cloudexchange 
    $ sudo ./stop
  4. Get the MAINTENANCE_PASSWORD from the standalone setup. This will be required during the installation of the new HA-CE as VM setup (as the value for Maintenance Password).
    If the maintenance password is lost, the data cannot be retained.
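    You can read it from the .env file in the Cloud Exchange directory with this command.
    $ sudo cat .env | grep 'MAINTENANCE_PASSWORD='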
  5. Copy the data from the current CE as VM: create a zip file for Mongo and RabbitMQ data after going to the data directory.
    $ cd /opt/cloudexchange/cloudexchange/data 
    $ sudo zip -r ce_backup.zip mongo-data/ rabbitmq/data repos/ plugins/
  6. Add custom plugins to the backup zip. This step is applicable only if you’re using custom plugins.

    $ sudo zip -r ce_backup.zip custom_plugins
  7. Transfer the data from the current CE as VM to the primary node of the CE as VM that has the latest version of Cloud Exchange.
    $ sudo scp ce_backup.zip cteadmin@<ip-of-vm>:/opt/cloudexchange/cloudexchange/data

    To transfer the data to an AMI based CE as a VM, use this command:

    $ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:/opt/cloudexchange/cloudexchange/data
  8. Go to the primary node of the CE as VM machine which contains the latest version of Cloud Exchange.
    $ cd /opt/cloudexchange/cloudexchange/data
  9. Extract the data into this folder and then navigate back to the original working directory.
    $ sudo unzip ce_backup.zip 
    $ cd ..
  10. Run the setup script on the primary node first using this command.
    $ sudo python3 ./setup

    Make sure you enter the same maintenance password that you noted from the old CE as VM setup.
    Then run this command on the secondary and third nodes.

    $ sudo python3 ./setup --location <shared_drive>
  11. To migrate the RabbitMQ and Mongo data, run the following command once, only on the primary node.
    $ sudo ./restore_ha_backup
  12. Start the containers, beginning with the Primary node followed by the other nodes, using this command.
    $ sudo ./start
From CE as VM 5.0.0 HA
  1. Disable all Netskope Source Plugins in all modules, then wait until all in-queue tasks finish before proceeding with the migration. Use the steps below to identify in-queue tasks. Recommended disk space utilization should be around 200 MB, and queue size should be below 50.
    • To check disk space utilization for in-queue tasks.
      $ du -sh data/rabbitmq/data
    • To check the number of in-queue tasks, use this command from the installation directory for the cloudexchange queue.
      $ docker-compose exec rabbitmq-stats rabbitmqctl list_queues
  2. To move the data from the current HA-based CE as VM to the latest HA-based CE as VM, copy the data from the current primary node and transfer it to the primary node of the latest CE as VM.
  3. To ensure all containers stop properly, stop the secondary and third nodes before stopping the primary node using this command.
    $ cd /opt/cloudexchange/cloudexchange 
    $ sudo ./stop
  4. Copy the data from the primary node. Create a zip file for Mongo and RabbitMQ data after going to the data directory.
    $ cd /opt/cloudexchange/cloudexchange/data 
    $ sudo zip -r ce_backup.zip mongo-data/ rabbitmq/data repos/ plugins/
  5. Add custom plugins to the backup zip. This step is applicable only if you’re using custom plugins.
    $ sudo zip -r ce_backup.zip custom_plugins
  6. Transfer the data from the primary node of the current machine to the primary node of the new machine, which has the latest version of Cloud Exchange.
    $ sudo scp ce_backup.zip cteadmin@<ip-of-vm>:/opt/cloudexchange/cloudexchange/data
  7. To transfer the data to an AMI based CE as a VM, use this command:
    $ sudo scp -i <private_key> ce_backup.zip cteadmin@<public_ip>:/opt/cloudexchange/cloudexchange/data
  8. Go to the new primary node that contains the latest version of Cloud Exchange.
    $ cd /opt/cloudexchange/cloudexchange/data
  9. Extract the data into this folder, and then go back to the original working directory.
    $ sudo unzip ce_backup.zip 
    $ cd ..
  10. Run the setup script on the primary node first using this command.
    $ sudo ./setup

    Make sure you enter the same maintenance password that you noted from the old CE as VM setup.
    Next, run this command on the secondary and third nodes.

    $ sudo ./setup --location <shared_drive>
  11. To migrate the RabbitMQ and Mongo data, use this command one time only on the primary node.
     $ sudo ./restore_ha_backup
  12. Start the containers, beginning with the Primary node followed by the other nodes, using this command.
    $ sudo ./start

Wait for a few minutes following the execution of the start script on the primary node.
