Cloud Exchange KB Articles
Expand the sections to view these KB articles.
High Availability (HA) Setup Guide for Cloud Exchange on Linux Nodes with Azure Blob Storage
This section explains how to configure a High Availability (HA) environment for Cloud Exchange on Linux nodes using Azure Blob Storage as the NFS (Network File System) for shared storage.
The Cloud Exchange system requirements are available at:
- https://docs.netskope.com/en/cloud-exchange-system-requirements/
- https://docs.netskope.com/en/cloud-exchange-high-availability-1/#prerequisite-for-ha-deployment-on-linux
Prerequisites
System Requirements for HA Nodes
- 16 CPU cores (Compute Intensive F16s series)
- 32 GB RAM
- 120 GB Available free disk space where CE will be installed; for example:
/home/root/
- 20 GB free space at
/var/lib
for container storage.
NFS Setup with Azure Blob Storage
Network Configuration
The Linux machines will be treated as HA nodes, and all nodes, along with the Storage account, should be in the same Virtual Network/subnet and the same Azure region/zone. After configuring the NFS File shares, refer to the steps below to mount the NFS share.
Configure Cloud Exchange High Availability (HA)
Validate Prerequisites
- Check Required Ports: Ensure the following ports are open and available on each node:
4369: RabbitMQ Communication Port
5672: AMQP Communication Port
15672: RabbitMQ Administration Console
25672: RabbitMQ Internal Port
35672: Used for CLI Communications
27017: MongoDB Port
443: Service Health Check
- Network Configuration: The Virtual Network should be the same across all nodes.
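As a quick sanity check, the port requirement above can be verified with a small script (a sketch; it assumes the `ss` utility from iproute2, which is present by default on RHEL 8, and `check_ha_ports` is a hypothetical helper name):

```shell
# check_ha_ports: report whether each required HA port is already in use
# on this node. Any port reported as IN USE must be freed before
# installing Cloud Exchange.
check_ha_ports() {
  for port in 4369 5672 15672 25672 35672 27017 443; do
    if ss -tln 2>/dev/null | grep -q ":${port} "; then
      echo "port ${port}: IN USE"
    else
      echo "port ${port}: free"
    fi
  done
}

check_ha_ports
```

Run this on every node; all seven ports should report as free before proceeding.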
- Validate RAM, Disk space, and CPU.
sudo df -h
sudo free -h
sudo nproc
- Git package, curl, and zip packages:
sudo yum install -y git curl zip
- Install nfs-utils.
sudo yum install -y nfs-utils
- Python3 along with pip.
sudo yum install -y python3 python3-pip python3-devel
- Podman and podman-compose (Ref: https://podman.io/docs/installation#rhel8 )
sudo yum module enable -y container-tools:rhel8
sudo yum module install -y container-tools:rhel8
sudo yum install -y podman-plugins
sudo pip3 install podman-compose
sudo chmod +x /usr/local/bin/podman-compose
sudo ln -s /usr/local/bin/podman-compose /usr/bin/podman-compose
- Check the Podman and podman-compose versions:
podman-compose --version
podman --version
- Required Python packages:
pip3 install "pyyaml>=6.0.0"
pip3 install "python-dotenv>=0.20.0,<=1.0.0"
pip3 install "pymongo>=4.1.1,<=4.3.3"
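To confirm the three packages installed correctly, a quick import check like the following can help (a sketch; `yaml` and `dotenv` are the import names of the pyyaml and python-dotenv packages, and `check_py_deps` is a hypothetical helper name):

```shell
# check_py_deps: verify the required Python packages are importable and
# print their versions (or MISSING if an install failed).
check_py_deps() {
  python3 - <<'EOF'
import importlib
for name in ("yaml", "dotenv", "pymongo"):
    try:
        mod = importlib.import_module(name)
        print(name, "OK", getattr(mod, "__version__", "unknown"))
    except ImportError:
        print(name, "MISSING")
EOF
}

check_py_deps
```

All three lines should report OK before continuing with the NFS setup.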
Configure NFS In Azure Blob Storage
Mount the NFS Storage on Each Node
- Go to NFS File Share Overview.
- Create a mounting directory at the mount path (also obtained from the NFS File share overview) using this command:
sudo mkdir -p <mount_path>
- Edit the
/etc/fstab
file and add the mount configuration as shown below:
cestorageha.file.core.windows.net:/cestorageha/shared-storage-nfs /mount/cestorageha/shared-storage-nfs nfs defaults 0 0
Here, cestorageha.file.core.windows.net:/cestorageha/shared-storage-nfs is the source NFS directory on the Azure Blob Storage NFS File share, obtained from the sample command at Home > Respective Storage account > File shares > Newly created NFS File share > Overview. /mount/cestorageha/shared-storage-nfs is the mount path (the directory created above). The remaining part is the NFS configuration.
- Run this command:
sudo mount -a
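After `mount -a`, you can confirm the share is actually mounted before proceeding (a sketch; `verify_nfs_mount` is a hypothetical helper, and the default path is this guide's example mount path, so pass your own <mount_path> as the argument):

```shell
# verify_nfs_mount: check that the given path appears in the kernel's
# mount table; returns non-zero if it is not mounted.
verify_nfs_mount() {
  local mount_path="${1:-/mount/cestorageha/shared-storage-nfs}"
  if mount | grep -q " ${mount_path} "; then
    echo "mounted: ${mount_path}"
  else
    echo "NOT mounted: ${mount_path}" >&2
    return 1
  fi
}
```

For example, `verify_nfs_mount /mount/cestorageha/shared-storage-nfs` should print a `mounted:` line on success.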
Download the Installation Package
On each node, download the Cloud Exchange installation package using this command:
git clone https://github.com/netskopeoss/ta_cloud_exchange
Run the Setup Script on the Primary Node
Go to the Cloud Exchange directory. On the primary node, run the setup script with this command:
sudo ./setup
Run the Setup Script in the Secondary Nodes
On each secondary node, go to the Cloud Exchange directory. Run the setup script using the following command, replacing <mounting_dir>
with the path to your NFS mounting directory:
sudo ./setup --location <mounting_dir>
Enable the CSV index
- On each node, edit the Podman Compose HA configuration file to enable CSV indexing:
$ vi podman-compose-ha.yml
- Locate the core section and add the following environment variable to enable network event streaming:
ITERATOR_EVENT_NETWORK=stream_network_events_rollsroyce
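In context, the addition looks roughly like this (a sketch; the service name and surrounding keys are illustrative, as the exact layout of your podman-compose-ha.yml may differ):

```yaml
services:
  core:
    environment:
      # Existing variables stay as-is; append this line:
      - ITERATOR_EVENT_NETWORK=stream_network_events_rollsroyce
```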
Update the RabbitMQ and MongoDB Tags
- For each node, open the
podman-compose-ha.yml
file:
$ vi podman-compose-ha.yml
- Update the RabbitMQ and MongoDB service image tags as follows:
- RabbitMQ:
index.docker.io/crestsystems/cloudexchange:rabbitmq-5.0.1-csv-hotfix
- MongoDB:
index.docker.io/crestsystems/cloudexchange:mongo-5.0.1-csv-hotfix
Update the Core and UI Tags
- Go to NFS storage.
- Edit the
.env
file:
$ vi config/.env
- Update the CORE_TAG and UI_TAG variables as follows.
CORE_TAG=crestsystems/cloudexchange:core-5.0.1-csv-hotfix
UI_TAG=crestsystems/cloudexchange:ui-5.0.1-csv-hotfix
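If you prefer a non-interactive edit, the two variables can be updated with sed (a sketch; `update_ce_tags` is a hypothetical helper, and it writes a backup of the file first):

```shell
# update_ce_tags: rewrite CORE_TAG and UI_TAG in the given .env file,
# keeping a .backup copy of the original.
update_ce_tags() {
  local env_file="$1"
  cp "${env_file}" "${env_file}.backup"
  sed -i \
    -e 's|^CORE_TAG=.*|CORE_TAG=crestsystems/cloudexchange:core-5.0.1-csv-hotfix|' \
    -e 's|^UI_TAG=.*|UI_TAG=crestsystems/cloudexchange:ui-5.0.1-csv-hotfix|' \
    "${env_file}"
}

# Usage, from the NFS storage directory:
#   update_ce_tags config/.env
```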
Back Up podman-compose-ha.yml
On each node, create a backup of the podman-compose-ha.yml file:
$ cp podman-compose-ha.yml podman-compose-ha.yml.backup
Start Cloud Exchange on the Primary Node
- Run the start script on the primary node:
$ sudo ./start
- Wait for the Migration completed message before proceeding to the next step.
Start Cloud Exchange on Secondary Nodes
After the Migration completed message appears on the primary node, execute the start script on each remaining node:
$ sudo ./start
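Once all nodes have started, the container status can be checked on each node with podman; a small filter like the one below (a hypothetical helper) flags any container that is not in the Up state:

```shell
# check_containers_up: read "name<TAB>status" lines on stdin and report
# any container whose status does not start with "Up"; exits non-zero
# if at least one container is down.
check_containers_up() {
  awk -F'\t' '$2 !~ /^Up/ { print "not running: " $1; bad = 1 } END { exit bad }'
}

# Usage on a node:
#   sudo podman ps --format '{{.Names}}\t{{.Status}}' | check_containers_up
```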
Upload and Configure the Azure Sentinel Plugin
- In Cloud Exchange, go to Settings > Plugin Repository and click the upload plugin (⬆) button.
- Select the Zip file shared through the Support Case and click Upload.
- After the plugin is uploaded successfully, go to Settings > Plugins to configure the uploaded plugin with version 3.0.2.
Cloud Exchange Platform Logs Error Notification Setup for Slack
This section guides you through setting up Slack notifications for Cloud Exchange errors, so that customers are promptly informed about any critical issues encountered during Cloud Exchange operations. Follow the steps below to configure notifications for the various Cloud Exchange modules, ensuring you get notified whenever an error occurs.
Overview
To keep customers informed of critical errors in Cloud Exchange, we have implemented a Slack notifier plugin. This allows you to receive real-time alerts directly to your Slack channel whenever an error occurs in any of the Cloud Exchange modules. By setting up business rules tailored to specific modules, you can ensure that only relevant errors trigger notifications, making it easier to monitor and address issues quickly.
Notifier for Slack Configuration
If you are freshly installing the plugin, configure the Slack Notifier Plugin by following the steps outlined in the Notifier Plugin for Ticket Orchestrator document. Once done, refer to this document to configure the business rules. If you have already configured Notifier, you can directly follow this document to edit the existing business rules. Below is how to configure the business rule for each Cloud Exchange module:
Cloud Exchange Platform Errors
This section covers general Cloud Exchange Platform errors that are not tied to any specific module. These errors may include system-wide issues.
Steps to Configure the Business Rule
- In Cloud Exchange, go to a Cloud Exchange module (like Ticket Orchestrator) and click Business Rules, and then create or edit an existing business rule.
- Edit the rule as shown.
- Click Save.
Log Shipper Module Errors
This section covers errors specific to the Log Shipper module in the Cloud Exchange environment. Any errors generated related to this module will trigger a notification to Slack.
Steps to Configure the Business Rule
- In Cloud Exchange, go to Log Shipper > Business Rules and create or edit an existing business rule.
- Edit the rule as shown.
- Click Save.
Ticket Orchestrator Module Errors
This section covers errors specific to the Ticket Orchestrator Module in the Cloud Exchange environment. Any errors generated related to this module will trigger a notification to Slack.
Steps to Configure the Business Rule
- In Cloud Exchange, go to Ticket Orchestrator > Business Rules and create or edit an existing business rule.
- Edit the rule as shown.
- Click Save.
Threat Exchange Module Errors
This section focuses on the Threat Exchange module, which manages threat intelligence sharing across the Cloud Exchange platform. Errors in this module will trigger notifications to keep you informed about security-related issues.
Steps to Configure the Business Rule
- In Cloud Exchange, go to Threat Exchange > Business Rules and create or edit an existing business rule.
- Edit the rule as shown.
- Click Save.
Risk Exchange Module Errors
This section covers errors specific to the Risk Exchange module in the Cloud Exchange environment. Any errors generated related to this module will trigger a notification to Slack.
Steps to Configure the Business Rule
- In Cloud Exchange, go to Risk Exchange > Business Rules and create or edit an existing business rule.
- Edit the rule as shown.
- Click Save.
Queue Configuration
Queue configuration is a crucial step where you can map the appropriate fields.
Map Fields: In this section, you can map values between alerts and notifications. Alert attributes can be accessed using the “$” symbol in the custom message field.
For example, in the screenshots below, we have mapped the values $errorCode, $alertType, and $message. As a result, when you receive the notification in Slack, it will display the information as specified here.
Map fields in queue configuration.
For example, if we attempt to save the plugin while leaving the Event Types and Alert Types fields empty, it will result in an error. This error will trigger a notification in Slack.
Verify that the error is generated for the CLS module.
The notification is generated in Slack.
Cloud Exchange Error Codes
Here are links to sections that provide detailed descriptions of various Cloud Exchange Error Codes. These sections cover specific error codes related to different modules within the Cloud Exchange platform, offering in-depth explanations of each error type and its causes.
Web Transaction Logs Pointer Reset
This section is intended to help you address gaps in web transaction data collection that may occur due to an outage or issue with SIEM integrations, such as the Netskope Add-on for Splunk, or Netskope Cloud Exchange. If there is a disruption in the flow of web transaction data, this document will walk you through the process of filling the gap.
For example, let’s assume there was an interruption in web transaction data collection starting on 5th February 2025 at 6:00 AM (UTC), and the issue was resolved by 7th February 2025 at the same time. While recent logs are now appearing, the data gap from the 5th to the 7th remains unfilled. This guide will show you how to reset the pointer back to 5th February 2025 at 6:00 AM to resume data collection from that point onward.
NOTE: The default retention period for web transaction logs is 7 days.
Prerequisites
- Ensure Git and Python version 3.8.x or greater are installed on your machine.
- Ensure that you have access to the tenant’s Subscription Endpoint and Subscription Key before executing the commands below. You may get the Subscription Endpoint and download the Subscription Key from the Settings > Tools > Event Streaming, as shown here:
Reset the Web Transaction Logs Pointer
The following steps will guide you through the process of moving the pointer to a custom date and time:
- Git Clone the netskope_transaction_logs directory:
sudo git clone https://github.com/netskopeoss/netskope_transaction_logs
- Move into the netskope_transaction_logs directory:
cd netskope_transaction_logs
- Check the presence of the virtualenv package:
sudo python3 -m virtualenv venv
- Install virtualenv, if not found on the machine:
sudo python3 -m pip install virtualenv
- Recheck the presence of virtual environment package:
sudo python3 -m virtualenv venv
- Activate venv:
source venv/bin/activate
- Inside venv, create a sa.json file:
vi sa.json
- Paste your Subscription Key in this file.
- Install Google-based packages in this virtual environment:
pip install google-api-core
pip install google-cloud-pubsublite
- Export your Subscription Key as Google Credentials:
export GOOGLE_APPLICATION_CREDENTIALS=<path-to>/sa.json
Replace <path-to> with the actual path where the sa.json file is located on your machine.
- Move the cursor/pointer to a customized point:
Here is the Generic Command Template for reference:
python3 txlog_subscriber_seek_target_sample.py -p <subscription_endpoint> -t <timestamp_type> -s <timestamp>
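Before running the seek command, the timestamp argument can be sanity-checked (a hypothetical bash helper, not part of the repository; it assumes GNU date, as found on most Linux distributions):

```shell
# validate_ts: return 0 only if the argument is a parseable timestamp in
# the "YYYY-MM-DD HH:MM:SS" form expected by the seek script.
validate_ts() {
  [[ "$1" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}\ [0-9]{2}:[0-9]{2}:[0-9]{2}$ ]] \
    && date -u -d "$1" +%s >/dev/null 2>&1
}

# Example:
#   validate_ts "2025-02-05 06:00:00" && echo valid
```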
For this example, move the pointer to 2025-02-05 06:00:00 UTC. Execute the following command:
Make sure to replace “projects/00000000000/locations/europe-west3-a/subscriptions/prod-goskope-au000-sub-streaming-0000-0000000000” with your actual Subscription Endpoint before executing the command.
Example:
python3 txlog_subscriber_seek_target_sample.py -p projects/00000000000/locations/europe-west3-a/subscriptions/prod-goskope-au000-sub-streaming-0000-0000000000 -t PUBLISH -s "2025-02-05 06:00:00"
For reference, here is the expected output of a successful execution of the above command:
To exit the venv, execute this command:
deactivate