Ticket Orchestrator Custom Plugin Developers Guide
This guide explains how to write a new Ticket Orchestrator plugin that allows users to create or update tasks or alerts on third-party platforms. By following this guide, a developer should be able to write a new plugin independently.
Prerequisites
- Python 3.x programming experience (intermediate level).
- Access to the Netskope Cloud Exchange platform.
- API or Python SDK access to the product or solution for which you need to write the plugin.
- An account with the minimum permissions required for the product.
Ticket Orchestrator Module
The Cloud Exchange platform and its Ticket Orchestrator module come with a rich set of features and functionality that allow for a high degree of customization, so we recommend that you familiarize yourself with the different aspects of the platform listed below.
Note: This module supports sharing data from Netskope to third-party tools.
Netskope Concepts & Terminology:
- Core: The Cloud Exchange core engine manages the 3rd party plugins and their life cycle methods. It has API endpoints for interacting with the platform to perform various tasks.
- Module: The functional code areas that invoke module-specific plugins to accomplish different workflows. Ticket Orchestrator is one of the modules running in Cloud Exchange.
- Plugin: Plugins are Python packages that have logic to create or update tasks or alerts.
- Plugin Configurations: Plugin configurations are the plugin class objects which are configured with the required parameters and are scheduled by the Cloud Exchange core engine for creating or updating tasks or alerts.
- Tasks: Tasks represent tickets to be created or updated on platforms such as Jira or ServiceNow. A Task object contains an Alert object.
- Alerts: These are notifications that are sent to the platform.
Development Guidelines
- Use the Package Directory Structure for all Python code.
- Make sure all the 3rd party libraries packaged with the plugin package are checked for known vulnerabilities.
- Make sure to follow the standard Python code conventions (https://peps.python.org/pep-0008/).
- Run and verify that the Flake8 lint check passes with the docstring check enabled. The maximum line length should be 80 characters.
- For scripts/integrations written in Python, make sure to create unit tests. Refer to the Unit Testing section below.
- Plugin architecture allows storing states; however, avoid storing huge objects for state management.
- Check your Python code for any vulnerabilities.
- The plugin icon must be under 10 KB. Use the company logo (not the product's) with a transparent background. The recommended size is 300×50 pixels or a similar aspect ratio.
- Use the checkpoint provided by the Cloud Exchange core rather than implementing one on your own.
- Follow the Plugin Directory Structure guidelines.
- If the description contains a link, it should be a hyperlink instead of plaintext.
- Convert timestamp values to a human-readable format (from an epoch value to a DateTime object); see the small conversion example after this list.
- Use proper validation for the parameters passed to the validate method and provide the proper help text for all the parameters.
- Use a notifier object to raise the notification for failures or critical situations (like rate-limiting, or exceeding payload size) to notify the status of the plugin to the user.
- Make sure to implement a proper logging mechanism with the logger object passed by the Cloud Exchange platform. Make sure enough logging is done, which helps the Ops team in troubleshooting. Make sure any sensitive data is not logged or leaked in the notification.
- Make sure the plugin directory name (like sample_plugin) matches with the manifest.json’s ID field.
- Make sure to add a retry mechanism for the 429 status code.
- Pagination should always be considered while developing any feature in a plugin.
- A User Agent should be added to the headers while making any API call. The format for the User Agent is netskope-ce-<ce_version>-<module>-<plugin_name>-<plugin_version>. Refer to the URE Microsoft Azure AD or URE Crowdstrike Identity Protect plugin for more details. Note that the plugin version should be dynamically fetched, and to fetch the netskope-ce-<version> string, use the method defined by core.
- API Tokens and Password fields should not use strip().
- Log messages should start with <module> <app_name> Plugin <configuration_name>. Example: Ticket Orchestrator ServiceNow Plugin <ServiceNow Configuration Name>: <log_message>. Including the configuration name is a suggestion; you can omit it: logger.info("<module> <plugin_name> Plugin: <message>").
- When logging an error, add the traceback of the exception if possible. Use: self.logger.error(error, details=traceback.format_exc()).
- The Toast message should not contain the "<module> <app_name> Plugin:" prefix.
- Make sure to catch proper exceptions and check status codes while and after making API calls. If feasible, create a helper method through which all requests are made, and call it with the proper parameters wherever required (a sketch of such a helper is shown in the create_tasks section below).
- The CHANGELOG.md file should be updated with proper tags, such as Added, Changed, and Fixed, along with a proper user-friendly message. Make sure the file name exactly matches CHANGELOG.md.
- Provide the proper help text (tooltip) for all the parameters. If feasible, make sure to explain the significance of the parameter in the tooltip.
- Make sure to provide a meaningful name and description to plugin configuration parameters.
- Make sure to provide an appropriate configuration type (text, number, password, choice, multi-choice) to the parameters.
- Make sure to use the proxy configuration dict and the SSL certificate validation flag that is passed by the Cloud Exchange platform while making any outbound request (API/SDK).
- Make sure to collect the value of a non-mandatory parameter using the .get() method, and provide a default value while using .get() method.
- If the Plugin description contains a link, it should be a hyperlink redirecting to the documentation page. Refer to the Threat Exchange Trend Micro Vision One plugin for details.
- The logger messages and the Toast messages should not contain the API Token and the Password type field values.
- All logger statements should follow this format: logger.info("<module> <plugin_name> Plugin: <message>").
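As a small example of the timestamp guideline above, an epoch value returned by an API can be converted into a readable datetime object as follows (the epoch value shown is illustrative):

```
from datetime import datetime, timezone

# Convert an epoch timestamp from an API response into a readable datetime.
epoch_value = 1700000000  # illustrative value taken from an API response
readable = datetime.fromtimestamp(epoch_value, tz=timezone.utc)
print(readable.isoformat())
```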
Writing a Plugin
This section illustrates the process of writing a plugin from scratch.
Download the sample plugin from the NetskopeOSS public GitHub repository or the Cloud Exchange Knowledge Base found here: https://support.netskope.com/hc/en-us/articles/360052128734-Cloud-Threat-Exchange.
Development setup
Python
Our system uses Python 3 (v3.7 and above). Make sure Python 3 is set up in your development environment. Pytest is used to run unit tests.
Included Python Libraries
The following Python libraries are included within the Netskope Cloud Exchange platform.
Library Name | Version |
---|---|
aiofiles | 22.1.0 |
amqp | 5.1.1 |
anyio | 3.6.2 |
asgiref | 3.6.0 |
attrs | 22.2.0 |
azure-core | 1.26.2 |
azure-storage-blob | 12.14.1 |
bcrypt | 4.0.1 |
boto3 | 1.26.51 |
botocore | 1.29.51 |
billiard | 3.6.4.0 |
celery | 5.2.7 |
cabby | 0.1.23 |
cachetools | 5.2.1 |
celerybeat-mongo | 0.2.0 |
certifi | 2022.12.7 |
cffi | 1.15.1 |
chardet | 5.1.0 |
charset-normalizer | 3.0.1 |
click | 8.1.3 |
click-didyoumean | 0.3.0 |
click-plugins | 1.1.1 |
click-repl | 0.2.0 |
colorama | 0.4.6 |
colorlog | 6.7.0 |
cryptography | 39.0.0 |
cybox | 2.1.0.21 |
defusedxml | 0.7.1 |
dnspython | 2.3.0 |
docker | 6.0.1 |
fastapi | 0.89.1 |
furl | 2.1.3 |
google-api-core | 2.11.0 |
google-auth | 2.16.0 |
google-cloud-core | 2.3.2 |
google-cloud-pubsub | 2.13.12 |
google-cloud-pubsublite | 1.6.0 |
google-cloud-storage | 2.7.0 |
google-crc32c | 1.5.0 |
google-resumable-media | 2.4.0 |
googleapis-common-protos | 1.58.0 |
grpc-google-iam-v1 | 0.12.6 |
grpcio | 1.51.1 |
grpcio-status | 1.51.1 |
gunicorn | 20.1.0 |
h11 | 0.14.0 |
idna | 3.4 |
importlib-metadata | 6.0.0 |
isodate | 0.6.1 |
jmespath | 1.0.1 |
jsonpath | 0.82 |
jsonschema | 4.17.3 |
kombu | 5.2.4 |
libcst | 0.3.21 |
libtaxii | 1.1.119 |
lxml | 4.9.2 |
mongoengine | 0.25.0 |
more-itertools | 9.0.0 |
MarkupSafe | 2.1.2 |
memory-profiler | 0.61.0 |
mixbox | 1.0.5 |
mongoquery | 1.4.2 |
msrest | 0.7.1 |
multidict | 6.0.4 |
mypy-extensions | 0.4.3 |
netskopesdk | 0.0.25 |
numpy | 1.23.5 |
oauthlib | 3.2.2 |
onelogin | 3.1.0 |
ordered-set | 4.1.0 |
orderedmultidict | 1.0.1 |
overrides | 6.5.0 |
pandas | 1.5.0 |
packaging | 23.0 |
passlib | 1.7.4 |
pycparser | 2.21 |
prompt-toolkit | 3.0.36 |
proto-plus | 1.22.2 |
protobuf | 4.21.12 |
psutil | 5.9.4 |
pydantic | 1.10.4 |
pyasn1 | 0.4.8 |
pyasn1-modules | 0.2.8 |
PyJWT | 2.6.0 |
pymongo | 4.3.3 |
pyparsing | 3.0.9 |
python-dateutil | 2.8.2 |
pyrsistent | 0.19.3 |
python-multipart | 0.0.5 |
python3-saml | 1.15.0 |
pytz | 2022.7.1 |
PyYAML | 6.0 |
requests | 2.28.2 |
requests-oauthlib | 1.3.1 |
rsa | 4.9 |
six | 1.16.0 |
starlette | 0.22.0 |
sniffio | 1.3.0 |
s3transfer | 0.6.0 |
stix | 1.2.0.11 |
taxii2-client | 2.3.0 |
typing-inspect | 0.8.0 |
typing-utils | 0.1.0 |
typing_extensions | 4.4.0 |
urllib3 | 1.26.14 |
uvicorn | 0.20.0 |
vine | 5.0.0 |
wcwidth | 0.2.6 |
weakrefmethod | 1.0.3 |
websocket-client | 1.4.2 |
Werkzeug | 2.2.2 |
xmlsec | 1.3.11 |
zipp | 3.11.0 |
requests-mock | 1.7.0 |
Including Custom Plugin Libraries
Netskope advises bundling any third-party Python libraries your plugin needs into the plugin package itself. Use the pip installer for this bundling; its --target switch takes a directory as input, and pip installs the packages into that directory.
For example, this command will install the "cowsay" package into the directory "lib".
> pip install cowsay --target ./lib
For the official documentation on this, refer to https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption-t.
When importing modules from this lib folder, use a relative import instead of an absolute import, as shown below.
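For example, if the cowsay package from the command above was installed into the plugin's lib directory, the plugin code could import it with a relative import as sketched below. Whether a given library works when bundled this way also depends on how it handles its own internal imports, so verify the bundled library in a test configuration:

```
# main.py (inside the sample_plugin package)
# Import the bundled library from the plugin's local "lib" directory using a
# relative import, so the plugin does not rely on a system-wide installation.
from .lib import cowsay  # installed with: pip install cowsay --target ./lib

cowsay.cow("Hello from the bundled library!")  # illustrative call
```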
IDE
Recommended IDEs are PyCharm or Visual Studio Code.
Plugin Directory Structure
This section mentions the typical directory structure of the Ticket Orchestrator Plugin.
```
sample_plugin/
├── utils/
│   └── constants.py
├── __init__.py
├── CHANGELOG.md
├── icon.png
├── main.py
└── manifest.json
```
- __init__.py: Every plugin package is considered a Python module by the Ticket Orchestrator code. Make sure every plugin package contains an empty __init__.py file.
- CHANGELOG.md: This file contains details about plugin updates and should be updated with proper tags, such as Added, Changed, and Fixed, along with a proper user-friendly message.
- icon.png: The plugin icon/logo. It is visible on the plugin chiclet and configuration cards in the UI. The logo should have a transparent background with a recommended size of 300×50 pixels or a similar aspect ratio.
- main.py: This python file contains the Plugin class containing the concrete implementation for the create_tasks, update_tasks, sync_states, validate_steps, get_available_fields, get_default_mappings, and get_queues methods. get_fields can also be implemented depending on the need.
- manifest.json: Manifest file for the plugin package, containing information about all the configurable parameters and their data types. This file has more information about the plugin integration as well.
- utils/: This directory is used for utility functions and constants. It contains different files depending on the plugin's requirements; an example file is shown here.
- constants.py: This file contains constants that can be used plugin-wide, such as field mappings (a small illustrative sketch is shown below).
The files listed here are mandatory for any plugin integration, but you can add other files based on specific integration requirements.
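A minimal utils/constants.py for the hypothetical sample plugin might look like the sketch below; the names and values are illustrative only and are reused by the helper sketches later in this guide:

```
"""Constants used across the sample plugin (illustrative values only)."""

# Prefix used in log and notification messages.
PLUGIN_NAME = "Ticket Orchestrator Sample Plugin"

# Page size used for paginated API calls.
PAGE_LIMIT = 50

# Retry settings for HTTP 429 (Too Many Requests) responses.
MAX_RETRIES = 3
DEFAULT_WAIT_TIME = 30  # seconds
```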
Note
Make sure the plugin directory name (e.g. sample_plugin) matches with the manifest.json’s ID field.
CHANGELOG.md
This is a file that contains details about plugin updates and should be updated with proper tags such as Added, Changed, and Fixed along with a proper user-friendly message.
- Added: Use it when new features are added.
- Fixed: Use it when any bug/error is fixed.
- Changed: Use it when there is a change in the existing implementation of the plugin.
Sample CHANGELOG.md
```
# 1.0.1

## Fixed

- Fixed pagination while fetching all available fields.

# 1.0.0

## Added

- Initial release.
```
manifest.json
This is a JSON file that stores the meta-information related to the plugin. The Ticket Orchestrator module reads it to render the plugin in the UI and to learn more about the plugin, including its required configuration parameters, plugin ID, plugin name, etc.
- Every plugin must contain this file with the required information so that Ticket Orchestrator can instantiate the Plugin object properly.
- Common parameters for manifest.json include:
- name: (string) Name of the plugin. (Required)
- id: (string) ID of the plugin package. Make sure it is unique across all the plugins installed in the Cloud Exchange. The ID has to match the directory name of the plugin package. (Required)
- version: (string) Version of the plugin. Usage of a MAJOR.MINOR.PATCH (ex. 1.0.1) versioning scheme is encouraged, although there are no restrictions. (Required)
- description: (string) Description of the plugin. Provide a detailed description that mentions the capabilities and instructions to use the plugin, (ex. Create issues/tickets on the platform.) This description would appear on the Plugin Configuration card. (Required). If the description contains a link, it should be a hyperlink instead of plaintext.
- pulling_supported: (boolean) Set to true if the plugin can fetch alerts. For example, Netskope has this facility.
- receiving_supported: (boolean) Set to true if the plugin creates tickets or sends notifications.
- configuration: (array) Array of JSON objects that contains information about all the parameters required by the plugin – their name, type, id, etc. The common parameters for the nested JSON objects are explained below.
- label: Name of the parameter. This will be displayed on the plugin configuration page. (Required)
- key: Unique parameter key, which will be used as a key in the python dict object where the plugin configuration is used. (Required)
- type: Value type of the parameter. Allowed values are ‘text’, ‘password’, ‘number’, ‘choice’, and ‘multichoice’. (Required) Refer to the Plugin Configuration parameter types below for more details.
- default: The default value for this parameter. This value will appear on the plugin configuration page in the Ticket Orchestrator UI. Supported data-types are “text”, “number”, and “list” (for multichoice type). (Required)
- mandatory: Boolean, which indicates whether this parameter is mandatory or not. If a parameter is mandatory, Ticket Orchestrator UI won’t let you pass an empty value for the parameter. Allowed values are `true` and `false`. (Required)
- description: Help text-level description for the parameter, which can give more details about the parameter and expected value. This string will appear on the plugin configuration page as a help text. (Required)
- choices: A list of JSON objects, each containing a key and a value. This parameter is only supported when type is choice or multichoice.
Plugin Configuration Parameter types
Make sure all the required plugin configuration parameters are listed under the configuration section of manifest.json for the plugin.
Password Parameter
Use this parameter for storing any secrets/passwords used for authentication with API endpoints. Parameters of type password have a password text box on the plugin configuration page and are obfuscated and encrypted by the platform.
Sample JSON:
"configuration": [ { "label": "API Token", "key": "api_token", "type": "password" }, ]
Plugin configuration view:
Text Parameter
Use this parameter for storing any string information such as base-url, username, etc. This parameter will have a normal text input on the plugin configuration page.
Sample JSON:
"configuration": [ { "label": "Tenant Name", "key": "tenant_name", "type": "text" }, ]
Plugin configuration view:
Number Parameter
Use this parameter for storing number/float values. This parameter will have a number input box on the plugin configuration page. (From CTE)
Sample JSON:
"configuration": [ { "label": "Maximum File hash list size in MB.", "key": "max_size", "type": "number" }, ]
Plugin configuration view:
Choice Parameter
Use this parameter for storing any enumeration parameter values. This parameter will have a dropdown box on the plugin configuration page. (From Log Shipper)
Sample JSON:
{ "label": "Syslog Protocol", "key": "syslog_protocol", "type": "choice", "choices": [ { "key": "TLS", "value": "TLS" }, { "key": "UDP", "value": "UDP" }, { "key": "TCP", "value": "TCP" } ], "default": "UDP", "mandatory": true, "description": "Protocol to be used while ingesting data." },
Plugin configuration view:
After selecting the input:
Multichoice Parameter
Use this parameter for storing multiple choice values. This parameter will have a dropdown box on the plugin configuration page with the ability to select multiple values. (From Ticket Orchestrator)
Sample JSON:
"configuration": [ { "label": "Severity", "key": "severity", "type": "multichoice", "choices": [ { "key": "Unknown", "value": "unknown" }, { "key": "Low", "value": "low" }, { "key": "Medium", "value": "medium" }, { "key": "High", "value": "high" }, { "key": "Critical", "value": "critical" } ], "default": [ "critical", "high", "medium", "low", "unknown" ], "mandatory": false, "description": "Only indicators with matching severity will be saved." } ]
Plugin Configuration View:
Toggle Parameter
This parameter stores a boolean value: toggle enabled is True and toggle disabled is False.
Use System Proxy ('proxy'): Use the system proxy configured in Settings. (Default: False)
Plugin Configuration View:
1. When toggle is Off
2. When toggle is On.
Note
This parameter is provided by the Core and cannot be added from the plugin's manifest.json file.
main.py
This python file contains the core implementation of the plugin.
Standard Imports
```
from netskope.integrations.itsm.plugin_base import (
    PluginBase,
    ValidationResult,
    MappingField,
)
from netskope.integrations.itsm.models import (
    FieldMapping,
    Queue,
    Task,
    TaskStatus,
    Alert,
)
```
PluginBase Variables
PluginBase provides access to variables that can be used during the plugin lifecycle methods. Below is the list of variables.
Variable Name | Usage | Description |
---|---|---|
self.logger | self.logger.error("Message") self.logger.warn("Message") self.logger.info("Message") | Logger handle provided by core. Use this object to log important events. The logs are visible in the Cloud Exchange Audit Logs. Refer to the Logger Object documentation. |
self.configuration | self.configuration.get(<attribute-key-name>) | JSON representation of the configuration object of the Plugin instance. Use this to access configuration attributes like authentication credentials, server details, etc. Use the key name of the attribute mentioned in manifest.json. |
self.last_run_at | if self.last_run_at: self.last_run_at.timestamp() (use this format to convert the last run time to epoch format) | Provides the timestamp of the last successful run of the Plugin's pull method. The Cloud Exchange core maintains the checkpoint time after each successful pull() execution. For the first execution, the value is None. The datatype of the object is datetime. |
self.storage | | Cloud Exchange provides the plugin with a mechanism to maintain state. Use this object to persist state that would be required during subsequent calls. The datatype of this object is a Python dict. |
self.notifier | self.notifier.info("message") self.notifier.warn("message") self.notifier.error("message") | This object provides the handle of the Cloud Exchange core's notifier. Use this object to push notifications to the platform. The notifications are visible in the Ticket Orchestrator UI. Make sure the message contains summarized information so the user can read it and take the necessary action. For example, the Netskope plugin uses the notifier when the push() method exceeds the product's 8 MB limit. |
self.proxy | requests.get(url=url, proxies=self.proxy) | Handle of the system's proxy settings if configured, else {}. |
self.ssl_validation | requests.get(url=url, verify=self.ssl_validation) | Boolean value that indicates whether SSL validation should be enforced for REST API calls. |
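The sketch below illustrates how these handles are typically combined inside a plugin method. It is illustrative only: the base_url configuration key, the API path, and the storage key are hypothetical, and PLUGIN_NAME is assumed to be a plugin-level constant as shown in the Plugin Class section below.

```
import requests


def _fetch_alert_count(self):
    """Illustrative Plugin method showing typical use of the PluginBase handles."""
    # Read configuration values; use .get() with a default for non-mandatory keys.
    base_url = self.configuration.get("base_url", "").strip().strip("/")

    # Convert the last successful run time to epoch format, if there was one.
    checkpoint = self.last_run_at.timestamp() if self.last_run_at else None

    # Persist lightweight state between runs (avoid storing huge objects).
    if isinstance(self.storage, dict):
        self.storage["last_checkpoint"] = checkpoint

    response = requests.get(
        f"{base_url}/api/alerts/count",  # hypothetical endpoint
        params={"since": checkpoint},
        proxies=self.proxy,              # honor the Use System Proxy toggle
        verify=self.ssl_validation,      # honor the SSL validation flag
    )
    if not response.ok:
        self.logger.error(f"{PLUGIN_NAME}: Failed to fetch the alert count.")
        self.notifier.error(f"{PLUGIN_NAME}: Failed to fetch the alert count.")
        return None
    return response.json().get("count")
```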
Plugin Class
- The Plugin class must inherit from the PluginBase class, which is defined in netskope.integrations.itsm.plugin_base (see the Standard Imports above).
- Make sure Plugin class provides implementation for the create_tasks, update_tasks, sync_states, validate_steps, get_available_fields, get_default_mappings, and get_queues methods.
- Pagination should always be considered while developing any feature in a plugin.
- The plugin class will contain all the necessary parameters to establish connection and authentication with the 3rd party API.
- Constants like PLUGIN_NAME, LIMIT, etc. should be declared.
"""Sample plugin implementation. This is a sample implementation of base PluginBase class, which explains the concrete implementation of the base class. """ from netskope.integrations.itsm.plugin_base import ( PluginBase, ValidationResult, MappingField, ) from netskope.integrations.itsm.models import ( FieldMapping, Queue, Task, TaskStatus, Alert, ) PLUGIN_NAME = "<module> <plugin_name> Plugin" class Plugin(PluginBase): """SamplePlugin class having concrete implementation for creating and updating tasks or alerts. This class is responsible for implementation of the create_tasks, update_tasks, sync_states, validate_steps, get_available_fields, get_default_mappings, and get_queues methods with proper return types. Hence it's lifecycle execution can be scheduled by the CTO core engine. """
def create_tasks
- This method implements the logic to create tickets or send notifications in the target platform using the API endpoints.
- Raise appropriate logs if an exception occurs or the status code is not successful.
- Make sure to return a Task object with the necessary details.
- Check the body structure and schema of each field and send the body accordingly in the API call.
- Handle all connection and HTTP response exceptions, and raise exceptions with error messages, along with log statements, when errors are encountered.
- User Agent should be added to the headers while making any API call. Format for the User Agent: netskope-ce-<ce_version>-<module>-<plugin_name>-<plugin_version>.
- In this method, the 'proxy' variable (for the Use System Proxy toggle) should be used while making API calls.
- Make sure you handle status code 429 (Too Many Requests). The suggested approach is to implement a retry mechanism; a sketch of such a retry wrapper follows the example below.
"""Create an issue/ticket on platform.""" def create_task(self, alert: Alert, mappings: Dict, queue: Queue) -> Task: """Create an issue/ticket on platform.""" params = self.configuration["auth"] project_id, issue_type = queue.value.split(":") # Filter out the mapped attributes based on given project and issue_type create_meta = self._get_createmeta() mappings = self._filter_mappings(create_meta, mappings) body = self._generate_body(create_meta, mappings) ... # Set fields with nested structure headers = { "Accept": "application/json", "Content-Type": "application/json", } response = requests.post( f"{params['url'].strip('/')}/rest/api/3/issue", json=body, auth=HTTPBasicAuth(params["email"], params["api_token"]), headers=add_user_agent(headers), proxies=self.proxy, ) if response.status_code == 201: result = response.json() # Need result for to create link in Task() return Task( id, status, link, )
def update_tasks
- This method implements the logic to update tasks in the target platform using the API endpoints.
- It is similar to create_tasks, except that it modifies an existing task.
- Raise appropriate logs if an exception occurs or the status code is not successful.
- Make sure to return a Task object with the necessary details.
```
def update_task(
    self, task: Task, alert: Alert, mappings: Dict, queue: Queue
) -> Task:
    """Add a comment in the existing issue."""
    params = self.configuration["auth"]
    comment = {
        "body": self._get_atlassian_document(
            f"New alert received at {str(alert.timestamp)}."
        )
    }
    response = requests.post(
        f"{params['url'].strip('/')}/rest/api/3/issue/{task.id}/comment",
        headers=add_user_agent(),
        json=comment,
        auth=HTTPBasicAuth(params["email"], params["api_token"]),
        proxies=self.proxy,
    )
    if response.status_code == 201:
        return task
    elif response.status_code == 404:
        self.logger.info(
            f"{PLUGIN_NAME}: Issue with ID {task.id} no longer exists on the "
            f"platform, or the configured user does not have permission to "
            f"add comment(s)."
        )
        return task
    else:
        raise requests.HTTPError(
            f"{PLUGIN_NAME}: Could not add a comment to the existing issue "
            f"on the platform with ID {task.id}."
        )
```
def sync_states
- If the ticket status is modified in the target platform, then we fetch the latest status and update the current status in Netskope Cloud Exchange.
- If the ticket status matches our TaskStatus model, that status is reflected on the particular ticket; otherwise, the status will be Other.
- Make sure to handle the case where the maximum payload size supported by the API endpoint is exceeded. There can be multiple ways to handle this case.
- If the API endpoint supports multiple requests with a fixed payload size, send the data in chunks (see the batching sketch after the example below).
- Make sure you handle status code 429 (Too Many Requests) by implementing a retry mechanism, as described in the create_tasks section.
```
def sync_states(self, tasks: List[Task]) -> List[Task]:
    """Sync all task states."""
    params = self.configuration["auth"]
    task_ids = [task.id for task in tasks]
    task_statuses = {}
    body = {
        "jql": f"key IN ({','.join(task_ids)})",
        "maxResults": 100,
        "fields": ["status"],  # We only need the status of the tickets
        "startAt": 0,
        "validateQuery": "none",
    }
    while True:
        response = requests.post(
            f"{params['url'].strip('/')}/rest/api/3/search",
            headers=add_user_agent(),
            json=body,
            auth=HTTPBasicAuth(params["email"], params["api_token"]),
            proxies=self.proxy,
        )
        response.raise_for_status()
        if response.status_code == 200:
            json_res = response.json()
            body["startAt"] += json_res["maxResults"]
            if len(json_res["issues"]) == 0:
                break
            for issue in json_res["issues"]:
                task_statuses[issue.get("key")] = (
                    issue.get("fields", {}).get("status", {}).get("name")
                ).lower()
    for task in tasks:
        if task_statuses.get(task.id):
            task.status = STATE_MAPPINGS.get(
                task_statuses.get(task.id), TaskStatus.OTHER
            )
        else:
            task.status = TaskStatus.DELETED
    return tasks
```
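Where the endpoint limits the payload size, one common approach (mentioned in the bullets above) is to split the IDs into fixed-size batches and issue one request per batch. A minimal sketch, with a hypothetical batch size:

```
BATCH_SIZE = 100  # hypothetical limit; use the value documented by the platform


def _chunks(items, size):
    """Yield successive fixed-size batches from a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


# Inside sync_states(), query statuses batch by batch instead of all at once:
# for batch in _chunks(task_ids, BATCH_SIZE):
#     body["jql"] = f"key IN ({','.join(batch)})"
#     ... make the search request and merge the results into task_statuses ...
```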
def validate_steps
This is an abstract method of PluginBase Class.
- This method validates the plugin configuration and authentication parameters passed while creating a plugin configuration.
- This method will be called only when a new configuration is created or updated.
- Perform separate validations for empty fields and for type checks in the validation method for the plugin configuration (a sketch of such checks is shown after the example below).
- Validate that all the mandatory parameters are passed with the proper data types.
- Validate the authentication parameters and the API endpoint to ensure the smooth execution of the plugin lifecycle.
- While validating, use strip() for configuration parameters like base URL, email, username, etc. except API Tokens and Password fields.
- Return a ValidationResult object (refer to the ValidationResult Data Model) with a success flag indicating validation success or failure, and a validation message containing the reason for any validation failure.
```
def validate_step(
    self, name: str, configuration: dict
) -> ValidationResult:
    """Validate a given configuration step."""
    if name == "auth":
        return self._validate_auth(configuration)
    elif name == "params":
        return self._validate_params(configuration)
    else:
        return ValidationResult(
            success=True, message="Validation successful."
        )
```
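A hedged sketch of the kind of checks described in the bullets above, for a hypothetical auth step with url and api_token parameters (the actual parameter keys depend on your manifest.json):

```
def _validate_auth(self, configuration: dict) -> ValidationResult:
    """Illustrative empty-field and type checks for the 'auth' step."""
    url = configuration.get("url")
    if not isinstance(url, str) or not url.strip():
        self.logger.error(
            f"{PLUGIN_NAME}: Validation error. URL is a required field."
        )
        return ValidationResult(
            success=False, message="URL is a required field."
        )

    # Do not call strip() on API Token or Password type fields.
    api_token = configuration.get("api_token")
    if not isinstance(api_token, str) or not api_token:
        self.logger.error(
            f"{PLUGIN_NAME}: Validation error. API Token is a required field."
        )
        return ValidationResult(
            success=False, message="API Token is a required field."
        )

    # Also verify the credentials with a lightweight API call before accepting
    # them, so that misconfigurations are caught at configuration time.
    return ValidationResult(success=True, message="Validation successful.")
```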
def get_available_fields
- This method should return the list of fields to be rendered in the UI when the configuration is selected from the Queue dropdown.
- This method should only return supported fields.
- Make sure to use a paginated API so that the plugin does not fail when there is a huge number of fields.
```
def get_available_fields(self, configuration: dict) -> List[MappingField]:
    """Get list of all the available fields for issues/tickets."""
    params = configuration["auth"]
    response = requests.get(
        f"{params['url'].strip('/')}/rest/api/3/field",
        auth=HTTPBasicAuth(params["email"], params["api_token"]),
        headers=add_user_agent(),
        proxies=self.proxy,
    )
    response.raise_for_status()
    if response.status_code == 200:
        return list(
            map(
                lambda item: MappingField(
                    label=item.get("name"), value=item.get("key")
                ),
                response.json(),
            )
        )
    else:
        raise requests.HTTPError(
            f"{PLUGIN_NAME}: Could not fetch available fields from platform."
        )
```
def get_default_mappings
If the user wants to send some default fields to the target platform, implement this method; otherwise, return an empty list.
```
def get_default_mappings(self, configuration: dict) -> List[FieldMapping]:
    """Get default mappings."""
    return [
        FieldMapping(
            extracted_field="custom_message",
            destination_field="summary",
            custom_message="Netskope $appCategory alert: $alertName",
        ),
        FieldMapping(
            extracted_field="custom_message",
            destination_field="description",
            custom_message=(
                "Alert ID: $id\nApp: $app\nAlert Name: $alertName\n"
                "Alert Type: $alertType\nApp Category: $appCategory\nUser: $user"
            ),
        ),
    ]
```
def get_queues
- This method fetches all the destination projects (queues) in which tickets can be created.
- Raise an appropriate exception, along with the necessary log statements, if the projects cannot be fetched.
- Use pagination, as there may be many projects.
```
def get_queues(self) -> List[Queue]:
    """Get list of projects as queues."""
    params = self.configuration["auth"]
    start_at, is_last = 0, False
    projects = []
    issue_types = self.configuration["params"]["issue_type"]
    issue_types = list(map(lambda x: x.strip(), issue_types.split(",")))
    total_ids = []
    while not is_last:
        response = requests.get(
            f"{params['url'].strip('/')}/rest/api/3/project/search",
            params={"startAt": start_at, "maxResults": 50},
            headers=add_user_agent(),
            auth=HTTPBasicAuth(params["email"], params["api_token"]),
            proxies=self.proxy,
        )
        response.raise_for_status()
        if response.status_code == 200:
            json_res = response.json()
            is_last = json_res["isLast"]
            start_at += json_res["maxResults"]
            # Create combinations of projects and issue types.
            for project in json_res.get("values"):
                total_ids.append(project.get("id"))
            # Batches of 650 project IDs; passing more than that
            # will throw a 500 server error.
            if is_last or (start_at % 650) == 0:
                total_project_ids = ",".join(total_ids)
                meta = self._get_createmeta(
                    self.configuration,
                    {"projectIds": total_project_ids},
                )
                projects_list = meta.get("projects")
                for project in projects_list:
                    if not project:
                        continue
                    for issue_type in project.get("issuetypes"):
                        # Issue type is defined as a "key:value" string.
                        # The value of a queue is defined as a
                        # "project_id:issue_type" string.
                        if issue_type.get("name") not in issue_types:
                            continue
                        projects.append(
                            Queue(
                                label=f"{project.get('name')} - {issue_type.get('name')}",
                                value=f"{project.get('id')}:{issue_type.get('name')}",
                            )
                        )
                total_ids = []  # restart the batch IDs
        else:
            raise requests.HTTPError(
                f"{PLUGIN_NAME}: Could not fetch projects from platform."
            )
    return projects
```
Data Models
This section lists the Data Models and their properties.
Queue Data Model
- A queue is a project on the platform.
- Tickets can be created in this queue, or notifications can be sent to it.
Name | Type | Description |
---|---|---|
label | string | The project name that will be rendered in the UI. |
value | string | The internal value used as a key by the plugin (for example, a "project_id:issue_type" string in the Jira sample). |
default_mapping | Dict | Used for the default mapping and the deduplication default mapping. |
```
from netskope.integrations.itsm.models import Queue

project_id, issue_type = queue.value.split(":")
```
Alert Data Model
- Alert is a notification that can be sent to the 3rd party platform
- Alerts are used in create_tasks and update_tasks.
Name | Type | Description |
---|---|---|
id | string | A unique ID of the Alert object. |
configuration | string | JSON representation to access the configuration attributes, like authentication credentials and server. |
alertName | string | Name of the alert. |
alertType | string | Type of the alert. |
app | string | Name of the app. |
appCategory | string | Category of the app. |
user | string | User name. |
type | string | Type of record. |
timeStamp | datetime | Exact time of when an event occurred. |
rawAlert | dict | Additional fields for alerts. |
```
from netskope.integrations.itsm.models import Alert

comment = {
    "body": self._get_atlassian_document(
        f"New alert received at {str(alert.timestamp)}."
    )
}
```
Task Data Model
- A task is a ticket that is created or updated in a platform.
- A task will have a taskStatus which determines its state.
- Tasks are used in the create_tasks, update_tasks, and sync_states methods.
Name | Type | Description |
---|---|---|
id | string | A unique ID of the Task object. |
status | TaskStatus | Status of the ticket, for example, "in progress". |
dedupeCount | int | Duplicate ticket count. |
link | str | Link of the ticket. |
deletedAt | datetime | Timestamp of when the task was deleted. |
configuration | str | JSON representation to access the configuration attributes, like authentication credentials and server. |
createdAt | datetime | Timestamp of when the task was created. |
businessRule | str | Query to filter data. |
alert | Alert | Notification. Mentioned above (Alert Data Model). |
```
from netskope.integrations.itsm.models import Task

return Task(
    id=result.get("key"),
    status=STATE_MAPPINGS.get(issue_status, TaskStatus.OTHER),
    link=(
        f"{self.configuration['auth']['url'].strip('/')}/browse/"
        f"{result.get('key')}"
    ),
)
```
Field Mapping Data Model
This model is used for mapping alert fields to target platform fields.
Name | Type | Description |
---|---|---|
extracted_field | string | The alert field whose value should be extracted, or custom_message to use a custom message. |
destination_field | string | Target platform field. |
custom_message | string | The custom message to use when extracted_field is custom_message. |
```
from netskope.integrations.itsm.models import FieldMapping

return [
    FieldMapping(
        extracted_field="custom_message",
        destination_field="summary",
        custom_message="Netskope $appCategory alert: $alertName",
    )
]
```
Logging
Ticket Orchestrator provides a handle of a logger object for logging.
- Avoid print statements in the code.
- This object logs to the central Cloud Exchange database with the timestamp field. Supported log levels are info, warn, and error.
- Make sure any API authentication secret or any sensitive information is not exposed in the log messages.
- Make sure to implement a proper logging mechanism with the logger object passed by the Cloud Exchange platform.
- Make sure enough logging is done, which helps the Ops team in troubleshooting.
- Make sure any sensitive data is not logged or leaked in the notification or logs.
```
self.logger.error(
    f"{PLUGIN_NAME}: Error log-message goes here."
)
self.logger.warn(
    f"{PLUGIN_NAME}: Warning log-message goes here."
)
self.logger.info(
    f"{PLUGIN_NAME}: Info log-message goes here."
)
```
Notifications
Ticket Orchestrator provides a handle of notification object, which can be used to generate notifications on the Ticket Orchestrator UI.
- This object is passed from the Cloud Exchange platform to the Plugin object. Every plugin integration can use this object whenever there is a case of failure which has to be notified to the user immediately. The notification timestamp is managed by the Cloud Exchange platform.
- This object will raise the notification in the UI with a color-coding for the severity of the failure. Supported notification severities are info, warn, and error.
- Make sure any API authentication secret or any sensitive information is not exposed in the notification messages.
- Use a notifier object to raise the notification for failures or critical situations (like rate-limiting, or exceeding payload size) to notify the status of the plugin to the user.
```
self.notifier.info(
    f"{PLUGIN_NAME}: Info notification-message goes here."
)
self.notifier.error(
    f"{PLUGIN_NAME}: Error notification-message goes here."
)
self.notifier.warn(
    f"{PLUGIN_NAME}: Warning notification-message goes here."
)
```
Testing
Linting
As part of the build process, we run a few linters to catch common programming errors, stylistic errors, and possible security issues.
Flake8
This is a basic linter. It can be run without having all the dependencies available and will catch common errors. We also use this linter to enforce the standard Python PEP 8 formatting style. On rare occasions, you may encounter a need to disable an error/warning returned from this linter. Do this by adding an inline comment of the following form on the line where you want to disable the error:
# noqa: <error-id>
For example:
example = lambda: 'example' # noqa: E731
When adding an inline comment, always include the specific error code you are disabling. That way, if there are other errors on the same line, they will still be reported.
More info: https://flake8.pycqa.org/en/latest/user/violations.html#in-line-ignoring-errors
PEP8 style docstring check is also enabled with the flake8 linter. So make sure every function/module has a proper docstring added.
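For example, a module-level and a function-level docstring that satisfy this check look like the following (the helper itself is illustrative):

```
"""Utility helpers for the sample plugin."""


def to_epoch(value):
    """Convert a datetime object to an epoch timestamp."""
    return value.timestamp()
```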
Unit Testing
Write unit tests to test small units of code in an isolated and deterministic fashion. Make sure that unit tests avoid communicating with external APIs and use mocking instead. Make sure unit tests keep the code coverage above 70%.
Environment Setup
To work with unit testing, the integration or automation script needs to be developed using the Plugin Directory Structure. We use pip to install all the Python module dependencies required to run the setup. Before running the tests, make sure you install all the required dependencies mentioned in the requirements.txt of the Cloud Exchange core repository.
Write Your Unit Tests
Make sure unit tests are written in a separate Python file named: <your_plugin_name>_test.py. Within the unit test file, each unit test function should be named: test_<your test case>. More information on writing unit tests and their format is available at the PyTest Docs.
Mocking
We use pytest-mock for mocking. pytest-mock is enabled by default and installed in the base environment mentioned above. To use a mocker object simply pass it as a parameter to your test function. The mocker can then be used to mock both the plugin class object and also external APIs.
Example (from the CLS module):
```
def test_push(mocker, common_config):
    """To test push method of CLSPlugin."""
    cls_plugin = CLS_Plugin(
        ...,  # first argument elided in the original example
        common_config,
        None,
        None,
        logger,
        source="",
        mappings=...,  # mappings value elided in the original example
    )
    # Initialize required classes
    syslogger = logging.getLogger(
        "SYSLOG_LOGGER_{}".format(threading.get_ident())
    )
    mocker.patch(
        "netskope.plugins.Default.cls.main.CLSPlugin.init_handler",
        return_value=syslogger,
    )
    try:
        cls_plugin.push(data, data_type, subtype)
    except Exception as e:
        assert False, f"Push raised exception: {e}"
```
Running Your Unit Tests
$ PYTHONPATH=. pytest
Plugin Deployment on Cloud Exchange
Package the Plugin
Cloud Exchange expects the developed plugin in zip or tar.gz format.
Execute the below command to zip the package:
zip -r sample_plugin.zip sample_plugin
Execute the below command to generate tar.gz package:
tar -zcvf sample_plugin.tar.gz sample_plugin
Upload the Plugin
To deploy this zip or tar.gz on Cloud Exchange platform:
- Log in to Cloud Exchange platform.
- Go to Settings > Plugin.
- Click Add New Plugin.
- Click Browse.
- Select your zip or tar.gz file.
- Click Upload.
Add a Repo
Alternatively, you can deploy your plugin by adding your repository to the Cloud Exchange platform:
- Log in to Cloud Exchange platform.
- Go to Settings > Plugin Repository.
- Click Configure New Repository.
- Enter a Repository name, Repository URL, Username, and Personal Access Token.
- Click Save.
- Go to Settings > Plugins.
- Select Repository name from the Repository dropdown.
Deliverables
Plugin Guide
- Make sure to update the plugin guide with every release.
- Plugin guide should have this content:
- Compatibility.
- Release Notes.
- Description.
- Prerequisites.
- Permissions.
- Authorization of the product.
- Configuration of Netskope ITSM plugin.
- Configuration of the third-party Ticket Orchestrator plugin that we have developed.
- Configuration of a business rule.
- Configuration of a queue.
- Validation.
Demo Video
After the successful development of the Ticket Orchestrator plugin, create a demo video and show the end-to-end workflow of the plugin.