FAQs to Set Up Data Protection for Public Cloud

The following topic covers frequently asked questions and common scenarios when setting up the DLP Scan and/or Threat Protection (Malware Scan) features for Public Cloud.

Scanning S3 Buckets encrypted with KMS keys

You must configure your AWS environment to provide Netskope with the necessary permissions to enable storage scan on S3 Buckets that are encrypted with KMS keys.

To grant Netskope the required permissions, add the ARN of the IAM role created by Netskope’s CFT to each KMS key policy, along with the Sid, Action, and Condition specified below.

Follow these detailed instructions.

  1. Log in to the AWS Management Console using the credentials of the AWS account you are setting up with Netskope for IaaS and navigate to Services > IAM > Roles.
  2. Under Roles, search for Netskope_Role and copy the Role ARN for this role.
  3. Navigate to Services > Key Management Service.
  4. Under Customer managed keys, open each KMS key used to encrypt S3 buckets.
  5. On the Key policy tab of the key, click Edit.
  6. Edit the Statement section of the policy to include the following:
    {
        "Sid": "Enable Netskope to use KMS via S3",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::<customer_account_id>:role/Netskope_Role"
        },
        "Action": "kms:Decrypt",
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "kms:ViaService": "s3.us-west-1.amazonaws.com"
            }
        }
    }
    • Edit the Sid to Enable Netskope to use KMS via S3.
    • Paste the Role ARN of Netskope_Role under AWS.
    • Edit Action to kms:Decrypt.
    • Add the Condition as provided in the code snippet above.

      The condition key ensures that Netskope does not perform any action directly on the KMS key but only through the S3 managed service.

      Note

      Because a KMS key can only encrypt buckets in the same region as the key, ensure that the kms:ViaService value matches the region of the KMS key. For example, for a key in us-east-1, use s3.us-east-1.amazonaws.com.

  7. Save the key policy changes.
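
If many KMS keys need this change, the edit can be scripted instead of done key by key in the console. The following is a minimal sketch using boto3 (not part of the official setup): the account ID, region, and key IDs are placeholders you must replace, and it assumes each key policy’s Statement field is a list, as in default KMS key policies.

    import json
    import boto3

    # Placeholder values -- replace with your own.
    ACCOUNT_ID = "<customer_account_id>"
    REGION = "us-west-1"  # region of the KMS keys and the buckets they encrypt
    KEY_IDS = ["<kms_key_id_1>", "<kms_key_id_2>"]

    NETSKOPE_STATEMENT = {
        "Sid": "Enable Netskope to use KMS via S3",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/Netskope_Role"},
        "Action": "kms:Decrypt",
        "Resource": "*",
        "Condition": {"StringEquals": {"kms:ViaService": f"s3.{REGION}.amazonaws.com"}},
    }

    kms = boto3.client("kms", region_name=REGION)

    for key_id in KEY_IDS:
        # "default" is the only policy name KMS supports for key policies.
        policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
        # Skip keys that already contain the Netskope statement.
        if any(s.get("Sid") == NETSKOPE_STATEMENT["Sid"] for s in policy["Statement"]):
            continue
        policy["Statement"].append(NETSKOPE_STATEMENT)
        kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))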

Limit bucket & object events sent out from GCP log sink

If the DLP/Malware Scan for GCP is set up at the Org/Folder/Project level, there may be a large outflow of notifications for objects present under every folder/project in the account. This behavior may be undesired: the customer may only want to scan objects from a small number of buckets, and sending notifications for all objects can result in unwanted costs.

This is a manual workaround: whenever the customer wants to change which buckets send notifications, they must update the filter by hand, because the DLP/Malware Scan for GCP solution is designed to scan all storage buckets under a customer’s Org/Folder/Project.

Solution

The solution is to edit the log sink created during instance setup, enhancing its inclusion filter so that object-level events are sent only for selected buckets.

Prerequisites

Identify and access the GCP organization/folder/project where the log sink for the DLP/Malware Scan for GCP instance was created.

Steps

  1. In the search bar, type Log router and select it from the options.

  2. Select the log sink created for DLP/Malware Scan for GCP from the list of log sinks. In this example, the sink is named ns_sink.

  3. From the 3-dot menu, select Edit sink to open the Edit logs routing sink page.

  4. Scroll down to the Choose logs to include in sink section.

  5. Validate that the inclusion filter contains the following:

    (
      resource.type=folder 
      AND 
      (
         protoPayload.methodName=CreateFolder OR
         protoPayload.methodName=DeleteFolder
      )
    ) 
    OR 
    (
      resource.type=project 
      AND 
      (
        protoPayload.methodName=CreateProject OR      
        protoPayload.methodName=DeleteProject
      )
    ) 
    OR 
    (
      resource.type=gcs_bucket 
      AND 
      (
        protoPayload.methodName=storage.objects.delete OR
        protoPayload.methodName=storage.objects.create OR
        protoPayload.methodName=storage.buckets.create OR
        protoPayload.methodName=storage.buckets.delete
      )
    )
  6. Replace the inclusion filter with the following:

    (
      resource.type=folder 
      AND 
      (
         protoPayload.methodName=CreateFolder OR
         protoPayload.methodName=DeleteFolder
      )
    ) 
    OR 
    (
      resource.type=project 
      AND 
      (
        protoPayload.methodName=CreateProject OR      
        protoPayload.methodName=DeleteProject
      )
    ) 
    OR 
    (
      resource.type=gcs_bucket 
      AND
      (
        ( 
          protoPayload.methodName=storage.buckets.create 
          OR protoPayload.methodName=storage.buckets.delete
        )
        OR
        (
          (
            protoPayload.methodName=storage.objects.delete 
            OR protoPayload.methodName=storage.objects.create 
          )
          AND
          (
            resource.labels.bucket_name="bucket-1" 
            OR resource.labels.bucket_name="bucket-2"
          )
        )
      )
    )

Pay close attention to the last condition:

resource.labels.bucket_name="bucket-1" 
OR resource.labels.bucket_name="bucket-2"

The customer must replace the placeholder values "bucket-1" and "bucket-2" with the names of the buckets they have added in their policy. To add more buckets, they must add additional OR resource.labels.bucket_name="bucket-name" conditions, where "bucket-name" is the name of the GCP bucket.
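
For customers who prefer to script this change rather than edit the filter in the console, the following is a minimal sketch using the google-cloud-logging Python client. The sink name ns_sink matches the example above; the bucket names are placeholders, and the sketch assumes the sink was created at the project level (org- or folder-level sinks require the ConfigServiceV2 API instead).

    from google.cloud import logging_v2

    SINK_NAME = "ns_sink"               # the sink created for DLP/Malware Scan for GCP
    BUCKETS = ["bucket-1", "bucket-2"]  # placeholders -- your policy's bucket names

    # Build the bucket_name clause from the bucket list.
    bucket_clause = " OR ".join(f'resource.labels.bucket_name="{b}"' for b in BUCKETS)

    new_filter = (
        "(resource.type=folder AND (protoPayload.methodName=CreateFolder"
        " OR protoPayload.methodName=DeleteFolder))"
        " OR (resource.type=project AND (protoPayload.methodName=CreateProject"
        " OR protoPayload.methodName=DeleteProject))"
        " OR (resource.type=gcs_bucket AND ((protoPayload.methodName=storage.buckets.create"
        " OR protoPayload.methodName=storage.buckets.delete)"
        " OR ((protoPayload.methodName=storage.objects.delete"
        " OR protoPayload.methodName=storage.objects.create)"
        f" AND ({bucket_clause}))))"
    )

    client = logging_v2.Client()  # uses Application Default Credentials
    sink = client.sink(SINK_NAME)
    sink.reload()                 # fetch the current destination so update() preserves it
    sink.filter_ = new_filter
    sink.update()                 # push the new inclusion filter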

Explanation

The enhanced inclusion filter achieves two things:

  • It ensures that when any new buckets are created, or any existing buckets are deleted, Netskope can add or remove the names of those buckets on the Policy creation page. This allows the customer to create policies for newly created buckets and avoids showing buckets that no longer exist in GCP. Similarly, when folders/projects are created or deleted, the corresponding events are sent to Netskope.

  • Object-level changes (creation/deletion) sent to Netskope will be limited to only the buckets specified in the filter. This ensures the log sink doesn’t send extra notifications from irrelevant buckets.

    The log sink applies the filter and sends the filtered messages to the GCP Pub/Sub topic, which has a subscription attached. This subscription has a default retention period of 7 days.

    After the sink filter is updated, the subscription may still hold older events, both from buckets the customer does not want scanned and from buckets covered by the new filter. Netskope will still attempt to scan the buckets referenced in these older events. To prevent this, the customer can purge the messages in the subscription to remove all older events.
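
    One way to purge the backlog is to seek the subscription to the current time, which acknowledges every message published before that moment. Below is a minimal sketch with the Pub/Sub Python client; the project ID is a placeholder, and ns_subscription is a hypothetical name for the subscription attached to the sink’s topic.

      from google.cloud import pubsub_v1
      from google.protobuf import timestamp_pb2

      PROJECT_ID = "<gcp_project_id>"      # placeholder
      SUBSCRIPTION_ID = "ns_subscription"  # hypothetical subscription name

      subscriber = pubsub_v1.SubscriberClient()
      subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

      # Seeking to "now" marks all previously published messages as acknowledged,
      # so Netskope will not receive the older, pre-filter events.
      now = timestamp_pb2.Timestamp()
      now.GetCurrentTime()
      subscriber.seek(request={"subscription": subscription_path, "time": now})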