
Integrate With Databricks Unity Catalog

Introduction

This guide offers a detailed walkthrough for integrating StreamNative Cloud with Databricks Unity Catalog. It covers essential aspects such as configuring authentication, networking, storage buckets, catalogs, and other key components. By following this guide, you will enable seamless interaction between StreamNative Cloud and Databricks Unity Catalog.

Setup Databricks

Before integrating Databricks with StreamNative Cloud, please ensure the following prerequisites are fulfilled. You can also watch this video to learn more about preparing your Databricks account.

AWS Permissions:

Setting up a Databricks workspace requires appropriate AWS permissions. Ensure you are logged into your AWS account with an active session and administrative privileges to enable seamless authorization. To simplify the required permissions, we recommend using the same AWS account you used to create your StreamNative BYOC Cloud Environment.

Subscription Level:

In Databricks Cloud, upgrade to an Enterprise subscription. The Premium plan imposes restrictions on specific operations essential for this integration.

Step 1: Create Databricks Workspace

Click Create workspace to proceed

Cloud Provider

Choose Quickstart, and click Next

Cloud Provider

Enter your workspace name and choose the AWS region. In this example, the bucket is located in the us-east-2 region, so that region is selected. To avoid cross-regional data transfer costs, we recommend choosing the same region in which your StreamNative Cloud Environment was created.

Finally, click Start Quickstart to proceed.

Cloud Provider

The process will redirect you to the AWS Management Console. Enable the acknowledgment checkbox and click Create Stack to continue.

Cloud Provider

The system will then initiate the setup tasks in AWS, which may take some time to complete.

After a few minutes, you will see the CREATE_COMPLETE event for the workspace, indicating the setup is successfully completed.

Cloud Provider
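If you prefer to monitor the stack from the command line instead of the console, you can poll its status with the AWS CLI. This is a minimal sketch; the stack name shown is a hypothetical placeholder, so use the name assigned by the Databricks Quickstart.

# Check the CloudFormation stack status; it should eventually report CREATE_COMPLETE.
aws cloudformation describe-stacks \
  --stack-name <your-databricks-workspace-stack> \
  --query 'Stacks[0].StackStatus' \
  --output text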

Next, return to the Databricks console. You will see that the workspace has been successfully created. Click Open to access the Unity Catalog console.

Cloud Provider

The Unity Catalog console will appear as illustrated below.

Cloud Provider

Step 2: Configure network and external catalog access settings

Configure network settings

By default, the Databricks Unity Catalog server restricts access from external engines. To enable our engine to access the Unity Catalog server, we need to set enableIpAccessLists: true. To configure this network setting, you will need a Personal Access Token (PAT).

Generate a Personal Access Token in the Databricks console by following the steps listed below.

In your Databricks workspace, click your Databricks username in the top bar, and then select Settings from the drop-down menu.

Then navigate to Developer → Access tokens and click Manage.

Cloud Provider

Generate a new token

Cloud Provider

Copy the generated token.

Cloud Provider
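Before moving on, you can optionally sanity-check the token against the Unity Catalog REST API. This is a minimal sketch, assuming your workspace URL has the usual form https://<your-workspace>.cloud.databricks.com; an HTTP 200 response listing catalogs confirms the token works.

# List catalogs using the new PAT.
curl -s -H "Authorization: Bearer <your_token>" \
  "https://<your-workspace>.cloud.databricks.com/api/2.1/unity-catalog/catalogs"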

To update this configuration, use your PAT to execute a curl command with the following details:

curl -X POST <Unity_Catalog_URI>/api/2.0/unity-catalog/config \
-H "Authorization: Bearer <your_token>" \
-H "Content-Type: application/json" \
-d '{"enableIpAccessLists": true}'

Replace <Unity_Catalog_URI> with your console URI and <your_token> with your access token.

You can also use this PAT to configure StreamNative authentication with Databricks Unity Catalog. This process is discussed in greater detail in Step 3.

Configure external data access

Click Catalog → Settings → Metastore to proceed.

Cloud Provider

Enable the External Data Access option.

Cloud Provider

Step 3: Configure Unity Catalog access settings

Part A: Choose authentication method

There are two ways to authenticate and authorize a StreamNative cluster to access Databricks Unity Catalog.

Personal Access Token (PAT)

Databricks recommends using the PAT for development and testing purposes only. Use the PAT created in Step 2 when you configure the Catalog Integration while creating a StreamNative cluster.

OAuth2

Databricks recommends using OAuth2 to configure authentication between StreamNative Cloud and Databricks Unity Catalog for production deployments. To configure OAuth2 Machine-to-Machine (M2M) authentication, follow the steps below to generate a Client ID and Secret for a Service Principal.

Within the Databricks workspace, navigate to Identity & Access and click the Manage button next to Service principals.

Identity & Access

Click Add service principal to create a new principal, or click an existing service principal.

Identity & Access

Click the Secrets tab, then click Generate secret to generate the Client ID and Secret that will be used later when configuring a StreamNative cluster.

Identity & Access
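To confirm the Client ID and Secret work before using them in StreamNative Cloud, you can request a workspace-level OAuth token with the client-credentials grant. This is a minimal sketch, assuming your workspace URL and the standard Databricks OAuth M2M token endpoint.

# Exchange the service principal's Client ID and Secret for a short-lived access token.
curl -s -u "<client_id>:<client_secret>" \
  "https://<your-workspace>.cloud.databricks.com/oidc/v1/token" \
  -d "grant_type=client_credentials" \
  -d "scope=all-apis"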

Part B: Configure access permissions

Grant the necessary privileges for the catalog to ensure appropriate access permissions.

Cloud Provider

Principal:

  • For the PAT, select All Accounts.
  • For OAuth2, select the Service Principal. The Service Principal might not show up in the drop-down, so you may need to search for it.

Privilege Presets: Choose Data Editor, which will automatically select the relevant privileges.

External Use Schema: Ensure this option is Enabled.

Cloud Provider
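If you prefer SQL over the UI, roughly equivalent grants can be issued from a Databricks SQL editor. This is a sketch only, assuming a hypothetical catalog named main, a schema named default, and the service principal's application ID as the principal; the Data Editor preset in the UI remains the authoritative set of privileges.

-- Roughly equivalent to the Data Editor preset, granted at the catalog level.
GRANT USE CATALOG, USE SCHEMA, SELECT, MODIFY ON CATALOG main TO `<service-principal-application-id>`;
-- Required so external engines such as StreamNative can access the schema.
GRANT EXTERNAL USE SCHEMA ON SCHEMA main.default TO `<service-principal-application-id>`;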

Step 4: Setup storage bucket

Choose the bucket location and grant access to StreamNative Cloud. You have two choices for setting up a storage bucket.

Use StreamNative provided bucket

This process involves deploying the StreamNative BYOC Cloud Environment. StreamNative will automatically assign the necessary permissions to this bucket. To proceed, you will need to complete the steps for granting vendor access, creating a Cloud Connection, and setting up the Cloud Environment.

Use your own bucket

You need to create your own storage bucket, with the option to create a subfolder if required. StreamNative will require access to this storage bucket. To grant access, execute the following Terraform module.

Create a Terraform script and save it as main.tf

module "sn_managed_cloud" {
  source = "github.com/streamnative/terraform-managed-cloud//modules/aws/volume-access?ref=v3.18.0"

  external_id = "<your-organization-name>"
  role = "<your-role-name>"
  buckets      = [
    "<your-bucket-name>/<your-bucket-path>",
  ]

  account_ids = [
    "<your-aws-account-id>"
  ]
}

You can find your organization name in the StreamNative console, as shown below:

Organization name

Before executing the Terraform module, you must define the following environment variables. These variables are used to grant you access to the AWS account where the S3 bucket is located.

export AWS_ACCESS_KEY_ID="<YOUR_AWS_ACCESS_KEY_ID>"
export AWS_SECRET_ACCESS_KEY="<YOUR_AWS_SECRET_ACCESS_KEY>"
export AWS_SESSION_TOKEN="<YOUR_AWS_SESSION_TOKEN>"

Run the Terraform module

terraform init
terraform plan
terraform apply

Step 5: Grant bucket permissions to the Databricks Unity Catalog role

During the Databricks workspace initialization, an AWS role is automatically created for the Unity Catalog. You can view the Unity Catalog role's ARN (Amazon Resource Name) in the AWS console.

Click Catalog → Settings → Credentials to proceed.

Catalog Credentials

Copy the role ARN as shown in the figure below. We will need it in the next steps.

catalog_arn

To grant bucket permission to this role, follow these steps:

  • Access the AWS IAM Console: Log in to the AWS Management Console and navigate to the IAM service.
  • Search for the Role: In the IAM dashboard, search for the Unity Catalog IAM role whose ARN you copied above.
  • View Role Details: Click the role to open its detail page.

Click on the attached policy for the role, then select Edit Policy to make the necessary modifications.

Once you view the policy content, you will notice that it already grants permissions to the bucket, which was created by Databricks. Since we are not using this bucket, we need to modify the policy to grant permissions to our bucket.

Edit catalog policy

Edit the policy to include the ARN of your bucket. This can be the bucket created by StreamNative or the user-created bucket from Step 4. Please note that when entering the bucket name, you should specify only the bucket name itself and exclude the path.

To view the name of the storage bucket created by StreamNative, log in to the AWS Cloud Console and navigate to the AWS S3 service. Search for a bucket with the following naming format: <YOUR_CLOUD_ENVIRONMENT_ID>-tiered-storage-snc.

Edit catalog policy
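If you prefer the CLI, you can locate the bucket with the AWS CLI instead. This is a quick sketch, assuming your AWS credentials are exported as in Step 4.

# List buckets and filter for the StreamNative tiered-storage naming convention.
aws s3 ls | grep -- '-tiered-storage-snc'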

Ensure the correct permissions are applied for access. For example, update the policy to include the bucket ARN both for the bucket root and with the /* suffix, as shown in the picture below. In the image we are using a custom bucket called test-databrick-unity-catalog.

Edit catalog policy
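For reference, the edited statement for the example bucket might look like the following. This is a minimal sketch: keep whatever actions the Databricks-generated policy already lists, and only point the Resource ARNs at your bucket.

{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject",
    "s3:ListBucket",
    "s3:GetBucketLocation"
  ],
  "Resource": [
    "arn:aws:s3:::test-databrick-unity-catalog",
    "arn:aws:s3:::test-databrick-unity-catalog/*"
  ]
}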

After editing the policy, click Next to review your changes, and then click Save Changes to apply the updated permissions.

Setup StreamNative Cluster

Before creating a cluster, make sure you complete the steps for granting vendor access, creating a Cloud Connection, and setting up the Cloud Environment. You can also watch this video to learn more about deploying a StreamNative Cluster.

Step 1: Create an Ursa cluster in StreamNative Cloud Console

In this section we will create and set up a cluster in StreamNative Cloud. Log in to StreamNative Cloud and click Create an instance and deploy cluster.

Edit catalog policy

Click on Deploy BYOC

Deploy BYOC

Enter the instance name, select the Cloud Connection, select the URSA Engine, and click Cluster Location.

Deploy BYOC

Enter the cluster name, select the Cloud Environment, and select Multi AZ.

Deploy BYOC

To configure the Storage Location, there are two options:

Select Use Existing BYOC Bucket to choose the bucket created by StreamNative

Deploy BYOC

Select Use Your Own Bucket to choose your own storage bucket by entering the following details:

  • AWS role
  • Region
  • Bucket name
  • Bucket path
  • Confirm that StreamNative has been granted the necessary permissions to access your S3 bucket. These permissions were granted by running the Terraform module in Step 4.

Deploy BYOC

There are two options for configuring authentication.

The Personal Access Token (PAT) is a suitable option for development and testing.

Deploy BYOC

OAuth2-based authentication is recommended for production scenarios.

Deploy BYOC

After entering all the details, deploy the cluster.

Once the cluster is successfully deployed, you can move to the next step and populate data in the cluster.

Step 2: Produce Kafka messages to a topic

Follow the creating and running a producer section to produce Kafka messages to a topic.
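As a quick reference, the following is a minimal sketch using the Apache Kafka console producer. The endpoint, port, security settings, and credentials shown are placeholders only; copy the exact client configuration for your cluster from the StreamNative Cloud Console.

# Hypothetical client configuration; replace every placeholder with the values
# shown for your cluster in the StreamNative Cloud Console.
cat > client.properties <<'EOF'
bootstrap.servers=<KAFKA_SERVICE_URL>:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<YOUR_USERNAME>" \
  password="<YOUR_API_KEY>";
EOF

# Produce a few test messages (type one message per line, then Ctrl+C to exit).
bin/kafka-console-producer.sh \
  --bootstrap-server <KAFKA_SERVICE_URL>:9093 \
  --producer.config client.properties \
  --topic my-topic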

Step 3: Create an external location for the Databricks Unity Catalog

The Unity Catalog requires an external location to access the S3 bucket. To create an external location, follow these steps:

Edit catalog policy

Click on Create External Location

Edit catalog policy

We need to select the Manual option because the AWS Quickstart is not suitable for our scenario. Quickstart automatically creates a new bucket during the setup process, whereas in our case, the bucket has already been pre-created. Therefore, choosing the Manual option aligns with our requirements.

Edit catalog policy

Enter the details listed below to create an external location.

  • External Location Name: Enter any name of your choice.
  • URL: Specify the URL of the storage bucket.
    • For a StreamNative-provided bucket, the path has the following format: s3://<CLOUD_ENVIRONMENT_ID>/<CLUSTER_ID>/compaction
    • For a user-provided bucket, the path has the following format: s3://<CUSTOM_BUCKET>/<PATH>/compaction
  • Storage Credential: Select the IAM role from the drop-down. You can fetch this role from the Databricks workspace you created in Step 1.

Click Create, and the external location will be successfully created.

Edit catalog policy

After the external location is successfully created, navigate to the next step.

Review Ingested Data In Databricks

Step 1: Check the Databricks Unity Catalog console

In the Databricks Unity Catalog console, you will see that a table has already been created and is available for use.

Table in Databricks Unity Catalog

[NOTE]: StreamNative Cloud adheres to the following conventions for converting special characters:

  • / is replaced with __
  • - is replaced with ___
  • . is replaced with ____

Step 2: Check the storage bucket

The messages from the topic will be automatically offloaded to the configured storage bucket as shown in the figure below.

S3 Bucket In External Location

Step 3: View ingested data in Databricks Unity Catalog

At this point users can view the ingested data in the Unity Catalog as shown in the figure below.

Log in to the Databricks workspace and navigate to the catalog to view the ingested data in the tables.

View ingested data in catalog
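Alternatively, you can query the table directly from a Databricks SQL editor or notebook. This is a minimal sketch with placeholder catalog, schema, and table names; use the table name shown in the catalog browser.

-- Inspect a few rows of the ingested table.
SELECT * FROM <catalog>.<schema>.<table> LIMIT 10;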
