
Migrating On-Premise SQL Server Data to Snowflake Using Snowpipe and AWS S3


Introduction:


Data migration is a crucial process for organizations transitioning their workloads to the cloud. This guide will provide a detailed walkthrough of migrating data from an on-premise SQL Server to Snowflake, a widely used cloud-based data warehousing platform.


We'll leverage `Snowpipe` and AWS S3 event notifications to create a pipeline that automatically loads data into Snowflake as soon as a file is uploaded to an AWS S3 bucket.


Steps


1. Export Data from SQL Server to CSV


2. Set Up an AWS S3 Bucket and Upload the CSV File


3. Set Up Snowflake


3.1 Create a Table in Snowflake

3.2 Create a Stage in Snowflake

3.3 Create a File Format in Snowflake

3.4 Create a `Snowpipe` in Snowflake


4. Configure AWS S3 Event Notifications and Lambda Function


4.1 Create an AWS Lambda Function

4.2 Add the Required Code and Permissions to the Lambda Function

4.3 Enable S3 Event Notifications


5. Automate the process of exporting data from SQL Server to a CSV file and uploading it to an AWS S3 bucket


5.1 Install the necessary tools and libraries

5.2 Create a script that exports data from SQL Server to a CSV file and uploads it to the AWS S3 bucket

5.3 Schedule the script to run periodically


1. Export Data from SQL Server to CSV


First, we need to export the data from SQL Server to a CSV file. The simplest way is the `bcp` command-line utility that ships with SQL Server (alternatively, run the query in SQL Server Management Studio and use "Save Results As" to save a CSV). Make sure to replace `[table_name]`, `[file_path]`, and `[server_name]` with the actual values, and use `-U`/`-P` instead of `-T` if you are not using Windows authentication.


REM Replace [table_name], [file_path], and [server_name] with the actual values
bcp "SELECT * FROM [table_name]" queryout "[file_path]" -c -t, -S [server_name] -T

2. Set Up an AWS S3 Bucket and Upload the CSV File


Create an S3 bucket in the AWS Management Console to store the CSV file. Use the AWS CLI or SDKs to upload the CSV file to the bucket. Make sure to replace `[local_file_path]`, `[bucket_name]`, and `[key]` with the appropriate values.

aws s3 cp [local_file_path] s3://[bucket_name]/[key]
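
For completeness, the bucket itself can also be created from the AWS CLI before uploading (a minimal sketch; `[bucket_name]` and the region are placeholders to adjust):

aws s3 mb s3://[bucket_name] --region us-east-1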

3. Set Up Snowflake


In this step, we'll set up the necessary components in Snowflake to handle the data migration. This includes creating a table, a stage, a file format, and a `Snowpipe`.


3.1 Create a Table in Snowflake


Create a table in Snowflake with a structure that matches your SQL Server table:

-- Replace the appropriate values for your dataset

CREATE TABLE [database_name].[schema_name].[table_name] (
    [column_name] [data_type],
    ...
);

3.2 Create a Stage in Snowflake


Create a stage in Snowflake that references the S3 bucket containing the CSV files:

CREATE STAGE [database_name].[schema_name].[stage_name]
    URL = 's3://[bucket_name]/'
    CREDENTIALS = (AWS_KEY_ID='[aws_key_id]'         
                   AWS_SECRET_KEY='[aws_secret_key]');

3.3 Create a File Format in Snowflake


Create a file format in Snowflake that specifies the format of the CSV files being loaded:

-- Replace the appropriate values for your dataset

CREATE FILE FORMAT [database_name].[schema_name].[file_format_name]
    TYPE = 'CSV'
    FIELD_DELIMITER = ','
    RECORD_DELIMITER = '\n'
    FIELD_OPTIONALLY_ENCLOSED_BY = '"';

3.4 Create a Snowpipe in Snowflake


Create a Snowpipe that loads data from the stage to the Snowflake table using the created file format:

CREATE PIPE [database_name].[schema_name].[pipe_name]
    AS COPY INTO [database_name].[schema_name].[table_name]
    FROM @[database_name].[schema_name].[stage_name]
    FILE_FORMAT = [database_name].[schema_name].[file_format_name]
    ON_ERROR = 'CONTINUE';
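
Once the pipe exists, you can verify it from the command line with the SnowSQL client (a sketch; the account, user, and fully qualified pipe name are placeholders, and SnowSQL will prompt for the password):

# Check the pipe's execution state and pending file count
snowsql -a your_account -u your_user \
    -q "SELECT SYSTEM\$PIPE_STATUS('[database_name].[schema_name].[pipe_name]')"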

4. Configure AWS S3 Event Notifications and Lambda Function


In this step, we'll set up an AWS Lambda function to process the uploaded CSV files and load the data into Snowflake. Additionally, we'll configure S3 event notifications to trigger the Lambda function when a new file is uploaded.


4.1 Create an AWS Lambda Function


Create an AWS Lambda function in the AWS Management Console. Provide a name for your function and choose the desired runtime (e.g., Python 3.8). Create a new execution role with basic Lambda permissions.
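
If you prefer scripting this step, the function can also be created with the AWS CLI. The sketch below assumes you have zipped your handler into `function.zip` and that an execution role named `lambda-s3-snowflake-role` already exists; the function name and account ID are placeholders:

# Create the Lambda function from a local deployment package (names and ARNs are placeholders)
aws lambda create-function \
    --function-name snowpipe-loader \
    --runtime python3.8 \
    --handler lambda_function.lambda_handler \
    --role arn:aws:iam::123456789012:role/lambda-s3-snowflake-role \
    --zip-file fileb://function.zip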


4.2 Add the Required Code and Permissions to the Lambda Function


Add the necessary code to your Lambda function to handle the event, such as loading data into Snowflake or any other processing you require. Make sure to save and deploy your changes.


In the Lambda function's configuration, navigate to the "Permissions" tab and ensure that the execution role associated with the function has the necessary permissions to access the S3 bucket and perform any required actions (e.g., reading objects).
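
For example, S3 read access can be granted with an inline policy on the execution role. The snippet below is only a sketch; the role name, policy name, and bucket name are placeholders for your own values:

# Attach an inline policy that lets the execution role read objects from the bucket
aws iam put-role-policy \
    --role-name lambda-s3-snowflake-role \
    --policy-name s3-read-access \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::your-bucket-name",
                "arn:aws:s3:::your-bucket-name/*"
            ]
        }]
    }'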


Here's a sample Lambda function that connects to Snowflake and runs an `ALTER PIPE ... REFRESH` so the pipe loads the newly staged files:

import json
import snowflake.connector

# Replace with your Snowflake credentials and pipe details
SNOWFLAKE_ACCOUNT = 'your_snowflake_account'
SNOWFLAKE_USER = 'your_snowflake_user'
SNOWFLAKE_PASSWORD = 'your_snowflake_password'
SNOWFLAKE_PIPE = 'your_database.your_schema.your_pipe'


def lambda_handler(event, context):
    # Establish a connection to Snowflake
    conn = snowflake.connector.connect(
        user=SNOWFLAKE_USER,
        password=SNOWFLAKE_PASSWORD,
        account=SNOWFLAKE_ACCOUNT
    )

    try:
        # Log the S3 objects that triggered this invocation
        for record in event['Records']:
            s3_bucket = record['s3']['bucket']['name']
            s3_key = record['s3']['object']['key']
            print(f"New file staged: s3://{s3_bucket}/{s3_key}")

        # Ask the pipe to load any staged files it has not ingested yet
        with conn.cursor() as cur:
            cur.execute(f"ALTER PIPE {SNOWFLAKE_PIPE} REFRESH")
    finally:
        # Close the Snowflake connection
        conn.close()

    return {
        'statusCode': 200,
        'body': json.dumps('Data load triggered successfully!')
    }
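
Note that the snowflake-connector-python package is not included in the Lambda runtime, so it must be bundled into your deployment package or provided as a Lambda layer. A minimal packaging sketch, assuming the handler above is saved as lambda_function.py (the connector has compiled dependencies, so build the package on a Linux environment compatible with the Lambda runtime):

# Install the Snowflake connector locally and zip it together with the handler
pip install snowflake-connector-python -t package/
cp lambda_function.py package/
cd package && zip -r ../function.zip . && cd ..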

4.3 Enable S3 Event Notifications


Follow these steps to enable S3 event notifications to trigger a Lambda function when a new object is created in an S3 bucket:

  1. Open the AWS Management Console and navigate to the Amazon S3 service.

  2. Select the desired S3 bucket for which you want to enable event notifications.

  3. Click on the "Properties" tab located near the top of the page.

  4. Scroll down to the "Event notifications" section and click on the "Create event notification" button.

  5. In the "General configuration" section, provide a unique name for the event notification.

  6. Under the "Event types" section, select the "All object create events" checkbox. This configuration triggers the Lambda function whenever a new object is created in the bucket.

  7. In the "Destination" section, click on the "Lambda function" radio button to set it as the event notification destination.

  8. Use the "Choose a Lambda function" dropdown list to select the Lambda function you created earlier.

  9. Click the "Save changes" button to finalize the event notification configuration. Your S3 bucket is now configured to trigger the Lambda function when a new object is uploaded.

With these steps, you have successfully enabled S3 event notifications to trigger a Lambda function when a new object is created in your chosen S3 bucket. This setup allows for automatic processing of new files, such as loading data into Snowflake or performing other required tasks.
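
The same configuration can also be scripted with the AWS CLI. The sketch below reuses the placeholder function and bucket names from the earlier steps; note that S3 must first be granted permission to invoke the function:

# Allow the S3 bucket to invoke the Lambda function
aws lambda add-permission \
    --function-name snowpipe-loader \
    --statement-id s3-invoke \
    --action lambda:InvokeFunction \
    --principal s3.amazonaws.com \
    --source-arn arn:aws:s3:::your-bucket-name

# Register the "object created" notification that targets the function
aws s3api put-bucket-notification-configuration \
    --bucket your-bucket-name \
    --notification-configuration '{
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:snowpipe-loader",
            "Events": ["s3:ObjectCreated:*"]
        }]
    }'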


5. Automate the process of exporting data from SQL Server to a CSV file and uploading it to an AWS S3 bucket


To automate the process of exporting data from SQL Server to a CSV file and uploading it to an AWS S3 bucket without manual intervention, you can create a script that combines these tasks and schedule it to run periodically using a task scheduler like Windows Task Scheduler or cron on Linux.


Here is a high-level overview of the steps to create and schedule such a script:

5.1 Install the necessary tools and libraries:

  • Install the SQL Server Command Line Utilities (e.g., sqlcmd) if not already installed. Installation instructions are available in the Microsoft documentation.

  • Install the AWS CLI if not already installed (a Linux install sketch follows this list).
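
For reference, here is one way to install the AWS CLI v2 on a Linux x86_64 machine (a sketch of the officially documented installer; adjust for your platform):

# Download and run the official AWS CLI v2 installer
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install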


5.2 Create a script that exports data from SQL Server to a CSV file and uploads it to the AWS S3 bucket:


For Windows, create a batch script (export_and_upload.bat) with the following content:


@echo off
set sql_server="your_sql_server_instance"
set sql_database="your_sql_database"
set sql_user="your_sql_user"
set sql_password="your_sql_password"
set sql_query="SELECT * FROM your_table"
set output_file="your_output_file.csv"
set aws_s3_bucket="your_s3_bucket"
set aws_s3_key="your_s3_key"

sqlcmd -S %sql_server% -d %sql_database% -U %sql_user% -P %sql_password% -Q %sql_query% -s "," -o %output_file% -W -k 1

aws s3 cp %output_file% s3://%aws_s3_bucket%/%aws_s3_key%


For Linux, create a shell script (export_and_upload.sh) with the following content:

#!/bin/bash
sql_server="your_sql_server_instance"
sql_database="your_sql_database"
sql_user="your_sql_user"
sql_password="your_sql_password"
sql_query="SELECT * FROM your_table"
output_file="your_output_file.csv"
aws_s3_bucket="your_s3_bucket"
aws_s3_key="your_s3_key"

sqlcmd -S "$sql_server" -d "$sql_database" -U "$sql_user" -P "$sql_password" -Q "$sql_query" -s "," -o "$output_file" -W -k 1

aws s3 cp "$output_file" "s3://$aws_s3_bucket/$aws_s3_key"

Make sure to replace the placeholder values with your actual SQL Server, database, table, and S3 bucket information.


5.3 Schedule the script to run periodically:


For Windows:

  • Open Windows Task Scheduler and create a new task.

  • Configure the task with the desired schedule, such as daily or weekly.

  • Set the action for the task to "Start a program" and point it to the export_and_upload.bat script you created earlier.

For Linux:

  • Open the terminal and type crontab -e to edit the cron table for the current user.

  • Add a new line with the desired schedule (e.g., 0 0 * * * /path/to/your/export_and_upload.sh to run the script daily at midnight).

  • Save and exit the editor.

With these steps, your script will run automatically according to the specified schedule, exporting data from SQL Server to a CSV file and uploading it to an AWS S3 bucket without manual intervention.
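
If you prefer to create the Windows task from the command line instead of the Task Scheduler UI, a schtasks one-liner can register the same job (a sketch; the task name, schedule, and script path are placeholders):

schtasks /create /tn "ExportAndUpload" /tr "C:\scripts\export_and_upload.bat" /sc DAILY /st 00:00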


Conclusion:


This comprehensive guide provides a step-by-step walkthrough of migrating data from an on-premise SQL Server to Snowflake using Snowpipe and AWS S3 event notifications. By following these steps, you can set up an automated process that loads data into Snowflake as soon as new files are added to an S3 bucket.


This approach helps streamline your data migration process and ensures that your data is always up-to-date in the cloud-based data warehousing platform.
