Snowflake
The Conduit Platform supports Snowflake as a destination by default. The Snowflake destination connector connects to a Snowflake instance and writes records to a table.
Required Configurations
Name | Description | Required | Default |
---|---|---|---|
username | Snowflake username. | Yes | |
password | Snowflake password. | Yes | |
host | Snowflake instance host. | Yes | |
database | The database you wish to write to. | Yes | |
schema | The schema you wish to use. | Yes | |
warehouse | The warehouse you wish to use. | Yes | |
stage | The stage for uploading files before merging into the destination tables. | Yes | |
primaryKey | The primary key of the source data. | Yes | |
format | The data format used to stage and copy data into Snowflake. | Yes | |
Looking for something else? See advanced configurations.
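As a sketch, the required settings above might appear in a Conduit pipeline configuration file like the following. The pipeline ID, connector ID, plugin name, and exact file layout are assumptions that depend on your Conduit version and deployment; all values shown are placeholders.

```yaml
version: 2.2
pipelines:
  - id: my-pipeline                # hypothetical pipeline ID
    status: running
    connectors:
      - id: snowflake-destination  # hypothetical connector ID
        type: destination
        plugin: snowflake          # plugin name may differ per installation
        settings:
          username: CONDUIT_USER
          password: very-secure-password
          host: myaccount.snowflakecomputing.com
          database: MY_DATABASE
          schema: MY_SCHEMA
          warehouse: MY_WAREHOUSE
          stage: MY_STAGE
          primaryKey: id
          format: csv
```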
Upsert Behavior
The destination connector takes an `sdk.Record` and parses it into a valid SQL query. Because it is designed to handle varying payloads and keys, each record is parsed and upserted individually.
To improve performance, the connector deduplicates records within each batch, which reduces the number of operations performed on the table and allows processing to be parallelized. Additionally, the connector `MERGE`s records into the destination table by matching on the row's `primaryKey`. This prevents duplicate records, since Snowflake does not enforce uniqueness on primary keys.
All staged files are compressed and uploaded in a way that prevents data loss and minimizes latency.
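Conceptually, the upsert resembles a Snowflake `MERGE` statement like the sketch below. The table, source, and column names here are hypothetical; the connector generates the actual statement from each record's payload and the configured `primaryKey`.

```sql
-- Illustrative sketch: upsert staged records into the destination table,
-- matching on the configured primary key ("ID" here).
MERGE INTO MY_TABLE AS dst
USING STAGED_RECORDS AS src
  ON dst.ID = src.ID
WHEN MATCHED THEN
  UPDATE SET dst.NAME = src.NAME
WHEN NOT MATCHED THEN
  INSERT (ID, NAME) VALUES (src.ID, src.NAME);
```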
User privileges
The Snowflake user you provide to the Conduit Platform must have the following privileges:

- `USAGE` on the warehouse, database, and schema
- `CREATE STAGE` and `CREATE TABLE` on the schema
- `USAGE` and `CREATE` on the file format
- `USAGE` and `OWNERSHIP` on the stage
- `SELECT` on the table, external table, and view, and `OWNERSHIP` on the table

To set this up, you may access Snowflake and run the following queries:
- Create a unique role and user for the platform. You may replace `CONDUIT_USER` and `CONDUIT_ROLE` with whatever names you prefer.
```sql
USE ROLE ACCOUNTADMIN;
CREATE ROLE CONDUIT_ROLE;
CREATE USER CONDUIT_USER PASSWORD='very-secure-password';
```
- Grant the platform role to the newly created platform user.
```sql
GRANT ROLE CONDUIT_ROLE TO USER CONDUIT_USER;
ALTER USER CONDUIT_USER SET DEFAULT_ROLE = CONDUIT_ROLE;
```
- Create a new database and schema. Replace `MY_DATABASE` and `MY_SCHEMA` with your respective database and schema names.
```sql
CREATE DATABASE MY_DATABASE;
USE MY_DATABASE;
CREATE SCHEMA MY_SCHEMA;
```
- Grant the `USAGE` privilege on the warehouse, database, and schema. Replace `MY_WAREHOUSE`, `MY_DATABASE`, and `MY_SCHEMA` with your respective warehouse, database, and schema names.
```sql
GRANT USAGE ON WAREHOUSE MY_WAREHOUSE TO ROLE CONDUIT_ROLE;
GRANT USAGE ON DATABASE MY_DATABASE TO ROLE CONDUIT_ROLE;
GRANT USAGE ON SCHEMA MY_SCHEMA TO ROLE CONDUIT_ROLE;
```
- Grant the `CREATE` privilege for `STAGE`, `FILE FORMAT`, and `TABLE` on your respective `SCHEMA` to the platform role.
```sql
GRANT CREATE STAGE ON SCHEMA MY_SCHEMA TO ROLE CONDUIT_ROLE;
GRANT CREATE FILE FORMAT ON SCHEMA MY_SCHEMA TO ROLE CONDUIT_ROLE;
GRANT CREATE TABLE ON SCHEMA MY_SCHEMA TO ROLE CONDUIT_ROLE;
```
- Grant `ALL PRIVILEGES` on the `DATABASE`, `SCHEMA`, and `ALL TABLES IN SCHEMA` to your `ACCOUNTADMIN` role. This provides visibility into the destination table once the application is running.
```sql
GRANT ALL PRIVILEGES ON DATABASE MY_DATABASE TO ROLE ACCOUNTADMIN;
GRANT ALL PRIVILEGES ON SCHEMA MY_DATABASE.MY_SCHEMA TO ROLE ACCOUNTADMIN;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA MY_DATABASE.MY_SCHEMA TO ROLE ACCOUNTADMIN;
```
If you do not have the privileges to perform the actions above, please contact your Snowflake administrator.
Advanced Configurations
Name | Description | Required | Default |
---|---|---|---|
namingPrefix | The prefix prepended to the update_at , deleted_at , create_at columns on the destination table. | No | conduit_ |
compression | The compression to use when staging files in Snowflake. | No | |
sdk.batch.size | The maximum size of a batch before it is written to Snowflake. | No | 1000 |
sdk.batch.delay | The maximum delay before an incomplete batch is written to the destination. | No | |
csvGoRoutines | The number of goroutines used to concurrently process CSV rows. | No | |
fileUploadThreads | The number of threads used for PUT file uploads. | No | |
keepAlive | Whether to keep the session alive even when the connection is idle. | No | |
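For example, batching behavior could be tuned by adding the SDK settings to the connector's `settings` block. This is a hypothetical fragment; the value formats (particularly the delay duration and compression name) are assumptions to verify against your Conduit version.

```yaml
        settings:
          # ...required settings omitted...
          sdk.batch.size: 5000   # flush after 5000 records
          sdk.batch.delay: 10s   # or after 10 seconds, whichever comes first
          compression: gzip      # assumed value; check supported compressions
```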