Snowflake

The Conduit Platform supports Snowflake as a destination by default.

The Snowflake destination connector connects to a Snowflake instance and writes records to a table.

Required Configurations

| Name | Description | Required | Default |
| --- | --- | --- | --- |
| `username` | Snowflake username. | Yes | |
| `password` | Snowflake password. | Yes | |
| `host` | Snowflake instance host. | Yes | |
| `database` | The database you wish to write to. | Yes | |
| `schema` | The schema you wish to use. | Yes | |
| `warehouse` | The warehouse you wish to use. | Yes | |
| `stage` | The stage for uploading files before merging into the destination tables. | Yes | |
| `primaryKey` | The primary key of the source data. | Yes | |
| `format` | The file format used to upload and copy data into Snowflake. | Yes | |

Looking for something else? See advanced configurations.
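As a rough illustration, the required settings above might appear in a Conduit pipeline configuration file like the sketch below. All identifiers and values here (pipeline IDs, host, database, stage, and key names) are hypothetical placeholders; consult your deployment for the exact plugin name and configuration layout.

```yaml
version: 2.2
pipelines:
  - id: snowflake-pipeline
    status: running
    connectors:
      - id: snowflake-destination
        type: destination
        plugin: snowflake
        settings:
          # All values below are placeholders; substitute your own.
          username: CONDUIT_USER
          password: very-secure-password
          host: myaccount.snowflakecomputing.com
          database: MY_DATABASE
          schema: MY_SCHEMA
          warehouse: MY_WAREHOUSE
          stage: CONDUIT_STAGE
          primaryKey: id
          format: csv
```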

Upsert Behavior

The destination connector takes an sdk.Record and parses it into a valid SQL query. It is designed to handle different payloads and keys. Because of this, each record is individually parsed and upserted.

The destination connector improves performance by deduplicating records within each batch, which reduces the number of operations performed on the table and allows processing to be parallelized. Additionally, the connector MERGEs records into the destination table by matching on the row's primaryKey. This prevents duplicate records, since Snowflake does not enforce uniqueness on primary keys.
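Conceptually, this upsert resembles a Snowflake MERGE statement keyed on the configured primaryKey. The sketch below is a simplified illustration only; the table, columns, and stage are hypothetical, and the connector generates its own statements internally:

```sql
-- Hypothetical illustration: merge staged rows into the destination
-- table, matching on the configured primaryKey column (here: id).
MERGE INTO MY_SCHEMA.ORDERS AS dst
USING (
  SELECT $1 AS id, $2 AS amount
  FROM @CONDUIT_STAGE/orders.csv.gz
) AS src
ON dst.id = src.id
WHEN MATCHED THEN UPDATE SET dst.amount = src.amount
WHEN NOT MATCHED THEN INSERT (id, amount) VALUES (src.id, src.amount);
```

Because the match is on the primary key, replaying the same record updates the existing row rather than inserting a duplicate.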

We ensure that all files are properly compressed and uploaded to prevent data loss and minimize latency.

User Privileges

The Snowflake user you provide to the Conduit Platform must have the following privileges:

  - USAGE on the warehouse, database, and schema
  - CREATE STAGE, CREATE FILE FORMAT, and CREATE TABLE on the schema
  - USAGE and OWNERSHIP on the stage
  - SELECT on the table, external table, and view
  - OWNERSHIP on the table

To grant these, you may access Snowflake and run the following queries:

  1. Create a unique role and user for the platform. You may replace CONDUIT_USER and CONDUIT_ROLE with whatever you prefer.
USE ROLE ACCOUNTADMIN;
CREATE ROLE CONDUIT_ROLE;
CREATE USER CONDUIT_USER PASSWORD='very-secure-password';
  2. Grant the platform role to the newly created platform user.
GRANT ROLE CONDUIT_ROLE TO USER CONDUIT_USER;
ALTER USER CONDUIT_USER SET DEFAULT_ROLE = CONDUIT_ROLE;
  3. Create a new database and schema. Replace MY_DATABASE and MY_SCHEMA with your respective database and schema names.
CREATE DATABASE MY_DATABASE;
USE MY_DATABASE;
CREATE SCHEMA MY_SCHEMA;
  4. Grant the USAGE privilege on the warehouse, database, and schema. Replace MY_WAREHOUSE, MY_DATABASE, and MY_SCHEMA with your respective warehouse, database, and schema names.
GRANT USAGE ON WAREHOUSE MY_WAREHOUSE TO ROLE CONDUIT_ROLE;
GRANT USAGE ON DATABASE MY_DATABASE TO ROLE CONDUIT_ROLE;
GRANT USAGE ON SCHEMA MY_SCHEMA TO ROLE CONDUIT_ROLE;
  5. Grant the CREATE privilege for STAGE, FILE FORMAT, and TABLE on your schema to the platform role.
GRANT CREATE STAGE ON SCHEMA MY_SCHEMA TO ROLE CONDUIT_ROLE;
GRANT CREATE FILE FORMAT ON SCHEMA MY_SCHEMA TO ROLE CONDUIT_ROLE;
GRANT CREATE TABLE ON SCHEMA MY_SCHEMA TO ROLE CONDUIT_ROLE;
  6. Grant ALL PRIVILEGES on the DATABASE, SCHEMA, and ALL TABLES IN SCHEMA to your ACCOUNTADMIN role. This provides visibility into the destination table once the application is running.
GRANT ALL PRIVILEGES ON DATABASE MY_DATABASE TO ROLE ACCOUNTADMIN;
GRANT ALL PRIVILEGES ON SCHEMA MY_DATABASE.MY_SCHEMA TO ROLE ACCOUNTADMIN;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA MY_DATABASE.MY_SCHEMA TO ROLE ACCOUNTADMIN;
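Optionally, you can verify the setup with Snowflake's standard SHOW GRANTS commands, using the role and user names created above:

```sql
-- List all privileges granted to the platform role.
SHOW GRANTS TO ROLE CONDUIT_ROLE;
-- List roles granted to the platform user.
SHOW GRANTS TO USER CONDUIT_USER;
```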

If you do not have the privileges to perform the actions above, please contact your Snowflake administrator.

Advanced Configurations

| Name | Description | Required | Default |
| --- | --- | --- | --- |
| `namingPrefix` | The prefix prepended to the update_at, deleted_at, and create_at columns on the destination table. | No | conduit_ |
| `compression` | The compression to use when staging files in Snowflake. | No | |
| `sdk.batch.size` | The maximum size of a batch before it gets written to Snowflake. | No | 1000 |
| `sdk.batch.delay` | The maximum delay before an incomplete batch is written to the destination. | No | |
| `csvGoRoutines` | The number of goroutines used to process CSV rows concurrently. | No | |
| `fileUploadThreads` | The number of threads used for PUT file uploads. | No | |
| `keepAlive` | Whether to keep the session alive even when the connection is idle. | No | |
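For example, the advanced options might be tuned alongside the required settings in the connector's configuration. The fragment below is a hypothetical sketch; the specific values (and the gzip compression choice) are illustrative assumptions, not recommendations:

```yaml
        settings:
          # ...required settings (username, host, database, etc.)...
          namingPrefix: conduit_      # prefix for connector-managed columns
          compression: gzip           # assumed compression codec; verify supported values
          sdk.batch.size: 1000        # flush after this many records
          sdk.batch.delay: 5s         # or after this much time, whichever comes first
```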