Amazon Redshift
The Conduit Platform supports Amazon Redshift as both a source and a destination by default.
The Amazon Redshift destination connector connects to a Redshift cluster and writes records to a table.
Required Configurations
Name | Description | Required | Default |
---|---|---|---|
`dsn` | Data source name (DSN) used to connect to Redshift. Example: `redshift://username:password@redshift-cluster-endpoint:5439/database` | Yes | |
`table` | The table the destination connector writes to by default. Example: `orders` | Yes | |
Looking for something else? See advanced configurations.
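For illustration, here is a minimal Go sketch that assembles a DSN of the shape shown above and pairs it with the required `table` setting. The cluster endpoint, credentials, and database name are placeholders, not values from this documentation.

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Compose a DSN of the documented shape; all values are placeholders.
	dsn := url.URL{
		Scheme: "redshift",
		User:   url.UserPassword("username", "password"),
		Host:   "redshift-cluster-endpoint:5439",
		Path:   "/database",
	}

	// The two required connector settings.
	settings := map[string]string{
		"dsn":   dsn.String(), // redshift://username:password@redshift-cluster-endpoint:5439/database
		"table": "orders",
	}
	fmt.Println(settings)
}
```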
Upsert Behavior
The destination connector takes an `sdk.Record` and parses it into a valid SQL query.
Note: Redshift does not support map or slice types, so such values are stored as marshaled strings.
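The note above can be illustrated with a short Go sketch: a map value in the payload has no native Redshift column type, so it ends up stored as a marshaled string. The use of `encoding/json` here is purely illustrative; the connector's exact marshaling format is not specified in this section.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A payload field holding a map with a nested slice; Redshift has no
	// native column types for these.
	attrs := map[string]any{"color": "red", "sizes": []int{40, 41, 42}}

	// The effect described above: the value is stored as a marshaled
	// string rather than as a structured type (the connector's exact
	// marshaling may differ).
	b, err := json.Marshal(attrs)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"color":"red","sizes":[40,41,42]}
}
```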
Key handling
- When inserting records (i.e. when the CDC operation is `CREATE` or `snapshot`), the key is ignored.
- When updating records (i.e. when the CDC operation is `UPDATE`):
  - If the record key exists, it is expected to be structured.
  - If the record key doesn't exist, it is built from the `keyColumns` in the payload's `After` field (see the sketch after this list).
- When deleting records (i.e. when the CDC operation is `DELETE`), the key is required.
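The fallback for update records can be sketched as a small Go helper. `buildKey` is hypothetical and only illustrates the behavior described above: the configured `keyColumns` are looked up in the payload's `After` field to assemble a key.

```go
package main

import "fmt"

// buildKey is a hypothetical helper that mirrors the fallback described
// above: when an update record carries no key, one is assembled from the
// configured keyColumns using values from the payload's After field.
func buildKey(after map[string]any, keyColumns []string) map[string]any {
	key := make(map[string]any, len(keyColumns))
	for _, col := range keyColumns {
		if v, ok := after[col]; ok {
			key[col] = v
		}
	}
	return key
}

func main() {
	after := map[string]any{"id": 42, "status": "shipped", "total": 19.99}
	fmt.Println(buildKey(after, []string{"id"})) // map[id:42]
}
```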
Table name
Records are written to the table specified by the `redshift.table` property in the record metadata, if it is present. Otherwise, the connector falls back to the table set in its `table` configuration.
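This lookup can be sketched in Go as follows; `resolveTable` is a hypothetical helper illustrating the fallback, not the connector's actual code.

```go
package main

import "fmt"

// resolveTable mirrors the lookup described above: prefer the
// "redshift.table" metadata field, otherwise fall back to the table
// from the connector configuration.
func resolveTable(metadata map[string]string, configuredTable string) string {
	if t, ok := metadata["redshift.table"]; ok && t != "" {
		return t
	}
	return configuredTable
}

func main() {
	fmt.Println(resolveTable(map[string]string{"redshift.table": "orders_2024"}, "orders")) // orders_2024
	fmt.Println(resolveTable(nil, "orders"))                                                // orders
}
```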
Advanced Configurations
Name | Description | Required | Default |
---|---|---|---|
`keyColumns` | Comma-separated list of column names used to build the `sdk.Record.Key`. Learn more: Key handling. | No | |