# Google BigQuery

The Conduit Platform supports Google BigQuery as a source connector only. The source connects to a BigQuery dataset and emits records from its tables.
## Required Configurations

Name | Description | Required | Default |
---|---|---|---|
serviceAccount | A service account with access to the project. | Yes | |
projectID | The ID of the Google Cloud project to connect to. | Yes | |
datasetID | The ID of the dataset to pull data from. | Yes | |
datasetLocation | The location where the dataset exists. | Yes | |
primaryKeyColName | The primary key column name. Example: id | Yes | |
Looking for something else? See advanced configurations.
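As a sketch, the required settings above could appear in a Conduit pipeline configuration file like the following. The plugin reference, pipeline ID, project, dataset, and file path are illustrative assumptions; consult your Conduit installation for the exact connector name.

```yaml
version: 2.2
pipelines:
  - id: bigquery-to-destination   # assumed pipeline ID
    status: running
    connectors:
      - id: bigquery-source
        type: source
        # Plugin reference is an assumption; use the BigQuery
        # connector name registered in your Conduit instance.
        plugin: bigquery
        settings:
          serviceAccount: /path/to/credentials.json  # assumed key file path
          projectID: my-gcp-project                  # assumed project
          datasetID: my_dataset                      # assumed dataset
          datasetLocation: US
          primaryKeyColName: id
```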
## Updates

When the source connector starts, it incrementally syncs `INSERT` and `UPDATE` events at a set interval, either from a single table or from all tables in the dataset, depending on the source configuration.
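Conceptually, each polling cycle behaves like a query that selects rows whose incrementing column value is greater than the last recorded position. The table and column names below are illustrative, and the connector's actual query may differ:

```sql
-- Illustrative sketch only; not the connector's literal query.
SELECT *
FROM `my-gcp-project.my_dataset.orders`
WHERE updated_at > @last_synced_value
ORDER BY updated_at;
```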
## Advanced Configurations

Name | Description | Required | Default |
---|---|---|---|
tableID | The ID of the table the source connector should read from. | No | All tables in the dataset. |
pollingTime | The interval at which the source polls for new data. | No | 5m |
incrementingColumnName | The column name used to track new rows or updates. Example: an id that increases in value, or a created_at timestamp. | No | |
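Building on the required settings, the advanced options could be added to the same connector's settings block like this (all values are illustrative assumptions):

```yaml
        settings:
          # ...required settings (serviceAccount, projectID, etc.)...
          tableID: orders                 # read one table instead of all
          pollingTime: 10m                # poll every 10 minutes
          incrementingColumnName: updated_at
```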
## Google Cloud Platform Setup

You must have a Google Cloud Platform (GCP) account with billing enabled for Google BigQuery.
- Create a new GCP Service Account to securely authenticate with the Conduit Platform.
- Grant the GCP Service Account the predefined BigQuery IAM roles BigQuery Data Editor and BigQuery Job User.
- Create a Service Account Key and download the credentials JSON file. You will need this file when configuring the source connector.
- Locate the `projectID`.
- Retrieve information about the dataset to find the `datasetID`, `tableID`, and `datasetLocation`.
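The steps above can be sketched with the gcloud and bq CLIs. The project name, service account name, and key file path are assumptions; substitute your own values.

```shell
# 1. Create a service account for Conduit (names are assumptions).
gcloud iam service-accounts create conduit-bigquery \
  --project=my-gcp-project

# 2. Grant the predefined BigQuery roles.
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:conduit-bigquery@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataEditor"
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:conduit-bigquery@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"

# 3. Create and download a JSON service account key.
gcloud iam service-accounts keys create credentials.json \
  --iam-account=conduit-bigquery@my-gcp-project.iam.gserviceaccount.com

# 4. Inspect the dataset to find its ID, tables, and location.
bq show --format=prettyjson my-gcp-project:my_dataset
```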