Google BigQuery

The Conduit Platform by default supports Google BigQuery as a source only.

The Google BigQuery source can connect to and emit records from a dataset.

Required Configurations

serviceAccount: A service account with access to the project. (Required)
projectID: The ID of the GCP project that contains the dataset. (Required)
datasetID: The ID of the dataset to pull data from. (Required)
datasetLocation: The location where the dataset exists. (Required)
primaryKeyColName: The primary key column name. Example: id. (Required)

Looking for something else? See advanced configurations.
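As an illustrative sketch only (not the connector's actual validation code), the required settings above can be collected and checked before the connector starts. The key names come from the table; the validation helper is hypothetical:

```python
# Hypothetical sketch: collect the required source settings and verify
# none are missing. Key names match the configuration table above.
REQUIRED_KEYS = [
    "serviceAccount",
    "projectID",
    "datasetID",
    "datasetLocation",
    "primaryKeyColName",
]

def missing_required(config: dict) -> list:
    """Return the required keys that are absent or empty in config."""
    return [k for k in REQUIRED_KEYS if not config.get(k)]

# Illustrative values only.
config = {
    "serviceAccount": "/path/to/credentials.json",
    "projectID": "my-project",
    "datasetID": "my_dataset",
    "datasetLocation": "US",
    "primaryKeyColName": "id",
}
```

A complete configuration yields an empty list from `missing_required`; anything else names the keys still to be filled in.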


When the source connector starts, it polls the configured table (or, by default, all tables in the dataset) at a set interval and incrementally syncs inserted and updated rows.
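One common way to implement this kind of incremental sync is a polling query filtered on the incrementing column. The sketch below is an assumption about how such a query could be built, not the connector's actual SQL; the table and column names are illustrative:

```python
def build_poll_query(project_id: str, dataset_id: str, table_id: str,
                     incrementing_col: str, last_seen) -> str:
    """Build a query that fetches only rows newer than the last synced
    position, ordered so the caller can record a resumable offset."""
    table = f"`{project_id}.{dataset_id}.{table_id}`"
    return (
        f"SELECT * FROM {table} "
        f"WHERE {incrementing_col} > {last_seen!r} "
        f"ORDER BY {incrementing_col}"
    )
```

On each poll the connector would run the query, emit the returned rows as records, and remember the largest value of the incrementing column as the next starting position.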

Advanced Configurations

tableID: The ID of the table the source connector should read from. (Optional; default: all tables in the dataset.)
pollingTime: The interval at which the source polls for new data. (Optional; default: 5m.)
incrementingColumnName: The column used to track new rows or updates, such as an id that increases in value or a created_at timestamp. (Optional.)
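The pollingTime default of 5m presumably denotes five minutes, in the style of Go duration strings. A minimal parser sketch, assuming only the simple single-unit form:

```python
# Assumption: pollingTime uses single-unit duration strings like "30s",
# "5m", or "1h". This sketch does not handle compound forms like "1h30m".
_UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_polling_time(value: str, default: str = "5m") -> int:
    """Convert a simple duration string to a number of seconds,
    falling back to the default when the value is empty."""
    value = value or default
    unit = value[-1]
    if unit not in _UNITS:
        raise ValueError(f"unsupported duration unit: {unit}")
    return int(value[:-1]) * _UNITS[unit]
```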

Google Cloud Platform Setup

You must have a Google Cloud Platform (GCP) account that has billing enabled for Google BigQuery.

  1. Create a new GCP Service Account to securely authenticate with the Conduit Platform.
  2. Grant the service account both predefined BigQuery IAM roles: BigQuery Data Editor and BigQuery Job User.
  3. Create a Service Account Key and download the credentials JSON file. You will need this file when configuring the source connector.
  4. Locate the projectID.
  5. Retrieve information about the dataset to find the datasetID, tableID, and datasetLocation.
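For step 4, the credentials JSON downloaded in step 3 already carries the owning project in its project_id field (a standard field of GCP service account key files), so the projectID can be read from the file directly. A small sketch; the file path is illustrative:

```python
import json

def project_id_from_credentials(path: str) -> str:
    """Read the projectID from a downloaded service account key file.
    GCP service account JSON stores the owning project under the
    standard 'project_id' field."""
    with open(path) as f:
        return json.load(f)["project_id"]
```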