Apache Kafka
The Conduit Platform by default supports Apache Kafka as a source and destination.
The Apache Kafka destination can connect to and produce records to a topic.
Required Configurations
Name | Description | Required | Default |
---|---|---|---|
servers | Servers is a list of Kafka bootstrap servers (i.e. brokers), which will be used to discover all the brokers in a cluster. | Yes | |
Looking for something else? See advanced configurations.
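For reference, a minimal Conduit pipeline configuration using the Kafka destination might look like the following sketch. The pipeline and connector IDs, the broker address, and the plugin name `builtin:kafka` reflect common Conduit conventions but should be checked against your Conduit version:

```yaml
version: "2.2"
pipelines:
  - id: example-pipeline        # illustrative pipeline ID
    status: running
    connectors:
      # ... a source connector would be defined here ...
      - id: kafka-destination   # illustrative connector ID
        type: destination
        plugin: builtin:kafka
        settings:
          # The only required setting: the Kafka bootstrap servers.
          servers: "localhost:9092"
```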
Output format
The output format can be adjusted using configuration options provided by the connector SDK:
sdk.record.format
: used to choose the format
sdk.record.format.options
: used to configure the specifics of the chosen format
See this Conduit article for more information on configuring the output format.
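As a sketch, these options go into the connector's settings map alongside the other configuration. The `template` format name and the Go template below are assumptions based on the connector SDK's conventions; consult the linked article for the exact supported values:

```yaml
settings:
  servers: "localhost:9092"
  # Choose the record output format (assumed format name).
  sdk.record.format: "template"
  # Options specific to the chosen format, here a Go template
  # that serializes the record's payload after-image as JSON.
  sdk.record.format.options: '{{ toJson .Payload.After }}'
```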
Batching
Batching can also be configured using connector SDK provided options:
sdk.batch.size
: maximum number of records in a batch before it gets written to the destination (defaults to 0, no batching)
sdk.batch.delay
: maximum delay before an incomplete batch is written to the destination (defaults to 0, no limit)
Advanced Configurations
There is no global connector configuration; each connector instance is configured separately.
Name | Description | Required | Default |
---|---|---|---|
topic | Topic is the Kafka topic. It can contain a Go template that will be executed for each record to determine the topic. By default, the topic is the value of the opencdc.collection metadata field. | No | {{ index .Metadata "opencdc.collection" }} |
clientID | A Kafka client ID. | No | conduit-connector-kafka |
acks | Acks defines the number of acknowledges from partition replicas required before receiving a response to a produce request. none = fire and forget, one = wait for the leader to acknowledge the writes, all = wait for the full ISR to acknowledge the writes. | No | all |
deliveryTimeout | Message delivery timeout. | No | |
batchBytes | Limits the maximum size of a request in bytes before being sent to a partition. This mirrors Kafka's max.message.bytes. | No | 1000012 |
compression | Compression applied to messages. Options: none, gzip, snappy, lz4, zstd. | No | snappy |
clientCert | A certificate for the Kafka client, in PEM format. If provided, the private key needs to be provided too. | No | |
clientKey | A private key for the Kafka client, in PEM format. If provided, the certificate needs to be provided too. | No | |
caCert | The Kafka broker's certificate, in PEM format. | No | |
insecureSkipVerify | Controls whether a client verifies the server's certificate chain and host name. If true, accepts any certificate presented by the server and any host name in that certificate. | No | false |
saslMechanism | SASL mechanism to be used. Options: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512. If empty, authentication won't be performed. | No | |
saslUsername | SASL username. If provided, a password needs to be provided too. | No | |
saslPassword | SASL password. If provided, a username needs to be provided too. | No | |
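Putting several advanced options together, a destination configured with TLS, SASL authentication, and a per-record topic template could look roughly like the sketch below. The broker addresses, certificate contents, and credentials are placeholders; note that clientCert/clientKey and saslUsername/saslPassword must each be provided as a pair:

```yaml
settings:
  servers: "broker-1:9093,broker-2:9093"   # placeholder broker addresses
  # Route each record to the collection it came from.
  topic: '{{ index .Metadata "opencdc.collection" }}'
  acks: "all"
  compression: "zstd"
  # TLS: client certificate and key, in PEM format, provided together.
  clientCert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  clientKey: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  caCert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  # SASL: username and password provided together.
  saslMechanism: "SCRAM-SHA-512"
  saslUsername: "example-user"       # placeholder credential
  saslPassword: "example-password"   # placeholder credential
```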