# Confluent Cloud Kafka
The Conduit Platform supports Confluent Cloud Kafka as a source and destination by default.
The Confluent Cloud Kafka destination connector connects to a Confluent Cloud cluster and produces records to a topic.
## Required Configurations
Name | Description | Required | Default |
---|---|---|---|
`servers` | A list of Kafka bootstrap servers (i.e. brokers), which will be used to discover all the brokers in a cluster. This is your Confluent Bootstrap server. | Yes | |
`saslMechanism` | SASL mechanism to be used. Options: `PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`. If empty, authentication won't be performed. | Yes | `PLAIN` |
`saslUsername` | SASL username. If provided, a password needs to be provided too. This is your Confluent API Key. | Yes | |
`saslPassword` | SASL password. If provided, a username needs to be provided too. This is your Confluent API Secret. | Yes | |
## Confluent Requirements

Unlike the configuration for the Apache Kafka connector, Confluent Cloud requires certain configurations to be set in order to connect successfully.
`servers`
: Must be set to your Confluent Bootstrap server.

`saslMechanism`
: Must be set to `PLAIN`.

`saslUsername`
: Is required. Use your Confluent API Key.

`saslPassword`
: Is required. Use your Confluent API Secret.
Connections to Confluent Cloud Kafka will not be possible without the above configurations.
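As an illustration, here is a minimal sketch of the destination connector entry in a Conduit pipeline configuration file with the required settings filled in. The connector ID, bootstrap server address, topic, and credential placeholders are assumptions, not values from your account:

```yaml
# Fragment of a Conduit pipeline configuration file (YAML).
# The ID, server address, topic, and credentials are placeholders.
connectors:
  - id: confluent-destination
    type: destination
    plugin: builtin:kafka
    settings:
      # Your Confluent Bootstrap server (hypothetical address).
      servers: "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092"
      # Confluent Cloud requires PLAIN.
      saslMechanism: "PLAIN"
      # Your Confluent API Key and API Secret.
      saslUsername: "<CONFLUENT_API_KEY>"
      saslPassword: "<CONFLUENT_API_SECRET>"
      # Destination topic, assumed to already exist in the cluster.
      topic: "orders"
```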
Looking for something else? See advanced configurations.
## Output format
The output format can be adjusted using configuration options provided by the connector SDK:
`sdk.record.format`
: used to choose the format

`sdk.record.format.options`
: used to configure the specifics of the chosen format
See this Conduit article for more information on configuring the output format.
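For example, assuming the SDK's `template` format (one of the formats the SDK provides alongside the default `opencdc/json`), a settings sketch that outputs only each record's payload might look like this:

```yaml
# Sketch: use the SDK's "template" record format to write just the
# payload of each record. The format name and Go template are
# assumptions based on the connector SDK's output-format options.
settings:
  sdk.record.format: "template"
  sdk.record.format.options: '{{ printf "%s" .Payload.After }}'
```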
## Batching

Batching can also be configured using options provided by the connector SDK:

`sdk.batch.size`
: maximum number of records in a batch before it gets written to the destination (defaults to 0, no batching)

`sdk.batch.delay`
: maximum delay before an incomplete batch is written to the destination (defaults to 0, no limit)
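For instance, this sketch flushes a batch once it reaches 1000 records, or after 100 milliseconds, whichever comes first; the values are illustrative:

```yaml
# Sketch: flush once 1000 records accumulate, or after 100ms,
# whichever comes first. Both values are illustrative.
settings:
  sdk.batch.size: "1000"
  sdk.batch.delay: "100ms"
```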
## Advanced Configurations

The following configuration options are not required to establish a connection, but can be used to fine-tune the connector's behavior.
Name | Description | Required | Default |
---|---|---|---|
`topic` | The Kafka topic. It can contain a Go template that will be executed for each record to determine the topic. By default, the topic is the value of the `opencdc.collection` metadata field. | No | `{{ index .Metadata "opencdc.collection" }}` |
`clientID` | A Kafka client ID. | No | `conduit-connector-kafka` |
`acks` | The number of acknowledgments from partition replicas required before receiving a response to a produce request. `none` = fire and forget, `one` = wait for the leader to acknowledge the writes, `all` = wait for the full ISR to acknowledge the writes. | No | `all` |
`deliveryTimeout` | Message delivery timeout. | No | |
`batchBytes` | Limits the maximum size of a request in bytes before being sent to a partition. This mirrors Kafka's `max.message.bytes`. | No | 10000 |
`compression` | Compression applied to messages. Options: `none`, `gzip`, `snappy`, `lz4`, `zstd`. | No | `snappy` |
`clientCert` | A certificate for the Kafka client, in PEM format. If provided, the private key needs to be provided too. | No | |
`clientKey` | A private key for the Kafka client, in PEM format. If provided, the certificate needs to be provided too. | No | |
`caCert` | The Kafka broker's certificate, in PEM format. | No | |
`insecureSkipVerify` | Controls whether a client verifies the server's certificate chain and host name. If `true`, accepts any certificate presented by the server and any host name in that certificate. | No | `false` |
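To illustrate, here is a sketch of a settings block that combines the templated `topic` option (mirroring the documented default, which routes each record by its collection metadata) with TLS client authentication. The certificate material is an elided placeholder, not working credentials:

```yaml
# Sketch: route each record to the topic named by its
# opencdc.collection metadata, wait for the full ISR to acknowledge
# writes, and present a TLS client certificate. All certificate
# values below are placeholders.
settings:
  topic: '{{ index .Metadata "opencdc.collection" }}'
  acks: "all"
  compression: "snappy"
  clientCert: |
    -----BEGIN CERTIFICATE-----
    <PEM-encoded client certificate>
    -----END CERTIFICATE-----
  clientKey: |
    -----BEGIN PRIVATE KEY-----
    <PEM-encoded private key>
    -----END PRIVATE KEY-----
```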