
Confluent Cloud Kafka

By default, the Conduit Platform supports Confluent Cloud Kafka as both a source and a destination.

The Confluent Cloud Kafka destination can connect to and produce records to a topic.

Required Configurations

| Name | Description | Required | Default |
|------|-------------|----------|---------|
| `servers` | A list of Kafka bootstrap servers (i.e. brokers), which will be used to discover all the brokers in a cluster. This is your Confluent Bootstrap server. | Yes | |
| `saslMechanism` | SASL mechanism to be used. Options: `PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`. If empty, authentication won't be performed. | Yes | `PLAIN` |
| `saslUsername` | SASL username. If provided, a password needs to be provided too. This is your Confluent API Key. | Yes | |
| `saslPassword` | SASL password. If provided, a username needs to be provided too. This is your Confluent API Secret. | Yes | |

Confluent Requirements

Unlike the Apache Kafka connector, Confluent Cloud requires certain configurations to be set in order to connect successfully.

  • servers: Must be set to your Confluent Bootstrap server.
  • saslMechanism: Must be set to PLAIN.
  • saslUsername: Is required. Use your Confluent API Key.
  • saslPassword: Is required. Use your Confluent API Secret.
> **Warning:** Connections to Confluent Cloud Kafka will not be possible without the above configurations.
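As a sketch, a minimal Conduit pipeline configuration wiring up this destination might look like the following. The bootstrap server address, API key, and API secret are placeholders, and the `builtin:kafka` plugin name and `version` value are assumptions for illustration:

```yaml
version: 2.2
pipelines:
  - id: confluent-example
    status: running
    connectors:
      - id: confluent-destination
        type: destination
        plugin: builtin:kafka
        settings:
          # Your Confluent Bootstrap server (placeholder value).
          servers: "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092"
          # Confluent requires PLAIN.
          saslMechanism: "PLAIN"
          # Your Confluent API Key and Secret (placeholders).
          saslUsername: "<CONFLUENT_API_KEY>"
          saslPassword: "<CONFLUENT_API_SECRET>"
```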

Looking for something else? See advanced configurations.

Output format

The output format can be adjusted using configuration options provided by the connector SDK:

  • sdk.record.format: used to choose the format
  • sdk.record.format.options: used to configure the specifics of the chosen format
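For illustration, these options sit alongside the connector's other settings; the `opencdc/json` format name shown here is an assumption based on the SDK's default format:

```yaml
settings:
  # Choose the output format produced by the connector SDK.
  sdk.record.format: "opencdc/json"
  # Format-specific options (contents depend on the chosen format).
  sdk.record.format.options: ""
```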

See this Conduit article for more information on configuring the output format.

Batching

Batching can also be configured using options provided by the connector SDK:

  • sdk.batch.size: maximum number of records in batch before it gets written to the destination (defaults to 0, no batching)
  • sdk.batch.delay: maximum delay before an incomplete batch is written to the destination (defaults to 0, no limit)
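For example, a destination could be configured to flush a batch once it holds 100 records or once one second has passed, whichever comes first (the values below are illustrative, not recommendations):

```yaml
settings:
  # Write the batch after it accumulates 100 records...
  sdk.batch.size: 100
  # ...or after 1 second, whichever comes first.
  sdk.batch.delay: "1s"
```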

Advanced Configurations

The following optional configuration options can be used to further tune the connection to Confluent Kafka.

| Name | Description | Required | Default |
|------|-------------|----------|---------|
| `topic` | Topic is the Kafka topic. It can contain a Go template that will be executed for each record to determine the topic. By default, the topic is the value of the `opencdc.collection` metadata field. | No | `orders` or `{{ index .Metadata "opencdc.collection" }}` |
| `clientID` | A Kafka client ID. | No | `conduit-connector-kafka` |
| `acks` | Acks defines the number of acknowledgements from partition replicas required before receiving a response to a produce request. `none` = fire and forget, `one` = wait for the leader to acknowledge the writes, `all` = wait for the full ISR to acknowledge the writes. | No | `all` |
| `deliveryTimeout` | Message delivery timeout. | No | |
| `batchBytes` | Limits the maximum size of a request in bytes before being sent to a partition. This mirrors Kafka's `max.message.bytes`. | No | `10000` |
| `compression` | Compression applied to messages. Options: `none`, `gzip`, `snappy`, `lz4`, `zstd`. | No | `snappy` |
| `clientCert` | A certificate for the Kafka client, in PEM format. If provided, the private key needs to be provided too. | No | |
| `clientKey` | A private key for the Kafka client, in PEM format. If provided, the certificate needs to be provided too. | No | |
| `caCert` | The Kafka broker's certificate, in PEM format. | No | |
| `insecureSkipVerify` | Controls whether a client verifies the server's certificate chain and host name. If `true`, accepts any certificate presented by the server and any host name in that certificate. | No | `false` |
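As an illustrative sketch, the advanced options go in the same `settings` block as the required ones. The Go template below routes each record to the topic named in its `opencdc.collection` metadata field, as described in the table above; the other values simply restate the defaults:

```yaml
settings:
  # Route each record to the topic named in its metadata.
  topic: '{{ index .Metadata "opencdc.collection" }}'
  # Wait for the full ISR to acknowledge writes (the default).
  acks: "all"
  # Compress messages with snappy (the default).
  compression: "snappy"
```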