Elasticsearch Sink Connector
Available on: StreamNative Cloud console
License: Apache License 2.0

The Elasticsearch sink connector pulls messages from Pulsar topics and persists the messages to indexes. For more information about connectors, see Connector Overview.

This document describes how to create an Elasticsearch sink connector and get it up and running.

Quick start


The prerequisites for connecting an Elasticsearch sink connector to external systems include:

Create an Elasticsearch cluster. You can create a single-node Elasticsearch cluster by executing this command:

docker run -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e "ELASTIC_PASSWORD=pulsar-sink-test" \
  docker.elastic.co/elasticsearch/elasticsearch:7.13.3

1. Create a connector

The following command shows how to use pulsarctl to create a builtin connector. If you want to create a non-builtin connector, replace --sink-type elastic-search with --archive /path/to/pulsar-io-elastic-search.nar. You can find the button to download the NAR package at the beginning of the document.

For StreamNative Cloud User

If you are a StreamNative Cloud user, you need to set up your environment first.

pulsarctl sinks create \
  --sink-type elastic-search \
  --name es-sink \
  --tenant public \
  --namespace default \
  --inputs "Your topic name" \
  --parallelism 1 \
  --sink-config \
  '{
    "elasticSearchUrl": "http://localhost:9200",
    "indexName": "myindex",
    "typeName": "doc",
    "username": "elastic",
    "password": "pulsar-sink-test"
  }'

The --sink-config value is the minimum configuration required to start this connector, and it is a JSON string. You need to substitute the relevant parameters with your own. If you want to configure more parameters, see Configuration Properties for reference.
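If you prefer to keep the configuration out of the command line, the same JSON can be saved to a file (the file name es-sink-config.json below is just an example) and passed with the --sink-config-file flag, assuming your pulsarctl version supports it:

```json
{
  "elasticSearchUrl": "http://localhost:9200",
  "indexName": "myindex",
  "typeName": "doc",
  "username": "elastic",
  "password": "pulsar-sink-test"
}
```

You would then replace the inline --sink-config argument with --sink-config-file es-sink-config.json.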


You can also use a variety of other tools to create a connector.

2. Send messages to the topic


If your connector is created on StreamNative Cloud, you need to authenticate your clients. See Build applications using Pulsar clients for more information.

   PulsarClient client = PulsarClient.builder()
           .serviceUrl("{{Your Pulsar URL}}")
           .build();

   Producer<String> producer = client.newProducer(Schema.STRING)
           .topic("{{Your topic name}}")
           .create();

   String message = "{\"a\":1}";
   MessageId msgID = producer.send(message);
   System.out.println("Publish " + message + " and message ID " + msgID);


3. Check documents in Elasticsearch

  • Refresh the index
curl -s http://localhost:9200/myindex/_refresh
  • Search documents
curl -s http://localhost:9200/myindex/_search
  • You can see the record that was published earlier has been successfully written into Elasticsearch.
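With the default configuration (schemaEnable off and keyIgnore on), the connector indexes the raw JSON payload, so the search response should contain a hit similar to the following (abridged and illustrative; the _id shown is a placeholder for an auto-generated value):

```json
{
  "hits": {
    "total": { "value": 1, "relation": "eq" },
    "hits": [
      {
        "_index": "myindex",
        "_id": "FOcas38BHkVm6pRpGO_2",
        "_source": { "a": 1 }
      }
    ]
  }
}
```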

Configuration Properties

This table outlines the properties of an Elasticsearch sink connector.

| Name | Type | Required | Sensitive | Default | Description |
|------|------|----------|-----------|---------|-------------|
| elasticSearchUrl | String | true | false | "" (empty string) | The URL of the Elasticsearch cluster to which the connector connects. |
| indexName | String | false | false | "" (empty string) | The index to which the connector writes messages. The default value is the topic name. It accepts date formats in the name to support event-time-based indexes with the pattern %{+<date-format>}. For example, if the event time of the record is 1645182000000L and indexName is logs-%{+yyyy-MM-dd}, the formatted index name is logs-2022-02-18. |
| schemaEnable | Boolean | false | false | false | Turn on the Schema Aware mode. |
| createIndexIfNeeded | Boolean | false | false | false | Manage the index if it is missing. |
| maxRetries | Integer | false | false | 1 | The maximum number of retries for Elasticsearch requests. Use -1 to disable it. |
| retryBackoffInMs | Integer | false | false | 100 | The base time to wait when retrying an Elasticsearch request (in milliseconds). |
| maxRetryTimeInSec | Integer | false | false | 86400 | The maximum retry time interval in seconds for retrying an Elasticsearch request. |
| bulkEnabled | Boolean | false | false | false | Enable the Elasticsearch bulk processor to flush write requests based on the number or size of requests, or after a given period. |
| bulkActions | Integer | false | false | 1000 | The maximum number of actions per Elasticsearch bulk request. Use -1 to disable it. |
| bulkSizeInMb | Integer | false | false | 5 | The maximum size in megabytes of Elasticsearch bulk requests. Use -1 to disable it. |
| bulkConcurrentRequests | Integer | false | false | 0 | The maximum number of in-flight Elasticsearch bulk requests. The default 0 allows the execution of a single request. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. |
| bulkFlushIntervalInMs | Long | false | false | 1000 | The maximum period of time to wait for flushing pending writes when bulk writes are enabled. -1 or zero means the scheduled flushing is disabled. |
| compressionEnabled | Boolean | false | false | false | Enable Elasticsearch request compression. |
| connectTimeoutInMs | Integer | false | false | 5000 | The Elasticsearch client connection timeout in milliseconds. |
| connectionRequestTimeoutInMs | Integer | false | false | 1000 | The time in milliseconds for getting a connection from the Elasticsearch connection pool. |
| connectionIdleTimeoutInMs | Integer | false | false | 5 | The idle connection timeout to prevent a read timeout. |
| keyIgnore | Boolean | false | false | true | Whether to ignore the record key when building the Elasticsearch document _id. If primaryFields is defined, the connector extracts the primary fields from the payload to build the document _id. If no primaryFields are provided, Elasticsearch auto-generates a random document _id. |
| primaryFields | String | false | false | "id" | The comma-separated ordered list of field names used to build the Elasticsearch document _id from the record value. If this list is a singleton, the field is converted to a string. If this list has 2 or more fields, the generated _id is a string representation of a JSON array of the field values. |
| nullValueAction | enum (IGNORE, DELETE, FAIL) | false | false | IGNORE | How to handle records with null values. Possible options are IGNORE, DELETE, or FAIL. The default is to IGNORE the message. |
| malformedDocAction | enum (IGNORE, WARN, FAIL) | false | false | FAIL | How to handle documents rejected by Elasticsearch due to malformation. Possible options are IGNORE, WARN, or FAIL. The default is to FAIL the Elasticsearch document. |
| stripNulls | Boolean | false | false | true | If stripNulls is false, the Elasticsearch _source includes 'null' for empty fields (for example {"foo": null}); otherwise, null fields are stripped. |
| socketTimeoutInMs | Integer | false | false | 60000 | The socket timeout in milliseconds waiting to read the Elasticsearch response. |
| typeName | String | false | false | "_doc" | The type name to which the connector writes messages. The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left at the default otherwise. |
| indexNumberOfShards | int | false | false | 1 | The number of shards of the index. |
| indexNumberOfReplicas | int | false | false | 1 | The number of replicas of the index. |
| username | String | false | true | "" (empty string) | The username used by the connector to connect to the Elasticsearch cluster. If username is set, password should also be provided. |
| password | String | false | true | "" (empty string) | The password used by the connector to connect to the Elasticsearch cluster. If username is set, password should also be provided. |
| ssl | ElasticSearchSslConfig | false | false |  | Configuration for TLS encrypted communication. |
| compatibilityMode | enum (AUTO, ELASTICSEARCH, ELASTICSEARCH_7, OPENSEARCH) | false | false | AUTO | Specify the compatibility mode with the Elasticsearch cluster. AUTO tries to auto-detect the correct compatibility mode to use. Use ELASTICSEARCH_7 if the target cluster is running Elasticsearch 7 or prior. Use ELASTICSEARCH if the target cluster is running Elasticsearch 8 or higher. Use OPENSEARCH if the target cluster is running OpenSearch. |
| token | String | false | true | "" (empty string) | The token used by the connector to connect to the Elasticsearch cluster. Only one of the basic/token/apiKey authentication modes may be configured. |
| apiKey | String | false | true | "" (empty string) | The apiKey used by the connector to connect to the Elasticsearch cluster. Only one of the basic/token/apiKey authentication modes may be configured. |
| canonicalKeyFields | Boolean | false | false | false | Whether to sort the key fields for JSON and Avro. If set to true and the record key schema is JSON or AVRO, the serialized object does not consider the order of properties. |
| stripNonPrintableCharacters | Boolean | false | false | true | Whether to remove all non-printable characters from the document. If set to true, all non-printable characters are removed from the document. |
| idHashingAlgorithm | enum (NONE, SHA256, SHA512) | false | false | NONE | The hashing algorithm to use for the document _id. This is useful for complying with the Elasticsearch _id hard limit of 512 bytes. |
| conditionalIdHashing | Boolean | false | false | false | This option only works if idHashingAlgorithm is set. If enabled, hashing is performed only when the _id is greater than 512 bytes; otherwise, hashing is performed on every document. |
| copyKeyFields | Boolean | false | false | false | If the message key schema is AVRO or JSON, the message key fields are copied into the Elasticsearch document. |
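To illustrate the indexName date substitution described above, the following standalone Java sketch (not connector code; the class name and the UTC zone choice are assumptions for illustration) resolves logs-%{+yyyy-MM-dd} for the event time used in the example:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class IndexNameDemo {
    public static void main(String[] args) {
        // Event time from the indexName example above (2022-02-18T11:00:00Z)
        long eventTime = 1645182000000L;
        // The <date-format> part of %{+yyyy-MM-dd} is a standard date pattern,
        // applied here in UTC
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd")
                .withZone(ZoneOffset.UTC);
        String indexName = "logs-" + fmt.format(Instant.ofEpochMilli(eventTime));
        System.out.println(indexName); // prints logs-2022-02-18
    }
}
```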