{"id":3878,"date":"2025-11-24T10:00:16","date_gmt":"2025-11-24T15:00:16","guid":{"rendered":"https:\/\/www.mymiller.name\/wordpress\/?p=3878"},"modified":"2025-11-24T10:00:16","modified_gmt":"2025-11-24T15:00:16","slug":"building-robust-kafka-applications-with-spring-boot-and-avro-schema-registry","status":"publish","type":"post","link":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/building-robust-kafka-applications-with-spring-boot-and-avro-schema-registry\/","title":{"rendered":"Building Robust Kafka Applications with Spring Boot and Avro Schema Registry"},"content":{"rendered":"\n<p>For a software architect, designing solutions that are scalable, maintainable, and resilient is paramount. In the world of event-driven architectures, Apache Kafka has become a cornerstone for high-throughput, low-latency data streaming. However, simply sending raw bytes over Kafka topics can lead to data inconsistency and make future evolution painful. This is where <strong>schema support<\/strong> comes into play.<\/p>\n\n\n\n<p>In this comprehensive guide, we&#8217;ll walk through how to build a Spring Boot application that leverages Spring Kafka annotations, manages its dependencies with Gradle, and enforces a strong data contract using Apache Avro and the Confluent Schema Registry. By the end, you&#8217;ll have a clear understanding of how to create a robust and evolvable Kafka data pipeline.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Why Schema Support with Kafka?<\/h2>\n\n\n\n<p>Apache Kafka&#8217;s fundamental design is to treat messages as opaque byte arrays. While this provides incredible flexibility and performance, it offloads the responsibility of understanding message content to the producers and consumers. 
Without a defined schema:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Data Inconsistency:<\/strong> Different producers might send data in slightly different formats, leading to consumers failing to parse messages.<\/li>\n\n\n\n<li><strong>Schema Evolution Challenges:<\/strong> Changing message structures (adding\/removing fields) becomes a nightmare, potentially breaking downstream applications.<\/li>\n\n\n\n<li><strong>Lack of Type Safety:<\/strong> Developers work with raw bytes or generic maps, losing the benefits of compile-time type checking.<\/li>\n\n\n\n<li><strong>No Centralized Contract:<\/strong> There&#8217;s no single source of truth for your data&#8217;s structure, hindering collaboration and data governance.<\/li>\n<\/ul>\n\n\n\n<p><strong>Apache Avro<\/strong> provides a compact binary serialization format with a rich schema definition language (using JSON). When coupled with a <strong>Schema Registry<\/strong>, it solves these problems by providing:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Strong Data Contracts:<\/strong> Schemas enforce the structure of your data.<\/li>\n\n\n\n<li><strong>Efficient Serialization:<\/strong> Avro&#8217;s binary format is highly efficient, reducing network bandwidth and storage.<\/li>\n\n\n\n<li><strong>Schema Evolution:<\/strong> The Schema Registry manages schema versions and ensures compatibility rules (backward, forward, full compatibility) are met, allowing your data model to evolve gracefully.<\/li>\n\n\n\n<li><strong>Type Safety:<\/strong> Avro-generated classes allow you to work with strongly typed objects in your Java code.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Prerequisites<\/h2>\n\n\n\n<p>Before we start coding, ensure you have the following running on your machine:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Java Development Kit (JDK 17+):<\/strong> Spring Boot 3.x requires JDK 17 or higher.<\/li>\n\n\n\n<li><strong>Gradle:<\/strong> Our build tool of 
choice.<\/li>\n\n\n\n<li><strong>Docker &amp; Docker Compose:<\/strong> This is the easiest way to spin up a local Kafka cluster and Confluent Schema Registry.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Setting up Kafka and Schema Registry with Docker Compose<\/h3>\n\n\n\n<p>Create a <code>docker-compose.yml<\/code> file in your project root or a separate <code>infra<\/code> directory:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># docker-compose.yml\nversion: '3.7'\nservices:\n  zookeeper:\n    image: confluentinc\/cp-zookeeper:7.5.0\n    hostname: zookeeper\n    container_name: zookeeper\n    ports:\n      - \"2181:2181\"\n    environment:\n      ZOOKEEPER_CLIENT_PORT: 2181\n      ZOOKEEPER_TICK_TIME: 2000\n\n  kafka:\n    image: confluentinc\/cp-kafka:7.5.0\n    hostname: kafka\n    container_name: kafka\n    ports:\n      - \"9092:9092\" # Internal listener\n      - \"9093:9093\" # External listener for applications\n    environment:\n      KAFKA_BROKER_ID: 1\n      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'\n      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT\n      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT:\/\/kafka:9092,PLAINTEXT_HOST:\/\/localhost:9093\n      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1\n      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0\n      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1\n      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1\n    depends_on:\n      - zookeeper\n\n  schema-registry:\n    image: confluentinc\/cp-schema-registry:7.5.0\n    hostname: schema-registry\n    container_name: schema-registry\n    ports:\n      - \"8081:8081\"\n    environment:\n      SCHEMA_REGISTRY_HOST_NAME: schema-registry\n      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'kafka:9092'\n      SCHEMA_REGISTRY_LISTENERS: http:\/\/0.0.0.0:8081\n    depends_on:\n      - kafka\n<\/code><\/pre>\n\n\n\n<p>To start these services, navigate to the directory containing <code>docker-compose.yml<\/code> in 
your terminal and run:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>docker-compose up -d\n<\/code><\/pre>\n\n\n\n<p>This will start ZooKeeper, Kafka, and the Schema Registry in detached mode.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Project Setup with Gradle<\/h2>\n\n\n\n<p>First, initialize a new Spring Boot project (e.g., using Spring Initializr or your IDE).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><code>build.gradle<\/code> Configuration<\/h3>\n\n\n\n<p>Here&#8217;s the essential <code>build.gradle<\/code> content, including the Avro plugin for code generation:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/\/ build.gradle\nplugins {\n    id 'java'\n    id 'org.springframework.boot' version '3.3.1' \/\/ Use a recent Spring Boot version\n    id 'io.spring.dependency-management' version '1.1.5'\n    id 'com.github.davidmc24.gradle.plugin.avro' version '1.8.0' \/\/ Avro plugin\n}\n\ngroup = 'com.example'\nversion = '0.0.1-SNAPSHOT'\nsourceCompatibility = '17'\n\nrepositories {\n    mavenCentral()\n    maven {\n        url 'https:\/\/packages.confluent.io\/maven\/' \/\/ Confluent Maven repository for serializers\n    }\n}\n\ndependencies {\n    implementation 'org.springframework.boot:spring-boot-starter-web'\n    implementation 'org.springframework.kafka:spring-kafka'\n    implementation 'io.confluent:kafka-avro-serializer:7.5.0' \/\/ Confluent Avro serializer\n    implementation 'org.apache.avro:avro:1.11.1' \/\/ Apache Avro library\n\n    testImplementation 'org.springframework.boot:spring-boot-starter-test'\n    testImplementation 'org.springframework.kafka:spring-kafka-test'\n    testRuntimeOnly 'org.junit.platform:junit-platform-launcher'\n}\n\n\/\/ Avro plugin configuration\navro {\n    fieldVisibility = 'PRIVATE' \/\/ Generate private fields with public getters\/setters\n    outputDir = file(\"$buildDir\/generated\/source\/avro\") \/\/ Output directory for generated Java classes\n}\n\nsourceSets {\n    main {\n        java {\n            srcDirs += 
files(\"$buildDir\/generated\/source\/avro\") \/\/ Add generated source to compilation path\n        }\n    }\n}\n\ntasks.named('test') {\n    useJUnitPlatform()\n}\n<\/code><\/pre>\n\n\n\n<p><strong>Key parts of the Gradle setup:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><code>com.github.davidmc24.gradle.plugin.avro<\/code>:<\/strong> This plugin is crucial. It automatically generates Java classes from your <code>.avsc<\/code> (Avro Schema) files.<\/li>\n\n\n\n<li><strong><code>avro {}<\/code> block:<\/strong> Configures the Avro plugin. We set <code>fieldVisibility<\/code> for cleaner generated code and specify <code>outputDir<\/code> to a build-generated folder.<\/li>\n\n\n\n<li><strong><code>sourceSets {}<\/code> block:<\/strong> We add the <code>outputDir<\/code> to the <code>main<\/code> Java source set. This tells Gradle to include the generated Avro Java classes in your project&#8217;s compilation path, making them available for use in your Spring application.<\/li>\n\n\n\n<li><strong>Confluent Maven Repository:<\/strong> Necessary to resolve the <code>kafka-avro-serializer<\/code> dependency.<\/li>\n<\/ul>\n\n\n\n<p>After configuring <code>build.gradle<\/code>, run <code>gradle clean build<\/code> or <code>gradle generateAvroJava<\/code> (the Avro plugin&#8217;s code-generation task) to ensure the setup is correct and the plugin is recognized.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Define Your Avro Schema (<code>.avsc<\/code>)<\/h2>\n\n\n\n<p>Create a directory <code>src\/main\/resources\/avro<\/code> and define your Avro schema file (e.g., <code>PlayerCharacter.avsc<\/code>). 
This schema will serve as the contract for messages exchanged over Kafka.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/\/ src\/main\/resources\/avro\/PlayerCharacter.avsc\n{\n  \"type\": \"record\",\n  \"name\": \"PlayerCharacter\",\n  \"namespace\": \"com.example.dnd.model\",\n  \"doc\": \"Represents a player character in an AD&amp;D 5E world expansion.\",\n  \"fields\": &#91;\n    {\n      \"name\": \"id\",\n      \"type\": \"string\",\n      \"doc\": \"Unique identifier for the character.\"\n    },\n    {\n      \"name\": \"name\",\n      \"type\": \"string\",\n      \"doc\": \"The character's name.\"\n    },\n    {\n      \"name\": \"race\",\n      \"type\": \"string\",\n      \"doc\": \"The character's race (e.g., Elf, Dwarf, Human).\"\n    },\n    {\n      \"name\": \"characterClass\",\n      \"type\": \"string\",\n      \"doc\": \"The character's class (e.g., Fighter, Wizard, Rogue).\"\n    },\n    {\n      \"name\": \"level\",\n      \"type\": \"int\",\n      \"default\": 1,\n      \"doc\": \"The character's current level.\"\n    },\n    {\n      \"name\": \"hitPoints\",\n      \"type\": \"int\",\n      \"doc\": \"The character's current hit points.\"\n    },\n    {\n      \"name\": \"alignment\",\n      \"type\": \"string\",\n      \"default\": \"Neutral\",\n      \"doc\": \"The character's moral and ethical alignment.\"\n    }\n  ]\n}\n<\/code><\/pre>\n\n\n\n<p>After placing this file, run <code>gradle build<\/code>. The Avro plugin will generate <code>PlayerCharacter.java<\/code> in <code>build\/generated\/source\/avro\/com\/example\/dnd\/model\/<\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Spring Boot Configuration (<code>application.yml<\/code>)<\/h2>\n\n\n\n<p>With Kafka now configured as beans in <code>AppConfig.java<\/code>, your <code>application.yml<\/code> becomes much leaner regarding Kafka specifics. 
You&#8217;ll only need to define properties that are not managed directly by your bean definitions or are global Spring Boot properties.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># src\/main\/resources\/application.yml\n# No Kafka producer\/consumer specific properties here,\n# as they are now configured as @Bean definitions in AppConfig.java\n# Any other general Spring Boot properties can remain here.\nserver:\n  port: 8080 # Example: if you want to explicitly set the server port\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Kafka Configuration as Beans (<code>AppConfig.java<\/code>)<\/h2>\n\n\n\n<p>To manage Kafka configuration programmatically as Spring beans, create an <code>AppConfig.java<\/code> class. This provides more flexibility and centralized control over your Kafka client properties.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/\/ src\/main\/java\/com\/example\/dnd\/config\/AppConfig.java\npackage com.example.dnd.config;\n\nimport com.example.dnd.model.PlayerCharacter;\nimport io.confluent.kafka.serializers.KafkaAvroDeserializer;\nimport io.confluent.kafka.serializers.KafkaAvroSerializer;\nimport org.apache.kafka.clients.consumer.ConsumerConfig;\nimport org.apache.kafka.clients.producer.ProducerConfig;\nimport org.apache.kafka.common.serialization.StringDeserializer;\nimport org.apache.kafka.common.serialization.StringSerializer;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\nimport org.springframework.kafka.annotation.EnableKafka;\nimport org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;\nimport org.springframework.kafka.core.*;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\n@Configuration\n@EnableKafka \/\/ Enables Spring's Kafka listener infrastructure\npublic class AppConfig {\n\n    \/\/ Kafka broker address\n    private static final String BOOTSTRAP_SERVERS = \"localhost:9093\";\n    \/\/ Schema Registry URL\n    private static 
final String SCHEMA_REGISTRY_URL = \"http:\/\/localhost:8081\";\n    \/\/ Consumer group ID\n    private static final String CONSUMER_GROUP_ID = \"dnd-character-group\";\n\n    \/**\n     * Configures the Kafka ProducerFactory for sending PlayerCharacter messages.\n     * This factory defines how producers are created, including serializers and Schema Registry integration.\n     *\n     * @return DefaultKafkaProducerFactory for String key and PlayerCharacter value.\n     *\/\n    @Bean\n    public ProducerFactory&lt;String, PlayerCharacter&gt; producerFactory() {\n        Map&lt;String, Object&gt; props = new HashMap&lt;&gt;();\n        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);\n        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);\n        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);\n        \/\/ Schema Registry configuration for the producer\n        props.put(\"schema.registry.url\", SCHEMA_REGISTRY_URL);\n        \/\/ Optional: auto.register.schemas defaults to true, but can be explicitly set\n        \/\/ props.put(\"auto.register.schemas\", true);\n        return new DefaultKafkaProducerFactory&lt;&gt;(props);\n    }\n\n    \/**\n     * Configures the KafkaTemplate, which is the high-level API for sending messages to Kafka.\n     * It uses the producerFactory defined above.\n     *\n     * @param producerFactory The ProducerFactory bean.\n     * @return KafkaTemplate for sending messages.\n     *\/\n    @Bean\n    public KafkaTemplate&lt;String, PlayerCharacter&gt; kafkaTemplate(ProducerFactory&lt;String, PlayerCharacter&gt; producerFactory) {\n        return new KafkaTemplate&lt;&gt;(producerFactory);\n    }\n\n    \/**\n     * Configures the Kafka ConsumerFactory for receiving PlayerCharacter messages.\n     * This factory defines how consumers are created, including deserializers and Schema Registry integration.\n     *\n     * @return 
DefaultKafkaConsumerFactory for String key and PlayerCharacter value.\n     *\/\n    @Bean\n    public ConsumerFactory&lt;String, PlayerCharacter&gt; consumerFactory() {\n        Map&lt;String, Object&gt; props = new HashMap&lt;&gt;();\n        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);\n        props.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP_ID);\n        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, \"earliest\"); \/\/ Start from the beginning if no offset\n        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true); \/\/ Auto commit offsets for simplicity\n        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);\n        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);\n        \/\/ Schema Registry configuration for the consumer\n        props.put(\"schema.registry.url\", SCHEMA_REGISTRY_URL);\n        \/\/ Crucial: tells Avro deserializer to return specific Avro-generated classes\n        props.put(\"specific.avro.reader\", true);\n        return new DefaultKafkaConsumerFactory&lt;&gt;(props);\n    }\n\n    \/**\n     * Configures the ConcurrentKafkaListenerContainerFactory, which is used by @KafkaListener annotations.\n     * It uses the consumerFactory defined above.\n     *\n     * @param consumerFactory The ConsumerFactory bean.\n     * @return ConcurrentKafkaListenerContainerFactory for message listening.\n     *\/\n    @Bean\n    public ConcurrentKafkaListenerContainerFactory&lt;String, PlayerCharacter&gt; kafkaListenerContainerFactory(\n            ConsumerFactory&lt;String, PlayerCharacter&gt; consumerFactory) {\n        ConcurrentKafkaListenerContainerFactory&lt;String, PlayerCharacter&gt; factory =\n                new ConcurrentKafkaListenerContainerFactory&lt;&gt;();\n        factory.setConsumerFactory(consumerFactory);\n        \/\/ Optional: Configure batch listening if needed (requires List&lt;PlayerCharacter&gt; in listener 
method)\n        \/\/ factory.setBatchListener(true);\n        return factory;\n    }\n}\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Spring Kafka Producer (with <code>KafkaTemplate<\/code>)<\/h2>\n\n\n\n<p>This class remains unchanged. The <code>KafkaTemplate<\/code> will now be automatically wired by Spring from the <code>AppConfig<\/code> bean definition.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/\/ src\/main\/java\/com\/example\/dnd\/producer\/PlayerCharacterProducer.java\npackage com.example.dnd.producer;\n\nimport com.example.dnd.model.PlayerCharacter; \/\/ This is the generated Avro class\nimport org.springframework.kafka.core.KafkaTemplate;\nimport org.springframework.stereotype.Service;\n\n@Service\npublic class PlayerCharacterProducer {\n\n    private static final String TOPIC_NAME = \"dnd-characters\";\n\n    \/\/ Spring automatically injects KafkaTemplate based on your AppConfig bean definition\n    private final KafkaTemplate&lt;String, PlayerCharacter&gt; kafkaTemplate;\n\n    public PlayerCharacterProducer(KafkaTemplate&lt;String, PlayerCharacter&gt; kafkaTemplate) {\n        this.kafkaTemplate = kafkaTemplate;\n    }\n\n    \/**\n     * Sends a PlayerCharacter object to the Kafka topic.\n     * The KafkaAvroSerializer will handle serialization and schema registration.\n     *\n     * @param key The message key (e.g., character ID).\n     * @param character The PlayerCharacter object to send.\n     *\/\n    public void sendPlayerCharacter(String key, PlayerCharacter character) {\n        System.out.println(String.format(\"Producing message to topic %s: key=%s, value=%s\", TOPIC_NAME, key, character));\n        kafkaTemplate.send(TOPIC_NAME, key, character);\n        \/\/ For production, you'd typically add a callback to handle successful sends or failures:\n        \/*\n        kafkaTemplate.send(TOPIC_NAME, key, character).whenComplete((result, ex) -&gt; {\n            if (ex == null) {\n                System.out.println(\"Sent 
message successfully: topic=\" + result.getRecordMetadata().topic() +\n                                   \", partition=\" + result.getRecordMetadata().partition() +\n                                   \", offset=\" + result.getRecordMetadata().offset());\n            } else {\n                System.err.println(\"Failed to send message: \" + ex.getMessage());\n            }\n        });\n        *\/\n    }\n}\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Spring Kafka Consumer (with <code>@KafkaListener<\/code> Annotations)<\/h2>\n\n\n\n<p>This class also remains unchanged. The <code>@KafkaListener<\/code> annotation will automatically use the <code>kafkaListenerContainerFactory<\/code> bean defined in <code>AppConfig<\/code>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/\/ src\/main\/java\/com\/example\/dnd\/consumer\/PlayerCharacterConsumer.java\npackage com.example.dnd.consumer;\n\nimport com.example.dnd.model.PlayerCharacter; \/\/ This is the generated Avro class\nimport org.springframework.kafka.annotation.KafkaListener;\nimport org.springframework.kafka.support.KafkaHeaders;\nimport org.springframework.messaging.handler.annotation.Header;\nimport org.springframework.messaging.handler.annotation.Payload;\nimport org.springframework.stereotype.Service;\n\n@Service\npublic class PlayerCharacterConsumer {\n\n    \/**\n     * Listens for messages on the 'dnd-characters' topic.\n     * The KafkaAvroDeserializer automatically deserializes the message bytes\n     * into a PlayerCharacter object based on the schema from the Schema Registry.\n     *\n     * @param character The deserialized PlayerCharacter object (payload).\n     * @param topic The topic from which the message was received.\n     * @param partition The partition from which the message was received.\n     * @param offset The offset of the message within the partition.\n     *\/\n    @KafkaListener(topics = \"dnd-characters\", groupId = \"dnd-character-group\")\n    public void 
listen(@Payload PlayerCharacter character,\n                       @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,\n                       @Header(KafkaHeaders.RECEIVED_PARTITION) int partition,\n                       @Header(KafkaHeaders.OFFSET) long offset) {\n\n        System.out.println(String.format(\"### CONSUMED MESSAGE ###\"));\n        System.out.println(String.format(\"  Topic: %s, Partition: %d, Offset: %d\", topic, partition, offset));\n        System.out.println(String.format(\"  Character ID: %s\", character.getId()));\n        System.out.println(String.format(\"  Name: %s\", character.getName()));\n        System.out.println(String.format(\"  Race: %s\", character.getRace()));\n        System.out.println(String.format(\"  Class: %s\", character.getCharacterClass()));\n        System.out.println(String.format(\"  Level: %d\", character.getLevel()));\n        System.out.println(String.format(\"  HP: %d\", character.getHitPoints()));\n        System.out.println(String.format(\"  Alignment: %s\", character.getAlignment()));\n        System.out.println(\"########################\\n\");\n    }\n\n    \/\/ You can define multiple listeners for different topics or groups,\n    \/\/ or even for different message types if your configuration allows it.\n\n    \/\/ Example of a batch listener (requires additional configuration in KafkaConfig)\n    \/*\n    import java.util.List;\n    @KafkaListener(topics = \"dnd-characters-batch\", groupId = \"dnd-batch-group\", containerFactory = \"batchKafkaListenerContainerFactory\")\n    public void listenBatch(List&lt;PlayerCharacter&gt; characters) {\n        System.out.println(String.format(\"Consumed %d characters in batch.\", characters.size()));\n        characters.forEach(character -&gt; System.out.println(\"- Batch: \" + character.getName()));\n    }\n    *\/\n}\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Example Usage (REST Controller)<\/h2>\n\n\n\n<p>This class also remains unchanged, as it depends 
on <code>PlayerCharacterProducer<\/code>, which is now a Spring-managed bean via <code>AppConfig<\/code>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/\/ src\/main\/java\/com\/example\/dnd\/controller\/CharacterController.java\npackage com.example.dnd.controller;\n\nimport com.example.dnd.model.PlayerCharacter; \/\/ The generated Avro class\nimport com.example.dnd.producer.PlayerCharacterProducer;\nimport org.springframework.http.HttpStatus;\nimport org.springframework.http.ResponseEntity;\nimport org.springframework.web.bind.annotation.*;\n\nimport java.util.UUID;\n\n@RestController\n@RequestMapping(\"\/api\/characters\")\npublic class CharacterController {\n\n    private final PlayerCharacterProducer producer;\n\n    \/\/ Spring automatically injects PlayerCharacterProducer\n    public CharacterController(PlayerCharacterProducer producer) {\n        this.producer = producer;\n    }\n\n    \/**\n     * REST endpoint to create and send a new PlayerCharacter to Kafka.\n     * @param character The PlayerCharacter object sent in the request body.\n     * @return A response indicating success and the character ID.\n     *\/\n    @PostMapping\n    public ResponseEntity&lt;String&gt; createPlayerCharacter(@RequestBody PlayerCharacter character) {\n        \/\/ Assign a unique ID for the character if not provided\n        if (character.getId() == null || character.getId().isEmpty()) {\n            character.setId(UUID.randomUUID().toString());\n        }\n\n        \/\/ Send the character to Kafka\n        producer.sendPlayerCharacter(character.getId(), character);\n\n        return new ResponseEntity&lt;&gt;(\"Player Character sent to Kafka with ID: \" + character.getId(), HttpStatus.CREATED);\n    }\n}\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Main Spring Boot Application<\/h2>\n\n\n\n<p>The main application class stays minimal: because <code>@EnableKafka<\/code> now lives on <code>AppConfig<\/code>, it is no longer needed here.<\/p>\n\n\n\n<pre 
class=\"wp-block-code\"><code>\/\/ src\/main\/java\/com\/example\/dnd\/SpringKafkaAvroDemoApplication.java\npackage com.example.dnd;\n\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\n\/\/ @EnableKafka is now on AppConfig.java\n\/\/ import org.springframework.kafka.annotation.EnableKafka;\n\n@SpringBootApplication\npublic class SpringKafkaAvroDemoApplication {\n\n    public static void main(String&#91;] args) {\n        SpringApplication.run(SpringKafkaAvroDemoApplication.class, args);\n    }\n}\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Running the Application<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Start Docker Containers:<\/strong><code>docker-compose up -d<\/code><\/li>\n\n\n\n<li><strong>Build your Spring Boot application:<\/strong><code>.\/gradlew clean build<br><\/code>This step is vital as it triggers the Avro plugin to generate your <code>PlayerCharacter.java<\/code> class.<\/li>\n\n\n\n<li><strong>Run the Spring Boot application:<\/strong><code>java -jar build\/libs\/spring-kafka-avro-demo-0.0.1-SNAPSHOT.jar<br><\/code>(Adjust the JAR name if your version differs.) You should see Spring Boot starting up and your Kafka listeners initialized.<\/li>\n\n\n\n<li><strong>Send a message using cURL or Postman:<\/strong> Open another terminal and send a POST request to your <code>\/api\/characters<\/code> endpoint. You should see output in your Spring Boot application&#8217;s console indicating that the message was produced and then consumed, with the <code>PlayerCharacter<\/code> object correctly deserialized and its fields accessed.<\/li>\n<\/ol>\n\n\n\n<pre class=\"wp-block-code\"><code><code>curl -X POST http:\/\/localhost:8080\/api\/characters \\\n  -H \"Content-Type: application\/json\" \\\n  -d '{\n    \"name\": \"Arion\",\n    \"race\": \"Half-Elf\",\n    \"characterClass\": \"Bard\",\n    \"level\": 3,\n    \"hitPoints\": 25,\n    \"alignment\": \"Chaotic Good\"\n
}'<\/code><\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Observing Schema Registration<\/h3>\n\n\n\n<p>You can verify the schema was registered by checking the Schema Registry&#8217;s REST API:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>curl http:\/\/localhost:8081\/subjects\n<\/code><\/pre>\n\n\n\n<p>You should see <code>[\"dnd-characters-value\"]<\/code> listed.<\/p>\n\n\n\n<p>Then, to see the registered schema:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>curl http:\/\/localhost:8081\/subjects\/dnd-characters-value\/versions\/1\n<\/code><\/pre>\n\n\n\n<p>This will show you the JSON schema that was registered.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Benefits of This Approach<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Centralized Configuration:<\/strong> All Kafka-related client properties (bootstrap servers, serializers, deserializers, Schema Registry URL, group ID) are defined in one dedicated <code>@Configuration<\/code> class. This improves organization, especially in larger applications.<\/li>\n\n\n\n<li><strong>Programmatic Control:<\/strong> You have more fine-grained control over bean creation and can inject other services or dynamic values if needed, which is harder with <code>application.yml<\/code>.<\/li>\n\n\n\n<li><strong>Testability:<\/strong> Kafka beans can be easily mocked or replaced with test doubles for unit and integration testing.<\/li>\n\n\n\n<li><strong>Type Safety:<\/strong> Working directly with <code>PlayerCharacter<\/code> objects eliminates runtime <code>ClassCastException<\/code> issues and provides compile-time checks.<\/li>\n\n\n\n<li><strong>Data Consistency:<\/strong> The Schema Registry ensures that all data produced to the <code>dnd-characters<\/code> topic adheres to the defined Avro schema.<\/li>\n\n\n\n<li><strong>Schema Evolution:<\/strong> If you need to add a new optional field (e.g., <code>background<\/code>) to your <code>PlayerCharacter.avsc<\/code>, you can update the schema, regenerate the 
Java class, and new producers will use the new schema. Older consumers will still be able to read the messages, and new consumers will correctly interpret both old and new messages thanks to Avro&#8217;s compatibility rules and the Schema Registry.<\/li>\n\n\n\n<li><strong>Performance:<\/strong> Avro&#8217;s binary serialization is compact and efficient, ideal for high-volume data streams.<\/li>\n\n\n\n<li><strong>Clear Contracts:<\/strong> The <code>.avsc<\/code> file acts as a centralized, human-readable, and machine-enforceable contract for your data, improving collaboration across teams.<\/li>\n\n\n\n<li><strong>Spring Boot Convenience:<\/strong> Spring Boot and Spring Kafka annotations dramatically simplify the configuration and development of Kafka producers and consumers.<\/li>\n<\/ul>\n\n\n\n<p>By integrating Avro and the Confluent Schema Registry into your Spring Kafka applications with Gradle, you build a robust, future-proof, and easily manageable data streaming solution for your world-expansion data or any other complex domain.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Understanding Kafka Consumer Groups and Their Relation to Messages<\/h2>\n\n\n\n<p>In Kafka, <strong>consumer groups<\/strong> are a fundamental concept that enables scalable and fault-tolerant consumption of messages. They dictate how messages published to topics are distributed among multiple consumer instances.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What is a Consumer Group ID?<\/h3>\n\n\n\n<p>A <strong>Consumer Group ID<\/strong> is a unique identifier that you assign to a set of consumer instances that collectively consume messages from one or more Kafka topics. 
In our <code>AppConfig.java<\/code>, we&#8217;ve defined it as <code>dnd-character-group<\/code>.<\/p>\n\n\n\n<p><code>private static final String CONSUMER_GROUP_ID = \"dnd-character-group\";<\/code><\/p>\n\n\n\n<p>And then used it in our <code>ConsumerConfig<\/code>:<\/p>\n\n\n\n<p><code>props.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP_ID);<\/code><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">How Consumer Groups Relate to Messages and Topics<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Shared Consumption:<\/strong> All consumers within the <em>same<\/em> consumer group share the responsibility of reading messages from a topic. When a message is published to a topic, only <em>one<\/em> consumer instance within a given consumer group will receive that message.<\/li>\n\n\n\n<li><strong>Scalability:<\/strong><ul><li><strong>Within a Group:<\/strong> If you have a topic with multiple partitions (e.g., 3 partitions), and you run multiple consumer instances within the <em>same<\/em> group (e.g., 3 instances), each instance can consume from a different partition. This allows for parallel processing and increased throughput. If you add more consumer instances than partitions, the extra instances will be idle.<\/li><li><strong>Across Groups:<\/strong> If you need different applications or logical services to process the <em>same set of messages<\/em> from a topic independently, each application should belong to a <em>different<\/em> consumer group. Each group will receive its own copy of all messages.<\/li><\/ul>Consider our <code>dnd-characters<\/code> topic:\n<ul class=\"wp-block-list\">\n<li>If you run two instances of our Spring Boot application, both configured with <code>group-id: dnd-character-group<\/code>, Kafka will distribute the messages between them. 
Each message will be processed by only one of these instances.<\/li>\n\n\n\n<li>If you deploy another entirely different application (e.g., a &#8220;Character Archiver&#8221; service) that also needs to process all <code>PlayerCharacter<\/code> messages, you would configure it with a <em>different<\/em> <code>group-id<\/code> (e.g., <code>character-archiver-group<\/code>). Both <code>dnd-character-group<\/code> and <code>character-archiver-group<\/code> would receive a complete copy of all messages published to <code>dnd-characters<\/code>.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Offset Management:<\/strong> Kafka keeps track of the &#8220;offset&#8221; (the last consumed message position) for each consumer group per partition. This ensures that:\n<ul class=\"wp-block-list\">\n<li>If a consumer instance fails, another instance in the same group can take over its partitions and resume consumption from the last committed offset, preventing data loss (with Kafka&#8217;s default at-least-once semantics, messages after that offset may be processed again).<\/li>\n\n\n\n<li>Messages are not missed within a group during rebalances (when consumers join or leave a group), and duplicates are limited to messages processed since the last committed offset.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Load Balancing:<\/strong> Kafka automatically handles the distribution of partitions among the active consumers in a group. When a consumer joins or leaves a group, or when a partition is added to a topic, Kafka triggers a &#8220;rebalance&#8221; to redistribute the partitions evenly among the remaining or new consumers.<\/li>\n<\/ol>\n\n\n\n<p>In summary, the <code>consumer-group-id<\/code> is vital for defining the consumption behavior and scalability patterns of your Kafka consumers. It allows you to build highly available and horizontally scalable applications that reliably process event streams.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Sending Messages to Multiple Consumer Groups<\/h3>\n\n\n\n<p>This is a common point of confusion for those new to Kafka! 
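Before unpacking it, a toy model may help. This is plain Java with hypothetical names, not the Kafka client API; it only illustrates the delivery rule that each subscribed group receives its own copy of every message, while inside a group exactly one instance handles each copy:

```java
import java.util.*;

public class FanOutSketch {
    // Toy model of topic fan-out: given the consumer groups subscribed to a
    // topic (group id -> its instances), return which single instance in each
    // group handles one published message. Real Kafka chooses the instance by
    // partition assignment; we just take the first instance for simplicity.
    static Map<String, String> deliver(String message,
                                       Map<String, List<String>> groups) {
        Map<String, String> handledBy = new LinkedHashMap<>();
        for (Map.Entry<String, List<String>> e : groups.entrySet()) {
            // One copy per group, handled by exactly one of its instances.
            handledBy.put(e.getKey(), e.getValue().get(0));
        }
        return handledBy;
    }

    public static void main(String[] args) {
        Map<String, List<String>> groups = new LinkedHashMap<>();
        groups.put("dnd-character-group", List.of("app-instance-1", "app-instance-2"));
        groups.put("character-archiver-group", List.of("archiver-1"));
        groups.put("game-analytics-group", List.of("analytics-1"));
        // One publish, three independent copies: one handler per group.
        System.out.println(deliver("PlayerCharacter{id=42}", groups));
    }
}
```

One publish produces one handler per subscribed group, never per instance, which is the behavior the rest of this section explains.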
You <strong>do not<\/strong> explicitly &#8220;send a message to multiple consumer groups.&#8221; Instead, you send a single message to a Kafka topic, and then <strong>each consumer group that is subscribed to that topic will receive its own copy of that message.<\/strong><\/p>\n\n\n\n<p>Let&#8217;s illustrate with our <code>dnd-characters<\/code> topic:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Producer&#8217;s Role:<\/strong> Our <code>PlayerCharacterProducer<\/code> sends a <code>PlayerCharacter<\/code> message to the <code>dnd-characters<\/code> topic: <code>producer.sendPlayerCharacter(character.getId(), character);<\/code> The producer&#8217;s job is simply to publish the message to the designated topic. It has no knowledge of how many consumer groups exist or are subscribed to that topic.<\/li>\n\n\n\n<li><strong>Kafka&#8217;s Role in Distribution:<\/strong> Once the message is in the <code>dnd-characters<\/code> topic, Kafka&#8217;s core functionality takes over:\n<ul class=\"wp-block-list\">\n<li><strong>Independent Copies:<\/strong> For <em>each unique consumer group<\/em> that is subscribed to <code>dnd-characters<\/code>, Kafka will deliver a copy of that message.<\/li>\n\n\n\n<li><strong>Group-Internal Distribution:<\/strong> Within each consumer group, Kafka ensures that <em>only one<\/em> consumer instance receives that particular message, distributing messages across the group&#8217;s instances (and their assigned partitions) for parallel processing.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<p><strong>Practical Example:<\/strong><\/p>\n\n\n\n<p>Imagine you have our Spring Boot application (with <code>group-id: dnd-character-group<\/code>) consuming <code>PlayerCharacter<\/code> events. Now, let&#8217;s say you introduce two new applications:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Application A (Character Archiver):<\/strong> This application is responsible for archiving all character data to a long-term storage solution. 
It subscribes to the <code>dnd-characters<\/code> topic with <code>group-id: character-archiver-group<\/code>.<\/li>\n\n\n\n<li><strong>Application B (Game Analytics):<\/strong> This application analyzes character creation patterns for game balancing purposes. It subscribes to the <code>dnd-characters<\/code> topic with <code>group-id: game-analytics-group<\/code>.<\/li>\n<\/ul>\n\n\n\n<p>When our <code>PlayerCharacterProducer<\/code> sends a single <code>PlayerCharacter<\/code> message to <code>dnd-characters<\/code>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>One instance of the Spring Boot application (within <code>dnd-character-group<\/code>) will receive and process it.<\/li>\n\n\n\n<li>One instance of the Character Archiver application (within <code>character-archiver-group<\/code>) will receive and process it.<\/li>\n\n\n\n<li>One instance of the Game Analytics application (within <code>game-analytics-group<\/code>) will receive and process it.<\/li>\n<\/ul>\n\n\n\n<p>All three applications, representing three distinct consumer groups, will independently process the <em>same message<\/em> from the topic. The producer simply publishes once, and Kafka handles the fan-out to all subscribed groups.<\/p>\n\n\n\n<p>This decoupling is a key strength of Kafka, allowing different services to react to the same events without direct dependencies between them. You achieve &#8220;sending to multiple groups&#8221; by simply configuring multiple <em>distinct consumer groups<\/em> to subscribe to the <em>same topic<\/em>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As a software architect, designing solutions that are scalable, maintainable, and resilient is paramount. In the world of event-driven architectures, Apache Kafka has become a cornerstone for high-throughput, low-latency data streaming. However, simply sending raw bytes over Kafka topics can lead to data inconsistency and make future evolution a nightmare. 
This is where schema support [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3879,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[438],"tags":[69,466,319],"series":[],"class_list":["post-3878","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-spring_messaging","tag-java-2","tag-kafka","tag-spring"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif","jetpack-related-posts":[{"id":3884,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/mastering-polymorphic-data-in-spring-kafka-with-avro-union-types\/","url_meta":{"origin":3878,"position":0},"title":"Mastering Polymorphic Data in Spring Kafka with Avro Union Types","author":"Jeffery Miller","date":"November 24, 2025","format":false,"excerpt":"As a software architect, designing robust, scalable, and adaptable distributed systems is a constant pursuit. When working with Apache Kafka, a common challenge arises: how do you send messages that, while adhering to a generic wrapper, can carry different types of payloads based on the specific event or context? 
Imagine\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif 3x"},"classes":[]},{"id":3881,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/mastering-polymorphic-data-in-spring-kafka-with-avro-with-dedicated-topics\/","url_meta":{"origin":3878,"position":1},"title":"Mastering Polymorphic Data in Spring Kafka with Avro with Dedicated Topics","author":"Jeffery Miller","date":"December 24, 2025","format":false,"excerpt":"As a software architect, designing robust, scalable, and adaptable distributed systems is a constant pursuit. When working with Apache Kafka, a common challenge arises: how do you send messages that, while adhering to a generic wrapper, can carry different types of payloads based on the specific event or context? 
In\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif 3x"},"classes":[]},{"id":3928,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_databases\/%f0%9f%92%a1-implementing-cqrs-with-spring-boot-and-kafka\/","url_meta":{"origin":3878,"position":2},"title":"\ud83d\udca1 Implementing CQRS with Spring Boot and Kafka","author":"Jeffery Miller","date":"November 21, 2025","format":false,"excerpt":"As a software architect, I constantly look for patterns that enhance the scalability and maintainability of microservices. The Command Query Responsibility Segregation (CQRS) pattern is a powerful tool for this, especially when coupled with event-driven architecture (EDA) using Apache Kafka. 
CQRS separates the application into two distinct models: one for\u2026","rel":"","context":"In &quot;Spring Databases&quot;","block_context":{"text":"Spring Databases","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_databases\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 3x"},"classes":[]},{"id":3868,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_events\/streamlining-user-events-integrating-aws-cognito-with-kafka\/","url_meta":{"origin":3878,"position":3},"title":"Streamlining User Events: Integrating AWS Cognito with Kafka","author":"Jeffery Miller","date":"December 24, 2025","format":false,"excerpt":"In modern application architectures, understanding user behavior is crucial. Tracking events like logins, logouts, failed login attempts, and signups can provide valuable insights for analytics, security monitoring, and personalized user experiences. 
This post will guide you through the process of configuring AWS Cognito to send these events to an Apache\u2026","rel":"","context":"In &quot;Spring Events&quot;","block_context":{"text":"Spring Events","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_events\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 3x"},"classes":[]},{"id":3844,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/the-power-of-kafka-connect\/","url_meta":{"origin":3878,"position":4},"title":"The Power of Kafka Connect","author":"Jeffery Miller","date":"December 24, 2025","format":false,"excerpt":"Kafka Connect is a powerful framework for streaming data between Kafka and other systems in a scalable and reliable way. Connectors handle the complexities of data integration, allowing you to focus on your core application logic. 
Sink Connectors are used to export data from Kafka to other systems, and in\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 3x"},"classes":[]},{"id":3715,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/optimizing-spring-kafka-message-delivery-compression-batching-and-delays\/","url_meta":{"origin":3878,"position":5},"title":"Optimizing Spring Kafka Message Delivery: Compression, Batching, and Delays","author":"Jeffery Miller","date":"November 24, 2025","format":false,"excerpt":"Spring Kafka provides a powerful framework for interacting with Apache Kafka, but efficient message delivery requires some fine-tuning. Here\u2019s how to optimize your Spring Kafka producer using compression, batching, and small delays. 1. 
Compression Compressing messages before sending them to Kafka significantly reduces the overall data size, leading to: Lower\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif 3x"},"classes":[]}],"jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3878","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/comments?post=3878"}],"version-history":[{"count":1,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3878\/revisions"}],"predecessor-version":[{"id":3880,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3878\/revisions\/3880"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/media\/3879"}],"wp:attachment":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/media?parent=3878"}],"wp:ter
m":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/categories?post=3878"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/tags?post=3878"},{"taxonomy":"series","embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/series?post=3878"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}