{"id":3884,"date":"2025-11-24T10:00:17","date_gmt":"2025-11-24T15:00:17","guid":{"rendered":"https:\/\/www.mymiller.name\/wordpress\/?p=3884"},"modified":"2025-11-24T10:00:17","modified_gmt":"2025-11-24T15:00:17","slug":"mastering-polymorphic-data-in-spring-kafka-with-avro-union-types","status":"publish","type":"post","link":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/mastering-polymorphic-data-in-spring-kafka-with-avro-union-types\/","title":{"rendered":"Mastering Polymorphic Data in Spring Kafka with Avro Union Types"},"content":{"rendered":"\n<p>As a software architect, designing robust, scalable, and adaptable distributed systems is a constant pursuit. When working with Apache Kafka, a common challenge arises: how do you send messages that, while adhering to a generic wrapper, can carry different types of payloads based on the specific event or context? Imagine a single &#8220;event stream&#8221; topic that might contain a new customer&#8217;s <code>ProfileUpdate<\/code>, a <code>ProductView<\/code>, or a <code>CartAbandonment<\/code> event.<\/p>\n\n\n\n<p>Untyped data in Kafka leads to deserialization nightmares, schema evolution headaches, and brittle integrations. This is where schema management tools like Confluent Schema Registry, paired with a powerful serialization format like Apache Avro, become indispensable. This post will guide you through implementing a sophisticated solution: using Avro Union Types within Spring Kafka to seamlessly handle polymorphic data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Polymorphism Predicament in Event Streams<\/h2>\n\n\n\n<p>In traditional programming, polymorphism allows a single interface or base class to represent objects of different concrete types. In a Kafka topic, where messages are typically byte arrays, achieving this gracefully requires a structured approach. 
Without it, you&#8217;re left with:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Runtime Surprises:<\/strong> Consumers have no compile-time guarantee of what <code>data<\/code> type to expect, leading to <code>ClassCastException<\/code>s or silent data corruption.<\/li>\n\n\n\n<li><strong>Maintenance Nightmares:<\/strong> Any change to a message type, or the introduction of a new one, necessitates manual coordination across all producers and consumers, prone to errors.<\/li>\n\n\n\n<li><strong>Limited Interoperability:<\/strong> Different services trying to process the same topic might have conflicting interpretations of the message content.<\/li>\n<\/ul>\n\n\n\n<p>Avro Union Types, backed by the Schema Registry, solve this by providing a metadata-rich, self-describing serialization format.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Defining Our Polymorphic Avro Schemas<\/h2>\n\n\n\n<p>The heart of this solution lies in your Avro <code>.avsc<\/code> schema definitions. We&#8217;ll start by defining our individual data types, and then crucially, our generic <code>Message<\/code> wrapper will use an Avro Union to hold <em>any one<\/em> of these types in its <code>data<\/code> field.<\/p>\n\n\n\n<p>Let&#8217;s assume our system handles <code>Person<\/code> profiles, <code>Product<\/code> details, and <code>Order<\/code> information.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><code>Person.avsc<\/code><\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Person\",\n  \"namespace\": \"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"firstName\", \"type\": \"string\"},\n    {\"name\": \"lastName\", \"type\": \"string\"},\n    {\"name\": \"age\", \"type\": &#91;\"int\", \"null\"], \"default\": 0}\n  ]\n}\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\"><code>Product.avsc<\/code><\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Product\",\n  \"namespace\": 
\"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"productId\", \"type\": \"string\"},\n    {\"name\": \"name\", \"type\": \"string\"},\n    {\"name\": \"price\", \"type\": \"double\"}\n  ]\n}\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\"><code>Order.avsc<\/code><\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Order\",\n  \"namespace\": \"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"orderId\", \"type\": \"string\"},\n    {\"name\": \"customerId\", \"type\": \"string\"},\n    {\"name\": \"totalAmount\", \"type\": \"double\"},\n    {\"name\": \"items\", \"type\": {\"type\": \"array\", \"items\": \"string\"}}\n  ]\n}\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">The Pivotal <code>Message.avsc<\/code> with Union<\/h4>\n\n\n\n<p>Now, our <code>Message<\/code> schema defines its <code>data<\/code> field as a union of these types. The order matters for schema evolution, as we&#8217;ll discuss later. Including <code>\"null\"<\/code> in the union means the <code>data<\/code> field is optional.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Message\",\n  \"namespace\": \"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"key\", \"type\": \"string\"},\n    {\"name\": \"user\", \"type\": {\"type\": \"string\", \"logicalType\": \"uuid\"}},\n    {\n      \"name\": \"data\",\n      \"type\": &#91;\n        \"null\",\n        \"com.example.schemas.Person\",\n        \"com.example.schemas.Product\",\n        \"com.example.schemas.Order\"\n      ],\n      \"default\": null\n    }\n  ]\n}\n<\/code><\/pre>\n\n\n\n<p>Automated Class Generation:<\/p>\n\n\n\n<p>For Java development, you&#8217;ll use an Avro build plugin (e.g., Maven or Gradle plugin) to automatically generate Java classes (Person, Product, Order, Message) from these .avsc files. 
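<\/p>\n\n\n\n<p>As a sketch, a Maven build might register the <code>avro-maven-plugin<\/code> like this (the version and directory paths are illustrative; adjust them to your project):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&lt;plugin&gt;\n  &lt;groupId&gt;org.apache.avro&lt;\/groupId&gt;\n  &lt;artifactId&gt;avro-maven-plugin&lt;\/artifactId&gt;\n  &lt;version&gt;1.11.3&lt;\/version&gt;\n  &lt;executions&gt;\n    &lt;execution&gt;\n      &lt;phase&gt;generate-sources&lt;\/phase&gt;\n      &lt;goals&gt;\n        &lt;goal&gt;schema&lt;\/goal&gt;\n      &lt;\/goals&gt;\n      &lt;configuration&gt;\n        &lt;sourceDirectory&gt;${project.basedir}\/src\/main\/avro&lt;\/sourceDirectory&gt;\n        &lt;outputDirectory&gt;${project.basedir}\/target\/generated-sources\/avro&lt;\/outputDirectory&gt;\n      &lt;\/configuration&gt;\n    &lt;\/execution&gt;\n  &lt;\/executions&gt;\n&lt;\/plugin&gt;\n<\/code><\/pre>\n\n\n\n<p>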
These generated classes are crucial for type-safe interaction in your Spring Boot application.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Spring Boot and Kafka Setup<\/h2>\n\n\n\n<p>The foundational Spring Kafka and Schema Registry setup remains largely the same as for single-schema messages.<\/p>\n\n\n\n<p><strong>Key Dependencies:<\/strong> You&#8217;ll need <code>spring-boot-starter-web<\/code>, <code>spring-kafka<\/code>, <code>io.confluent:kafka-avro-serializer<\/code>, and <code>org.apache.avro:avro<\/code>.<\/p>\n\n\n\n<p><strong><code>application.yml<\/code> Configuration Highlights:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>spring:\n  kafka:\n    producer:\n      bootstrap-servers: localhost:9092\n      key-serializer: org.apache.kafka.common.serialization.StringSerializer\n      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer\n      properties:\n        schema.registry.url: http:\/\/localhost:8081 # Your Schema Registry URL\n    consumer:\n      bootstrap-servers: localhost:9092\n      group-id: polymorphic-message-group\n      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer\n      value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer\n      properties:\n        schema.registry.url: http:\/\/localhost:8081\n        specific.avro.reader: true # CRUCIAL for deserializing to specific Avro classes\n<\/code><\/pre>\n\n\n\n<p>The <code>specific.avro.reader: true<\/code> property is vital. It tells <code>KafkaAvroDeserializer<\/code> to attempt to deserialize the Avro message into your generated Java classes rather than a generic <code>GenericRecord<\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Producing Polymorphic Messages<\/h2>\n\n\n\n<p>On the producer side, sending different types of <code>data<\/code> payloads becomes straightforward. 
Your <code>KafkaTemplate<\/code> will continue to use <code>Object<\/code> as its value type, relying on <code>KafkaAvroSerializer<\/code> to handle the magic. Since your generated Avro classes (like <code>Person<\/code>, <code>Product<\/code>, <code>Order<\/code>) implement <code>org.apache.avro.specific.SpecificRecord<\/code>, you can use this interface for your <code>dataPayload<\/code> argument.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import com.example.schemas.Message;\nimport org.apache.avro.specific.SpecificRecord;\nimport org.springframework.kafka.core.KafkaTemplate;\nimport org.springframework.stereotype.Service;\n\nimport java.util.UUID;\n\n@Service\npublic class PolymorphicMessageProducer {\n\n    private final KafkaTemplate&lt;String, Object&gt; kafkaTemplate;\n    private static final String TOPIC = \"polymorphic-messages\";\n\n    \/\/ Constructor injection\n    public PolymorphicMessageProducer(KafkaTemplate&lt;String, Object&gt; kafkaTemplate) {\n        this.kafkaTemplate = kafkaTemplate;\n    }\n\n    public void sendMessage(String key, SpecificRecord dataPayload) {\n        \/\/ Avro's generated builder for Message handles the union automatically\n        Message message = Message.newBuilder()\n            .setKey(key)\n            .setUser(UUID.randomUUID().toString())\n            .setData(dataPayload) \/\/ This can be Person, Product, Order, etc.\n            .build();\n\n        kafkaTemplate.send(TOPIC, key, message)\n            .whenComplete((result, ex) -&gt; {\n                if (ex == null) {\n                    System.out.println(\"Sent message with data type: \" + dataPayload.getClass().getSimpleName() + \" to offset: \" + result.getRecordMetadata().offset());\n                } else {\n                    System.err.println(\"Failed to send message: \" + ex.getMessage());\n                }\n            });\n    }\n}\n<\/code><\/pre>\n\n\n\n<p><em>Note: In a real application, you&#8217;d likely have REST endpoints or service 
methods calling <code>sendMessage<\/code> with instances of <code>Person<\/code>, <code>Product<\/code>, or <code>Order<\/code> objects.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Consuming and Differentiating Polymorphic Messages<\/h2>\n\n\n\n<p>The consumer is where you&#8217;ll differentiate between the various data types. Thanks to <code>specific.avro.reader: true<\/code>, <code>message.getData()<\/code> will return the concrete Avro-generated object (<code>Person<\/code>, <code>Product<\/code>, or <code>Order<\/code>), allowing you to use <code>instanceof<\/code> checks for type-specific processing.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import com.example.schemas.Message;\nimport com.example.schemas.Person;\nimport com.example.schemas.Product;\nimport com.example.schemas.Order;\nimport org.springframework.kafka.annotation.KafkaListener;\nimport org.springframework.stereotype.Component;\n\n@Component\npublic class PolymorphicMessageConsumer {\n\n    @KafkaListener(topics = \"polymorphic-messages\", groupId = \"${spring.kafka.consumer.group-id}\")\n    public void listen(Message message) {\n        System.out.println(\"Received Message Key: \" + message.getKey() + \", User: \" + message.getUser());\n\n        Object data = message.getData(); \/\/ This will be an instance of Person, Product, or Order\n\n        if (data == null) {\n            System.out.println(\"  -&gt; Message contained no data payload.\");\n        } else if (data instanceof Person) {\n            Person personData = (Person) data;\n            System.out.println(\"  -&gt; Data Type: Person. Details: \" + personData.getFirstName() + \" \" + personData.getLastName());\n            \/\/ Process Person specific data\n        } else if (data instanceof Product) {\n            Product productData = (Product) data;\n            System.out.println(\"  -&gt; Data Type: Product. 
Details: \" + productData.getName() + \" (ID: \" + productData.getProductId() + \")\");\n            \/\/ Process Product specific data\n        } else if (data instanceof Order) {\n            Order orderData = (Order) data;\n            System.out.println(\"  -&gt; Data Type: Order. Details: Order ID: \" + orderData.getOrderId() + \", Total: \" + orderData.getTotalAmount());\n            \/\/ Process Order specific data\n        } else {\n            System.out.println(\"  -&gt; Unrecognized Data Type in Union: \" + data.getClass().getName());\n        }\n        System.out.println(\"---\");\n    }\n}\n<\/code><\/pre>\n\n\n\n<p>This pattern provides strong type safety at compile time and allows for flexible processing at runtime. For very complex scenarios with many types, consider implementing a Visitor pattern to avoid long <code>if-else if<\/code> chains.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Navigating Schema Evolution with Union Types<\/h2>\n\n\n\n<p>Schema evolution is a significant advantage of Avro, but with unions, it requires careful attention:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Adding New Types:<\/strong> When you introduce a new type (e.g., <code>CouponApplied<\/code>), you <strong>should<\/strong> append it to the <em>end<\/em> of the union list in <code>Message.avsc<\/code>. This is backward compatible: consumers upgraded to the new schema can still read every existing message. It is <em>not<\/em> forward compatible, though: consumers still on the old schema cannot deserialize messages that carry the new type, so upgrade your consumers before producers begin sending it.<pre class=\"wp-block-code\"><code>\"type\": [\n  \"null\",\n  \"com.example.schemas.Person\",\n  \"com.example.schemas.Product\",\n  \"com.example.schemas.Order\",\n  \"com.example.schemas.CouponApplied\" \/\/ Add new types here\n],\n<\/code><\/pre><\/li>\n\n\n\n<li><strong>Removing Types:<\/strong> Removing a type from a union is <strong>not<\/strong> backward compatible. Old messages containing the removed type will cause deserialization failures for consumers using the new schema. 
Avoid this if you need full backward compatibility.<\/li>\n\n\n\n<li><strong>Reordering Types:<\/strong> Treat reordering the types within a union as a breaking change. The branch index written on the wire refers to the writer&#8217;s schema, so decoding still works when that schema is fetched from the Schema Registry, but any consumer or tool that interprets the bytes without the original writer schema will resolve the wrong type.<\/li>\n<\/ol>\n\n\n\n<p>Always test schema changes thoroughly and ensure your Schema Registry&#8217;s compatibility settings (e.g., <code>BACKWARD<\/code>, <code>FORWARD<\/code>, <code>FULL<\/code>) are aligned with your evolution strategy.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Key Architectural Considerations<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Schema Management Discipline:<\/strong> With more schemas and union types, strict version control and clear communication between teams regarding schema changes become even more critical.<\/li>\n\n\n\n<li><strong>Performance vs. Flexibility:<\/strong> While Avro is highly efficient, having a very large union might introduce a tiny overhead compared to a single, fixed schema due to type identification during serialization\/deserialization. For most use cases, this is negligible.<\/li>\n\n\n\n<li><strong>Consumer Logic Complexity:<\/strong> As your union grows, the <code>instanceof<\/code> logic in your consumers can become unwieldy. Consider abstraction patterns like the Visitor pattern or a command-like pattern to decouple type-specific processing from the main listener.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>By strategically employing Avro Union Types within a generic message wrapper, coupled with Spring Kafka and Confluent Schema Registry, you can build a highly flexible and robust event-driven architecture. This pattern empowers you to send diverse types of data through a single Kafka topic, enforce strong schema contracts, and gracefully manage schema evolution, all while maintaining the high standards of a well-designed software solution. 
Embrace Avro unions to unlock true polymorphism in your Kafka streams!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As a software architect, designing robust, scalable, and adaptable distributed systems is a constant pursuit. When working with Apache Kafka, a common challenge arises: how do you send messages that, while adhering to a generic wrapper, can carry different types of payloads based on the specific event or context? Imagine a single &#8220;event stream&#8221; topic [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3885,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[438],"tags":[69,466,319],"series":[],"class_list":["post-3884","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-spring_messaging","tag-java-2","tag-kafka","tag-spring"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif","jetpack-related-posts":[{"id":3878,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/building-robust-kafka-applications-with-spring-boot-and-avro-schema-registry\/","url_meta":{"origin":3884,"position":0},"title":"Building Robust Kafka Applications with Spring Boot, and Avro Schema 
Registry","author":"Jeffery Miller","date":"November 24, 2025","format":false,"excerpt":"As a software architect, designing solutions that are scalable, maintainable, and resilient is paramount. In the world of event-driven architectures, Apache Kafka has become a cornerstone for high-throughput, low-latency data streaming. However, simply sending raw bytes over Kafka topics can lead to data inconsistency and make future evolution a nightmare.\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif 3x"},"classes":[]},{"id":3881,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/mastering-polymorphic-data-in-spring-kafka-with-avro-with-dedicated-topics\/","url_meta":{"origin":3884,"position":1},"title":"Mastering Polymorphic Data in Spring Kafka with Avro with Dedicated Topics","author":"Jeffery Miller","date":"December 24, 2025","format":false,"excerpt":"As a software architect, designing robust, scalable, and adaptable distributed systems is a constant pursuit. When working with Apache Kafka, a common challenge arises: how do you send messages that, while adhering to a generic wrapper, can carry different types of payloads based on the specific event or context? 
In\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif 3x"},"classes":[]},{"id":3868,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_events\/streamlining-user-events-integrating-aws-cognito-with-kafka\/","url_meta":{"origin":3884,"position":2},"title":"Streamlining User Events: Integrating AWS Cognito with Kafka","author":"Jeffery Miller","date":"December 24, 2025","format":false,"excerpt":"In modern application architectures, understanding user behavior is crucial. Tracking events like logins, logouts, failed login attempts, and signups can provide valuable insights for analytics, security monitoring, and personalized user experiences. 
This post will guide you through the process of configuring AWS Cognito to send these events to an Apache\u2026","rel":"","context":"In &quot;Spring Events&quot;","block_context":{"text":"Spring Events","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_events\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 3x"},"classes":[]},{"id":3928,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_databases\/%f0%9f%92%a1-implementing-cqrs-with-spring-boot-and-kafka\/","url_meta":{"origin":3884,"position":3},"title":"\ud83d\udca1 Implementing CQRS with Spring Boot and Kafka","author":"Jeffery Miller","date":"November 21, 2025","format":false,"excerpt":"As a software architect, I constantly look for patterns that enhance the scalability and maintainability of microservices. The Command Query Responsibility Segregation (CQRS) pattern is a powerful tool for this, especially when coupled with event-driven architecture (EDA) using Apache Kafka. 
CQRS separates the application into two distinct models: one for\u2026","rel":"","context":"In &quot;Spring Databases&quot;","block_context":{"text":"Spring Databases","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_databases\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 3x"},"classes":[]},{"id":3715,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/optimizing-spring-kafka-message-delivery-compression-batching-and-delays\/","url_meta":{"origin":3884,"position":4},"title":"Optimizing Spring Kafka Message Delivery: Compression, Batching, and Delays","author":"Jeffery Miller","date":"November 24, 2025","format":false,"excerpt":"Spring Kafka provides a powerful framework for interacting with Apache Kafka, but efficient message delivery requires some fine-tuning. Here\u2019s how to optimize your Spring Kafka producer using compression, batching, and small delays. 1. 
Compression Compressing messages before sending them to Kafka significantly reduces the overall data size, leading to: Lower\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/09\/management-1137648_1280-jpg.avif 3x"},"classes":[]},{"id":3844,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/the-power-of-kafka-connect\/","url_meta":{"origin":3884,"position":5},"title":"The Power of Kafka Connect","author":"Jeffery Miller","date":"December 24, 2025","format":false,"excerpt":"Kafka Connect is a powerful framework for streaming data between Kafka and other systems in a scalable and reliable way. Connectors handle the complexities of data integration, allowing you to focus on your core application logic. 
Sink Connectors are used to export data from Kafka to other systems, and in\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 3x"},"classes":[]}],"jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3884","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/comments?post=3884"}],"version-history":[{"count":1,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3884\/revisions"}],"predecessor-version":[{"id":3886,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3884\/revisions\/3886"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/media\/3885"}],"wp:attachment":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/media?parent=3884"}],"wp:term":[{"taxonomy":"category","embeddable"
:true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/categories?post=3884"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/tags?post=3884"},{"taxonomy":"series","embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/series?post=3884"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}