{"id":3881,"date":"2025-12-24T10:00:17","date_gmt":"2025-12-24T15:00:17","guid":{"rendered":"https:\/\/www.mymiller.name\/wordpress\/?p=3881"},"modified":"2025-12-24T10:00:17","modified_gmt":"2025-12-24T15:00:17","slug":"mastering-polymorphic-data-in-spring-kafka-with-avro-with-dedicated-topics","status":"publish","type":"post","link":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/mastering-polymorphic-data-in-spring-kafka-with-avro-with-dedicated-topics\/","title":{"rendered":"Mastering Polymorphic Data in Spring Kafka with Avro with Dedicated Topics"},"content":{"rendered":"\n<p>As a software architect, designing robust, scalable, and adaptable distributed systems is a constant pursuit. When working with Apache Kafka, a common challenge arises: how do you send messages that, while adhering to a generic wrapper, can carry different types of payloads based on the specific event or context? In our previous discussion, we explored using Avro Union Types within a single topic. Now, let&#8217;s explore an equally powerful and often simpler alternative: <strong>leveraging dedicated Kafka topics for each specific data type<\/strong>.<\/p>\n\n\n\n<p>This approach streamlines consumer logic and can provide clearer topic semantics, making it a strong contender for managing polymorphic data in your event-driven architectures.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Polymorphism Predicament and Topic Specialization<\/h2>\n\n\n\n<p>While Avro Union Types elegantly solve polymorphism within a single message field, sometimes the natural separation of data aligns better with dedicated topics. 
For instance, <code>ProfileUpdate<\/code> events might belong on a <code>user-profile-updates<\/code> topic, <code>ProductView<\/code> events on a <code>product-views<\/code> topic, and <code>CartAbandonment<\/code> events on an <code>e-commerce-cart-events<\/code> topic.<\/p>\n\n\n\n<p>This strategy offers:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Simplified Consumer Logic:<\/strong> Each consumer listener can be directly typed to the specific message it expects, eliminating the need for <code>instanceof<\/code> checks.<\/li>\n\n\n\n<li><strong>Clearer Topic Semantics:<\/strong> Topic names can clearly indicate the type of data they contain, improving discoverability and understanding across your organization.<\/li>\n\n\n\n<li><strong>Easier Access Control:<\/strong> Kafka ACLs can be applied per topic, allowing more granular permissions for different data types.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Defining Our Specialized Avro Schemas<\/h2>\n\n\n\n<p>Instead of a single <code>Message.avsc<\/code> with a union, we will define separate <code>Message<\/code> schemas, each specialized for a particular data type. 
The <code>Person<\/code>, <code>Product<\/code>, and <code>Order<\/code> schemas remain the same.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><code>Person.avsc<\/code> (as before)<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Person\",\n  \"namespace\": \"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"firstName\", \"type\": \"string\"},\n    {\"name\": \"lastName\", \"type\": \"string\"},\n    {\"name\": \"age\", \"type\": &#91;\"int\", \"null\"], \"default\": 0}\n  ]\n}\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\"><code>Product.avsc<\/code> (as before)<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Product\",\n  \"namespace\": \"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"productId\", \"type\": \"string\"},\n    {\"name\": \"name\", \"type\": \"string\"},\n    {\"name\": \"price\", \"type\": \"double\"}\n  ]\n}\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\"><code>Order.avsc<\/code> (as before)<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Order\",\n  \"namespace\": \"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"orderId\", \"type\": \"string\"},\n    {\"name\": \"customerId\", \"type\": \"string\"},\n    {\"name\": \"totalAmount\", \"type\": \"double\"},\n    {\"name\": \"items\", \"type\": {\"type\": \"array\", \"items\": \"string\"}}\n  ]\n}\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">Dedicated Message Wrappers (<code>Message_Person.avsc<\/code>, etc.)<\/h4>\n\n\n\n<p>Now, each <code>Message<\/code> schema will directly reference its specific data type.<\/p>\n\n\n\n<p><strong><code>Message_Person.avsc<\/code><\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Message_Person\",\n  \"namespace\": \"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"key\", \"type\": \"string\"},\n    {\"name\": 
\"user\", \"type\": {\"type\": \"string\", \"logicalType\": \"uuid\"}},\n    {\"name\": \"data\", \"type\": \"com.example.schemas.Person\"}\n  ]\n}\n<\/code><\/pre>\n\n\n\n<p><strong><code>Message_Product.avsc<\/code><\/strong> (Similar for Product, replacing <code>Person<\/code> with <code>Product<\/code>)<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Message_Product\",\n  \"namespace\": \"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"key\", \"type\": \"string\"},\n    {\"name\": \"user\", \"type\": {\"type\": \"string\", \"logicalType\": \"uuid\"}},\n    {\"name\": \"data\", \"type\": \"com.example.schemas.Product\"}\n  ]\n}\n<\/code><\/pre>\n\n\n\n<p><strong><code>Message_Order.avsc<\/code><\/strong> (Similar for Order, replacing <code>Person<\/code> with <code>Order<\/code>)<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"type\": \"record\",\n  \"name\": \"Message_Order\",\n  \"namespace\": \"com.example.schemas\",\n  \"fields\": &#91;\n    {\"name\": \"key\", \"type\": \"string\"},\n    {\"name\": \"user\", \"type\": {\"type\": \"string\", \"logicalType\": \"uuid\"}},\n    {\"name\": \"data\", \"type\": \"com.example.schemas.Order\"}\n  ]\n}\n<\/code><\/pre>\n\n\n\n<p><strong>Automated Class Generation:<\/strong> Ensure your Avro build plugin (e.g., the <code>avro-maven-plugin<\/code>) is configured to generate Java classes for <code>Person<\/code>, <code>Product<\/code>, <code>Order<\/code>, <code>Message_Person<\/code>, <code>Message_Product<\/code>, and <code>Message_Order<\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Spring Boot and Kafka Setup<\/h2>\n\n\n\n<p>The core Spring Kafka and Schema Registry setup remains largely unchanged.<\/p>\n\n\n\n<p><strong>Key Dependencies:<\/strong> <code>spring-boot-starter-web<\/code>, <code>spring-kafka<\/code>, <code>io.confluent:kafka-avro-serializer<\/code>, and <code>org.apache.avro:avro<\/code>.<\/p>\n\n\n\n<p><strong><code>application.yml<\/code> Configuration Highlights:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>spring:\n  kafka:\n    producer:\n      
bootstrap-servers: localhost:9092\n      key-serializer: org.apache.kafka.common.serialization.StringSerializer\n      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer\n      properties:\n        schema.registry.url: http:\/\/localhost:8081 # Your Schema Registry URL\n    consumer:\n      bootstrap-servers: localhost:9092\n      group-id: dedicated-topic-group # Unique consumer group ID\n      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer\n      value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer\n      properties:\n        schema.registry.url: http:\/\/localhost:8081\n        specific.avro.reader: true # CRUCIAL for deserializing to specific Avro classes\n<\/code><\/pre>\n\n\n\n<p>The <code>specific.avro.reader: true<\/code> property is still vital for the deserializer to return your generated Java classes.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Producing Messages to Dedicated Topics<\/h2>\n\n\n\n<p>On the producer side, you will now send messages to different Kafka topics based on the type of data payload. 
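<\/p>\n\n\n\n<p>It can help to declare the dedicated topics as beans up front; Spring Boot&#8217;s auto-configured <code>KafkaAdmin<\/code> will then create any topic that does not already exist at startup. A minimal sketch (the partition and replica counts are illustrative assumptions, not recommendations):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import org.apache.kafka.clients.admin.NewTopic;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\nimport org.springframework.kafka.config.TopicBuilder;\n\n@Configuration\npublic class DedicatedTopicConfig {\n\n    \/\/ One NewTopic bean per dedicated topic; the auto-configured KafkaAdmin\n    \/\/ creates each topic at startup if it does not already exist.\n    @Bean\n    public NewTopic personTopic() {\n        return TopicBuilder.name(\"person-data-topic\").partitions(3).replicas(1).build();\n    }\n\n    @Bean\n    public NewTopic productTopic() {\n        return TopicBuilder.name(\"product-data-topic\").partitions(3).replicas(1).build();\n    }\n\n    @Bean\n    public NewTopic orderTopic() {\n        return TopicBuilder.name(\"order-data-topic\").partitions(3).replicas(1).build();\n    }\n}\n<\/code><\/pre>\n\n\n\n<p>With the topics in place, you can route each payload type to its own destination. 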
This means you might have multiple <code>KafkaTemplate<\/code> instances (or a single one with a dynamic topic name) or distinct producer services.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import com.example.schemas.Message_Person;\nimport com.example.schemas.Message_Product;\nimport com.example.schemas.Message_Order;\nimport com.example.schemas.Person;\nimport com.example.schemas.Product;\nimport com.example.schemas.Order;\nimport org.springframework.kafka.core.KafkaTemplate;\nimport org.springframework.stereotype.Service;\n\nimport java.util.UUID;\n\n@Service\npublic class DedicatedTopicProducer {\n\n    private final KafkaTemplate&lt;String, Object&gt; kafkaTemplate; \/\/ Still Object for generic Avro types\n    private static final String PERSON_TOPIC = \"person-data-topic\";\n    private static final String PRODUCT_TOPIC = \"product-data-topic\";\n    private static final String ORDER_TOPIC = \"order-data-topic\";\n\n    \/\/ Constructor injection\n    public DedicatedTopicProducer(KafkaTemplate&lt;String, Object&gt; kafkaTemplate) {\n        this.kafkaTemplate = kafkaTemplate;\n    }\n\n    public void sendPersonMessage(String key, Person personData) {\n        Message_Person message = Message_Person.newBuilder()\n            .setKey(key)\n            .setUser(UUID.randomUUID().toString())\n            .setData(personData)\n            .build();\n        kafkaTemplate.send(PERSON_TOPIC, key, message)\n            .whenComplete((result, ex) -&gt; {\n                if (ex == null) {\n                    System.out.println(\"Sent Person message to \" + PERSON_TOPIC + \" at offset: \" + result.getRecordMetadata().offset());\n                } else {\n                    System.err.println(\"Failed to send Person message: \" + ex.getMessage());\n                }\n            });\n    }\n\n    public void sendProductMessage(String key, Product productData) {\n        Message_Product message = Message_Product.newBuilder()\n            .setKey(key)\n          
  .setUser(UUID.randomUUID().toString())\n            .setData(productData)\n            .build();\n        kafkaTemplate.send(PRODUCT_TOPIC, key, message)\n            .whenComplete((result, ex) -&gt; {\n                if (ex == null) {\n                    System.out.println(\"Sent Product message to \" + PRODUCT_TOPIC + \" at offset: \" + result.getRecordMetadata().offset());\n                } else {\n                    System.err.println(\"Failed to send Product message: \" + ex.getMessage());\n                }\n            });\n    }\n\n    public void sendOrderMessage(String key, Order orderData) {\n        Message_Order message = Message_Order.newBuilder()\n            .setKey(key)\n            .setUser(UUID.randomUUID().toString())\n            .setData(orderData)\n            .build();\n        kafkaTemplate.send(ORDER_TOPIC, key, message)\n            .whenComplete((result, ex) -&gt; {\n                if (ex == null) {\n                    System.out.println(\"Sent Order message to \" + ORDER_TOPIC + \" at offset: \" + result.getRecordMetadata().offset());\n                } else {\n                    System.err.println(\"Failed to send Order message: \" + ex.getMessage());\n                }\n            });\n    }\n}\n<\/code><\/pre>\n\n\n\n<p><em>Note: In a real application, these <code>sendXxxMessage<\/code> methods would be called from various service layers or REST controllers.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Consuming from Dedicated Topics<\/h2>\n\n\n\n<p>The consumer side becomes much cleaner. Each <code>@KafkaListener<\/code> can subscribe to a specific topic and directly receive the strongly typed <code>Message_Person<\/code>, <code>Message_Product<\/code>, or <code>Message_Order<\/code> object. 
The <code>instanceof<\/code> checks are no longer necessary, as the type is known by the topic itself.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import com.example.schemas.Message_Person;\nimport com.example.schemas.Message_Product;\nimport com.example.schemas.Message_Order;\nimport com.example.schemas.Person;\nimport com.example.schemas.Product;\nimport com.example.schemas.Order;\nimport org.springframework.kafka.annotation.KafkaListener;\nimport org.springframework.stereotype.Component;\n\n@Component\npublic class DedicatedTopicConsumer {\n\n    \/\/ Listener for Person messages\n    @KafkaListener(topics = \"${person-data-topic}\", groupId = \"${spring.kafka.consumer.group-id}\")\n    public void listenPersonMessages(Message_Person message) {\n        System.out.println(\"Received Person Message Key: \" + message.getKey() + \", User: \" + message.getUser());\n        Person personData = message.getData();\n        if (personData != null) {\n            System.out.println(\"  -&gt; Person Details: \" + personData.getFirstName() + \" \" + personData.getLastName() + \", Age: \" + personData.getAge());\n        }\n        System.out.println(\"---\");\n    }\n\n    \/\/ Listener for Product messages\n    @KafkaListener(topics = \"${product-data-topic}\", groupId = \"${spring.kafka.consumer.group-id}\")\n    public void listenProductMessages(Message_Product message) {\n        System.out.println(\"Received Product Message Key: \" + message.getKey() + \", User: \" + message.getUser());\n        Product productData = message.getData();\n        if (productData != null) {\n            System.out.println(\"  -&gt; Product Details: ID=\" + productData.getProductId() + \", Name=\" + productData.getName() + \", Price=\" + productData.getPrice());\n        }\n        System.out.println(\"---\");\n    }\n\n    \/\/ Listener for Order messages\n    @KafkaListener(topics = \"${order-data-topic}\", groupId = \"${spring.kafka.consumer.group-id}\")\n    public void 
listenOrderMessages(Message_Order message) {\n        System.out.println(\"Received Order Message Key: \" + message.getKey() + \", User: \" + message.getUser());\n        Order orderData = message.getData();\n        if (orderData != null) {\n            System.out.println(\"  -&gt; Order Details: ID=\" + orderData.getOrderId() + \", Customer=\" + orderData.getCustomerId() + \", Total=\" + orderData.getTotalAmount() + \", Items=\" + orderData.getItems());\n        }\n        System.out.println(\"---\");\n    }\n}\n<\/code><\/pre>\n\n\n\n<p><em>Note: You would also need to define these topic-name properties in your <code>application.yml<\/code> so that the <code>${...}<\/code> property placeholders in the <code>@KafkaListener<\/code> annotations resolve to the same topics the producer writes to:<\/em><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># ... (existing kafka config)\nperson-data-topic: person-data-topic\nproduct-data-topic: product-data-topic\norder-data-topic: order-data-topic\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Navigating Schema Evolution with Dedicated Topics<\/h2>\n\n\n\n<p>Schema evolution is still handled by Avro and the Schema Registry, but the rules apply to the specific message schema for each topic (e.g., <code>Message_Person.avsc<\/code> evolves independently of <code>Message_Product.avsc<\/code>).<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Adding Fields:<\/strong> You can add new fields with a default value or as nullable (<code>[\"type\", \"null\"]<\/code>) to any of your individual schemas (e.g., <code>Person.avsc<\/code> or <code>Message_Person.avsc<\/code>). This remains backward compatible.<\/li>\n\n\n\n<li><strong>Removing Fields:<\/strong> As with unions, removing fields is generally <strong>not<\/strong> backward compatible without careful planning.<\/li>\n\n\n\n<li><strong>Introducing New Data Types:<\/strong> When a new data type is introduced, you simply define its Avro schema, create a new dedicated <code>Message_NewType.avsc<\/code> wrapper, and set up a new Kafka topic and corresponding producer\/consumer. 
This cleanly isolates the new type without impacting existing topics.<\/li>\n<\/ul>\n\n\n\n<p>This strategy often simplifies schema evolution management: changes to one data type&#8217;s schema cannot affect the serialization of other, unrelated data types, as they could if all types shared a single union schema.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Key Architectural Considerations<\/h2>\n\n\n\n<p>Choosing between Avro Union Types and dedicated topics depends on your specific use case and architectural preferences:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Topic Proliferation:<\/strong> This strategy leads to more Kafka topics. While Kafka handles a large number of topics efficiently, it can increase operational overhead for monitoring, alerting, and security management if not well-managed.<\/li>\n\n\n\n<li><strong>Simplicity vs. Flexibility:<\/strong> Dedicated topics simplify consumer logic, as each listener focuses on a single message type. Union types offer extreme flexibility within a single topic but require more complex consumer routing.<\/li>\n\n\n\n<li><strong>Data Locality\/Ordering:<\/strong> If the order of <em>all<\/em> polymorphic events is crucial (e.g., a complex business process where <code>Person<\/code> updates, <code>Product<\/code> views, and <code>Order<\/code> creations must be processed in a strict global order), a single topic with union types might be preferred, as Kafka only guarantees order within a partition.<\/li>\n\n\n\n<li><strong>Schema Management Discipline:<\/strong> Regardless of the approach, disciplined schema definition, versioning, and compatibility testing with the Schema Registry remain paramount.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>For scenarios where different data types naturally align with distinct processing pipelines or require clear logical separation, adopting dedicated Kafka topics with specific Avro schemas for each data type offers a clean, maintainable, and type-safe solution. 
This strategy, alongside the power of Spring Kafka and Confluent Schema Registry, provides software architects with yet another robust tool to design highly flexible and resilient event-driven systems. By understanding the trade-offs, you can select the approach that best fits your system&#8217;s unique requirements.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As a software architect, designing robust, scalable, and adaptable distributed systems is a constant pursuit. When working with Apache Kafka, a common challenge arises: how do you send messages that, while adhering to a generic wrapper, can carry different types of payloads based on the specific event or context? In our previous discussion, we explored [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3882,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[438],"tags":[69,466,319],"series":[],"class_list":["post-3881","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-spring_messaging","tag-java-2","tag-kafka","tag-spring"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/migration-8576653_1280.avif","jetpack-related-posts":[{"id":3884,"url":"https:\/\/www.mymiller.name\/wo
rdpress\/spring_messaging\/mastering-polymorphic-data-in-spring-kafka-with-avro-union-types\/","url_meta":{"origin":3881,"position":0},"title":"Mastering Polymorphic Data in Spring Kafka with Avro Union Types","author":"Jeffery Miller","date":"November 24, 2025","format":false,"excerpt":"As a software architect, designing robust, scalable, and adaptable distributed systems is a constant pursuit. When working with Apache Kafka, a common challenge arises: how do you send messages that, while adhering to a generic wrapper, can carry different types of payloads based on the specific event or context? Imagine\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/plastic-5527530_1280.avif 3x"},"classes":[]},{"id":3878,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/building-robust-kafka-applications-with-spring-boot-and-avro-schema-registry\/","url_meta":{"origin":3881,"position":1},"title":"Building Robust Kafka Applications with Spring Boot, and Avro Schema Registry","author":"Jeffery Miller","date":"November 24, 2025","format":false,"excerpt":"As a software architect, designing solutions that are scalable, maintainable, and resilient is paramount. In the world of event-driven architectures, Apache Kafka has become a cornerstone for high-throughput, low-latency data streaming. 
However, simply sending raw bytes over Kafka topics can lead to data inconsistency and make future evolution a nightmare.\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/06\/ai-generated-7947638_1280.avif 3x"},"classes":[]},{"id":3844,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/the-power-of-kafka-connect\/","url_meta":{"origin":3881,"position":2},"title":"The Power of Kafka Connect","author":"Jeffery Miller","date":"December 24, 2025","format":false,"excerpt":"Kafka Connect is a powerful framework for streaming data between Kafka and other systems in a scalable and reliable way. Connectors handle the complexities of data integration, allowing you to focus on your core application logic. 
Sink Connectors are used to export data from Kafka to other systems, and in\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/04\/ai-generated-8131434_1280-png.avif 3x"},"classes":[]},{"id":3868,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_events\/streamlining-user-events-integrating-aws-cognito-with-kafka\/","url_meta":{"origin":3881,"position":3},"title":"Streamlining User Events: Integrating AWS Cognito with Kafka","author":"Jeffery Miller","date":"December 24, 2025","format":false,"excerpt":"In modern application architectures, understanding user behavior is crucial. Tracking events like logins, logouts, failed login attempts, and signups can provide valuable insights for analytics, security monitoring, and personalized user experiences. 
This post will guide you through the process of configuring AWS Cognito to send these events to an Apache\u2026","rel":"","context":"In &quot;Spring Events&quot;","block_context":{"text":"Spring Events","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_events\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/05\/binary-7206874_1280.avif 3x"},"classes":[]},{"id":3928,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_databases\/%f0%9f%92%a1-implementing-cqrs-with-spring-boot-and-kafka\/","url_meta":{"origin":3881,"position":4},"title":"\ud83d\udca1 Implementing CQRS with Spring Boot and Kafka","author":"Jeffery Miller","date":"November 21, 2025","format":false,"excerpt":"As a software architect, I constantly look for patterns that enhance the scalability and maintainability of microservices. The Command Query Responsibility Segregation (CQRS) pattern is a powerful tool for this, especially when coupled with event-driven architecture (EDA) using Apache Kafka. 
CQRS separates the application into two distinct models: one for\u2026","rel":"","context":"In &quot;Spring Databases&quot;","block_context":{"text":"Spring Databases","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_databases\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/data-2899902_1280.avif 3x"},"classes":[]},{"id":3842,"url":"https:\/\/www.mymiller.name\/wordpress\/spring_messaging\/taming-the-stream-effective-unit-testing-with-kafka-in-spring-boot\/","url_meta":{"origin":3881,"position":5},"title":"Taming the Stream: Effective Unit Testing with Kafka in Spring Boot","author":"Jeffery Miller","date":"December 24, 2025","format":false,"excerpt":"Kafka\u2019s asynchronous, distributed nature introduces unique challenges to testing. Unlike traditional synchronous systems, testing Kafka interactions requires verifying message production, consumption, and handling potential asynchronous delays. This article explores strategies for robust unit testing of Kafka components within a Spring Boot application. 
Understanding the Testing Landscape Before diving into specifics,\u2026","rel":"","context":"In &quot;Spring Messaging&quot;","block_context":{"text":"Spring Messaging","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring_messaging\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/06\/intro-7400243_640.jpg?fit=640%2C334&ssl=1&resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/06\/intro-7400243_640.jpg?fit=640%2C334&ssl=1&resize=350%2C200 1x, https:\/\/i0.wp.com\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/06\/intro-7400243_640.jpg?fit=640%2C334&ssl=1&resize=525%2C300 1.5x"},"classes":[]}],"jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3881","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/comments?post=3881"}],"version-history":[{"count":1,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3881\/revisions"}],"predecessor-version":[{"id":3883,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3881\/revisions\/3883"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/media\/3882"}],"wp:attachment":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/media?parent=3881"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/categories?post=3881"},{"taxonomy":"post_tag","embedda
ble":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/tags?post=3881"},{"taxonomy":"series","embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/series?post=3881"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}