{"id":3859,"date":"2025-12-24T10:01:24","date_gmt":"2025-12-24T15:01:24","guid":{"rendered":"https:\/\/www.mymiller.name\/wordpress\/?p=3859"},"modified":"2025-12-24T10:01:24","modified_gmt":"2025-12-24T15:01:24","slug":"building-reactive-applications-with-spring-webflux-r2dbc-kafka-and-more","status":"publish","type":"post","link":"https:\/\/www.mymiller.name\/wordpress\/spring-reactive\/building-reactive-applications-with-spring-webflux-r2dbc-kafka-and-more\/","title":{"rendered":"Building Reactive Applications with Spring: WebFlux, R2DBC, Kafka, and More"},"content":{"rendered":"\n<div class=\"wp-block-jetpack-markdown\"><h2>1. Introduction to Reactive Programming in the Spring Ecosystem<\/h2>\n<p>The modern application landscape demands systems that are not only functional but also highly responsive, resilient under failure, elastic under varying load, and efficient in resource utilization. Traditional imperative programming models, particularly those relying on blocking I\/O and a thread-per-request architecture, often struggle to meet these demands, especially in high-concurrency, I\/O-bound scenarios.[1, 2] This inherent limitation, where threads block and wait for I\/O operations (like database queries or network calls) to complete, leads to inefficient use of resources (CPU, memory) and scalability bottlenecks.[3, 4, 5, 6, 7, 8]<\/p>\n<p>Reactive programming emerged as a paradigm specifically designed to address these challenges.[9, 10] It offers an alternative approach centered around asynchronous data streams and the propagation of change.[9, 11, 12, 13, 14]<\/p>\n<h3>What is Reactive Programming?<\/h3>\n<p>Reactive programming is an asynchronous, non-blocking, event-driven paradigm focused on data streams.[6, 7, 11, 12, 14, 15, 16, 17] Instead of requesting data and blocking until it arrives (pull-based), reactive systems react to events or data as they become available (push-based).[6, 11, 12, 18] This allows applications to remain responsive and 
efficiently utilize resources, as threads are not idle while waiting for I\/O.[10, 16, 19]<\/p>\n<p>The principles of reactive systems are often summarized by the <a href=\"https:\/\/www.reactivemanifesto.org\/\">Reactive Manifesto<\/a>, which outlines four core characteristics [10, 20]:<\/p>\n<ol>\n<li><strong>Responsive:<\/strong> The system responds in a timely manner if at all possible. Responsiveness is the cornerstone of usability and utility.<\/li>\n<li><strong>Resilient:<\/strong> The system stays responsive in the face of failure. Resilience is achieved by replication, containment, isolation, and delegation.<\/li>\n<li><strong>Elastic:<\/strong> The system stays responsive under varying workload. Reactive systems can react to changes in the input rate by increasing or decreasing the resources allocated to service these inputs.<\/li>\n<li><strong>Message-Driven:<\/strong> Reactive systems rely on asynchronous message-passing between components to ensure loose coupling, isolation, location transparency, and provide the means to delegate errors as messages.<\/li>\n<\/ol>\n<p>These characteristics translate directly into tangible benefits, such as improved scalability, better resource utilization (handling more requests with fewer threads and less memory), and enhanced fault tolerance.[7, 9, 10, 13, 15, 16, 21]<\/p>\n<p>A critical concept within reactive programming is <strong>Backpressure<\/strong>. Backpressure is a mechanism that allows a consumer of data to signal to the producer how much data it can handle, preventing the producer from overwhelming the consumer.[3, 5, 7, 9, 11, 13, 15, 17, 20, 21, 22, 23, 24, 25, 26] This flow control is essential for building stable and resilient systems that don\u2019t crash or lose data under high load. 
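Backpressure can be illustrated with the JDK alone, since the Reactive Streams interfaces were adopted into Java 9 as `java.util.concurrent.Flow`. The following minimal sketch (the class name `BackpressureDemo` and the item counts are illustrative, not from the original text) shows a subscriber signaling bounded demand via `Subscription.request(n)`; items beyond the current demand wait in the publisher's buffer instead of overwhelming the subscriber.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {
    public static void main(String[] args) throws InterruptedException {
        List<Integer> received = new CopyOnWriteArrayList<>();
        CountDownLatch completed = new CountDownLatch(1);

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(2); // initial bounded demand: at most 2 items for now
                }

                @Override public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // pull one more item when ready
                }

                @Override public void onError(Throwable t) { completed.countDown(); }

                @Override public void onComplete() { completed.countDown(); }
            });

            for (int i = 1; i <= 5; i++) {
                publisher.submit(i); // undelivered items wait in the per-subscriber buffer
            }
        } // close() delivers onComplete once the buffered items have drained

        completed.await();
        System.out.println(received); // prints [1, 2, 3, 4, 5]
    }
}
```

Project Reactor's `Flux` and `Mono` speak the same protocol internally; operators such as `limitRate` let application code tune demand without hand-writing a `Subscriber` like this.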
The <a href=\"https:\/\/github.com\/reactive-streams\/reactive-streams-jvm\/blob\/master\/README.md#specification\">Reactive Streams specification<\/a>, adopted in Java 9 as the <code>java.util.concurrent.Flow<\/code> API, standardizes this interaction between asynchronous components, defining interfaces like <code>Publisher<\/code> and <code>Subscriber<\/code>.[3, 6, 11, 19, 20, 22, 27, 28, 29, 30]<\/p>\n<p>The limitations of the traditional thread-per-request model, particularly its inefficiency with I\/O-bound tasks and difficulty scaling under high concurrency, were a primary driver for the development and adoption of reactive frameworks like those found in the Spring ecosystem.[1, 2, 3, 4, 5, 6, 7, 8, 9, 19, 31] Reactive programming isn\u2019t merely a stylistic choice; it represents a fundamental architectural shift aimed at overcoming performance bottlenecks inherent in blocking models.<\/p>\n<h3>The Role of Project Reactor (<code>Mono<\/code>, <code>Flux<\/code>)<\/h3>\n<p>Within the Spring ecosystem, the foundation for reactive programming is <strong>Project Reactor<\/strong>.[3, 4, 6, 9, 12, 14, 15, 17, 19, 20, 21, 32] Reactor is a fourth-generation reactive library, built on the Reactive Streams specification, providing efficient, non-blocking, backpressure-enabled implementations.[3, 6, 19, 20] It introduces two core publisher types:<\/p>\n<ol>\n<li>\n<p><strong><code>Mono&lt;T&gt;<\/code><\/strong>: Represents a reactive sequence of <strong>zero or one<\/strong> item (<code>0..1<\/code>).[33, 3, 4, 5, 9, 12, 34, 15, 19, 22, 32, 35, 36, 37] It is ideal for asynchronous operations that are expected to return at most a single result (like fetching a single database record by ID) or just signal completion (like a <code>void<\/code> method).<\/p>\n<ul>\n<li><em>Creation Example:<\/em> <code>Mono&lt;String&gt; data = Mono.just(&quot;Hello&quot;);<\/code> [35, 
38]<\/li>\n<li><em>Creation Example:<\/em> <code>Mono&lt;String&gt; noData = Mono.empty();<\/code> [36, 38]<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong><code>Flux&lt;T&gt;<\/code><\/strong>: Represents a reactive sequence of <strong>zero or more<\/strong> items (<code>0..N<\/code>).[33, 3, 4, 5, 9, 12, 34, 15, 19, 22, 32, 36, 37, 38] It is used for handling streams of data, such as multiple results from a database query, continuous event streams, or data chunks over a network connection.<\/p>\n<ul>\n<li><em>Creation Example:<\/em> <code>Flux&lt;String&gt; sequence = Flux.just(&quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot;);<\/code> [33, 36, 38]<\/li>\n<li><em>Creation Example:<\/em> <code>Flux&lt;Integer&gt; numbers = Flux.range(1, 5);<\/code> [38]<\/li>\n<li><em>Creation Example:<\/em> <code>List&lt;String&gt; items = Arrays.asList(&quot;a&quot;, &quot;b&quot;); Flux&lt;String&gt; fromList = Flux.fromIterable(items);<\/code> [38]<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>Reactor provides a rich vocabulary of <strong>operators<\/strong> (<code>map<\/code>, <code>flatMap<\/code>, <code>filter<\/code>, <code>zip<\/code>, <code>merge<\/code>, etc.) that allow developers to compose, transform, filter, and combine these asynchronous streams in a declarative way.[33, 9, 10, 13, 34, 19, 25, 35, 36, 39] These operators are fundamental to building complex reactive logic.<\/p>\n<p>Crucially, <code>Mono<\/code> and <code>Flux<\/code> are <strong>cold publishers<\/strong>. 
This means they do not start emitting data until a <code>Subscriber<\/code> subscribes to them.[33, 11, 34, 35, 36, 38] The act of subscribing triggers the execution of the entire reactive pipeline.<\/p>\n<h3>Overview of Spring\u2019s Reactive Stack<\/h3>\n<p>Recognizing the need for reactive solutions, the Spring Framework introduced a parallel reactive stack alongside its traditional Servlet-based stack (Spring MVC, Spring Data JPA) starting with version 5.[2, 4, 21, 40, 41, 42] This reactive stack is built upon Project Reactor and aims to enable the development of fully non-blocking applications.[5, 6, 8, 9, 14, 15, 16, 17, 43, 44, 45, 46, 47, 48]<\/p>\n<p>Key components of the Spring reactive ecosystem covered in this report include:<\/p>\n<ul>\n<li><strong>Spring WebFlux:<\/strong> The core reactive web framework ([2, 3, 4, 5, 6, 7, 9, 11, 12, 14, 15, 17, 19, 20, 21, 22, 23, 31, 32, 36, 37, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68]).<\/li>\n<li><strong>Spring Data R2DBC:<\/strong> For reactive access to relational databases ([9, 23, 27, 28, 30, 37, 47, 54, 55, 56, 57, 58, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92]).<\/li>\n<li><strong>Spring Kafka Reactive Support:<\/strong> For reactive interaction with Apache Kafka, primarily via Reactor Kafka integration ([21, 23, 52, 53, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119]).<\/li>\n<li><strong>Spring Data Reactive Repositories (NoSQL):<\/strong> For MongoDB, Cassandra, Redis ([21, 29, 30, 120, 121, 122, 123, 124, 125]).<\/li>\n<li><strong>Spring Security Reactive:<\/strong> Securing reactive applications ([7, 21, 46, 59, 60, 61, 62, 63, 126, 127]).<\/li>\n<li><strong>Spring Cloud Gateway:<\/strong> A reactive API Gateway ([21, 61, 128, 129, 130, 131, 132, 133, 134, 135]).<\/li>\n<li><strong>Spring Session 
Reactive:<\/strong> Managing user sessions reactively ([136, 137, 138, 139, 140, 141, 142, 143]).<\/li>\n<\/ul>\n<p>Adopting reactive programming, however, represents a significant paradigm shift. It goes beyond simply replacing return types with <code>Mono<\/code> or <code>Flux<\/code>. It requires developers to think asynchronously, understand the intricacies of operator chains, manage backpressure effectively, and potentially grapple with new debugging complexities.[2, 10, 13, 17, 18, 25, 26, 48, 144] The learning curve can be steeper compared to traditional imperative approaches, and careful consideration is needed to determine if the performance and scalability benefits justify the increased complexity for a specific project.[2, 6, 13, 26, 31, 144]<\/p>\n<h2>2. Spring WebFlux: Building Reactive Web Applications<\/h2>\n<p>Spring WebFlux is the reactive-stack web framework introduced in Spring Framework 5.0.[40, 41, 44] It provides a fully non-blocking alternative to the traditional Spring Web MVC, designed from the ground up to leverage reactive programming principles and handle large numbers of concurrent requests efficiently with minimal hardware resources.[3, 4, 19, 22]<\/p>\n<h3>Core Concepts<\/h3>\n<p>Understanding the core concepts of WebFlux is essential for building effective reactive web applications.<\/p>\n<ul>\n<li><strong>Non-blocking I\/O:<\/strong> The fundamental principle of WebFlux is its non-blocking nature.[5, 6, 7, 8, 9, 11, 12, 14, 15, 16, 17, 19, 20, 26, 44, 45, 47, 64] Unlike traditional blocking I\/O where a thread waits for an operation (like a database query or network call) to complete, WebFlux allows threads to initiate an operation and then immediately become available to handle other tasks. 
When the operation finishes, a notification (event or callback) triggers further processing.[19] This prevents threads from being idle and is key to handling high concurrency efficiently.<\/li>\n<li><strong>Event Loop Model:<\/strong> WebFlux typically runs on servers like Netty (the default), Undertow, or even Servlet 3.1+ containers configured for non-blocking I\/O.[3, 4, 6, 19, 45] These servers utilize an event loop model.[3, 5, 6, 8, 19, 36, 64] A small, fixed number of threads (often equal to the number of CPU cores, known as event loop workers) handle incoming requests.[19] When a request involves a potentially blocking operation, the event loop thread registers a callback and returns to the loop to process other events. Once the I\/O operation completes, the event loop is notified, and a thread executes the callback.[19] This contrasts sharply with the traditional thread-per-request model where each request occupies a thread for its entire duration, potentially leading to thread exhaustion under load.[1, 2, 3, 4, 5, 6, 7, 8, 9, 31, 67, 145]<\/li>\n<li><strong>Project Reactor Integration:<\/strong> WebFlux is built upon Project Reactor.[3, 4, 6, 9, 12, 14, 15, 17, 19, 20, 21, 32] Request and response data, as well as other asynchronous operations, are represented using Reactor\u2019s <code>Mono<\/code> (for 0\u20261 items) and <code>Flux<\/code> (for 0\u2026N items) publishers.[3, 4, 19, 20, 21, 22] This allows developers to use Reactor\u2019s rich operator library to compose asynchronous logic declaratively.<\/li>\n<\/ul>\n<p>The following table summarizes the key differences between Spring MVC and Spring WebFlux:<\/p>\n<table>\n<thead>\n<tr>\n<th style=\"text-align:left\">Feature<\/th>\n<th style=\"text-align:left\">Spring MVC<\/th>\n<th style=\"text-align:left\">Spring WebFlux<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td style=\"text-align:left\"><strong>Programming Model<\/strong><\/td>\n<td style=\"text-align:left\">Imperative, Blocking<\/td>\n<td 
style=\"text-align:left\">Reactive, Non-blocking<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:left\"><strong>I\/O Model<\/strong><\/td>\n<td style=\"text-align:left\">Synchronous<\/td>\n<td style=\"text-align:left\">Asynchronous<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:left\"><strong>Concurrency Model<\/strong><\/td>\n<td style=\"text-align:left\">Thread-per-request<\/td>\n<td style=\"text-align:left\">Event Loop (few threads)<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:left\"><strong>Key Dependency<\/strong><\/td>\n<td style=\"text-align:left\">Servlet API<\/td>\n<td style=\"text-align:left\">Project Reactor<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:left\"><strong>Primary Use Case<\/strong><\/td>\n<td style=\"text-align:left\">General Web Apps, CPU-bound tasks<\/td>\n<td style=\"text-align:left\">I\/O-bound tasks, High Concurrency<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:left\"><strong>Server Support<\/strong><\/td>\n<td style=\"text-align:left\">Servlet Containers (Tomcat, Jetty)<\/td>\n<td style=\"text-align:left\">Netty, Undertow, Servlet 3.1+<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:left\"><strong>HTTP Client<\/strong><\/td>\n<td style=\"text-align:left\"><code>RestTemplate<\/code> (blocking)<\/td>\n<td style=\"text-align:left\"><code>WebClient<\/code> (non-blocking)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><em>Data Sources: [2, 3, 4, 6, 9, 19, 31, 36, 40]<\/em><\/p>\n<h3>Programming Models<\/h3>\n<p>Spring WebFlux offers two distinct programming models for defining endpoints [19, 40, 44, 45, 50, 51]:<\/p>\n<ul>\n<li><strong>Annotated Controllers:<\/strong> This model closely resembles the familiar Spring MVC approach, using annotations to define controllers and map requests to handler methods.[6, 7, 19, 20, 41, 43, 44, 45, 50, 51] It provides a smoother transition for developers already experienced with Spring MVC.\n<ul>\n<li>\n<p><strong>Annotations:<\/strong> Uses standard annotations like <code>@RestController<\/code>, 
<code>@RequestMapping<\/code>, <code>@GetMapping<\/code>, <code>@PostMapping<\/code>, <code>@PutMapping<\/code>, <code>@DeleteMapping<\/code>, <code>@PathVariable<\/code>, <code>@RequestParam<\/code>, and <code>@RequestBody<\/code>.[19, 41, 43, 45, 50, 54]<\/p>\n<\/li>\n<li>\n<p><strong>Reactive Return Types:<\/strong> Controller methods return reactive types, typically <code>Mono&lt;T&gt;<\/code> for single responses or <code>Flux&lt;T&gt;<\/code> for multiple or streamed responses.[3, 5, 11, 34, 32, 36, 43, 44, 45, 46, 49, 50, 51, 54, 146, 147] Spring handles subscribing to these publishers and writing the results to the HTTP response non-blockingly.<\/p>\n<\/li>\n<li>\n<p><strong>Reactive Request Body:<\/strong> Unlike MVC, WebFlux controllers can directly accept reactive types for the request body, such as <code>@RequestBody Mono&lt;User&gt; userMono<\/code> or <code>@RequestBody Flux&lt;Event&gt; eventFlux<\/code>.[19, 45]<\/p>\n<\/li>\n<li>\n<p><strong>Example (<code>@RestController<\/code>):<\/strong><\/p>\n<pre><code class=\"language-java\">import org.springframework.web.bind.annotation.*;\nimport reactor.core.publisher.Flux;\nimport reactor.core.publisher.Mono;\n\n@RestController\n@RequestMapping(&quot;\/api\/items&quot;)\npublic class ItemController {\n\n    private final ItemService itemService; \/\/ Assuming a reactive ItemService\n\n    public ItemController(ItemService itemService) {\n        this.itemService = itemService;\n    }\n\n    @GetMapping(&quot;\/{id}&quot;)\n    public Mono&lt;Item&gt; getItemById(@PathVariable String id) {\n        return itemService.findById(id); \/\/ Returns Mono&lt;Item&gt;\n    }\n\n    @GetMapping\n    public Flux&lt;Item&gt; getAllItems() {\n        return itemService.findAll(); \/\/ Returns Flux&lt;Item&gt;\n    }\n\n    @PostMapping\n    public Mono&lt;Item&gt; createItem(@RequestBody Mono&lt;Item&gt; itemMono) {\n        \/\/ Process the Mono&lt;Item&gt; reactively\n        return itemMono.flatMap(itemService::save); \/\/ Returns Mono&lt;Item&gt;\n    }\n}\n<\/code><\/pre>\n<p><em>[3, 36, 43, 44, 45, 50, 51, 54]<\/em><\/p>\n<\/li>\n<\/ul>\n<\/li>\n<li><strong>Functional Endpoints:<\/strong> This model provides a lambda-based, lightweight, and functional alternative for defining routes and handling requests.[3, 11, 19, 40, 44, 45, 49, 51, 148] It gives the application full control over the request handling lifecycle from start to finish, contrasting with the callback nature of annotated controllers.[19]\n<ul>\n<li><strong><code>RouterFunction&lt;ServerResponse&gt;<\/code>:<\/strong> Defines the routing rules. It\u2019s a function that takes a <code>ServerRequest<\/code> and returns a <code>Mono&lt;HandlerFunction&lt;ServerResponse&gt;&gt;<\/code>. Routes are typically defined using the <code>RouterFunctions.route()<\/code> builder and <code>RequestPredicates<\/code> (e.g., <code>GET(&quot;\/path&quot;)<\/code>, <code>POST(&quot;\/path&quot;)<\/code>, <code>accept(MediaType.APPLICATION_JSON)<\/code>).[3, 4, 19, 43, 44, 49, 50, 148, 149, 150, 151, 152, 153, 154, 155] Routes can be composed using methods like <code>.and()<\/code> or nested using <code>.nest()<\/code>.[152]<\/li>\n<li><strong><code>HandlerFunction&lt;ServerResponse&gt;<\/code>:<\/strong> Represents the function that handles a request once a route matches. It takes a <code>ServerRequest<\/code> and returns a <code>Mono&lt;ServerResponse&gt;<\/code>.[3, 4, 34, 43, 44, 50, 148, 149, 150, 152, 154] This is where the core request processing logic resides.<\/li>\n<li><strong><code>ServerRequest<\/code>:<\/strong> Provides immutable access to request details like method, URI, headers, path variables (<code>.pathVariable(&quot;name&quot;)<\/code>), query parameters (<code>.queryParam(&quot;key&quot;)<\/code>), and the request body (<code>.bodyToMono(Class)<\/code>, <code>.bodyToFlux(Class)<\/code>, <code>.body(BodyExtractors)<\/code>).[50, 148, 150, 151, 152, 153]<\/li>\n<li><strong><code>ServerResponse<\/code>:<\/strong> Used to build the HTTP response immutably. 
Provides a builder pattern starting with status methods (<code>ok()<\/code>, <code>created(URI)<\/code>, <code>noContent()<\/code>, <code>notFound()<\/code>, etc.) and methods to set headers (<code>.header()<\/code>, <code>.contentType()<\/code>) and the body (<code>.bodyValue(Object)<\/code>, <code>.body(Publisher, Class)<\/code>).[4, 34, 43, 50, 148, 150, 151, 152, 154]<\/li>\n<li><strong>Example (Functional Endpoint):<\/strong><pre><code class=\"language-java\">import org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\nimport org.springframework.http.MediaType;\nimport org.springframework.web.reactive.function.server.*;\nimport reactor.core.publisher.Mono;\n\nimport static org.springframework.web.reactive.function.server.RequestPredicates.*;\nimport static org.springframework.web.reactive.function.server.RouterFunctions.route;\n\n\/\/ Assume ItemHandler class with methods like getItem, listItems, createItem\n\/\/ Each handler method takes ServerRequest and returns Mono&lt;ServerResponse&gt;\n@Configuration\npublic class ItemRouter {\n\n    @Bean\n    public RouterFunction&lt;ServerResponse&gt; itemRoutes(ItemHandler itemHandler) { \/\/ Inject the handler\n        return route(GET(&quot;\/functional\/items\/{id}&quot;).and(accept(MediaType.APPLICATION_JSON)), itemHandler::getItem)\n              .andRoute(GET(&quot;\/functional\/items&quot;).and(accept(MediaType.APPLICATION_JSON)), itemHandler::listItems)\n              .andRoute(POST(&quot;\/functional\/items&quot;).and(contentType(MediaType.APPLICATION_JSON)), itemHandler::createItem);\n    }\n}\n\n\/\/ Example Handler Method in ItemHandler\n\/\/ public Mono&lt;ServerResponse&gt; getItem(ServerRequest request) {\n\/\/     String id = request.pathVariable(&quot;id&quot;);\n\/\/     Mono&lt;Item&gt; itemMono = itemService.findById(id);\n\/\/     return itemMono.flatMap(item -&gt; 
ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).bodyValue(item))\n\/\/                  .switchIfEmpty(ServerResponse.notFound().build());\n\/\/ }\n<\/code><\/pre>\n<em>[3, 4, 43, 50, 148, 149, 150, 151, 152, 153, 154]<\/em><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>While annotated controllers offer a familiar path from Spring MVC, functional endpoints align more closely with functional programming paradigms. They grant developers more explicit control over the request lifecycle and can lead to more composable and potentially more testable routing configurations, especially for complex scenarios.[19, 150, 151, 152] The choice often hinges on team familiarity and the complexity of the routing logic required.<\/p>\n<h3>Reactive <code>WebClient<\/code><\/h3>\n<p>A fully reactive application requires non-blocking communication not just on the server-side but also when making calls to external services. The traditional <code>RestTemplate<\/code> is blocking and therefore unsuitable for use within a WebFlux application, as it would block the event loop thread.[8, 11, 49, 51]<\/p>\n<p>Spring provides <strong><code>WebClient<\/code><\/strong> as the modern, non-blocking, reactive alternative for performing HTTP requests.[7, 11, 40, 42, 44, 45, 46, 49, 51, 60] It integrates seamlessly with Project Reactor, using <code>Mono<\/code> and <code>Flux<\/code> to handle request bodies and responses asynchronously.<\/p>\n<ul>\n<li><strong>Basic Usage:<\/strong><pre><code class=\"language-java\">import org.springframework.http.MediaType;\nimport org.springframework.web.reactive.function.client.WebClient;\nimport reactor.core.publisher.Flux;\nimport reactor.core.publisher.Mono;\n\n\/\/ Typically created via WebClient.builder() or injected as a bean\nWebClient client = WebClient.create(&quot;http:\/\/example.org&quot;);\n\n\/\/ Example: GET request expecting a single object (Mono)\nMono&lt;UserDetails&gt; userMono = client.get()\n      .uri(&quot;\/users\/{id}&quot;, userId)\n      
.accept(MediaType.APPLICATION_JSON)\n      .retrieve() \/\/ Gets the response body\n      .bodyToMono(UserDetails.class); \/\/ Converts body to Mono&lt;UserDetails&gt;\n\n\/\/ Example: GET request expecting multiple objects (Flux)\nFlux&lt;Event&gt; eventFlux = client.get()\n      .uri(&quot;\/events&quot;)\n      .accept(MediaType.APPLICATION_STREAM_JSON)\n      .retrieve()\n      .bodyToFlux(Event.class); \/\/ Converts body to Flux&lt;Event&gt;\n\n\/\/ Example: POST request sending a Mono and expecting a Mono\nMono&lt;User&gt; newUserMono = Mono.just(new User(...));\nMono&lt;User&gt; createdUserMono = client.post()\n      .uri(&quot;\/users&quot;)\n      .contentType(MediaType.APPLICATION_JSON)\n      .body(newUserMono, User.class) \/\/ Send Mono&lt;User&gt; as request body\n      .retrieve()\n      .bodyToMono(User.class);\n<\/code><\/pre>\n<em>[46, 49, 51]<\/em><\/li>\n<\/ul>\n<p>Using <code>WebClient<\/code> is essential for maintaining the non-blocking nature of a WebFlux application end-to-end. When the server (e.g., Netty) and the <code>WebClient<\/code> (using the Reactor Netty connector) run within the same application, they can share event loop resources efficiently, further optimizing performance.[19, 44] The necessity of a non-blocking client like <code>WebClient<\/code> stems directly from the non-blocking server architecture of WebFlux; using a blocking client would fundamentally undermine the reactive model\u2019s benefits.[8, 11, 49, 51]<\/p>\n<h2>3. Spring Data R2DBC: Reactive Relational Database Access<\/h2>\n<p>A significant challenge in building end-to-end reactive applications has been interacting with traditional relational databases. Standard Java database access APIs like JDBC (Java Database Connectivity) are inherently blocking.[1, 8, 27, 28, 55, 56] When a JDBC operation is performed (e.g., executing a query, fetching results), the calling thread blocks until the database responds. 
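The cost of blocking can be sketched with plain JDK concurrency utilities (a generic illustration of the blocking-versus-callback distinction, not JDBC-specific; the class and the `slowQuery` helper are hypothetical): the blocking variant parks the calling thread for the full wait, while the asynchronous variant registers a continuation and frees the caller immediately.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BlockingVsAsync {
    // Simulates a slow call (e.g., a database round trip) taking ~100 ms.
    static String slowQuery() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "row";
    }

    public static void main(String[] args) {
        // Blocking style: the caller's thread is parked for the entire wait.
        System.out.println("blocking result: " + slowQuery());

        // Callback style: the result is completed later on another thread; the
        // caller registers a continuation and is immediately free to do other work.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CompletableFuture<String> async = new CompletableFuture<>();
        scheduler.schedule(() -> async.complete("row"), 100, TimeUnit.MILLISECONDS);

        CompletableFuture<Void> done =
                async.thenAccept(r -> System.out.println("async result: " + r));
        System.out.println("caller thread is free to continue");

        done.join(); // demo only: keep the JVM alive until the callback has run
        scheduler.shutdown();
    }
}
```

Reactive drivers generalize the second style: completion signals arrive as Reactive Streams events rather than ad hoc callbacks.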
This blocking behavior is incompatible with the non-blocking philosophy of reactive frameworks like Spring WebFlux, as it would tie up event loop threads and negate performance benefits.[8]<\/p>\n<h3>Why R2DBC?<\/h3>\n<p>To bridge this gap, the <strong>R2DBC (Reactive Relational Database Connectivity)<\/strong> specification was created.[9, 12, 27, 28, 30, 37, 55, 56, 69, 70, 71, 72, 74, 88, 134] R2DBC defines a standard Service Provider Interface (SPI) for accessing SQL databases using reactive, non-blocking patterns based on the Reactive Streams specification.[27, 28, 78] It allows developers to interact with relational databases asynchronously, receiving results as <code>Mono<\/code> or <code>Flux<\/code> streams, making it suitable for integration into reactive applications.<\/p>\n<p>Several popular databases now have R2DBC driver implementations, including PostgreSQL, H2, MySQL, MariaDB, Microsoft SQL Server, and Oracle.[27, 28, 47, 54, 55, 71, 73, 78, 81, 83, 85, 86, 89, 91]<\/p>\n<p><strong>Spring Data R2DBC<\/strong> builds upon the R2DBC specification, providing familiar Spring abstractions like templates and repositories to simplify reactive database access.[28, 55, 76, 83, 88]<\/p>\n<p>It\u2019s crucial to understand that Spring Data R2DBC is <strong>not<\/strong> a direct reactive replacement for JPA (Java Persistence API) or ORM (Object-Relational Mapping) frameworks like Hibernate.[28, 69, 72, 83] It provides basic object mapping and repository support but lacks advanced ORM features such as:<\/p>\n<ul>\n<li>Lazy loading<\/li>\n<li>Caching (first\/second level)<\/li>\n<li>Automatic relationship management (e.g., <code>@OneToMany<\/code>, <code>@ManyToMany<\/code>) defined via annotations<\/li>\n<li>Rich query languages like JPQL or HQL [72]<\/li>\n<\/ul>\n<p>This means developers often need to write more native SQL queries and handle relationships manually compared to using JPA.[69, 72] R2DBC represents a trade-off: gaining non-blocking database 
access often comes at the cost of the higher-level abstractions and conveniences provided by mature ORM frameworks. Given that R2DBC is a relatively newer technology compared to the decades-old JDBC\/JPA standards [28, 85], its ecosystem is still evolving. This difference in maturity and feature set means that while R2DBC is excellent for enabling reactive data access, it might be more suitable for applications with simpler data models or where the performance benefits of non-blocking I\/O outweigh the development overhead of managing SQL and relationships more manually.[28, 69]<\/p>\n<h3>Configuration<\/h3>\n<p>Setting up Spring Data R2DBC involves adding dependencies and configuring how the application connects to the database.<\/p>\n<ul>\n<li><strong>Dependencies:<\/strong> You need the <code>spring-boot-starter-data-r2dbc<\/code> dependency, which brings in Spring Data R2DBC support. Additionally, you must include the specific R2DBC driver dependency for your target database (e.g., <code>io.r2dbc:r2dbc-postgresql<\/code>, <code>io.r2dbc:r2dbc-h2<\/code>, <code>dev.miku:r2dbc-mysql<\/code>).[28, 47, 54, 55, 56, 58, 69, 75, 76, 81, 83, 85, 86]<\/li>\n<li><strong><code>ConnectionFactory<\/code>:<\/strong> This interface is the central piece of R2DBC configuration, analogous to <code>DataSource<\/code> in JDBC.[55, 56, 75, 76, 78, 79, 85, 156] It represents a factory for creating connections to the database.\n<ul>\n<li><strong>Via <code>application.properties<\/code>\/<code>yml<\/code>:<\/strong> Spring Boot provides auto-configuration for <code>ConnectionFactory<\/code>. 
You can configure the connection details using properties like <code>spring.r2dbc.url<\/code>, <code>spring.r2dbc.username<\/code>, and <code>spring.r2dbc.password<\/code>.[37, 47, 54, 75, 76, 79, 83, 86, 89, 91] The URL follows the format <code>r2dbc:&lt;driver&gt;:\/\/&lt;host&gt;:&lt;port&gt;\/&lt;database&gt;[?options]<\/code>.[27, 54, 79, 83, 85, 91]<pre><code class=\"language-yaml\"># Example application.yml configuration for PostgreSQL\nspring:\n  r2dbc:\n    url: r2dbc:postgresql:\/\/localhost:5432\/mydatabase\n    username: user\n    password: password\n    pool:\n      enabled: true # Enable connection pooling\n      initial-size: 5\n      max-size: 10\n<\/code><\/pre>\n<em>[47, 54, 75, 79, 83, 86, 91]<\/em><\/li>\n<li><strong>Via Java Configuration:<\/strong> You can manually define a <code>ConnectionFactory<\/code> bean, typically by extending <code>AbstractR2dbcConfiguration<\/code> or simply defining the bean in a <code>@Configuration<\/code> class.[55, 56, 75, 77, 78, 79, 83, 85, 156] This approach gives more control and is necessary if not using Spring Boot or when configuring multiple databases.[77] The <code>@EnableR2dbcRepositories<\/code> annotation is usually required when using manual Java configuration to scan for repository interfaces.[55, 56, 75, 77, 79]<pre><code class=\"language-java\">import io.r2dbc.spi.ConnectionFactories;\nimport io.r2dbc.spi.ConnectionFactory;\nimport io.r2dbc.spi.ConnectionFactoryOptions;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\nimport org.springframework.data.r2dbc.repository.config.EnableR2dbcRepositories;\n\nimport static io.r2dbc.spi.ConnectionFactoryOptions.*;\n\n@Configuration\n@EnableR2dbcRepositories \/\/ Enable scanning for R2DBC repositories\npublic class R2dbcConfig {\n\n    @Bean\n    public ConnectionFactory connectionFactory() {\n        return ConnectionFactories.get(\n            ConnectionFactoryOptions.builder()\n            
  .option(DRIVER, &quot;postgresql&quot;)\n              .option(HOST, &quot;localhost&quot;)\n              .option(PORT, 5432)\n              .option(USER, &quot;user&quot;)\n              .option(PASSWORD, &quot;password&quot;)\n              .option(DATABASE, &quot;mydatabase&quot;)\n              .build());\n    }\n}\n<\/code><\/pre>\n<em>[55, 56, 75, 77, 78, 79, 83, 85, 156]<\/em><\/li>\n<\/ul>\n<\/li>\n<li><strong>Connection Pooling:<\/strong> For production applications, connection pooling is essential for performance. The <code>r2dbc-pool<\/code> library provides pooling capabilities.[28, 79, 80] Spring Boot will often auto-configure pooling if <code>r2dbc-pool<\/code> is on the classpath and <code>spring.r2dbc.pool.enabled=true<\/code> (which is often the default).[80] Configuration options are available under <code>spring.r2dbc.pool.*<\/code>.[79, 80]<\/li>\n<li><strong>Schema Initialization:<\/strong> Spring Boot can automatically execute SQL scripts named <code>schema.sql<\/code> (for DDL) and <code>data.sql<\/code> (for DML) found in the classpath upon startup.[37, 47, 75, 76, 156] This is facilitated by the <code>ConnectionFactoryInitializer<\/code> bean, which is auto-configured by Spring Boot.<\/li>\n<\/ul>\n<h3>Interacting with the Database<\/h3>\n<p>Spring Data R2DBC provides several ways to interact with the database, offering different levels of abstraction:<\/p>\n<ul>\n<li>\n<p><strong>Using <code>DatabaseClient<\/code>:<\/strong>\nThis is the core, fluent API provided by the <code>spring-r2dbc<\/code> module (originally part of Spring Data R2DBC, now in Spring Framework core).[70, 84, 156] It offers a flexible way to execute arbitrary SQL statements reactively.[56, 70, 75, 77, 78, 79, 82, 83, 84, 156, 157] It handles resource management (opening\/closing connections) and translates R2DBC exceptions into Spring\u2019s <code>DataAccessException<\/code> hierarchy.[70]<\/p>\n<ul>\n<li><strong>Execution Flow:<\/strong> You start with 
<code>databaseClient.sql(&quot;YOUR SQL HERE&quot;)<\/code>, then optionally <code>.bind()<\/code> parameters, then use <code>.fetch()<\/code> to specify result consumption, followed by a terminal operator like <code>.first()<\/code>, <code>.one()<\/code>, <code>.all()<\/code>, or <code>.rowsUpdated()<\/code>; use <code>.then()<\/code> for fire-and-forget updates.[70, 156]<\/li>\n<li><strong>Parameter Binding:<\/strong> Supports named parameters (<code>:paramName<\/code>) and positional parameters (index-based <code>bind(0, value)<\/code>).[70, 156] Can also bind properties from objects (<code>bindProperties(object)<\/code>) or values from a map (<code>bindValues(map)<\/code>).[70]<\/li>\n<li><strong>Result Mapping:<\/strong> The <code>.map((row, rowMetadata) -&gt; ...)<\/code> operator allows mapping each <code>Row<\/code> object in the result set to a domain object.[70, 156] You retrieve column values using <code>row.get(&quot;column_name&quot;, TargetType.class)<\/code>.[70] Remember that Reactive Streams forbid <code>null<\/code> emissions, so null handling within the mapping function is necessary.[70]<\/li>\n<li><strong>CRUD Examples (Conceptual):<\/strong><pre><code class=\"language-java\">\/\/ INSERT\nMono&lt;Void&gt; insertOp = databaseClient.sql(&quot;INSERT INTO users(name, email) VALUES(:name, :email)&quot;)\n  .bind(&quot;name&quot;, user.getName())\n  .bind(&quot;email&quot;, user.getEmail())\n  .then();\n\n\/\/ SELECT ONE\nMono&lt;User&gt; selectOneOp = databaseClient.sql(&quot;SELECT id, name, email FROM users WHERE id = :id&quot;)\n  .bind(&quot;id&quot;, userId)\n  .map((row, meta) -&gt; new User(row.get(&quot;id&quot;, Long.class), row.get(&quot;name&quot;, String.class), row.get(&quot;email&quot;, String.class)))\n  .one(); \/\/ Use .first() if zero or one result is acceptable; .one() expects exactly one\n\n\/\/ SELECT ALL\nFlux&lt;User&gt; selectAllOp = databaseClient.sql(&quot;SELECT id, name, email FROM users&quot;)\n   .map((row, meta) -&gt; new 
User(row.get(&quot;id&quot;, Long.class), row.get(&quot;name&quot;, String.class), row.get(&quot;email&quot;, String.class)))\n   .all();\n\n\/\/ UPDATE\nMono&lt;Integer&gt; updateOp = databaseClient.sql(&quot;UPDATE users SET email = :email WHERE id = :id&quot;)\n  .bind(&quot;email&quot;, newEmail)\n  .bind(&quot;id&quot;, userId)\n  .fetch().rowsUpdated(); \/\/ Returns the number of updated rows\n\n\/\/ DELETE\nMono&lt;Integer&gt; deleteOp = databaseClient.sql(&quot;DELETE FROM users WHERE id = :id&quot;)\n  .bind(&quot;id&quot;, userId)\n  .fetch().rowsUpdated(); \/\/ Returns the number of deleted rows\n<\/code><\/pre>\n<em>[28, 70, 83, 84, 156]<\/em><\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Using <code>ReactiveCrudRepository<\/code>:<\/strong>\nThis provides the familiar repository abstraction pattern from Spring Data.[9, 28, 30, 37, 47, 54, 55, 56, 58, 69, 72, 76, 77, 81, 82, 83, 85, 86, 87, 88] You define an interface extending <code>ReactiveCrudRepository&lt;EntityType, IdType&gt;<\/code> (or <code>ReactiveSortingRepository<\/code>), and Spring Data R2DBC automatically provides implementations for standard CRUD methods, returning <code>Mono<\/code> or <code>Flux<\/code>.[28, 47, 54, 55, 56, 58, 69, 76, 77, 81, 83, 84, 85, 86]<\/p>\n<ul>\n<li><strong>Standard Methods:<\/strong> Includes <code>save(entity)<\/code>, <code>saveAll(entities)<\/code>, <code>findById(id)<\/code>, <code>findAll()<\/code>, <code>count()<\/code>, <code>deleteById(id)<\/code>, <code>delete(entity)<\/code>, <code>deleteAll()<\/code>.[28, 47, 54, 69, 76, 77, 83, 85, 86]<\/li>\n<li><strong>Custom Queries:<\/strong> You can define custom query methods using the <code>@Query<\/code> annotation, providing native SQL statements.[47, 56, 69, 72, 76, 77, 85, 86, 87] Named parameters (<code>:paramName<\/code>) in the query are bound to method arguments with the same name. 
Query derivation (generating queries from method names) has limited support compared to Spring Data JPA.[75, 122] Modifying queries (UPDATE\/DELETE) must additionally be annotated with <code>@Modifying<\/code> so the result is interpreted as an affected-row count.<\/li>\n<li><strong>Example Repository:<\/strong><pre><code class=\"language-java\">import org.springframework.data.r2dbc.repository.Modifying;\nimport org.springframework.data.r2dbc.repository.Query;\nimport org.springframework.data.repository.reactive.ReactiveCrudRepository;\nimport reactor.core.publisher.Flux;\nimport reactor.core.publisher.Mono;\n\npublic interface UserRepository extends ReactiveCrudRepository&lt;User, Long&gt; {\n\n    Flux&lt;User&gt; findByEmailContaining(String emailPart); \/\/ Derived query (limited support)\n\n    @Query(&quot;SELECT * FROM users WHERE age &gt; :minAge&quot;)\n    Flux&lt;User&gt; findUsersOlderThan(int minAge);\n\n    @Modifying \/\/ Required for UPDATE\/DELETE queries so the affected-row count is returned\n    @Query(&quot;UPDATE users SET status = :status WHERE id = :id&quot;)\n    Mono&lt;Integer&gt; updateUserStatus(Long id, String status);\n}\n<\/code><\/pre>\n<em>[28, 47, 56, 69, 72, 76, 77, 85, 86, 87]<\/em><\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Using <code>R2dbcEntityTemplate<\/code>:<\/strong>\nIntroduced later, <code>R2dbcEntityTemplate<\/code> offers a convenient, entity-centric API similar to <code>JdbcTemplate<\/code> or <code>MongoTemplate<\/code>.[28, 55, 77, 85, 88] It simplifies common operations like inserting, selecting, updating, and deleting entities using a fluent API.<\/p>\n<ul>\n<li><strong>Example Usage:<\/strong><pre><code class=\"language-java\">import org.springframework.data.r2dbc.core.R2dbcEntityTemplate;\nimport org.springframework.data.relational.core.query.Criteria;\nimport org.springframework.data.relational.core.query.Query;\nimport reactor.core.publisher.Flux;\nimport reactor.core.publisher.Mono;\n\n\/\/ Assume 'template' is an injected R2dbcEntityTemplate bean\nR2dbcEntityTemplate template;\n\n\/\/ INSERT\nMono&lt;User&gt; savedUser = template.insert(User.class).using(newUser);\n\n\/\/ SELECT ONE by ID\nMono&lt;User&gt; userById = 
template.select(User.class).matching(Query.query(Criteria.where(&quot;id&quot;).is(userId))).one();\n\n\/\/ SELECT ALL\nFlux&lt;User&gt; allUsers = template.select(User.class).all();\n\n\/\/ SELECT with Criteria\nFlux&lt;User&gt; activeUsers = template.select(User.class)\n                          .matching(Query.query(Criteria.where(&quot;status&quot;).is(&quot;ACTIVE&quot;)))\n                          .all();\n\n\/\/ UPDATE\nMono&lt;User&gt; updatedUser = template.update(userToUpdate); \/\/ Assumes userToUpdate has ID set\n\n\/\/ DELETE\nMono&lt;Long&gt; deletedCount = template.delete(User.class).matching(Query.query(Criteria.where(&quot;id&quot;).is(userId))).all(); \/\/ all() emits the number of deleted rows\n<\/code><\/pre>\n<em>[28, 85]<\/em><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>Spring Data R2DBC provides these different abstraction levels (<code>DatabaseClient<\/code>, <code>R2dbcEntityTemplate<\/code>, <code>ReactiveCrudRepository<\/code>) allowing developers to select the approach that best fits their project\u2019s needs, balancing the need for control over SQL execution with the convenience of higher-level abstractions.[28, 55, 70, 72, 76, 77, 85]<\/p>\n<h2>4. Spring Kafka: Reactive Messaging<\/h2>\n<p>Integrating asynchronous messaging systems like Apache Kafka into reactive applications requires non-blocking interaction patterns. 
While standard <code>spring-kafka<\/code> provides robust Kafka integration, its core listener model (<code>@KafkaListener<\/code>) is generally based on a blocking, thread-per-consumer approach, which is not ideal for a fully reactive stack.[107, 114]<\/p>\n<p>To achieve true non-blocking Kafka integration within the Spring ecosystem, developers typically leverage <strong>Project Reactor Kafka<\/strong> (<code>reactor-kafka<\/code>), a dedicated library providing reactive APIs for Kafka producers and consumers.[93, 95, 96, 97, 98, 99, 100, 101, 103, 104, 105, 108, 110, 111, 112, 114, 158] Spring provides lightweight wrappers and configuration support to simplify the use of Reactor Kafka within Spring applications. Alternatively, <strong>Spring Cloud Stream<\/strong> offers a higher-level abstraction with a reactive Kafka binder.[21, 23, 52, 94, 115, 116, 117, 118, 119]<\/p>\n<p>This reliance on the external <code>reactor-kafka<\/code> library is a key point; Spring\u2019s reactive Kafka support primarily consists of integrating and simplifying this library, rather than a completely independent reactive implementation within <code>spring-kafka<\/code> itself.[98, 99, 104, 110, 112, 114] Consequently, understanding the core concepts and configuration options of Reactor Kafka (<code>SenderOptions<\/code>, <code>ReceiverOptions<\/code>) is essential.[98, 99, 100, 105, 110, 112]<\/p>\n<h3>Reactive Producer<\/h3>\n<p>Sending messages to Kafka reactively involves using Reactor Kafka\u2019s <code>KafkaSender<\/code>, often via Spring\u2019s <code>ReactiveKafkaProducerTemplate<\/code>.<\/p>\n<ul>\n<li>\n<p><strong>Configuration:<\/strong><\/p>\n<ul>\n<li><strong>Dependencies:<\/strong> Ensure <code>org.springframework.kafka:spring-kafka<\/code> and <code>io.projectreactor.kafka:reactor-kafka<\/code> are included in the project.[93, 95, 110]<\/li>\n<li><strong><code>SenderOptions<\/code>:<\/strong> This Reactor Kafka class holds the configuration for the underlying 
<code>KafkaProducer<\/code> (bootstrap servers, serializers, acknowledgments, retries, etc.).[98, 99, 100, 101, 102, 103, 105] It also includes reactive-specific options like <code>maxInFlight<\/code> (to control backpressure by limiting concurrent sends) and <code>stopOnError<\/code>.[105]<\/li>\n<li><strong><code>ReactiveKafkaProducerTemplate<\/code> Bean:<\/strong> This Spring template wraps the <code>KafkaSender<\/code>.[98, 99, 100, 101, 102, 103, 104, 158] It\u2019s typically configured as a Spring bean. Spring Boot can simplify this by injecting <code>KafkaProperties<\/code> (from <code>application.properties<\/code>\/<code>yml<\/code>) which are used to build the <code>SenderOptions<\/code>.<pre><code class=\"language-java\">import org.springframework.boot.autoconfigure.kafka.KafkaProperties;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\nimport org.springframework.kafka.core.reactive.ReactiveKafkaProducerTemplate;\nimport reactor.kafka.sender.SenderOptions;\nimport java.util.Map;\n\n@Configuration\npublic class KafkaProducerConfig {\n\n    @Bean\n    public SenderOptions&lt;String, MyEvent&gt; senderOptions(KafkaProperties kafkaProperties) {\n        Map&lt;String, Object&gt; props = kafkaProperties.buildProducerProperties();\n        \/\/ Optionally override or add properties\n        \/\/ props.put(ProducerConfig.ACKS_CONFIG, &quot;all&quot;);\n        return SenderOptions.&lt;String, MyEvent&gt;create(props)\n                 .maxInFlight(1024); \/\/ Example reactive option\n    }\n\n    @Bean\n    public ReactiveKafkaProducerTemplate&lt;String, MyEvent&gt; reactiveKafkaProducerTemplate(\n            SenderOptions&lt;String, MyEvent&gt; senderOptions) {\n        return new ReactiveKafkaProducerTemplate&lt;&gt;(senderOptions);\n    }\n}\n<\/code><\/pre>\n<em>[100, 101, 102, 103, 105]<\/em><\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Sending Messages:<\/strong><\/p>\n<ul>\n<li>The 
<code>send()<\/code> methods of <code>ReactiveKafkaProducerTemplate<\/code> are used to send messages. Common variants include <code>send(String topic, V value)<\/code>, <code>send(String topic, K key, V value)<\/code>, or <code>send(ProducerRecord&lt;K, V&gt; record)<\/code>.[98, 100, 101, 158]<\/li>\n<li>Each <code>send()<\/code> call returns a <code>Mono&lt;SenderResult&lt;T&gt;&gt;<\/code>.[98, 100, 158] The <code>SenderResult<\/code> contains metadata about the sent message, such as the topic, partition, and offset, accessible via <code>result.recordMetadata()<\/code>.[100, 158] The <code>T<\/code> in <code>SenderResult&lt;T&gt;<\/code> is a correlation metadata type, often <code>Void<\/code> if no specific correlation is needed.<\/li>\n<li><strong>Handling the Result:<\/strong> Since <code>send()<\/code> is asynchronous, you must subscribe to the returned <code>Mono<\/code> to trigger the send operation and handle its outcome. This is typically done using operators like <code>doOnSuccess<\/code>, <code>doOnError<\/code>, <code>then<\/code>, or by integrating the <code>Mono<\/code> into a larger reactive chain.<pre><code class=\"language-java\">import org.springframework.kafka.core.reactive.ReactiveKafkaProducerTemplate;\nimport org.springframework.stereotype.Service;\nimport reactor.core.publisher.Mono;\n\n@Service\npublic class EventPublisher {\n\n    private final ReactiveKafkaProducerTemplate&lt;String, MyEvent&gt; template;\n    private final String topic = &quot;my-events&quot;;\n\n    public EventPublisher(ReactiveKafkaProducerTemplate&lt;String, MyEvent&gt; template) {\n        this.template = template;\n    }\n\n    public Mono&lt;Void&gt; publishEvent(MyEvent event) {\n        String key = event.getId(); \/\/ Example key\n        return template.send(topic, key, event)\n              .doOnSuccess(result -&gt; System.out.println(\n                    &quot;Sent event &quot; + key + &quot; to offset: &quot; + result.recordMetadata().offset()))\n              .doOnError(error -&gt; System.err.println(\n                    
&quot;Failed to send event &quot; + key + &quot;: &quot; + error.getMessage()))\n              .then(); \/\/ Return Mono&lt;Void&gt; indicating completion\/error\n    }\n}\n<\/code><\/pre>\n<em>[100, 101, 158]<\/em><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Reactive Consumer<\/h3>\n<p>Consuming messages reactively involves using Reactor Kafka\u2019s <code>KafkaReceiver<\/code> or Spring\u2019s <code>ReactiveKafkaConsumerTemplate<\/code>.<\/p>\n<ul>\n<li>\n<p><strong>Configuration:<\/strong><\/p>\n<ul>\n<li><strong>Dependencies:<\/strong> Same as producer: <code>spring-kafka<\/code>, <code>reactor-kafka<\/code>.[93, 95, 110]<\/li>\n<li><strong><code>ReceiverOptions<\/code>:<\/strong> This is the central configuration object for the reactive consumer.[99, 105, 108, 110, 111, 112] It defines:\n<ul>\n<li>Kafka consumer properties (bootstrap servers, group ID, deserializers, auto offset reset policy, etc.) using <code>ConsumerConfig<\/code> keys.<\/li>\n<li>Topic subscription(s) using <code>.subscription(Collection&lt;String&gt; topics)<\/code> or <code>.subscription(Pattern pattern)<\/code>.[105]<\/li>\n<li>Commit strategy (e.g., commit interval <code>.commitInterval()<\/code>, batch size <code>.commitBatchSize()<\/code>).[105]<\/li>\n<li>Assignment\/Revocation listeners (<code>.addAssignListener()<\/code>, <code>.addRevokeListener()<\/code>) for custom offset management (e.g., seeking to beginning\/end).[99, 105, 111]<\/li>\n<\/ul>\n<\/li>\n<li><strong><code>KafkaReceiver<\/code> \/ <code>ReactiveKafkaConsumerTemplate<\/code> Bean:<\/strong> Similar to the producer, you configure <code>ReceiverOptions<\/code> as a bean and then use it to create either a <code>KafkaReceiver<\/code> bean directly or a <code>ReactiveKafkaConsumerTemplate<\/code> bean.[99, 105, 110, 111, 112]<pre><code class=\"language-java\">import org.apache.kafka.clients.consumer.ConsumerConfig;\nimport org.springframework.boot.autoconfigure.kafka.KafkaProperties;\nimport 
org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\nimport org.springframework.kafka.core.reactive.ReactiveKafkaConsumerTemplate;\nimport reactor.kafka.receiver.ReceiverOptions;\nimport java.util.Collections;\nimport java.util.Map;\nimport java.time.Duration;\n\n@Configuration\npublic class KafkaConsumerConfig {\n\n    @Bean\n    public ReceiverOptions&lt;String, MyEvent&gt; receiverOptions(KafkaProperties kafkaProperties) {\n        Map&lt;String, Object&gt; props = kafkaProperties.buildConsumerProperties();\n        \/\/ props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, &quot;earliest&quot;);\n        return ReceiverOptions.&lt;String, MyEvent&gt;create(props)\n              .subscription(Collections.singletonList(&quot;my-events&quot;))\n              .commitInterval(Duration.ofSeconds(5)); \/\/ Example: Commit offsets every 5s\n                \/\/.commitBatchSize(100); \/\/ Example: Or commit after 100 messages acknowledged\n    }\n\n    @Bean\n    public ReactiveKafkaConsumerTemplate&lt;String, MyEvent&gt; reactiveKafkaConsumerTemplate(\n            ReceiverOptions&lt;String, MyEvent&gt; receiverOptions) {\n        return new ReactiveKafkaConsumerTemplate&lt;&gt;(receiverOptions);\n    }\n}\n<\/code><\/pre>\n<em>[99, 105, 110, 111]<\/em><\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Consuming Messages:<\/strong><\/p>\n<ul>\n<li>The primary way to consume is via the <code>receive()<\/code> method (on <code>KafkaReceiver<\/code> or <code>ReactiveKafkaConsumerTemplate<\/code>), which returns a <code>Flux&lt;ReceiverRecord&lt;K, V&gt;&gt;<\/code>.[99, 108, 109, 110, 112] Each <code>ReceiverRecord<\/code> contains the consumed message (key, value, headers, etc.) 
and a <code>ReceiverOffset<\/code> object used for manual acknowledgment.<\/li>\n<li>Other receive methods exist, such as <code>receiveAutoAck()<\/code> (acknowledges before processing), <code>receiveAtmostOnce()<\/code> (acknowledges after successful poll), and <code>receiveExactlyOnce()<\/code> (for transactional processing).[110, 112]<\/li>\n<li><strong>Processing the <code>Flux<\/code>:<\/strong> You process the stream of <code>ReceiverRecord<\/code>s using Reactor operators. Common patterns involve <code>flatMap<\/code> or <code>concatMap<\/code> to handle processing (potentially asynchronous) for each message.<\/li>\n<li><strong>Manual Acknowledgement:<\/strong> To ensure messages are processed reliably (\u201cat-least-once\u201d semantics), manual acknowledgment is typically used. After successfully processing a message, you call <code>record.receiverOffset().acknowledge()<\/code>.[99, 108, 110, 112] This signals to Kafka that the message has been processed, allowing the offset to be committed according to the configured strategy (e.g., interval or batch size). Error handling is crucial here; acknowledgment should typically only happen upon successful processing. 
Retries might be implemented before acknowledging or sending to a dead-letter topic.[108]<pre><code class=\"language-java\">import org.springframework.kafka.core.reactive.ReactiveKafkaConsumerTemplate;\nimport org.springframework.stereotype.Service;\nimport reactor.core.publisher.Flux;\nimport reactor.core.publisher.Mono;\nimport reactor.kafka.receiver.ReceiverRecord;\nimport jakarta.annotation.PostConstruct; \/\/ Use javax.annotation.PostConstruct on Spring Boot 2.x\n\n@Service\npublic class EventConsumer {\n\n    private final ReactiveKafkaConsumerTemplate&lt;String, MyEvent&gt; template;\n    private final EventProcessingService processingService; \/\/ Assume reactive service\n\n    public EventConsumer(ReactiveKafkaConsumerTemplate&lt;String, MyEvent&gt; template, EventProcessingService processingService) {\n        this.template = template;\n        this.processingService = processingService;\n    }\n\n    @PostConstruct \/\/ Start consuming when the bean is ready\n    public void consumeEvents() {\n        template.receive() \/\/ Returns Flux&lt;ReceiverRecord&lt;String, MyEvent&gt;&gt;\n          .flatMap(record -&gt; {\n                System.out.println(&quot;Received key=&quot; + record.key() + &quot;, value=&quot; + record.value() +\n                                   &quot; from topic=&quot; + record.topic() + &quot;, partition=&quot; + record.partition() +\n                                   &quot;, offset=&quot; + record.offset());\n                \/\/ Process the event reactively\n                return processingService.process(record.value())\n                      .doOnSuccess(v -&gt; record.receiverOffset().acknowledge()) \/\/ Acknowledge on success\n                      .doOnError(e -&gt; System.err.println(&quot;Processing failed for offset &quot; + record.offset() + &quot;: &quot; + e.getMessage()))\n                      .onErrorResume(e -&gt; Mono.empty()); \/\/ Skip the failed record so one error does not cancel the whole stream\n                        \/\/ Add retry or dead-letter logic here if needed\n            })\n          .subscribe(); \/\/ Start the consumption\n    }\n}\n<\/code><\/pre>\n<em>[95, 99, 108, 110, 112]<\/em><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Brief Overview: Spring Cloud Stream Reactive Kafka Binder<\/h3>\n<p>Spring 
Cloud Stream provides a higher-level, opinionated framework for building message-driven microservices.[21, 23, 115, 116, 117, 118] It uses a binder abstraction to connect to different messaging systems like Kafka or RabbitMQ.[94, 115, 116, 117, 119, 159, 160]<\/p>\n<p>For Kafka, it offers a specific <strong>reactive binder<\/strong> (<code>spring-cloud-stream-binder-kafka-reactive<\/code>).[23, 94] Unlike the standard Kafka binder (which limits reactivity to the function execution), the reactive binder leverages Reactor Kafka (<code>KafkaReceiver<\/code> and <code>KafkaSender<\/code>) internally to provide full end-to-end reactive processing and automatic backpressure handling.[94]<\/p>\n<p>It promotes a functional programming model where you define beans of type <code>java.util.function.Function&lt;Flux&lt;In&gt;, Flux&lt;Out&gt;&gt;<\/code>, <code>Consumer&lt;Flux&lt;In&gt;&gt;<\/code>, or <code>Supplier&lt;Flux&lt;Out&gt;&gt;<\/code> to process message streams.[23, 94, 115, 117, 118, 119] Spring Cloud Stream handles the binding of these functions to Kafka topics based on configuration.<\/p>\n<pre><code class=\"language-java\">\/\/ Example Spring Cloud Stream function bean\n@Bean\npublic Function&lt;Flux&lt;String&gt;, Flux&lt;String&gt;&gt; process() {\n    return flux -&gt; flux\n          .map(String::toUpperCase)\n          .log();\n}\n<\/code><\/pre>\n<pre><code class=\"language-yaml\"># Example application.yml for Spring Cloud Stream\nspring:\n  cloud:\n    stream:\n      function:\n        definition: process # Links to the bean name\n      bindings:\n        process-in-0: # Input binding for 'process' function\n          destination: input-topic\n        process-out-0: # Output binding for 'process' function\n          destination: output-topic\n      kafka:\n        binder:\n          brokers: localhost:9092\n          # Use reactive binder:\n          # Add spring-cloud-stream-binder-kafka-reactive dependency\n<\/code><\/pre>\n<p><em>[23, 94, 115, 
118]<\/em><\/p>\n<p>Developers integrating reactive Kafka have a choice: the lower-level control of Reactor Kafka (via <code>KafkaReceiver<\/code>\/<code>KafkaSender<\/code> or Spring\u2019s reactive templates), or the higher-level abstraction and conventions of the Spring Cloud Stream reactive binder.[23, 94, 99, 110, 112, 114, 115, 118] The best choice depends on the specific needs for control, broker-agnosticism, and adherence to Spring Cloud Stream\u2019s programming model.<\/p>\n<h2>5. Exploring Other Reactive Spring Projects<\/h2>\n<p>Beyond WebFlux, R2DBC, and Kafka integration, the Spring ecosystem offers reactive capabilities in several other key areas, enabling the construction of more comprehensive end-to-end reactive systems.[21] This demonstrates a commitment by the Spring team to provide reactive alternatives across various domains, allowing developers to maintain a non-blocking architecture throughout their application stack.[7, 29, 30, 59, 60, 61, 62, 63, 120, 121, 122, 123, 124, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 143]<\/p>\n<h3>Spring Data Reactive Repositories (NoSQL)<\/h3>\n<p>Spring Data extends its familiar repository abstraction to support reactive programming models for several popular NoSQL databases.[21, 30, 120, 121, 123, 124] This support, however, is contingent on the availability of underlying reactive drivers for the specific database technology.[29, 121, 122, 123, 124] If a database only offers blocking drivers (like traditional JDBC), true non-blocking integration within a reactive Spring application is not feasible without workarounds that compromise the reactive model.[123]<\/p>\n<ul>\n<li><strong>MongoDB:<\/strong> Spring Data MongoDB provides extensive reactive support through <code>ReactiveMongoRepository<\/code> (extending <code>ReactiveCrudRepository<\/code> and <code>ReactiveSortingRepository<\/code>) and <code>ReactiveMongoTemplate<\/code>.[7, 21, 29, 30, 120, 121, 122, 123, 124, 
145, 161] This integration relies on the official MongoDB Reactive Streams Java Driver (<code>mongodb-driver-reactivestreams<\/code>).[29, 122, 124] Configuration typically involves adding the <code>spring-boot-starter-data-mongodb-reactive<\/code> dependency and configuring connection properties (e.g., <code>spring.data.mongodb.uri<\/code>).[122] Repository interfaces return <code>Mono<\/code> and <code>Flux<\/code> for database operations.<\/li>\n<li><strong>Apache Cassandra:<\/strong> Spring Data Cassandra also offers reactive repository support, leveraging Cassandra\u2019s asynchronous Java driver.[21, 29, 30, 120, 121, 123, 124, 125] Similar to MongoDB, you define repository interfaces extending reactive base interfaces and interact with data using <code>Mono<\/code> and <code>Flux<\/code>. Configuration involves dependencies like <code>spring-boot-starter-data-cassandra-reactive<\/code>.<\/li>\n<li><strong>Redis:<\/strong> Spring Data Redis provides reactive capabilities primarily through the Lettuce driver, which is the only major Java Redis client with built-in reactive support.[121] Reactive interaction happens at the connection level via <code>ReactiveRedisConnection<\/code> and its command methods (e.g., <code>ReactiveStringCommands<\/code>, <code>ReactiveHashCommands<\/code>, etc.), which operate on <code>ByteBuffer<\/code> for efficiency.[121] Spring Data also offers <code>ReactiveRedisTemplate<\/code> and reactive repository support (<code>@EnableRedisRepositories<\/code>) for higher-level abstractions.[21, 29, 30, 120, 121, 123, 124, 125, 136, 137, 138, 140, 143]<\/li>\n<\/ul>\n<h3>Spring Security Reactive<\/h3>\n<p>Securing web applications is crucial, and Spring Security provides comprehensive features for reactive applications built with WebFlux.[7, 21, 46, 59, 60, 61, 62, 63, 126, 127] It integrates seamlessly into the reactive pipeline using non-blocking components.<\/p>\n<ul>\n<li><strong>Core 
Components:<\/strong>\n<ul>\n<li><strong><code>SecurityWebFilterChain<\/code>:<\/strong> The central piece of reactive security configuration. It\u2019s a chain of <code>WebFilter<\/code> instances that apply security rules to incoming requests. Multiple chains can be defined, ordered, and matched based on request paths or other attributes.[59, 126]<\/li>\n<li><strong><code>ServerHttpSecurity<\/code>:<\/strong> A builder used within configuration to define the <code>SecurityWebFilterChain<\/code>. It provides methods to configure authentication mechanisms (HTTP Basic, form login, OAuth2), authorization rules (<code>authorizeExchange()<\/code>), CSRF protection, header manipulation, etc., in a reactive way.[62, 126]<\/li>\n<li><strong><code>ReactiveUserDetailsService<\/code>:<\/strong> An interface responsible for loading user-specific data (username, password, authorities) reactively. Implementations typically fetch user details from a database or other identity store non-blockingly.[59, 62, 126] Spring provides <code>MapReactiveUserDetailsService<\/code> for in-memory user storage.[62, 126]<\/li>\n<li><strong><code>ReactiveAuthenticationManager<\/code>:<\/strong> Performs the actual authentication process reactively, typically using the details loaded by a <code>ReactiveUserDetailsService<\/code>.[126] Often implicitly configured when using standard authentication methods like form login or HTTP basic.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Configuration:<\/strong> Reactive security is enabled by adding the <code>spring-boot-starter-security<\/code> dependency and annotating a configuration class with <code>@EnableWebFluxSecurity<\/code>.[46, 59, 61, 62, 126] Beans of type <code>SecurityWebFilterChain<\/code> are defined to customize security rules.<pre><code class=\"language-java\">import org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\nimport 
org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;\nimport org.springframework.security.config.web.server.ServerHttpSecurity;\nimport org.springframework.security.core.userdetails.MapReactiveUserDetailsService;\nimport org.springframework.security.core.userdetails.User;\nimport org.springframework.security.core.userdetails.UserDetails;\nimport org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;\nimport org.springframework.security.crypto.password.PasswordEncoder;\nimport org.springframework.security.web.server.SecurityWebFilterChain;\n\nimport static org.springframework.security.config.Customizer.withDefaults;\n\n@Configuration\n@EnableWebFluxSecurity\npublic class SecurityConfig {\n\n    @Bean\n    public SecurityWebFilterChain springSecurityFilterChain(ServerHttpSecurity http) {\n        http\n          .authorizeExchange(exchanges -&gt; exchanges\n              .pathMatchers(&quot;\/public\/**&quot;).permitAll() \/\/ Allow public access\n              .anyExchange().authenticated()       \/\/ Require auth for everything else\n            )\n          .httpBasic(withDefaults()) \/\/ Enable HTTP Basic auth\n          .formLogin(withDefaults()); \/\/ Enable Form login\n            \/\/.csrf(csrf -&gt; csrf.disable()); \/\/ Disable CSRF for simplicity if needed\n        return http.build();\n    }\n\n    @Bean\n    public MapReactiveUserDetailsService userDetailsService() {\n        UserDetails user = User.builder()\n          .username(&quot;user&quot;)\n          .password(passwordEncoder().encode(&quot;password&quot;))\n          .roles(&quot;USER&quot;)\n          .build();\n        return new MapReactiveUserDetailsService(user);\n    }\n\n    @Bean\n    public PasswordEncoder passwordEncoder() {\n        return new BCryptPasswordEncoder();\n    }\n}\n<\/code><\/pre>\n<em>[59, 61, 62, 126]<\/em><\/li>\n<li><strong>Reactive Method Security:<\/strong> Spring Security also supports securing reactive methods (returning 
<code>Mono<\/code> or <code>Flux<\/code>) using annotations like <code>@PreAuthorize<\/code> by adding the <code>@EnableReactiveMethodSecurity<\/code> annotation.[62]<\/li>\n<\/ul>\n<h3>Spring Cloud Gateway<\/h3>\n<p>For microservice architectures, an API Gateway acts as a single entry point, handling concerns like routing, security, rate limiting, and monitoring. <strong>Spring Cloud Gateway<\/strong> is the reactive gateway solution from the Spring Cloud portfolio, built entirely on Spring WebFlux, Project Reactor, and Netty.[21, 61, 128, 129, 130, 131, 132, 133, 134, 135]<\/p>\n<ul>\n<li><strong>Core Reactive Features:<\/strong>\n<ul>\n<li><strong>Reactive Routing:<\/strong> Routes requests to downstream services based on predicates. Predicates match request attributes such as path, host, headers, query parameters, and HTTP method.[129, 130, 131, 132, 133, 135] Routes are defined via configuration (<code>application.yml<\/code>) or the Java DSL (<code>RouteLocatorBuilder<\/code>).<\/li>\n<li><strong>Reactive Filtering:<\/strong> Filters modify requests and responses flowing through the gateway. 
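For example, a route can pair a path predicate with built-in filters in <code>application.yml<\/code> (a sketch; the downstream URI and paths are hypothetical):<pre><code class=\"language-yaml\"># Hypothetical route combining a predicate with built-in filters\nspring:\n  cloud:\n    gateway:\n      routes:\n        - id: user-service-route\n          uri: http:\/\/localhost:8081       # Assumed downstream service address\n          predicates:\n            - Path=\/api\/users\/**           # Route requests matching this path\n          filters:\n            - StripPrefix=1                  # Drop \/api before forwarding\n            - AddRequestHeader=X-Gateway, spring-cloud-gateway\n<\/code><\/pre>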
Filters can be applied globally or specific to routes.[129, 130, 132, 133, 135] Built-in filters handle tasks like adding\/removing headers, rewriting paths, rate limiting (<code>RequestRateLimiter<\/code>), circuit breaking (<code>CircuitBreaker<\/code>), and security (<code>TokenRelay<\/code>).[130, 131, 132] Custom filters can also be created.<\/li>\n<li><strong>Non-blocking Foundation:<\/strong> Built on WebFlux, it handles all I\/O without blocking, making it highly scalable and efficient for managing traffic to potentially many microservices.[129, 130, 131, 132, 134, 135]<\/li>\n<\/ul>\n<\/li>\n<li><strong>Other Features:<\/strong> Integrates with Spring Cloud DiscoveryClient (like Eureka) for dynamic routing [131, 132, 133, 134], supports load balancing using Spring Cloud LoadBalancer [132, 133], provides Actuator endpoints for monitoring [132, 133], and supports WebSockets.[135] Commercial extensions (Tanzu Spring) add features like enhanced SSO and access control.[128]<\/li>\n<\/ul>\n<h3>Spring Session Reactive<\/h3>\n<p>Managing user sessions in a distributed, reactive environment requires a non-blocking approach. <strong>Spring Session<\/strong> provides reactive support, integrating with Spring WebFlux\u2019s <code>WebSession<\/code> abstraction.[136, 137, 139, 140, 143]<\/p>\n<ul>\n<li><strong><code>WebSession<\/code>:<\/strong> The reactive counterpart to the Servlet API\u2019s <code>HttpSession<\/code>.[136, 139, 140, 143]<\/li>\n<li><strong><code>ReactiveSessionRepository<\/code>:<\/strong> An interface for saving, retrieving, and deleting sessions reactively. 
Spring Session provides implementations backed by various data stores [136, 137, 138, 139, 140, 141, 143]:\n<ul>\n<li>Redis (<code>Spring Session Data Redis<\/code>).[136, 137, 138, 139, 140]<\/li>\n<li>MongoDB (<code>Spring Session MongoDB<\/code>).[139]<\/li>\n<li>JDBC (<code>Spring Session JDBC<\/code>).[139, 140]<\/li>\n<li>Hazelcast (<code>Spring Session Hazelcast<\/code>).[139, 143]<\/li>\n<\/ul>\n<\/li>\n<li><strong>Integration:<\/strong> Integration is typically enabled via annotations like <code>@EnableRedisWebSession<\/code>, <code>@EnableMongoWebSession<\/code>, etc\u2026 [136, 138, 140, 143] These annotations register a custom <code>WebSessionManager<\/code> bean backed by the corresponding <code>ReactiveSessionRepository<\/code>.[140] This allows WebFlux applications to use Spring Session for centralized, potentially clustered, session management without blocking operations.<\/li>\n<\/ul>\n<p>The breadth of these reactive modules highlights the completeness of Spring\u2019s reactive ecosystem. Developers can build sophisticated, fully reactive applications, covering web interactions, data persistence (SQL and NoSQL), messaging, security, API gateways, and session management, all within the familiar Spring framework.[21]<\/p>\n<h2>6. Integrating Reactive Components: An End-to-End View<\/h2>\n<p>Building a truly reactive application involves more than just using individual reactive components; it requires integrating them seamlessly to ensure the entire request processing pipeline is non-blocking from start to finish. 
Any blocking operation introduced at any stage can potentially bottleneck the system and negate the benefits of the reactive architecture.[1, 8, 48, 161]<\/p>\n<h3>Conceptual Flow of a Reactive Request<\/h3>\n<p>Consider a typical request lifecycle in a microservices application built using the Spring reactive stack:<\/p>\n<ol>\n<li><strong>Request Arrival:<\/strong> An incoming HTTP request hits the server, typically managed by a non-blocking runtime like Netty.[19]<\/li>\n<li><strong>API Gateway (Optional):<\/strong> If using Spring Cloud Gateway, the request first passes through its reactive filter chain. Predicates match the request to a route, and filters (e.g., security, rate limiting, path rewriting) modify the request reactively before forwarding it.[129, 130, 132, 133, 135]<\/li>\n<li><strong>WebFlux Handling:<\/strong> The request reaches the target microservice\u2019s WebFlux <code>DispatcherHandler<\/code>. Based on the configuration (Annotated Controller or Functional Endpoint), the request is routed to the appropriate handler method or <code>HandlerFunction<\/code> on an event loop thread.[19, 40, 152]<\/li>\n<li><strong>Security Interception:<\/strong> Spring Security Reactive intercepts the request via its <code>SecurityWebFilterChain<\/code> to perform authentication and authorization checks non-blockingly.[59, 126] Session information might be retrieved reactively using Spring Session Reactive.[136, 140]<\/li>\n<li><strong>Service Logic:<\/strong> The controller\/handler invokes service layer methods. 
These methods orchestrate the business logic, potentially involving calls to other reactive components.<\/li>\n<li><strong>Reactive Data Access:<\/strong> If database interaction is needed, the service layer calls methods on Spring Data reactive repositories (R2DBC for SQL, or reactive variants for NoSQL like MongoDB).[29, 55, 69, 122] These repository methods return <code>Mono<\/code> or <code>Flux<\/code>, and the interaction with the database driver occurs non-blockingly.<\/li>\n<li><strong>Reactive Messaging:<\/strong> If the service needs to publish an event or communicate asynchronously, it uses a reactive Kafka producer (e.g., <code>ReactiveKafkaProducerTemplate<\/code>) to send a message non-blockingly.[23, 94, 99, 100] Conversely, reactive consumers (<code>KafkaReceiver<\/code>) might be listening for incoming messages on separate reactive streams.<\/li>\n<li><strong>Composition with Operators:<\/strong> Results from various asynchronous operations (database calls, external API calls via <code>WebClient<\/code>, Kafka sends) are combined and transformed using Project Reactor operators (<code>flatMap<\/code>, <code>zip<\/code>, <code>map<\/code>, <code>filter<\/code>, etc.) to build the final response stream.<\/li>\n<li><strong>Response Writing:<\/strong> The final <code>Mono<\/code> or <code>Flux<\/code> representing the response is returned to WebFlux. 
The framework subscribes to it and writes the data asynchronously back to the client through the non-blocking server (e.g., Netty).[3, 19, 22]<\/li>\n<\/ol>\n<p>Illustrative examples like a reactive microservice handling CRUD operations with R2DBC and potentially interacting via Kafka [53, 54, 92] or a stock analytics application processing Kafka streams and persisting to a database reactively [23] showcase this end-to-end integration.<\/p>\n<h3>Key Considerations for Building Fully Reactive Systems<\/h3>\n<p>Successfully building and operating fully reactive systems requires attention to several critical aspects:<\/p>\n<ul>\n<li><strong>End-to-End Non-Blocking:<\/strong> This is paramount. Any blocking call within the reactive pipeline, especially on an event loop thread, can severely degrade performance and scalability, potentially freezing the application.[8, 48] This includes not only database access (use R2DBC\/reactive NoSQL drivers, not JDBC\/JPA directly) and HTTP calls (use <code>WebClient<\/code>, not <code>RestTemplate<\/code>), but also any third-party library interactions. Tools like <strong>BlockHound<\/strong> can be integrated during development and testing to detect accidental blocking calls.[161] If a blocking call is unavoidable, it <em>must<\/em> be explicitly offloaded to a separate, dedicated thread pool using Reactor\u2019s scheduler operators like <code>publishOn(Schedulers.boundedElastic())<\/code> or <code>subscribeOn(Schedulers.boundedElastic())<\/code>. 
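A minimal sketch of that offloading pattern, where `blockingLookup` stands in for any legacy blocking API (JDBC, file I/O, a blocking HTTP client):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

// Wrap an unavoidable blocking call so it runs on the bounded-elastic pool
// rather than stalling an event-loop thread.
public class BlockingBridge {

    static String blockingLookup(String id) {
        return "value-for-" + id;   // pretend this call blocks on I/O
    }

    static Mono<String> lookup(String id) {
        return Mono.fromCallable(() -> blockingLookup(id))
                   // the Callable executes on a bounded-elastic worker thread
                   .subscribeOn(Schedulers.boundedElastic());
    }
}
```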
However, this is a workaround and less efficient than maintaining a fully non-blocking flow.[8, 19]<\/li>\n<li><strong>Thread Model and Schedulers:<\/strong> Reactive applications typically operate with a small number of event loop threads for handling I\/O and request processing.[8, 19] CPU-intensive tasks or unavoidable blocking calls should be moved off these threads using Reactor\u2019s Schedulers (<code>Schedulers.parallel()<\/code> for CPU-bound, <code>Schedulers.boundedElastic()<\/code> for blocking I\/O) via operators like <code>publishOn()<\/code> (changes the thread for downstream operators) and <code>subscribeOn()<\/code> (changes the thread for the source emission and upstream operators).[8, 19, 25] Understanding how these schedulers interact is crucial for performance tuning and avoiding unexpected behavior.<\/li>\n<li><strong>Error Handling:<\/strong> Traditional <code>try-catch<\/code> blocks are often ineffective for handling errors within asynchronous, declarative reactive chains. Errors in reactive streams are propagated as terminal signals. 
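For instance, a failing `Mono` can recover from a terminal error signal by falling back to a default value or an alternative publisher (the method names below are illustrative):

```java
import reactor.core.publisher.Mono;

// Illustrative recovery from an error signal in a reactive chain.
public class ErrorRecovery {

    // Recover with a constant fallback value.
    public static Mono<String> priceWithFallback(Mono<String> source) {
        return source.onErrorReturn("cached-price");
    }

    // Switch to an alternative publisher when the source errors.
    public static Mono<String> priceFromCacheOnError(Mono<String> source) {
        return source.onErrorResume(ex -> Mono.just("cache:" + ex.getMessage()));
    }
}
```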
Reactor provides specific operators for error handling, such as:\n<ul>\n<li><code>onErrorReturn(fallbackValue)<\/code>: Emit a default value upon error.<\/li>\n<li><code>onErrorResume(fallbackPublisher)<\/code>: Switch to a fallback <code>Mono<\/code> or <code>Flux<\/code> upon error.[36, 45, 70, 108, 151]<\/li>\n<li><code>onErrorMap(exceptionMapper)<\/code>: Transform one exception type into another.<\/li>\n<li><code>retry(N)<\/code> or <code>retryWhen(RetrySpec)<\/code>: Resubscribe to the source upon error, potentially with backoff strategies.[9, 13, 25, 108, 115]<\/li>\n<\/ul>\nImplementing robust error handling strategies within the reactive pipeline is essential for resilience.[9, 13, 17, 25, 32, 36, 45, 51, 94, 108, 110, 115, 151]<\/li>\n<li><strong>Debugging Complexity:<\/strong> Debugging reactive applications can be challenging.[9, 10, 13, 17, 25, 26, 48] Stack traces often don\u2019t reflect the logical call chain due to the asynchronous nature and operator fusion. Techniques and tools that help include:\n<ul>\n<li>Detailed logging within operators (<code>.log()<\/code>).<\/li>\n<li>Using Reactor\u2019s debugging features (e.g., <code>Hooks.onOperatorDebug()<\/code>, <code>checkpoint()<\/code>).<\/li>\n<li>The Reactor Debug Agent for more informative stack traces (with some performance overhead).<\/li>\n<li>Carefully testing individual reactive components and chains.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Transaction Management:<\/strong> Handling transactions across asynchronous operations requires specific support. Spring provides reactive transaction management capabilities. 
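As one sketch of reactive transaction demarcation, assuming an `R2dbcTransactionManager` is configured and using a hypothetical `account` table:

```java
import org.springframework.r2dbc.core.DatabaseClient;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.reactive.TransactionalOperator;
import reactor.core.publisher.Mono;

// Hypothetical service; "account" and the column names are illustrative.
@Service
public class TransferService {

    private final DatabaseClient client;
    private final TransactionalOperator tx;

    public TransferService(DatabaseClient client, TransactionalOperator tx) {
        this.client = client;
        this.tx = tx;
    }

    // Declarative: commits when the returned Mono completes,
    // rolls back if it terminates with an error.
    @Transactional
    public Mono<Void> debit(long accountId, long amount) {
        return client.sql("UPDATE account SET balance = balance - :amt WHERE id = :id")
                .bind("amt", amount)
                .bind("id", accountId)
                .then();
    }

    // Programmatic alternative using TransactionalOperator.
    public Mono<Void> debitProgrammatic(long accountId, long amount) {
        Mono<Void> work = client.sql("UPDATE account SET balance = balance - :amt WHERE id = :id")
                .bind("amt", amount)
                .bind("id", accountId)
                .then();
        return tx.transactional(work);
    }
}
```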
For R2DBC, this often involves using <code>TransactionalOperator<\/code> for programmatic transaction demarcation or <code>@Transactional<\/code> on methods returning <code>Mono<\/code>\/<code>Flux<\/code> when using <code>R2dbcTransactionManager<\/code>.[28, 82, 156] For Kafka, Reactor Kafka\u2019s <code>KafkaSender<\/code> offers transactional send methods (<code>.sendTransactionally()<\/code>) that work with a <code>TransactionManager<\/code>.[98, 112]<\/li>\n<\/ul>\n<p>Building fully reactive systems introduces operational complexities alongside performance benefits. Understanding the asynchronous flow, the nuances of error handling, and the challenges of debugging is critical.[9, 10, 13, 17, 19, 25, 26, 48] The shift requires not just adopting new libraries but also a different mindset regarding concurrency, state management, and failure recovery compared to traditional imperative, blocking systems.<\/p>\n<h2>7. Conclusion<\/h2>\n<p>The Spring ecosystem provides a comprehensive and robust suite of tools for building reactive applications. From the foundational reactive web framework <strong>Spring WebFlux<\/strong> [21] to reactive data access solutions like <strong>Spring Data R2DBC<\/strong> for relational databases [55, 69] and <strong>Spring Data Reactive Repositories<\/strong> for NoSQL stores like MongoDB, Cassandra, and Redis [21, 29, 30, 120, 121, 122, 123, 124], Spring enables non-blocking interactions throughout the data layer. 
Integration with messaging systems like <strong>Apache Kafka<\/strong> is facilitated through <strong>Reactor Kafka<\/strong>, which provides reactive wrappers around the Kafka producer and consumer APIs [23, 94], and security is handled non-blockingly by <strong>Spring Security Reactive<\/strong>.[59, 126] Furthermore, <strong>Spring Cloud Gateway<\/strong> offers a reactive API gateway solution [128, 131], and <strong>Spring Session Reactive<\/strong> manages user sessions without blocking.[136, 139, 140] This extensive support allows developers to construct end-to-end reactive systems primarily within the Spring framework.<\/p>\n<h3>Benefits and Considerations<\/h3>\n<p>Adopting the reactive stack with Spring offers significant advantages, particularly for certain types of applications:<\/p>\n<ul>\n<li>\n<p><strong>Benefits:<\/strong><\/p>\n<ul>\n<li><strong>Scalability:<\/strong> The non-blocking, event-driven architecture allows applications to handle a high number of concurrent users and requests with significantly fewer threads compared to traditional blocking models.[1, 3, 5, 6, 7, 8, 9, 10, 13, 14, 15, 16, 17, 19, 20, 21, 26, 31, 47, 64]<\/li>\n<li><strong>Resource Efficiency:<\/strong> Fewer threads translate to lower memory consumption and reduced CPU overhead from context switching, leading to more efficient use of hardware resources.[1, 6, 7, 8, 9, 10, 13, 14, 15, 16, 17, 19, 21, 26, 42, 64]<\/li>\n<li><strong>Responsiveness:<\/strong> Applications remain responsive under load, especially those involving I\/O-bound operations (network calls, database access), as threads don\u2019t block waiting for responses.[6, 7, 9, 10, 12, 13, 14, 16, 17, 64]<\/li>\n<li><strong>Resilience:<\/strong> Features like backpressure prevent components from being overwhelmed, and the reactive programming model offers robust patterns for handling errors and failures within asynchronous streams.[6, 9, 10, 13, 14, 25]<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Considerations:<\/strong><\/p>\n<ul>\n<li><strong>Learning Curve:<\/strong> 
Reactive programming involves a different paradigm (asynchronous streams, operators, backpressure) that can be challenging for developers accustomed to imperative, blocking code.[2, 6, 13, 26, 31, 144]<\/li>\n<li><strong>Debugging Complexity:<\/strong> Tracing execution flow and diagnosing errors in asynchronous, multi-stage reactive pipelines can be more difficult than debugging synchronous code.[9, 10, 13, 17, 25, 26, 48]<\/li>\n<li><strong>End-to-End Non-Blocking Requirement:<\/strong> The full benefits are realized only when the entire stack, including all dependencies and integrations, is non-blocking. Introducing blocking calls can create severe performance issues.[1, 8, 48, 161]<\/li>\n<li><strong>Ecosystem Maturity:<\/strong> While core components are robust, some parts of the reactive ecosystem (like R2DBC compared to JPA) are newer and may lack certain features or tooling maturity.[28, 69, 72]<\/li>\n<li><strong>Potential Overhead:<\/strong> For very simple, low-concurrency applications, the overhead of the reactive machinery might lead to slightly higher processing time per request compared to a simple blocking model.[19, 161]<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Guidance on When to Choose the Reactive Stack<\/h3>\n<p>The decision to use Spring WebFlux and the reactive stack versus the traditional Spring MVC stack is a significant architectural choice that should be based on specific project needs and constraints.[2, 3, 6, 8, 19, 31, 42, 64, 144] It is not a universally superior approach but rather a powerful tool for specific problem domains.<\/p>\n<p><strong>Choose the Reactive Stack (WebFlux, R2DBC, etc.) 
when:<\/strong><\/p>\n<ul>\n<li><strong>High Concurrency \/ Scalability is Required:<\/strong> Applications expecting a large number of simultaneous connections or requests, especially those involving significant I\/O wait times (e.g., microservices calling other services, database-intensive operations).[1, 2, 3, 5, 6, 7, 9, 13, 14, 15, 16, 17, 20, 21, 26, 31, 47, 64]<\/li>\n<li><strong>Real-time Data Streaming:<\/strong> Applications involving WebSockets, Server-Sent Events (SSE), or other forms of real-time data push\/streaming.[3, 6, 14, 16, 17]<\/li>\n<li><strong>Resource Efficiency is Critical:<\/strong> Environments where minimizing thread count and memory usage is important (e.g., cloud-native deployments, high-density hosting).[15, 16, 19, 26, 64]<\/li>\n<li><strong>Building Fully Reactive Systems:<\/strong> When integrating with other inherently reactive systems (e.g., reactive databases, message queues with reactive clients) to maintain non-blocking behavior end-to-end.<\/li>\n<li><strong>Functional Programming Alignment:<\/strong> Teams comfortable with or preferring a functional, declarative programming style for handling asynchronous operations.<\/li>\n<\/ul>\n<p><strong>Consider Traditional Spring MVC when:<\/strong><\/p>\n<ul>\n<li><strong>Simpler CRUD Applications:<\/strong> Standard request-response applications with moderate concurrency requirements where the complexity of reactive programming might not be justified.[2, 6, 31, 64]<\/li>\n<li><strong>CPU-Bound Workloads:<\/strong> Applications where the primary bottleneck is CPU processing rather than I\/O waiting.[3, 31]<\/li>\n<li><strong>Existing Blocking Dependencies:<\/strong> Projects heavily reliant on blocking libraries (e.g., JDBC, JPA, blocking network clients) where migrating the entire dependency chain to non-blocking alternatives is impractical or too costly.[8, 19, 42]<\/li>\n<li><strong>Team Familiarity:<\/strong> Development teams are primarily experienced with traditional 
imperative\/synchronous programming and the learning curve for reactive is a significant barrier.[2, 31, 144]<\/li>\n<\/ul>\n<p><strong>The Impact of Virtual Threads (Project Loom):<\/strong><\/p>\n<p>It is also important to acknowledge the evolving landscape of concurrency in Java with the introduction of <strong>Virtual Threads<\/strong> (available as a preview feature in earlier JDKs and finalized in JDK 21).[18, 25, 42, 144, 162] Virtual Threads aim to make blocking I\/O significantly cheaper by allowing a massive number of virtual threads to run on a small number of platform (OS) threads. When a virtual thread blocks on I\/O, its underlying platform thread is released to do other work, rather than being held idle.<\/p>\n<p>Spring Framework 6 and Spring Boot 3 are designed to work seamlessly with Virtual Threads, particularly within the Spring MVC stack.[42] This means that traditional Spring MVC applications, when run on a JDK with Virtual Threads enabled and configured, can achieve significant improvements in scalability for I\/O-bound workloads <em>without<\/em> requiring a shift to the reactive programming model.[18, 42, 162]<\/p>\n<p>While reactive programming still offers distinct advantages in terms of its explicit handling of data streams, operator composition, and built-in backpressure mechanisms, Virtual Threads provide an alternative path to scalability for many common web application scenarios.[18] This development may influence the decision-making process, potentially making Spring MVC with Virtual Threads a viable option for use cases that might previously have strongly indicated a need for Spring WebFlux solely to overcome blocking I\/O limitations. 
The choice between reactive programming and Virtual Threads becomes more nuanced, depending on whether the application benefits more from the reactive <em>programming model<\/em> itself or simply needs efficient handling of concurrent blocking I\/O.<\/p>\n<p>In conclusion, Spring\u2019s reactive ecosystem offers a powerful set of tools for building modern, scalable, and resilient applications. Understanding the core components like WebFlux, R2DBC, and reactive Kafka integration, along with the broader reactive support across Spring Data, Security, Cloud Gateway, and Session, enables developers to make informed decisions and effectively leverage reactive programming when the use case demands it.<\/p>\n<\/div>\n