{"id":3951,"date":"2025-12-22T10:00:00","date_gmt":"2025-12-22T15:00:00","guid":{"rendered":"https:\/\/www.mymiller.name\/wordpress\/?p=3951"},"modified":"2025-12-18T11:09:43","modified_gmt":"2025-12-18T16:09:43","slug":"scaling-streams-mastering-virtual-threads-in-spring-boot-4-and-java-25","status":"publish","type":"post","link":"https:\/\/www.mymiller.name\/wordpress\/java\/scaling-streams-mastering-virtual-threads-in-spring-boot-4-and-java-25\/","title":{"rendered":"Scaling Streams: Mastering Virtual Threads in Spring Boot 4 and Java 25"},"content":{"rendered":"\n<p>As a software architect, I\u2019ve seen the industry shift from heavy platform threads to reactive streams, and finally to the &#8220;best of both worlds&#8221;: <strong>Virtual Threads<\/strong>. With the recent release of <strong>Spring Boot 4.0<\/strong> and <strong>Java 25 (LTS)<\/strong>, Project Loom&#8217;s innovations have officially become the bedrock of high-concurrency enterprise Java.<\/p>\n\n\n\n<p>Today, we\u2019re going to look at a modern architectural challenge: scaling intelligent data pipelines using <strong>Spring Boot 4<\/strong>, <strong>Spring AI<\/strong>, and <strong>DL4J<\/strong> by &#8220;injecting&#8221; virtual threads and managing state with <strong>Scoped Values<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Why Spring Boot 4 and Java 25?<\/h2>\n\n\n\n<p>Spring Boot 4.0 is designed from the ground up for the Java 25 ecosystem. While Spring Boot 3 introduced initial support, version 4.0 treats Virtual Threads as a first-class citizen, enabling them by default for most I\/O-bound operations. This allows us to handle millions of concurrent tasks\u2014like LLM orchestrations\u2014without the cognitive overhead of reactive programming (Project Reactor).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Problem: The &#8220;I\/O Wall&#8221; in Streams<\/h2>\n\n\n\n<p>Java&#8217;s standard parallel streams use the <code>ForkJoinPool.commonPool()<\/code>. 
If your stream performs blocking I\/O\u2014such as calling an LLM via <strong>Spring AI<\/strong> or running a multi-layered prediction via <strong>DL4J<\/strong>\u2014the common pool quickly saturates. This leads to thread starvation and brings your entire application to a crawl.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/\/ Traditional Parallel Stream (Dangerous for I\/O)\nlist.parallelStream()\n    .map(data -&gt; springAiClient.generate(data)) \/\/ Blocks common pool threads!\n    .collect(Collectors.toList());\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">The Solution: Seamless Virtual Thread Injection<\/h2>\n\n\n\n<p>In Java 25, we can maintain the declarative beauty of Streams but offload the &#8220;heavy&#8221; part of the pipeline to Virtual Threads. Spring Boot 4 makes this incredibly easy.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1. Enable Virtual Threads<\/h3>\n\n\n\n<p>In Spring Boot 4, virtual threads are often enabled by default if the JVM supports them, but you can ensure it in your <code>application.yml<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>spring:\n  threads:\n    virtual:\n      enabled: true\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">2. Context Management with Scoped Values<\/h3>\n\n\n\n<p>When spawning millions of virtual threads, <code>ThreadLocal<\/code> is an anti-pattern due to memory overhead and potential leaks. Java 25&#8217;s <strong>Scoped Values<\/strong> provide a lightweight, immutable, and thread-safe alternative for sharing context (like Tenant IDs or Security Tokens).<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>public class SecurityContext {\n    \/\/ ScopedValue is the modern, lightweight replacement for ThreadLocal\n    public static final ScopedValue&lt;String&gt; TENANT_ID = ScopedValue.newInstance();\n}\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">3. 
The Stream Chain with Spring AI and DL4J<\/h3>\n\n\n\n<p>Here is how we integrate <strong>Spring AI<\/strong> for summarization and <strong>DL4J<\/strong> for deep learning inference, all while keeping our virtual threads context-aware. One subtlety: a <code>ScopedValue<\/code> binding is inherited automatically only by threads forked through structured concurrency, so each task submitted to a plain executor re-establishes the binding itself.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>@Service\npublic class IntelligenceService {\n\n    private final ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();\n\n    @Autowired\n    private ChatClient chatClient; \/\/ Spring AI\n\n    @Autowired\n    private MultiLayerNetwork model; \/\/ DL4J network used for inference\n\n    public List&lt;AnalysisResult&gt; processIntelligencePipeline(List&lt;Document&gt; docs, String tenantId) {\n        \/\/ Bind the context for the caller's scope using ScopedValue\n        return ScopedValue.where(SecurityContext.TENANT_ID, tenantId).call(() -&gt;\n            docs.stream()\n                .filter(doc -&gt; !doc.isEmpty())\n\n                \/\/ Step 1: Offload LLM summarization to a Virtual Thread.\n                \/\/ Re-bind inside the task: ScopedValue bindings do not propagate\n                \/\/ into tasks submitted to a plain ExecutorService.\n                .map(doc -&gt; CompletableFuture.supplyAsync(() -&gt;\n                    ScopedValue.where(SecurityContext.TENANT_ID, tenantId).call(() -&gt; {\n                        String currentTenant = SecurityContext.TENANT_ID.get(); \/\/ for tenant-aware routing or auditing\n                        \/\/ Call Spring AI (blocking I\/O is now cheap)\n                        String summary = chatClient.prompt(doc.getContent()).call().content();\n                        return new IntermediateResult(doc.getId(), summary);\n                    }), executor))\n\n                .toList().stream()\n                .map(CompletableFuture::join)\n\n                \/\/ Step 2: Offload DL4J Deep Learning Inference\n                .map(res -&gt; CompletableFuture.supplyAsync(() -&gt; {\n                    \/\/ DL4J model prediction (prepareTensor is a private helper)\n                    INDArray tensor = prepareTensor(res.getSummary());\n                    return new AnalysisResult(res.getId(), model.predict(tensor));\n                }, executor))\n\n                .toList().stream()\n                .map(CompletableFuture::join)\n                .collect(Collectors.toList())\n        );\n    
}\n}\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Deep Dive: Breaking Down the Intelligence Pipeline<\/h2>\n\n\n\n<p>To truly appreciate the architectural elegance of this pattern, let&#8217;s break down the <code>processIntelligencePipeline<\/code> method step by step:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Context Wrapper (<code>ScopedValue.where<\/code>)<\/h3>\n\n\n\n<p>The entire stream is wrapped in a <code>ScopedValue.where(...).call(...)<\/code> block. In Java 25, this binds the <code>tenantId<\/code> to the current execution scope. Unlike <code>InheritableThreadLocal<\/code>, whose value is copied into every child thread, a Scoped Value is a single immutable binding that is shared rather than duplicated. Note, however, that bindings are inherited automatically only by threads forked through structured concurrency (<code>StructuredTaskScope<\/code>); a task handed to a plain <code>ExecutorService<\/code> must re-establish the binding before reading it. This is critical when you have 100,000+ concurrent requests; the memory savings are massive.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The First Concurrency Injection (Spring AI)<\/h3>\n\n\n\n<p>We use <code>.map(...)<\/code> to transform each <code>Document<\/code> into a <code>CompletableFuture<\/code>. By passing our virtual-thread <code>executor<\/code> to <code>supplyAsync<\/code>, we ensure that the high-latency call to <strong>Spring AI<\/strong> (which might take 500ms or more) runs on a Virtual Thread.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Architectural Benefit:<\/strong> While the Virtual Thread waits for the LLM response, it &#8220;unmounts&#8221; from its carrier thread, allowing the CPU to process other tasks.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">The Barrier Sync (<code>toList().stream().map(join)<\/code>)<\/h3>\n\n\n\n<p>Because standard Java Streams are lazy, we must call <code>.toList()<\/code> to trigger the execution of all asynchronous tasks. We then immediately reopen the stream and call <code>join()<\/code>. 
This acts as a synchronization barrier: <code>join()<\/code> does block the calling thread until every summary is complete, but when the caller is itself a virtual thread (the Spring Boot 4 default for request handling), that blocking is cheap and does not tie up a platform thread.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Second Concurrency Injection (DL4J)<\/h3>\n\n\n\n<p>Once we have our summaries, we repeat the pattern for the <strong>DL4J<\/strong> inference. Deep learning predictions can be CPU-intensive (tensor preparation) or I\/O-intensive (when offloading to a GPU or a remote server).<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Why Virtual Threads here?<\/strong> By using them again, we decouple the &#8220;Data Science&#8221; logic from the main application flow, ensuring that even if one model prediction takes longer, it doesn&#8217;t block the rest of the stream elements.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">The Final Collection<\/h3>\n\n\n\n<p>The final <code>.collect(Collectors.toList())<\/code> returns the fully hydrated <code>AnalysisResult<\/code> objects to the caller. The beauty of this approach is that the caller sees a simple, synchronous method signature, while underneath, the JVM has orchestrated a highly concurrent, context-aware AI pipeline.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Key Architectural Takeaways<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Reactive vs. Virtual<\/strong>: With Spring Boot 4 and Java 25, the need for WebFlux and Project Reactor is diminishing for most business applications. 
You get the same scalability with simple, imperative code.<\/li>\n\n\n\n<li><strong>Memory Efficiency<\/strong>: Replacing <code>ThreadLocal<\/code> with <code>ScopedValue<\/code> is non-negotiable when dealing with the high thread counts that Virtual Threads enable.<\/li>\n\n\n\n<li><strong>The &#8220;Wait-State&#8221; is Free<\/strong>: Because Virtual Threads unmount from the carrier thread during I\/O (like waiting for a Spring AI response), your CPU stays busy doing actual work instead of waiting for network packets.<\/li>\n\n\n\n<li><strong>DL4J Integration<\/strong>: Even with compute-heavy ML libraries like DL4J, using Virtual Threads for the pre-processing and post-processing I\/O steps ensures that the GPU or CPU-bound inference isn&#8217;t bottlenecked by data ingestion.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>Spring Boot 4.0 and Java 25 have fundamentally changed how we design high-throughput systems. By leveraging Virtual Threads and Scoped Values, we can build sophisticated AI-integrated pipelines that are easy to write, easy to debug, and incredibly fast.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As a software architect, I\u2019ve seen the industry shift from heavy platform threads to reactive streams, and finally to the &#8220;best of both worlds&#8221;: Virtual Threads. With the recent release of Spring Boot 4.0 and Java 25 (LTS), Project Loom&#8217;s innovations have officially become the bedrock of high-concurrency enterprise Java. 
Today, we\u2019re going to look [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3952,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[280,443,483],"tags":[429,69,319,484,475],"series":[],"class_list":["post-3951","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-java","category-spring_ai","category-spring4","tag-ai","tag-java-2","tag-spring","tag-spring4","tag-virtual-threads"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_wqijejwqijejwqij-scaled.avif","jetpack-related-posts":[{"id":3919,"url":"https:\/\/www.mymiller.name\/wordpress\/spring\/unleashing-scalability-spring-boot-and-java-virtual-threads\/","url_meta":{"origin":3951,"position":0},"title":"Unleashing Scalability: Spring Boot and Java Virtual Threads","author":"Jeffery Miller","date":"November 18, 2025","format":false,"excerpt":"Java has long been a powerhouse for enterprise applications, and Spring Boot has made developing them an absolute dream. But even with Spring Boot's magic, a persistent bottleneck has challenged developers: the overhead of traditional thread-per-request models when dealing with blocking I\/O operations. 
Think database calls, external API integrations, or\u2026","rel":"","context":"In &quot;Spring&quot;","block_context":{"text":"Spring","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/fiber-4814456_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/fiber-4814456_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/fiber-4814456_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/fiber-4814456_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/11\/fiber-4814456_1280.avif 3x"},"classes":[]},{"id":3912,"url":"https:\/\/www.mymiller.name\/wordpress\/uncategorized\/spring-boot-4-0-whats-next-for-the-modern-java-architect\/","url_meta":{"origin":3951,"position":1},"title":"Spring Boot 4.0: What&#8217;s Next for the Modern Java Architect?","author":"Jeffery Miller","date":"September 24, 2025","format":false,"excerpt":"A Forward-Looking Comparison of Spring Boot 3.x and 4.0 Staying on top of the rapidly evolving Java ecosystem is paramount for any software architect. The shift from Spring Boot 2.x to 3.x brought significant changes, notably the move to Jakarta EE. 
Now, with the horizon of Spring Boot 4.0 and\u2026","rel":"","context":"Similar post","block_context":{"text":"Similar post","link":""},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/09\/per-2056740_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/09\/per-2056740_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/09\/per-2056740_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/09\/per-2056740_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/09\/per-2056740_1280.avif 3x"},"classes":[]},{"id":3944,"url":"https:\/\/www.mymiller.name\/wordpress\/spring\/spring4\/goodbye-resilience4j-native-fault-tolerance-in-spring-boot-4\/","url_meta":{"origin":3951,"position":2},"title":"Goodbye Resilience4j? Native Fault Tolerance in Spring Boot 4","author":"Jeffery Miller","date":"December 18, 2025","format":false,"excerpt":"For years, the standard advice for building resilient Spring Boot microservices was simple: add Resilience4j. It became the Swiss Army knife for circuit breakers, rate limiters, and retries. However, with the release of Spring Boot 4, the landscape has shifted. 
The framework now promotes a \"batteries-included\" philosophy for fault tolerance.\u2026","rel":"","context":"In &quot;Spring4&quot;","block_context":{"text":"Spring4","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring\/spring4\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/iduino-uno-r3b-1699990_1280.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/iduino-uno-r3b-1699990_1280.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/iduino-uno-r3b-1699990_1280.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/iduino-uno-r3b-1699990_1280.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/iduino-uno-r3b-1699990_1280.avif 3x"},"classes":[]},{"id":3740,"url":"https:\/\/www.mymiller.name\/wordpress\/springboot\/threading-in-spring-a-comprehensive-guide\/","url_meta":{"origin":3951,"position":3},"title":"Threading in Spring: A Comprehensive Guide","author":"Jeffery Miller","date":"December 23, 2025","format":false,"excerpt":"Threading is a crucial aspect of building modern, high-performance applications. It allows you to execute multiple tasks concurrently, improving responsiveness and utilizing system resources effectively. Spring Framework provides robust support for managing and using threads, simplifying development and ensuring efficiency. 
This article explores thread usage in Spring, delves into different\u2026","rel":"","context":"In &quot;Springboot&quot;","block_context":{"text":"Springboot","link":"https:\/\/www.mymiller.name\/wordpress\/category\/springboot\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/10\/ai-generated-8248619_1280-jpg.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/10\/ai-generated-8248619_1280-jpg.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/10\/ai-generated-8248619_1280-jpg.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/10\/ai-generated-8248619_1280-jpg.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2024\/10\/ai-generated-8248619_1280-jpg.avif 3x"},"classes":[]},{"id":3961,"url":"https:\/\/www.mymiller.name\/wordpress\/spring\/spring4\/architecting-spring-boot-4-with-official-spring-grpc-support\/","url_meta":{"origin":3951,"position":4},"title":"Architecting Spring Boot 4 with Official Spring gRPC Support","author":"Jeffery Miller","date":"January 15, 2026","format":false,"excerpt":"For years, the Spring community relied on excellent third-party starters (like net.devh) to bridge the gap between Spring Boot and gRPC. 
With the evolution of Spring Boot 4 and the official Spring gRPC project, we now have native support that aligns perfectly with Spring's dependency injection, observability, and configuration models.\u2026","rel":"","context":"In &quot;Spring4&quot;","block_context":{"text":"Spring4","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring\/spring4\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2026\/01\/Gemini_Generated_Image_3yqio33yqio33yqi.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2026\/01\/Gemini_Generated_Image_3yqio33yqio33yqi.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2026\/01\/Gemini_Generated_Image_3yqio33yqio33yqi.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2026\/01\/Gemini_Generated_Image_3yqio33yqio33yqi.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2026\/01\/Gemini_Generated_Image_3yqio33yqio33yqi.avif 3x"},"classes":[]},{"id":3954,"url":"https:\/\/www.mymiller.name\/wordpress\/spring-batch\/architecting-batch-systems-with-spring-boot-4-0-and-spring-framework-7-0\/","url_meta":{"origin":3951,"position":5},"title":"Architecting Batch Systems with Spring Boot 4.0 and Spring Framework 7.0","author":"Jeffery Miller","date":"December 23, 2025","format":false,"excerpt":"With the release of Spring Boot 4.0 and Spring Framework 7.0, the batch processing landscape has evolved to embrace Java 25, Jakarta EE 11, and built-in resilience patterns. This guide provides a professional architectural blueprint for setting up a high-performance Spring Batch server. 1. 
Technical Baseline Java: 17 (Baseline) \/\u2026","rel":"","context":"In &quot;Spring Batch&quot;","block_context":{"text":"Spring Batch","link":"https:\/\/www.mymiller.name\/wordpress\/category\/spring-batch\/"},"img":{"alt_text":"","src":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_mmtkyammtkyammtk.avif","width":350,"height":200,"srcset":"https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_mmtkyammtkyammtk.avif 1x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_mmtkyammtkyammtk.avif 1.5x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_mmtkyammtkyammtk.avif 2x, https:\/\/www.mymiller.name\/wordpress\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_mmtkyammtkyammtk.avif 3x"},"classes":[]}],"jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3951","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/comments?post=3951"}],"version-history":[{"count":1,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3951\/revisions"}],"predecessor-version":[{"id":3953,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/posts\/3951\/revisions\/3953"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/media\/3952"}],"wp:attachment":[{"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/media?parent=3951"}],"wp:term":[{"taxonomy":"category","embedda
ble":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/categories?post=3951"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/tags?post=3951"},{"taxonomy":"series","embeddable":true,"href":"https:\/\/www.mymiller.name\/wordpress\/wp-json\/wp\/v2\/series?post=3951"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}