The landscape of AI is rapidly evolving, moving beyond simple request-response models toward more sophisticated, agentic systems. These systems empower Large Language Models (LLMs) not just to generate text, but to act within your applications, making them an active and integral part of your business logic. Spring AI is at the forefront of this evolution, offering powerful capabilities for integrating LLMs into your Spring applications. Today, we’re going to dive deep into one of its most exciting features: function calling, and how it enables truly agentic workflows.

Beyond Text Generation: The Power of Function Calling

Traditionally, an LLM answers a query by generating a response from its training data. But what if the answer requires real-time data or the execution of a specific business process? This is where function calling shines. Function calling allows you to expose your Spring service methods as “tools” that the LLM can invoke. When a query is best answered by executing one of these tools, the LLM generates a structured call to that function, including any necessary parameters. Your application then intercepts this call, executes the corresponding service method, and feeds the result back to the LLM for further processing or final response generation.
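Under the hood, each tool is advertised to the model as a name, a natural-language description, and a schema for its parameters (providers typically exchange this as JSON Schema). The following plain-Java sketch is illustrative only, not Spring AI’s internal representation; the type names here are hypothetical:

```java
import java.util.List;

// Illustrative tool metadata, NOT Spring AI's internal types. Frameworks
// derive an equivalent JSON Schema from your method signatures.
public class ToolMetadataSketch {

    public record Parameter(String name, String type, String description) {}

    public record ToolDefinition(String name, String description, List<Parameter> parameters) {}

    // Renders a one-line summary of the kind of metadata the model sees.
    public static String describe(ToolDefinition tool) {
        return tool.name() + ": " + tool.description();
    }

    public static void main(String[] args) {
        ToolDefinition availability = new ToolDefinition(
            "getProductAvailability",
            "Gets the availability of a product by its ID",
            List.of(new Parameter("productId", "string", "The product identifier, e.g. PROD-001")));
        System.out.println(describe(availability));
    }
}
```

The quality of the name and description directly affects how reliably the model picks the right tool, which is why writing them carefully matters as much as writing the code.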

Imagine an LLM that can not only tell you about product availability but can actually check the inventory in real-time, or even place an order. This is the power of agentic workflows enabled by function calling.

How it Works: A Spring AI Perspective

Let’s break down the core components of creating agentic workflows with Spring AI and function calling.

1. Defining Your Tools (Spring Services):

The first step is to identify the business logic you want to expose to your LLM. These will typically be methods within your existing Spring services. Consider methods that perform actions, retrieve specific data, or interact with external systems.

Java

@Service
public class ProductService {

    public String getProductAvailability(String productId) {
        // Simulate checking a database or external API
        if ("PROD-001".equals(productId)) {
            return "Product PROD-001 is in stock. Quantity: 10.";
        } else if ("PROD-002".equals(productId)) {
            return "Product PROD-002 is out of stock.";
        }
        return "Unknown product: " + productId;
    }

    public String placeOrder(String productId, int quantity) {
        // Simulate placing an order
        if (quantity > 0) {
            return String.format("Order placed successfully for %d units of %s.", quantity, productId);
        }
        return "Cannot place order with zero or negative quantity.";
    }
}

2. Describing Your Tools for the LLM:

The LLM needs to understand which functions are available and how to use them. Spring AI provides a way to register these functions with the chat model: you define a FunctionCallback that describes the function’s name, its purpose, and the parameters it expects. This description is crucial, because it is what the LLM uses to decide when and how to call your methods.

Java

@Configuration
public class FunctionConfig {

    // Request records describe each function's parameters; Spring AI derives
    // the JSON schema it sends to the model from these types.
    public record ProductAvailabilityRequest(String productId) {}
    public record PlaceOrderRequest(String productId, int quantity) {}

    @Bean
    public FunctionCallback productAvailabilityFunction(ProductService productService) {
        return FunctionCallbackWrapper.<ProductAvailabilityRequest, String>builder(
                request -> productService.getProductAvailability(request.productId()))
            .withName("getProductAvailability")
            .withDescription("Gets the availability of a product by its ID")
            .withInputType(ProductAvailabilityRequest.class)
            .build();
    }

    @Bean
    public FunctionCallback placeOrderFunction(ProductService productService) {
        return FunctionCallbackWrapper.<PlaceOrderRequest, String>builder(
                request -> productService.placeOrder(request.productId(), request.quantity()))
            .withName("placeOrder")
            .withDescription("Places an order for a product with a specified quantity")
            .withInputType(PlaceOrderRequest.class)
            .build();
    }
}

3. Orchestrating the Agentic Workflow:

Now you’ll use Spring AI’s ChatClient, configured with your registered functions, to interact with the LLM. When a user’s prompt calls for one of your exposed functions, the LLM responds with a structured tool-call message naming the function and its arguments. Spring AI executes the matching callback and sends the result back to the model as a tool response. This iterative loop lets the LLM continue the conversation with the newly acquired information.

Java

@RestController
public class AgentController {

    private final ChatClient chatClient;

    public AgentController(ChatClient.Builder chatClientBuilder) {
        // FunctionCallback beans are registered by name; enabling them here
        // makes both tools available on every request.
        this.chatClient = chatClientBuilder
            .defaultFunctions("getProductAvailability", "placeOrder")
            .build();
    }

    @GetMapping("/chat")
    public String chat(@RequestParam String message) {
        // Spring AI orchestrates the loop: if the model asks to call a tool,
        // the framework invokes the matching callback, returns the result to
        // the model, and only then produces the final answer.
        return chatClient.prompt()
            .user(message)
            .call()
            .content();
    }
}

A Practical Example: The Intelligent Shopping Assistant

Let’s envision a scenario: an intelligent shopping assistant that can answer questions about product availability and even help place orders.

User Prompt: “Is product PROD-001 in stock?”

  1. LLM receives prompt: The LLM analyzes the prompt and, based on the function descriptions it was provided, determines that getProductAvailability is the most relevant tool.
  2. LLM generates function call: The LLM sends a structured message to your application, indicating it wants to call getProductAvailability with productId: "PROD-001".
  3. Application executes function: Your Spring application, through the configured FunctionCallbackWrapper, invokes productService.getProductAvailability("PROD-001").
  4. Application returns result to LLM: The ProductService returns “Product PROD-001 is in stock. Quantity: 10.” This result is sent back to the LLM.
  5. LLM generates final response: The LLM, now armed with the real-time stock information, can generate a natural language response: “Yes, product PROD-001 is currently in stock with 10 units available.”

User Prompt: “Can you place an order for 2 units of PROD-001?”

The workflow is similar, but the LLM would now choose the placeOrder function, passing productId: "PROD-001" and quantity: 2. The application would execute the placeOrder method, and the LLM would then confirm the order.
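Stripped of Spring, the dispatch step in both walkthroughs can be sketched in a few lines of plain Java. The model’s tool call is represented here as a hypothetical (name, arguments) pair, and a small registry plays the role that Spring AI’s FunctionCallback machinery normally plays:

```java
import java.util.Map;
import java.util.function.Function;

// Minimal sketch of the tool-dispatch loop, assuming the model has already
// produced a tool call as a (name, arguments) pair. All names are illustrative.
public class ToolDispatchSketch {

    // Tool registry: function name -> handler over parsed JSON-style arguments.
    static final Map<String, Function<Map<String, Object>, String>> TOOLS = Map.of(
        "getProductAvailability", args -> {
            String productId = (String) args.get("productId");
            return "PROD-001".equals(productId)
                ? "Product PROD-001 is in stock. Quantity: 10."
                : "Product " + productId + " is out of stock.";
        },
        "placeOrder", args -> {
            String productId = (String) args.get("productId");
            int quantity = ((Number) args.get("quantity")).intValue();
            return quantity > 0
                ? String.format("Order placed successfully for %d units of %s.", quantity, productId)
                : "Cannot place order with zero or negative quantity.";
        });

    // Executes one tool call and returns the string sent back to the model.
    public static String dispatch(String toolName, Map<String, Object> arguments) {
        Function<Map<String, Object>, String> tool = TOOLS.get(toolName);
        return tool == null ? "Unknown tool: " + toolName : tool.apply(arguments);
    }

    public static void main(String[] args) {
        // Simulated model output for "Can you place an order for 2 units of PROD-001?"
        System.out.println(dispatch("placeOrder",
            Map.<String, Object>of("productId", "PROD-001", "quantity", 2)));
    }
}
```

In a real application Spring AI performs this dispatch for you; the sketch only makes steps 2 through 4 of the walkthrough concrete.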

Benefits of Agentic Workflows with Function Calling

  • Real-time Data Integration: LLMs can access and act upon the latest information within your systems.
  • Automated Business Processes: Empower your LLM to trigger complex business logic and workflows.
  • Enhanced User Experience: Provide more accurate, dynamic, and actionable responses to user queries.
  • Reduced Hallucinations: By grounding responses in actual data and actions, you mitigate the risk of the LLM generating incorrect or fabricated information.
  • Scalability and Maintainability: Your business logic remains within your well-tested Spring services, while the LLM acts as an intelligent orchestrator.

Looking Ahead

Spring AI’s function calling capabilities unlock a new dimension of possibilities for building intelligent applications. As these agentic workflows become more sophisticated, we can expect to see LLMs taking on increasingly complex roles, moving from being mere assistants to active participants in our digital ecosystems. The future of AI in enterprise applications is bright, and Spring AI is providing the tools to build it.

Are you ready to empower your LLMs with the ability to act? Start experimenting with Spring AI’s function calling today and transform your applications into truly intelligent agents.



By Jeffery Miller

I am known for being able to quickly decipher difficult problems to assist development teams in producing a solution. I have been called upon to be the Team Lead for multiple large-scale projects. I have a keen interest in learning new technologies, always ready for a new challenge.