As a software architect building cloud-native solutions, you know that working with cloud services like AWS S3 can be a bit tricky in a local development environment. You don’t want to constantly connect to a remote bucket, and setting up complex local testing environments can be a pain.
But what if you could have a fully functional, S3-compatible object storage service running right on your machine?
Meet MinIO. It’s an open-source, high-performance object storage server that is completely compatible with the Amazon S3 API. This means you can develop your Spring Boot applications locally, pointing them to MinIO, and then seamlessly switch to AWS S3 in production with zero code changes.
This article will walk you through the entire process, from setting up MinIO with Docker Compose to connecting your Spring Boot application using the AWS SDK for Java.
Why MinIO for Local Development?
MinIO’s core strength is its adherence to the S3 API. This “S3-compatible” nature is what makes it so powerful. It also has a number of features that make it perfect for developers:
- High Performance: Optimized for AI/ML and analytics workloads, it’s fast and efficient.
- Scalability: While we’ll use a single-node setup for this guide, MinIO is designed to scale horizontally to petabytes.
- Lightweight: It’s designed to run anywhere, from your local machine to a large-scale Kubernetes cluster.
- Data Persistence: We can configure it to persist data even after the container is stopped.
Step 1: Setting up MinIO with Docker Compose
Docker Compose is the perfect tool for this because it allows us to define and run a multi-container application with a single command.
Create a file named docker-compose.yml in your project root and add the following:
version: '3.8'

services:
  minio:
    image: minio/minio:latest
    container_name: minio_server
    # Map port 9000 (MinIO API) and 9001 (MinIO Console/UI)
    ports:
      - "9000:9000"
      - "9001:9001"
    # Set environment variables for the root user credentials
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    # Command to start the MinIO server and specify the console port
    command: server /data --console-address ":9001"
    # Volume to persist data across container restarts
    volumes:
      - minio_data:/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  mc:
    image: minio/mc:latest
    container_name: minio_client
    depends_on:
      - minio
    entrypoint: /bin/sh
    command: -c "
      /usr/bin/mc alias set local http://minio:9000 minioadmin minioadmin;
      /usr/bin/mc mb local/my-spring-bucket;
      tail -f /dev/null
      "

volumes:
  minio_data: {}
This file defines two services:

- minio: The core MinIO server that stores our data. We’ve exposed ports 9000 (for the S3 API) and 9001 (for the web console).
- mc: An optional but handy MinIO Client container that automatically creates a bucket named my-spring-bucket for us on startup. (If you’d rather create the bucket from the application itself, see the sketch just after this list.)
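If you prefer to skip the mc helper container, a small startup component can create the bucket from Spring itself once the S3Client bean and the minio.bucket-name property from Step 2 are in place. This is only a sketch; the class name is illustrative, and you don’t need it if the mc container already creates the bucket.

// src/main/java/com/example/config/BucketInitializer.java (optional, illustrative)
package com.example.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.CreateBucketRequest;
import software.amazon.awssdk.services.s3.model.HeadBucketRequest;
import software.amazon.awssdk.services.s3.model.NoSuchBucketException;

@Component
public class BucketInitializer implements ApplicationRunner {

    private final S3Client s3Client;

    @Value("${minio.bucket-name}")
    private String bucketName;

    public BucketInitializer(S3Client s3Client) {
        this.s3Client = s3Client;
    }

    @Override
    public void run(ApplicationArguments args) {
        try {
            // Check whether the bucket already exists
            s3Client.headBucket(HeadBucketRequest.builder().bucket(bucketName).build());
        } catch (NoSuchBucketException e) {
            // Create it on first run
            s3Client.createBucket(CreateBucketRequest.builder().bucket(bucketName).build());
        }
    }
}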
To start the containers, open a terminal in the same directory as the file and run:
docker compose up -d
You can now access the MinIO web console at http://localhost:9001 using the credentials minioadmin / minioadmin.
Step 2: Connecting Your Spring Boot Application
Now that our local MinIO server is running, we’ll connect our Spring Boot application to it using the official AWS SDK for Java v2.
Add Gradle Dependencies
First, add the necessary dependencies to your build.gradle file.
// build.gradle
dependencies {
    // Use the BOM for managing AWS SDK versions
    implementation platform('software.amazon.awssdk:bom:2.20.108')
    implementation 'software.amazon.awssdk:s3'

    // Spring Boot and other standard dependencies
    implementation 'org.springframework.boot:spring-boot-starter'
    // ...
}
Configure Your Application
Next, configure the connection properties in your application.yml file.
# src/main/resources/application.yml
minio:
  endpoint: http://localhost:9000
  access-key: minioadmin
  secret-key: minioadmin
  bucket-name: my-spring-bucket
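As an optional alternative to the @Value injection used in the next step, these properties can also be bound into a single type-safe class. A minimal sketch, assuming Spring Boot 3.x (where constructor binding for records is implicit) and that the class is registered via @ConfigurationPropertiesScan or @EnableConfigurationProperties; the class name is illustrative.

// src/main/java/com/example/config/MinioProperties.java (optional, illustrative)
package com.example.config;

import org.springframework.boot.context.properties.ConfigurationProperties;

// Relaxed binding maps minio.access-key -> accessKey, minio.bucket-name -> bucketName, etc.
@ConfigurationProperties(prefix = "minio")
public record MinioProperties(String endpoint, String accessKey, String secretKey, String bucketName) {
}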
Create the S3 Client Configuration
Because the AWS SDK is designed for AWS, you need a configuration bean to override the default settings and point it to your local MinIO instance. This is a crucial step!
// src/main/java/com/example/config/MinioClientConfig.java
package com.example.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

import java.net.URI;

@Configuration
public class MinioClientConfig {

    @Value("${minio.endpoint}")
    private String endpoint;

    @Value("${minio.access-key}")
    private String accessKey;

    @Value("${minio.secret-key}")
    private String secretKey;

    @Bean
    public S3Client s3Client() {
        return S3Client.builder()
                // 1. Specify the local MinIO endpoint URL
                .endpointOverride(URI.create(endpoint))
                // 2. Set static credentials from application.yml
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKey, secretKey)
                ))
                // 3. MinIO requires Path Style Access for a local setup
                .forcePathStyle(true)
                // 4. Region is required by the SDK but can be arbitrary for MinIO
                .region(Region.of("us-east-1"))
                .build();
    }
}
The forcePathStyle(true) setting is especially important: it ensures the bucket name is part of the URL path rather than a virtual-hosted subdomain, which is how MinIO expects requests.
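Because the endpoint, credentials, and path-style flag are the only MinIO-specific pieces, one common way to keep the production switch clean is to put this bean behind a Spring profile and provide an AWS-facing variant. The sketch below is illustrative and assumes a "prod" profile; in production the SDK's default endpoint and credential chain (environment variables, instance roles, and so on) take over, and the MinioClientConfig above could be annotated with @Profile("!prod").

// src/main/java/com/example/config/AwsS3ClientConfig.java (illustrative production variant)
package com.example.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

@Configuration
@Profile("prod")
public class AwsS3ClientConfig {

    @Bean
    public S3Client s3Client() {
        return S3Client.builder()
                // No endpointOverride and no forcePathStyle: the SDK targets AWS S3 directly
                .region(Region.US_EAST_1)
                .credentialsProvider(DefaultCredentialsProvider.create())
                .build();
    }
}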
Implement the Service Class
Finally, create a service that uses the S3Client to interact with your MinIO storage.
// src/main/java/com/example/service/MinioService.java
package com.example.service;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.DeleteObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.io.InputStream;

@Service
public class MinioService {

    private final S3Client s3Client;

    @Value("${minio.bucket-name}")
    private String bucketName;

    public MinioService(S3Client s3Client) {
        this.s3Client = s3Client;
    }

    /**
     * Uploads a file to the MinIO bucket.
     */
    public void uploadFile(String objectKey, InputStream inputStream, long size, String contentType) {
        PutObjectRequest putObjectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .contentType(contentType)
                .contentLength(size)
                .build();

        s3Client.putObject(putObjectRequest, RequestBody.fromInputStream(inputStream, size));
        System.out.println("Successfully uploaded object: " + objectKey);
    }

    /**
     * Downloads a file from the MinIO bucket.
     */
    public InputStream downloadFile(String objectKey) {
        GetObjectRequest getObjectRequest = GetObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();

        return s3Client.getObject(getObjectRequest);
    }

    /**
     * Deletes a file from the MinIO bucket.
     */
    public void deleteFile(String objectKey) {
        DeleteObjectRequest deleteObjectRequest = DeleteObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();

        s3Client.deleteObject(deleteObjectRequest);
        System.out.println("Successfully deleted object: " + objectKey);
    }
}
These methods use the standard S3 operations (putObject, getObject, and deleteObject), so the same code works unchanged against AWS S3.
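To see the service in action, a simple runner can round-trip a small object against the local bucket. This is just a smoke-test sketch; the class name and object key are illustrative.

// src/main/java/com/example/MinioSmokeTest.java (illustrative)
package com.example;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.example.service.MinioService;

@Configuration
public class MinioSmokeTest {

    @Bean
    public CommandLineRunner minioDemo(MinioService minioService) {
        return args -> {
            byte[] content = "Hello from Spring Boot and MinIO!".getBytes(StandardCharsets.UTF_8);

            // Upload a small text object
            minioService.uploadFile("hello.txt", new ByteArrayInputStream(content), content.length, "text/plain");

            // Read it back and print the contents
            try (InputStream in = minioService.downloadFile("hello.txt")) {
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }

            // Clean up
            minioService.deleteFile("hello.txt");
        };
    }
}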
Conclusion
By following these simple steps, you have a powerful and flexible development environment with MinIO. This approach allows you to work offline and test your file storage logic without incurring cloud costs or dealing with network latency. When you’re ready to deploy, all you have to do is update your application.yml with the production AWS S3 endpoint and credentials (or switch to a production profile as sketched above), and your application will work seamlessly.