Welcome to the last article in a series dedicated to integrating a Spring Boot Kotlin app with AWS S3 Object Storage. This time, we will focus on integration testing with LocalStack and Testcontainers. And although we will work with Object Storage, the approach can be easily replicated with other AWS services.
I can guarantee that you will benefit from this tutorial regardless of whether you have read the previous articles about S3Client and S3Template. Still, I definitely encourage you to take a look at them, too:
- #1 Spring Boot with AWS S3, S3Client, and Kotlin
- #2 Spring Boot with Kotlin, AWS S3, and S3Template
- #3 (This article)
Video Tutorial
If you prefer video content, then check out my video that covers all three articles:
If you find this content useful, please consider subscribing 🙂
Prerequisites
Before heading into the guide, I want to emphasize that we will be working with Testcontainers today, which means we must have a supported Docker environment.
So, if you do not have Docker configured on your local machine and want to follow this article, please check out their documentation.
Of course, you must have Java, IDE, and Spring Boot project too, but I believe this is quite obvious 😉
Testcontainers and LocalStack
Lastly, I would like to say a few words about Testcontainers and LocalStack, which in my opinion are a great way to test the Spring Boot S3 integration (and other AWS integrations, too).
Testcontainers
Testcontainers is a library for providing throwaway, lightweight instances of Docker containers. They are an excellent approach whenever we need to test behavior that depends on external services, like AWS or external databases.
Long story short, instead of mocking or manually setting up a test environment, we define test dependencies as code. Then, when we run our tests, disposable containers are started and deleted after the tests finish.
Let’s take a look at the example from Spring Boot docs:
```kotlin
@Testcontainers
@SpringBootTest
class MyIntegrationTests {

    @Test
    fun myTest() {
        // ...
    }

    companion object {

        @Container
        @JvmStatic
        val neo4j = Neo4jContainer("neo4j:5")
    }
}
```
The above code runs a Neo4j Docker container before the tests. Of course, this is just an example; most of the time, we will need to add some more configuration.
Nevertheless, we can clearly see that this Testcontainers JUnit integration allows us to achieve our goal in an easy and neat manner.
LocalStack
LocalStack, on the other hand, is a cloud service emulator that runs in a single container. In other words, we can run AWS applications or Lambdas without connecting to a remote cloud provider.
And thanks to the Testcontainers module for LocalStack, we can test various AWS integrations with just a few lines of code.
Again, let’s take a look at the example, but this time from the LocalStack documentation:
```java
DockerImageName localstackImage = DockerImageName.parse("localstack/localstack:3.5.0");

@Rule
public LocalStackContainer localstack = new LocalStackContainer(localstackImage)
        .withServices(S3);
```
You will find links to the documentation of both at the end of this article. But for now, let's not get distracted and focus on what we came here for 😉
Configure Project
If you are following my S3 series, or you already have a Spring Boot project, then these are the dependencies we need today:
```kotlin
testImplementation("org.springframework.boot:spring-boot-starter-webflux")
testImplementation("org.testcontainers:localstack")
testImplementation("org.springframework.boot:spring-boot-testcontainers")
```
As we can see, apart from LocalStack and Testcontainers, we must provide the Spring Boot Starter WebFlux.
But why?
Well, it is necessary to work with WebTestClient, the client we will use to test our web server (REST endpoints).
On the other hand, if you would like to set up a project from scratch, then please navigate to the Spring Initializr and select the following:
However, please keep in mind that LocalStack is not provided out of the box in Spring, so we must add it manually.
Moreover, as we have chosen Spring Web, the WebFlux dependency is not present either:
```kotlin
testImplementation("org.springframework.boot:spring-boot-starter-webflux")
testImplementation("org.testcontainers:localstack")
```
Testcontainers Singleton Approach
With all of that done, let's head to the practical part.
When working with Testcontainers, we can configure them in various ways:
- we can use the JUnit extension (Jupiter integration), which lets us use the @Testcontainers and @Container annotations and makes JUnit responsible for automatically starting and stopping the containers in our tests,
- we can configure them manually in every test case,
- or we can use the singleton approach, in which we control the containers' lifecycle manually but, thanks to that, can easily reuse them across multiple test classes.
Of course, these are not the only approaches, and based on your needs, you may want to configure Testcontainers differently. Nevertheless, in this tutorial, we will focus on the manual, reusable approach.
Introduce Base Class
Firstly, let's introduce the LocalStackIntegrationTest class:

```kotlin
@SpringBootTest(webEnvironment = RANDOM_PORT)
class LocalStackIntegrationTest {
}
```
As we can see, we mark our class with @SpringBootTest, the annotation necessary to run our integration tests and to inject the WebTestClient instance later in our subclasses.
Add Testcontainer
Following, let’s take a look at how to instantiate a LocalStack container:
```kotlin
companion object {

    val localStack: LocalStackContainer = LocalStackContainer(
        DockerImageName.parse("localstack/localstack:3.7.2")
    )
}
```
Right here, we create an instance of LocalStackContainer and pass the name of a Docker image, localstack/localstack:3.7.2, to its constructor. Alternatively, if we are working only with the AWS S3 Buckets service, we can use a dedicated image, localstack/localstack:s3-latest. But personally, I am not a big fan of the latest tag, which can easily break our code.
Additionally, we put the LocalStackContainer instance in the companion object. Why? Because in the next steps, we will reference it in a function annotated with @DynamicPropertySource, and it must be static.
Note related to JUnit extension:
This is not the case here, since we want to take care of the container lifecycle manually. But when using the Jupiter integration, containers declared as static fields are shared between test methods: they are started once before any test method executes and stopped after the last test method finishes. So, if you pick the JUnit extension and do not want that behavior, you must not put the localStack in the companion object.
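For contrast, here is a minimal sketch of what the Jupiter-managed variant could look like. The class name is illustrative, and the container image is assumed to be the same one we use throughout this article:

```kotlin
// Sketch only: with the JUnit 5 extension, Testcontainers starts and stops
// the container for us -- no manual start() call is needed.
@Testcontainers
@SpringBootTest(webEnvironment = RANDOM_PORT)
class JupiterManagedLocalStackTest {

    @Test
    fun someTest() {
        // localStack is already running here
    }

    companion object {
        // Static (companion object + @JvmStatic): started once and shared by
        // all test methods in this class. Declared as an instance field
        // instead, it would be restarted before every test method.
        @Container
        @JvmStatic
        val localStack: LocalStackContainer = LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.7.2")
        )
    }
}
```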
Control Testcontainer lifecycle
As we already know, with this approach we are responsible for controlling the container's lifecycle. And although this may sound complicated, it basically means that without the extension we must start the container manually.
So the companion object after the update will look as follows:
```kotlin
companion object {

    val localStack: LocalStackContainer = LocalStackContainer(
        DockerImageName.parse("localstack/localstack:3.7.2")
    ).apply { start() }
}
```
Basically, we use a Kotlin scope function (you can learn more about it in my Kotlin course) to invoke the start() function on the localStack instance. And as the name suggests, this function starts the container (and pulls the image, if necessary).
And that is all we need to do here. With the above code, the container will be started when the base class is loaded and shared across all inheriting test classes.
Of course, there is also the stop() function that we can invoke to kill and remove the container.
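If we ever wanted deterministic cleanup, one hedged option (purely optional, and not part of the original setup) would be to register a JVM shutdown hook next to the container declaration:

```kotlin
companion object {

    val localStack: LocalStackContainer = LocalStackContainer(
        DockerImageName.parse("localstack/localstack:3.7.2")
    ).apply { start() }

    init {
        // Optional sketch: explicitly stop (kill and remove) the container
        // when the JVM running the tests exits.
        Runtime.getRuntime().addShutdownHook(Thread { localStack.stop() })
    }
}
```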
Nevertheless, we do not have to do it. Why? Let's find out.
Ryuk
Ryuk is a kind of “garbage collector” in Testcontainers.
Whenever we run integration tests, the Testcontainers core starts one more container:
Long story short, this container is responsible for removing the containers, networks, and volumes created by our test cases. So, even if we do not clean the environment ourselves (for example, with the stop() function), the Ryuk container will take care of that.
Test Properties With DynamicPropertySource
With that done, we need to update our environment configuration.
If we try to run our Spring Boot application at this point, the logic responsible for communication with Amazon S3 will try to reach the actual AWS instance. It will use the defaults, or whatever we configured in the application.yaml.
And this is not what we want, right? Instead, we would like to connect to the Testcontainer LocalStack instance.
In some examples, you might have seen the usage of application properties files. Nevertheless, if we want to be more flexible and make use of containers started on random ports, then the @DynamicPropertySource is our best friend here:
```kotlin
companion object {

    val localStack: LocalStackContainer = LocalStackContainer(
        DockerImageName.parse("localstack/localstack:3.7.2")
    ).apply { start() }

    @JvmStatic
    @DynamicPropertySource
    fun overrideProperties(registry: DynamicPropertyRegistry) {
        registry.add("spring.cloud.aws.region.static") { localStack.region }
        registry.add("spring.cloud.aws.credentials.access-key") { localStack.accessKey }
        registry.add("spring.cloud.aws.credentials.secret-key") { localStack.secretKey }
        registry.add("spring.cloud.aws.s3.endpoint") {
            localStack.getEndpointOverride(S3).toString()
        }
    }
}
```
Thanks to that annotation, we can dynamically provide values to our test environment based on the LocalStack instance.
Of course, we must remember that methods annotated with @DynamicPropertySource must be static! And that is why we use the @JvmStatic annotation.
Utilize LocalStack AWS CLI
At this point, we have our base class ready. But before we head to the tests, I would like to show you the LocalStack AWS CLI, and explain why and how to use it.
As the first step, let's create the util package and add the LocalStackUtil.kt file:
```kotlin
import org.testcontainers.containers.localstack.LocalStackContainer

fun LocalStackContainer.createBucket(bucketName: String) {
    this.execInContainer("awslocal", "s3api", "create-bucket", "--bucket", bucketName)
}

fun LocalStackContainer.deleteBucket(bucketName: String) {
    this.execInContainer("awslocal", "s3api", "delete-bucket", "--bucket", bucketName)
}

fun LocalStackContainer.deleteObject(bucketName: String, objectName: String) {
    this.execInContainer("awslocal", "s3api", "delete-object", "--bucket", bucketName, "--key", objectName)
}
```
As we can see, we introduced three helper extension functions that we will later use to create and delete buckets and objects. This way, we can easily clean up between tests (we use the shared container approach, right?). Moreover, they simplify the setup for each test case.
The above code combines execInContainer, which runs the passed command in our running LocalStack container (just like docker exec does), with awslocal, a LocalStack wrapper around the AWS CLI. So, if you have ever worked with the AWS command line interface, you will see that this is 1:1.
Unfortunately, we must provide the command as separate String values, because otherwise we will end up with:
```text
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "awslocal s3api create-bucket --bucket bucket-1": executable file not found in $PATH: unknown
```
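The error happens because execInContainer does not spawn a shell; it expects the command already split into an argv array, so the whole single string is treated as the executable name. As a purely hypothetical illustration (the toArgv helper is not part of the project), a naive split for simple commands could look like this:

```kotlin
// Hypothetical helper (illustration only): split a simple command string into
// the argv array that execInContainer expects. Note that this naive split
// breaks on quoted arguments that contain spaces.
fun toArgv(command: String): Array<String> =
    command.trim().split(Regex("\\s+")).toTypedArray()

fun main() {
    val argv = toArgv("awslocal s3api create-bucket --bucket bucket-1")
    // Could then be passed as: localStack.execInContainer(*argv)
    println(argv.joinToString(" "))
}
```

In practice, though, writing the arguments out explicitly (as in the helpers above) is the clearer option.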
Write Integration Test Cases
With all of that LocalStack preparation done (I know, quite a lot to take in, but once you learn it, it becomes a simple copy-paste), we can finally write some integration tests for our Spring Boot S3 integration.
Firstly, let's create the controller package and add the BucketControllerIntegrationTest class:
```kotlin
class BucketControllerIntegrationTest(
    @Autowired private val webTestClient: WebTestClient
) : LocalStackIntegrationTest() {
}
```
As we can see, no annotations are required. We simply extend the LocalStackIntegrationTest class and inject the WebTestClient.
Test No Buckets Exist
Next, let's introduce our first test case. If we do not do anything, we expect no buckets to exist in our S3 instance:
```kotlin
@Test
fun `Given no existing buckets When getting list of buckets Then return an empty array`() {
    val buckets = webTestClient
        .get().uri("/buckets")
        .exchange()
        .expectStatus().isOk()
        .expectBody(object : ParameterizedTypeReference<List<String>>() {})
        .returnResult()
        .responseBody

    assertNotNull(buckets)
    assertTrue(buckets.isEmpty())
}
```
As mentioned before, we use the WebTestClient to make a GET HTTP request to the /buckets endpoint. Then, we use a small hack with ParameterizedTypeReference (because the endpoint returns a list of Strings and we use Kotlin) and obtain the response body.
Lastly, we have plain assertions. We verify that the response body is not null and that our S3 bucket list is empty.
Verify S3 Bucket Exists In LocalStack
Next, let's see our helper functions in action:
```kotlin
@Test
fun `Given one existing bucket When getting list of buckets Then return an array with expected bucket name`() {
    val bucketName = "bucket-1"
    localStack.createBucket(bucketName)

    val expectedJson = """
        [
            "Bucket #1: $bucketName"
        ]
    """

    webTestClient
        .get().uri("/buckets")
        .exchange()
        .expectStatus().isOk()
        .expectBody()
        .json(expectedJson)

    localStack.deleteBucket(bucketName)
}
```
This time, we utilize the createBucket helper and make sure that the /buckets endpoint returns the expected JSON. Please note that this is another way to assert the response body.
Afterward, we delete the existing bucket, so it won't affect other test cases.
Assert Bucket Created Successfully
As the next step, let's take a look at how we can check whether our endpoint responsible for creating new S3 buckets works. And I see two paths we can take here.
The first one uses execInContainer:
```kotlin
@Test
fun `Given no existing buckets When creating bucket Then create bucket successfully`() {
    val bucketName = "bucket-2"

    webTestClient
        .post().uri("/buckets")
        .bodyValue(BucketRequest(bucketName = bucketName))
        .exchange()
        .expectStatus().isOk()

    val execResult = localStack.execInContainer("awslocal", "s3api", "list-buckets").stdout
    assertTrue(execResult.contains(bucketName))

    localStack.deleteBucket(bucketName)
}
```
The important thing to mention here is that execInContainer returns an ExecResult. And thanks to that, we can read additional info, like stdout, stderr, or exitCode.
And thanks to the stdout, we can get this JSON and verify that it contains a particular bucket name (or we could even parse it into an object):
```json
{
    "Buckets": [
        {
            "Name": "bucket-2",
            "CreationDate": "2024-09-19T05:28:42.000Z"
        }
    ],
    "Owner": {
        "DisplayName": "webfile",
        "ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a"
    }
}
```
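If we prefer the parsing route over a plain contains check, here is a hedged sketch (the bucketNames helper is an assumption, not project code) that pulls the bucket names out of that stdout JSON with a regex, avoiding an extra JSON library in the tests:

```kotlin
// Sketch: extract every "Name" value from the list-buckets JSON output.
// Good enough for a test assertion; a real parser (e.g. Jackson, which
// Spring Boot already ships) would be more robust for complex content.
fun bucketNames(listBucketsJson: String): List<String> =
    Regex("\"Name\"\\s*:\\s*\"([^\"]+)\"")
        .findAll(listBucketsJson)
        .map { it.groupValues[1] }
        .toList()

fun main() {
    val stdout = """{ "Buckets": [ { "Name": "bucket-2" } ], "Owner": { "DisplayName": "webfile" } }"""
    println(bucketNames(stdout))  // [bucket-2]
}
```

With such a helper, the assertion from the test above could become `assertTrue(bucketNames(execResult).contains(bucketName))`.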
Alternatively, we can use the /buckets endpoint once again:
```kotlin
@Test
fun `Given no existing buckets When creating bucket Then create bucket successfully`() {
    val bucketName = "bucket-2"

    webTestClient
        .post().uri("/buckets")
        .bodyValue(BucketRequest(bucketName = bucketName))
        .exchange()
        .expectStatus().isOk()

    val expectedJson = """
        [
            "Bucket #1: $bucketName"
        ]
    """

    webTestClient
        .get().uri("/buckets")
        .exchange()
        .expectStatus().isOk()
        .expectBody()
        .json(expectedJson)

    localStack.deleteBucket(bucketName)
}
```
Test Remaining Cases
The remaining test cases use a more or less similar approach, so I will simply paste them here for you to analyze.

At this point, I am pretty sure you understand the general idea behind testing the Spring Boot S3 integration with LocalStack, so I don't see the need to explain them one by one:
```kotlin
@Test
fun `Given no objects existing in the bucket When getting objects of a bucket Then return an empty array`() {
    val bucketName = "bucket-3"
    localStack.createBucket(bucketName)

    val objects = webTestClient
        .get().uri("/buckets/$bucketName/objects")
        .exchange()
        .expectStatus().isOk()
        .expectBody(object : ParameterizedTypeReference<List<String>>() {})
        .returnResult()
        .responseBody

    assertNotNull(objects)
    assertTrue(objects.isEmpty())

    localStack.deleteBucket(bucketName)
}

@Test
fun `Given no objects When creating example object Then return created object`() {
    val bucketName = "bucket-4"
    val objectName = "example.json"
    localStack.createBucket(bucketName)

    val expectedJson = """
        {
            "id": "123",
            "name": "Some name"
        }
    """

    webTestClient
        .post().uri("/buckets/$bucketName/objects")
        .exchange()
        .expectStatus().isOk()
        .expectBody()
        .json(expectedJson)

    localStack.deleteObject(bucketName, objectName)
    localStack.deleteBucket(bucketName)
}

@Test
fun `Given created object When getting list of objects Then return array with one object`() {
    val bucketName = "bucket-5"
    val objectName = "example.json"
    localStack.createBucket(bucketName)

    val expectedJson = """
        [
            "$objectName"
        ]
    """

    webTestClient
        .post().uri("/buckets/$bucketName/objects")
        .exchange()
        .expectStatus().isOk()

    webTestClient
        .get().uri("/buckets/$bucketName/objects")
        .exchange()
        .expectStatus().isOk()
        .expectBody()
        .json(expectedJson)

    localStack.deleteObject(bucketName, objectName)
    localStack.deleteBucket(bucketName)
}

@Test
fun `Given existing object When getting object by key Then return object content`() {
    val bucketName = "bucket-6"
    val objectName = "example.json"
    localStack.createBucket(bucketName)

    val expected = """
        {
            "id": "123",
            "name": "Some name"
        }
    """

    webTestClient
        .post().uri("/buckets/$bucketName/objects")
        .exchange()

    webTestClient
        .get().uri("/buckets/$bucketName/objects/$objectName")
        .exchange()
        .expectStatus().isOk()
        .expectBody()
        .json(expected)

    localStack.deleteObject(bucketName, objectName)
    localStack.deleteBucket(bucketName)
}

@Test
fun `Given existing bucket with object When deleting bucket Then bucket is removed`() {
    val bucketName = "bucket-7"
    localStack.createBucket(bucketName)

    webTestClient
        .post().uri("/buckets/$bucketName/objects")
        .exchange()
        .expectStatus().isOk()

    webTestClient
        .delete().uri("/buckets/$bucketName")
        .exchange()
        .expectStatus().isOk()

    val buckets = webTestClient
        .get().uri("/buckets")
        .exchange()
        .expectStatus().isOk()
        .expectBody(object : ParameterizedTypeReference<List<String>>() {})
        .returnResult()
        .responseBody

    assertNotNull(buckets)
    assertTrue(buckets.isEmpty())
}
```
Summary
And that is all for this tutorial, in which we learned how to implement integration tests for Spring Boot AWS S3 integration with LocalStack and Testcontainers.
I hope you enjoyed it and for the source code, please visit this GitHub repository.
Have a great day and see you in the next articles! 🙂