Using Docker + AWS to build, deploy and scale your application


I recently worked to develop a software platform that relied on Spring Boot and Docker to prop up an API. Being the only developer on the project, I needed to find a way to quickly and efficiently deploy new releases. However, I found many solutions overwhelming to set up.

That was until I discovered AWS has tools that allow any developer to quickly build and deploy their application.

In this 30-minute tutorial, you will discover how to utilize the following technologies:

Once finished, you will have a Docker application running that automatically builds your software on commit and deploys it to Elastic Beanstalk, sitting behind a load balancer for scalability. This continuous integration pipeline will allow you to worry less about your deployments and get back to focusing on feature development within your application.

Here is the order in which to configure services:

  1. Git repository initialization using CodeCommit
  2. CodeBuild Setup
  3. EBS Configuration
  4. CodePipeline Configuration
Background knowledge

I am using Docker for this tutorial application. However, AWS supports a wide range of configurable environments in Elastic Beanstalk: .NET, Java, Node.js, PHP, Python, and Ruby. Docker was chosen for this tutorial so that the reader can focus more on the build process and less on the project setup. With that being said, I will not be diving deeply into Docker. If you wish to learn more about Docker, start by reading the introduction on the Docker website.

The Application

The example Spring Boot source code that will be used can be found at:

The application is a Spring Boot project configured to run on port 5000 and has a REST controller with a single endpoint.

The API REST controller is very basic. It maps the /api/ path to a method which returns a list of strings in JSON format. This is the endpoint we will use to verify our application has successfully built and deployed on the AWS EBS.
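As a rough sketch of such a controller (the class name and sample values here are assumptions; the actual code lives in the linked repository):

```java
import java.util.Arrays;
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class ExampleController {

    // Single endpoint: GET /api/ returns a list of strings serialized as JSON.
    @GetMapping("/")
    public List<String> list() {
        return Arrays.asList("alpha", "beta", "gamma"); // assumed sample values
    }
}
```

Hitting this endpoint after deployment is a quick smoke test that the build and deploy succeeded.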

The application creates an example-1.0.0-SNAPSHOT.jar file when built using Maven. This file is important for us to reference in our Dockerfile.

A Maven build produces target/example-1.0.0-SNAPSHOT.jar. The Dockerfile below uses a flavor of Alpine Linux to add, expose, and run the Spring Boot application.
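The build is a standard Maven package step (mvn clean package), and the Dockerfile might look like the following sketch (the base image tag is an assumption; any Alpine-based JDK image works):

```dockerfile
# Alpine-based JDK image (assumed tag)
FROM openjdk:8-jdk-alpine

# Add the jar produced by the Maven build
ADD target/example-1.0.0-SNAPSHOT.jar app.jar

# The Spring Boot app is configured to listen on port 5000
EXPOSE 5000

ENTRYPOINT ["java", "-jar", "/app.jar"]
```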


1. Git repository initialization using CodeCommit

First things first, we need a git repository to build our code from. AWS CodeCommit is cheap, reliable, and secure. It is backed by S3, a scalable storage solution, and is subject to S3 storage pricing.

Begin by logging into your AWS console and creating a repository in CodeCommit. For the purpose of this tutorial, I have called the repository name the same name as the Spring Boot application. Once created, you will be presented with the standard HTTPS and SSH URLs of the repository.

The above example has generated the following repository location; notice if I try to do a clone from the repository, access is denied.


IAM (Identity and Access Management) enables you to securely control access to AWS services and resources. To authorize a user to access our private git repository, navigate to the IAM services page. Begin by adding a user; I have named the user the same name as the project and git repository. Choose programmatic access, which will allow policies to be attached.

In order to allow this new user to fully administer our new git repository, attach the AWSCodeCommitFullAccess policy. Once added, click through to finish creating your user.

Now that a user has been created with the correct policies, Git credentials are needed to work with the new CodeCommit repository. Navigate to the new user and look for the “HTTPS Git credentials for AWS CodeCommit” shown below. Generate a new username and password and download the .gitCredentials file once prompted. Inside that file is the information needed to access your repository.

Note: Only two keys are allowed per user at this time. If you lose your key, a new one will need to be generated to access the repository. For more in-depth information on setting up git credentials in AWS, check out the guide for setting up HTTPS users using Git credentials.


With the new repository created, clone the GitHub repository holding our sample Spring Boot application. Change the remote to your new CodeCommit repository location, then push the master branch.
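The sequence of git commands looks roughly like this (both URLs are placeholders; substitute the sample application's GitHub URL and the CodeCommit HTTPS URL generated for your repository, including your region):

```
git clone https://github.com/<your-user>/<spring-boot-example>.git
cd <spring-boot-example>
git remote set-url origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/<your-repo>
git push origin master
```

When prompted, authenticate with the username and password from the .gitCredentials file generated above.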

2. CodeBuild Setup

Now that the CodeCommit repository holds our sample Spring Boot application, the code needs to be built for deployment. Navigate to CodeBuild. CodeBuild is a pay-on-demand service that compiles your source code.

Start by creating a new build project and point the source to the AWS CodeCommit repository that was created in Step 1. You can see I have pointed this new build project to the AWS CodeCommit source provider, and specified the DockerCodePipeline repository.

Next, it asks for environment information. The default system image is fine for this build process. The most important part is to tell CodeBuild to use the buildspec.yml. The buildspec contains the commands needed to generate the artifacts for deployment to the EBS.

Included in the sample Spring Boot application is a buildspec.yml. This file is used to tell CodeBuild what commands to run in each phase, and what files to bundle up and save in the artifacts.
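A buildspec.yml for this project could be sketched as follows (the exact phases and artifact list depend on your project; this is an assumption based on the Maven build and Dockerfile described above, not the repository's actual file):

```yaml
version: 0.2

phases:
  build:
    commands:
      # Compile and package the Spring Boot jar
      - mvn clean package

artifacts:
  files:
    # Bundle the jar and the Dockerfile so EBS can build the image
    - target/example-1.0.0-SNAPSHOT.jar
    - Dockerfile
```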

Additional configuration options can be found at:


The final setup step for the build process is to specify the location where the artifact made from the buildspec.yml will be stored. In the example below, I put all artifacts in Amazon S3 under the name dockerAWSCodePipeline, in a bucket named irdb-builds. The bucket can be any bucket of your choice. You must go into S3 and create this bucket prior to creating the build project.

The build project is now configured and ready to use. Builds can manually be run from the console creating artifacts stored in S3 as defined above.

3. EBS Setup

Now that the code is in CodeCommit, and the artifacts are built using CodeBuild, the final resource needed is a server to deploy the code. That is where Elastic Beanstalk comes in. The EBS is a service that automatically handles provisioning, load balancing, auto-scaling, and more. It is a very powerful tool to help you manage and monitor your application's servers.

Let’s assume, for example, my API needs to have four servers due to the amount of requests I am receiving. The EBS makes the scaling of those servers simple with configuration options.

Begin by creating a new webserver environment and give it a name and domain name. This domain name is your AWS domain name; if you have a personal domain name, you can point it to this new load balancer using Route 53.

The last step of creating your webserver environment is to tell EBS that we want to run Docker and to use the sample application code. Later, our code from CodeBuild will replace the AWS sample application.

The server and environment will take several minutes to start. Once complete, navigate to the configuration page of your new EBS environment.

By default, the environment has a load balancer installed and auto-scaling enabled. A scaling trigger can be set to adjust the number of instances to run given certain requirements. For example, I could set my minimum instances to 1 and maximum to 4 and tell the trigger to start a new instance each time CPUUtilization exceeds 75%. The load balancer would then spread requests across the instances currently running.

4. CodePipeline Configuration

This is the final piece of the puzzle, bringing steps 1-3 above together. You will notice that, up until now, we have had to manually tell CodeBuild to run, then go to the EBS and manually specify the artifact for deployment. Wouldn’t it be great if all this could be done for us?

That is exactly what CodePipeline does. It fully automates the building and provisioning of the project. Once new code is checked in, the system magically takes care of the rest. Here is how to set it up.

Begin by creating a new CodePipeline. In each step, select the repository, build project, and EBS environment created in steps 1-3 above.


Once complete, the CodePipeline will begin monitoring your repository for changes. When a change is detected, it will build the project and deploy it to the available servers in your EBS application. You can monitor the CodePipeline in real time from the pipeline's detail page.

A Final Word

When configured properly, the CodePipeline is a handy tool for the developer who wants to code more and spend less time on DevOps.

This pipeline gives a developer easy access to manage an application big or small. It doesn’t take a lot of time or money to set yourself up with a scalable application that utilizes a quick and efficient build and deployment process.

If you are in need of a solution to build, test, deploy, and scale your application, consider AWS CodePipeline as a great solution to get your project up and running quickly.





Multiple Beans are Eligible for Injection


In some cases you may want to inject one backing bean (controller) into another. In Eclipse this shows the warning: “Multiple beans are eligible for injection to the injection point.” This may prevent your server from starting.

The solution is to provide CDI with the name of the bean to inject. The @Named annotation is shown below:
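A minimal sketch of the idea (the bean names OrderController and UserController are hypothetical, chosen only to illustrate the annotation):

```java
import java.io.Serializable;
import javax.enterprise.context.SessionScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named // registers this backing bean with CDI under its default name
@SessionScoped
public class OrderController implements Serializable {

    // Without the @Named qualifier below, CDI may report that multiple
    // beans are eligible for this injection point.
    @Inject
    @Named("userController") // tells CDI exactly which bean to inject
    private UserController userController;
}
```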

Now CDI will know the proper controller to inject.

How to skip Maven unit tests


All Maven unit tests are run by default. When a test fails, the project will not build. While this forces you to have a stable build of your project before deploying, it may not be ideal in some situations.

Skipping Over Failed Unit Tests

Within the maven-surefire-plugin in your pom, specify the skipTests parameter:
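The plugin configuration looks like this (the plugin version is omitted; use whatever version your pom already declares):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Skip all unit tests during the build -->
    <skipTests>true</skipTests>
  </configuration>
</plugin>
```

Alternatively, `mvn package -DskipTests` achieves the same result for a single build without touching the pom.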

Now all tests will be skipped on the next build.

Using a comparator to sort 2 separate instances


Suppose you have two separate classes, “dog” and “cat”. Both share a common trait: they are animals. Each of these animals has a date of birth. In this example I will demonstrate how to merge two object arrays of both types (“dog” and “cat”) and sort them by their common property: date of birth.

Where might I need this?

If you need both cats and dogs on the same dataTable in JSF, this provides a merged object array to iterate through.

The Code

Create the abstract class “animal”, to be extended by both the dog and cat classes. This abstract class defines what a cat and a dog have in common: a name.

Both the cat and dog classes are very similar. For the purpose of this tutorial they both only have a date of birth.

The dog class is identical to the cat class.

The Solution

To sort both cats and dogs by date of birth, first create a list of each type and add a few objects to each; the code below adds two animals of each type. All objects are then added to a single list. The Collections.sort comparator checks which type of object is being passed, casts it to its instance type, and gets the date of birth. The final statement in the method is the compareTo call, which compares the dates of birth of the two objects. Using this approach, you could easily add additional object types and sort them all by a common date.
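A minimal sketch of the approach described above (class and field names here are assumptions, not the original source):

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Shared trait: every animal has a name.
abstract class Animal {
    private final String name;
    Animal(String name) { this.name = name; }
    public String getName() { return name; }
}

class Dog extends Animal {
    private final LocalDate dateOfBirth;
    Dog(String name, LocalDate dob) { super(name); this.dateOfBirth = dob; }
    public LocalDate getDateOfBirth() { return dateOfBirth; }
}

class Cat extends Animal {
    private final LocalDate dateOfBirth;
    Cat(String name, LocalDate dob) { super(name); this.dateOfBirth = dob; }
    public LocalDate getDateOfBirth() { return dateOfBirth; }
}

public class AnimalSort {

    // Merge dogs and cats into one list and sort by date of birth.
    public static List<Object> sortedAnimals() {
        List<Object> animals = new ArrayList<>();
        animals.add(new Dog("Rex", LocalDate.of(2015, 3, 1)));
        animals.add(new Dog("Fido", LocalDate.of(2016, 1, 20)));
        animals.add(new Cat("Tom", LocalDate.of(2013, 11, 2)));
        animals.add(new Cat("Whiskers", LocalDate.of(2014, 7, 9)));

        Collections.sort(animals, (a, b) -> dateOf(a).compareTo(dateOf(b)));
        return animals;
    }

    // Check the instance type, cast, and reach the shared date property.
    private static LocalDate dateOf(Object o) {
        if (o instanceof Dog) {
            return ((Dog) o).getDateOfBirth();
        }
        return ((Cat) o).getDateOfBirth();
    }

    public static void main(String[] args) {
        for (Object o : sortedAnimals()) {
            System.out.println(((Animal) o).getName());
        }
    }
}
```

Note that if getDateOfBirth() were declared on the abstract class instead, the instanceof checks disappear and the comparator collapses to `Comparator.comparing(Animal::getDateOfBirth)`.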

The following code is available on GitHub.

JSF detect session timeouts with web filter


When working with JSF 2.0, you will encounter situations in which the user’s session times out and Ajax requests fail. The response to an Ajax request will be a ViewExpiredException; however, the root cause is that the session has expired. The user is essentially stuck on the page and forced to reload.

The solution:

Using a WebFilter, the user can be gracefully redirected to a view-expired page. The filter checks whether the user's session is valid and whether the context path meets our required conditions; if the request is an Ajax request, it overwrites the default JSF response with a custom XML response that tells the browser to redirect to the view-expired page.
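A sketch of such a filter (the viewExpired.xhtml page name is an assumption; adjust the URL pattern and conditions to your application):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebFilter(urlPatterns = "/*")
public class SessionTimeoutFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // The client sent a session id, but the container no longer has that session.
        boolean sessionExpired = request.getRequestedSessionId() != null
                && !request.isRequestedSessionIdValid();
        // JSF marks its Ajax requests with this header.
        boolean ajaxRequest = "partial/ajax".equals(request.getHeader("Faces-Request"));

        if (sessionExpired && ajaxRequest) {
            // Replace the default JSF partial response with a redirect instruction.
            String url = request.getContextPath() + "/viewExpired.xhtml"; // assumed page
            response.setContentType("text/xml");
            response.getWriter().printf(
                "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                + "<partial-response><redirect url=\"%s\"></redirect></partial-response>", url);
        } else {
            chain.doFilter(req, res);
        }
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}
```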

Note: This fix is required due to a bug that is set to be resolved in the JSF 2.3 release:

2 way encryption with MD5 DES


The Data Encryption Standard (DES) is an old way of encrypting data. It effectively encrypts data to an unreadable string; however, it is not secure. The following code is for demonstration purposes only and should not be implemented as a security protocol. DES is a breakable algorithm with a smaller key and block size than the Advanced Encryption Standard (AES).

Use case

A user is prompted to enter some non-sensitive data. For the sake of this example, let's assume we have a requirement that the user's telephone number must be encrypted. The user enters a phone number, and the encode method secures the data for storage in the database. Upon rendering back to the end user, the data is translated using the decode method.
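A minimal sketch of such a codec, assuming the standard JCE PBEWithMD5AndDES transformation (the class name, salt, and passphrase below are illustrative, not the original source):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.PBEParameterSpec;

// Demonstration only: MD5 + DES is breakable and must not protect real secrets.
public class PhoneNumberCodec {

    // Fixed 8-byte salt and iteration count, for illustration only.
    private static final byte[] SALT = {
            (byte) 0xA9, (byte) 0x9B, (byte) 0xC8, 0x32,
            0x56, 0x35, (byte) 0xE3, 0x03
    };
    private static final int ITERATIONS = 19;

    private final SecretKey key;

    public PhoneNumberCodec(char[] passPhrase) throws Exception {
        key = SecretKeyFactory.getInstance("PBEWithMD5AndDES")
                .generateSecret(new PBEKeySpec(passPhrase));
    }

    private Cipher cipher(int mode) throws Exception {
        Cipher c = Cipher.getInstance("PBEWithMD5AndDES");
        c.init(mode, key, new PBEParameterSpec(SALT, ITERATIONS));
        return c;
    }

    // Encrypt and Base64-encode for storage in the database.
    public String encode(String plainText) throws Exception {
        byte[] enc = cipher(Cipher.ENCRYPT_MODE)
                .doFinal(plainText.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(enc);
    }

    // Decode and decrypt when rendering back to the end user.
    public String decode(String cipherText) throws Exception {
        byte[] dec = cipher(Cipher.DECRYPT_MODE)
                .doFinal(Base64.getDecoder().decode(cipherText));
        return new String(dec, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        PhoneNumberCodec codec = new PhoneNumberCodec("example-pass".toCharArray());
        String secured = codec.encode("555-867-5309");
        System.out.println(secured);
        System.out.println(codec.decode(secured));
    }
}
```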

A complete working copy with test cases is available for checkout on GitHub.

BufferedImage JPEG transparency using OpenJDK


When attempting to write a transparent JPEG to disk using OpenJDK 5, 6, or 7, a javax.imageio.IIOException is thrown. This occurs because OpenJDK does not have a native JPEG encoder. There are two separate solutions.

Solution 1: Change the library

An alternative is to switch from OpenJDK to Sun’s JDK, which has a native JPEG encoder.

Solution 2: Buffered image write around

A programmatic way to solve the problem is to map or draw the existing BufferedImage onto a new BufferedImage, with the type changed from TYPE_4BYTE_ABGR (default) to TYPE_3BYTE_BGR. The converted image can then be written to disk.
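A minimal sketch of the conversion (class and method names are assumptions; note that transparent pixels are filled with the default background once the alpha channel is dropped):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class JpegWriter {

    // Draw the ARGB image onto a new BGR image so the JPEG writer accepts it.
    public static BufferedImage stripAlpha(BufferedImage source) {
        BufferedImage converted = new BufferedImage(
                source.getWidth(), source.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
        Graphics2D g = converted.createGraphics();
        g.drawImage(source, 0, 0, null); // alpha is discarded during the draw
        g.dispose();
        return converted;
    }

    public static void main(String[] args) throws Exception {
        // An image type that OpenJDK's JPEG writer rejects...
        BufferedImage withAlpha = new BufferedImage(100, 100, BufferedImage.TYPE_4BYTE_ABGR);
        // ...becomes writable after the conversion.
        BufferedImage writable = stripAlpha(withAlpha);
        File out = File.createTempFile("converted", ".jpg");
        ImageIO.write(writable, "jpg", out);
        System.out.println("Wrote " + out.getAbsolutePath());
    }
}
```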


PBKDF2 with HMAC-SHA1 encryption class


Sensitive user data requires confidentiality and integrity. The following class is an example of how to use a password-based key derivation function (PBKDF2) algorithm to encode and validate data. The createHash method generates a salt and hash byte array from an instance of PBKDF2WithHmacSHA1 obtained from the SecretKeyFactory. The byte arrays that are returned serve as your encrypted data; you must have both to be able to validate.

Use case

A user signs up for an account on your website and enters a username and password. The password is run through the createHash method, and the result (salt, hash) is stored in a database; this is the user's login data, encrypted. Whenever a user attempts to log in with that username, the salt and hash are retrieved from the database and the validatePassword method is called. If the test passes, you are assured they are a legitimate user and may proceed.

What is slowEquals?

When a password is validated, slowEquals must return true. slowEquals is a method designed to protect against timing attacks. It ensures that an attacker cannot determine how long it took for the password comparison to fail: it iterates through all values in the byte array regardless of whether they are equal. This prevents the attacker from gathering enough information to mount an offline attack.
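A condensed sketch of the class described above (the iteration count and key sizes here are illustrative; the original on GitHub may differ):

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHash {

    private static final int ITERATIONS = 1000;
    private static final int HASH_BYTES = 24;
    private static final int SALT_BYTES = 24;

    // Returns {salt, hash}; both must be stored to validate later.
    public static byte[][] createHash(char[] password) throws Exception {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);
        return new byte[][]{salt, pbkdf2(password, salt)};
    }

    // Re-derives the hash from the candidate password and compares in constant time.
    public static boolean validatePassword(char[] password, byte[] salt, byte[] expectedHash)
            throws Exception {
        return slowEquals(expectedHash, pbkdf2(password, salt));
    }

    private static byte[] pbkdf2(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, HASH_BYTES * 8);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                .generateSecret(spec).getEncoded();
    }

    // Examines every byte regardless of mismatches, so timing reveals
    // nothing about where the hashes first differ.
    private static boolean slowEquals(byte[] a, byte[] b) {
        int diff = a.length ^ b.length;
        for (int i = 0; i < a.length && i < b.length; i++) {
            diff |= a[i] ^ b[i];
        }
        return diff == 0;
    }
}
```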

The complete project is available on GitHub.

JAAS login module in Tomcat 7 example (Part 2)


Part 1 of this tutorial demonstrated how to implement a login module using JAAS + Tomcat 7. This next segment shows how to create a login form and call the login module.

Folder Structure:

/protected/index.html (protected via our web.xml file)

Simple JSF login page

The following is a simple form used to submit the username and password to a backing bean called loginBean. The form uses HTML5 passthrough elements, as well as built-in JSF validators on the input fields. All errors are displayed using the h:messages output.

Calling the login module

Once the form passes validation, the login() action is called. The login action uses the submitted username and password to request a login from the servlet container, which calls the login module created in part 1 of this tutorial. If the request.login() call fails, it throws a LoginException, which is caught in the form of a ServletException below. If the login succeeds, the user is redirected to the protected page.
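A sketch of the backing-bean action (the bean skeleton is assumed; only the request.login() flow is the point here):

```java
public void login() throws IOException {
    FacesContext context = FacesContext.getCurrentInstance();
    HttpServletRequest request =
            (HttpServletRequest) context.getExternalContext().getRequest();
    try {
        // Delegates to the container, which invokes the JAAS login module.
        request.login(username, password);
        context.getExternalContext()
               .redirect(request.getContextPath() + "/protected/index.html");
    } catch (ServletException e) {
        // The module's LoginException surfaces here as a ServletException.
        context.addMessage(null, new FacesMessage("Login failed"));
    }
}
```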

This concludes the configuration and implementation of JAAS container-managed security. The original working copy of the complete project is available on GitHub.

JAAS login module in Tomcat 7 example (Part 1)


Implementing a login module using JAAS is an excellent way to secure URLs in a web application. This tutorial walks through the steps of configuring JAAS authentication in Tomcat 7 using the form-based authentication method.

Before looking at the code, it is important to understand the purpose of JAAS authentication. JAAS, the Java Authentication and Authorization Service, is an optional package designed to work in a pluggable fashion. The JAAS component comes standard with many other servers such as GlassFish, WildFly, and Jetty. Once the user is authorized, the JAAS component controls access to sensitive resources. In this article we will secure sensitive resources by URL patterns.

Understanding Principals

JAAS relies on users and roles to authorize access to certain resources. In JAAS, users and roles are separated into Principals, which together represent the Subject's identity. The Subject is an entity, such as a person or service. Let's begin by defining our user and role Principals.

Note the simplicity of the Principals. They are similar and only require a name for referencing.
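They can be sketched as follows, matching the UserPrincipal and RolePrincipal names used later in the tutorial:

```java
import java.security.Principal;

// The user identity; one per authenticated Subject.
public class UserPrincipal implements Principal {
    private final String name;

    public UserPrincipal(String name) { this.name = name; }

    @Override
    public String getName() { return name; }
}

// A role attached to the Subject; a Subject may hold several.
class RolePrincipal implements Principal {
    private final String name;

    public RolePrincipal(String name) { this.name = name; }

    @Override
    public String getName() { return name; }
}
```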

The Custom Login Module

The login module must implement the javax.security.auth.spi.LoginModule interface, overriding the following methods:

  • initialize() – all options are loaded at this time, options can be configured in the jaas.config file later in the tutorial
  • login() – used to perform verification logic
  • commit() – invoked after successful authentication from the login() method
  • abort() – called if something goes wrong with the login() or commit() methods
  • logout() – clears the Subject

The login module accepts the inputs from the end user and verifies them through the login() method. The verification can be made in any number of ways: checking a database, an LDAP directory, a remote API, etc. It is up to the developer to define the requirements for verification. For the purpose of this tutorial, we only verify that the end user has entered values for the username and password. Once verification succeeds, roles are attached to the verified user. These roles are used for restricting access to resource locations in the web.xml. If an error occurs during verification or while assigning roles, the login() method must throw a LoginException, which will be gracefully handled by our application later in the tutorial.

The commit() method is invoked after a successful verification from the login() method. A UserPrincipal and all RolePrincipals associated with the verified user are stored in the Subject, which the container then uses for managed security.

The final two methods are self-explanatory. The logout() method clears all the Subject's principals (user information). The abort() method nullifies the inputs (username and password) and the principals if the commit failed but the user was authenticated. However, if the user was not verified or the commit() succeeded, a standard logout() is performed.


Structuring the web application

Enabling the module above requires three files to be configured. Once complete, this will secure access to the specified folder of the Java web application.

Begin by editing the context.xml file and defining the JAAS realm. This example uses the default JAASRealm for defining the user and role Principal classes which we defined above as UserPrincipal and RolePrincipal. The appName attribute is used to define which security module Tomcat will look to use.

Next, for the web application to load the security configuration, a jaas.config file must be defined. This file informs Tomcat which class to load when processing login requests. Note that CustomLogin mirrors the appName in context.xml, and CustomLoginModule is the Java class referenced above.
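The jaas.config file can be sketched as follows (the package name is an assumption; use the fully qualified name of your own login module class):

```
CustomLogin {
    com.example.security.CustomLoginModule required;
};
```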

Tomcat needs to know where the application security configuration file (jaas.config) is located. A JVM argument pointing to it needs to be added to Tomcat's startup configuration. On server startup, the server will load the custom login module.

Securing Resource Locations

To prevent access to certain resource locations, the web.xml needs modification. Any number of security constraints can be added to map to resource locations. The auth-constraint can contain multiple roles, as shown.
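A sketch of the relevant web.xml fragment, protecting the /protected/ folder mentioned above (the role names and login page path are assumptions):

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Protected pages</web-resource-name>
    <url-pattern>/protected/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <!-- Multiple roles may be listed; any one grants access -->
    <role-name>admin</role-name>
    <role-name>user</role-name>
  </auth-constraint>
</security-constraint>

<login-config>
  <auth-method>FORM</auth-method>
  <form-login-config>
    <form-login-page>/login.xhtml</form-login-page>
    <form-error-page>/login.xhtml</form-error-page>
  </form-login-config>
</login-config>
```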

This concludes the setup and configuration of JAAS in a Java web application. The next tutorial demonstrates how to implement a JSF login page with JAAS.

A full copy of the complete project is available on GitHub.