Using Docker + AWS to build, deploy and scale your application


I recently worked to develop a software platform that relied on Spring Boot and Docker to prop up an API. Being the only developer on the project, I needed to find a way to quickly and efficiently deploy new releases. However, I found many solutions overwhelming to set up.

That was until I discovered AWS has tools that allow any developer to quickly build and deploy their application.

In this 30-minute tutorial, you will discover how to use the following AWS services together with Docker: CodeCommit, CodeBuild, Elastic Beanstalk (EBS), and CodePipeline.

Once finished, you will have a Docker application that automatically builds your software on commit and deploys it to Elastic Beanstalk behind a load balancer for scalability. This continuous integration pipeline will let you worry less about your deployments and get back to focusing on feature development within your application.

Here is the order in which to configure services:

  1. Git repository initialization using CodeCommit
  2. CodeBuild Setup
  3. EBS Configuration
  4. CodePipeline Configuration
Background knowledge

I am using Docker for this tutorial application. However, AWS Elastic Beanstalk supports a wide range of configurable environments: .NET, Java, NodeJS, PHP, Python, and Ruby. Docker was chosen for this tutorial so that the reader can focus more on the build process and less on the project setup. With that being said, I will not be diving deeply into Docker. If you wish to learn more about Docker, start by reading the introduction on the Docker website.

The Application

The example Spring Boot source code that will be used can be found at: https://github.com/sixthpoint/Docker-AWS-CodePipeline

The application is a Spring Boot project configured to run on port 5000 and has a REST controller with a single endpoint.

The API REST controller is very basic. It maps the /api/ path to a method that returns a list of strings in JSON format. This is the endpoint we will use to verify that our application has successfully built and deployed on the AWS EBS.

ApiController.java
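The full class is in the repository linked above; a minimal sketch of such a controller (method and value names here are illustrative, not the exact source) could look like this:

```java
import java.util.Arrays;
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class ApiController {

    // GET /api/ returns a small list of strings, serialized to JSON by Spring Boot
    @GetMapping("/")
    public List<String> list() {
        return Arrays.asList("one", "two", "three");
    }
}
```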

The application creates an example-1.0.0-SNAPSHOT.jar file when built using Maven. This file is important for us to reference in our Dockerfile.

Maven build:
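A standard Spring Boot Maven build is enough here (the exact goals used in the repository may differ):

```bash
mvn clean package
```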

This produces target/example-1.0.0-SNAPSHOT.jar. The Dockerfile below uses an Alpine Linux-based image to add, expose, and run the Spring Boot application.

Dockerfile
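A sketch of such a Dockerfile, assuming an Alpine-based OpenJDK image (the base image tag is an assumption, not necessarily the one in the repository):

```dockerfile
FROM openjdk:8-jdk-alpine

# Copy the Maven-built jar into the image
ADD target/example-1.0.0-SNAPSHOT.jar app.jar

# The Spring Boot application is configured to listen on port 5000
EXPOSE 5000

ENTRYPOINT ["java", "-jar", "/app.jar"]
```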

1. Git repository initialization using CodeCommit

First things first, we need a Git repository to build our code from. AWS CodeCommit is cheap, reliable, and secure. It uses S3, a scalable storage solution, so repository storage is subject to S3 storage pricing.

Begin by logging into your AWS console and creating a repository in CodeCommit. For the purpose of this tutorial, I have given the repository the same name as the Spring Boot application. Once created, you will be presented with the standard HTTPS and SSH URLs of the repository.

The above example has generated the following repository location; notice if I try to do a clone from the repository, access is denied.

1A. CONFIGURING IDENTITY AND ACCESS MANAGEMENT (IAM)

IAM, or Identity and Access Management, enables you to securely control access to AWS services and resources. To authorize a user to access our private Git repository, navigate to the IAM services page. Begin by adding a user. I have named the user the same name as the project and Git repository. Choose programmatic access, which will allow policies to be added.

In order to allow this new user to fully administer our new git repository, attach the AWSCodeCommitFullAccess policy. Once added, click through to finish creating your user.

Now that a user has been created with the correct policies, Git credentials are needed to work with the new CodeCommit repository. Navigate to the new user and look for “HTTPS Git credentials for AWS CodeCommit,” shown below. Generate a new username and password and download the .gitCredentials file when prompted. Inside that file is the information needed to access your repository.

Note: Only two keys are allowed per user at this time. If you lose your key, a new one will need to be generated to access the repository. For more in-depth information on setting up git credentials in AWS, check out the guide for setting up HTTPS users using Git credentials.

1B. MOVING THE CODE TO THE NEW CODECOMMIT REPOSITORY

With the new repository created, clone the GitHub repository holding our sample Spring Boot application. Change the remote to your new CodeCommit repository location, then push the master branch.
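The commands look roughly like this (the CodeCommit URL is a placeholder; use the HTTPS URL shown in your console):

```bash
git clone https://github.com/sixthpoint/Docker-AWS-CodePipeline.git
cd Docker-AWS-CodePipeline

# point origin at your CodeCommit repository instead of GitHub
git remote set-url origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/your-repo-name

git push origin master
```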

2. CodeBuild Setup

Now that the CodeCommit repository holds our sample Spring Boot application, the code needs to be built for deployment. Navigate to CodeBuild. CodeBuild is a managed build service that compiles your source code and is billed on demand.

Start by creating a new build project and point the source to the AWS CodeCommit repository that was created in Step 1. You can see I have pointed this new build project to the AWS CodeCommit source provider, and specified the DockerCodePipeline repository.

Next it asks for environment information. The default system image is fine for this build process. The most important part is to tell CodeBuild to use the buildspec.yml. The buildspec contains the commands needed to generate the artifacts that will be deployed to the EBS.

Included in the sample Spring Boot application is a buildspec.yml. This file is used to tell CodeBuild what commands to run in each phase, and what files to bundle up and save in the artifacts.

Additional configuration options can be found at: http://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html.

Buildspec.yml
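A sketch of what such a buildspec can look like for this project: it builds the jar with Maven and bundles the Dockerfile plus the jar as the deployable artifact (the exact file in the repository may differ):

```yaml
version: 0.2

phases:
  build:
    commands:
      - mvn clean package

artifacts:
  files:
    - Dockerfile
    - target/example-1.0.0-SNAPSHOT.jar
```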

The final setup step for the build process is to specify the location where the artifact produced by the buildspec.yml will be stored. In the example below, I put all artifacts in Amazon S3 under the name dockerAWSCodePipeline, in a bucket named irdb-builds. The bucket can be any bucket of your choice. You must go into S3 and create this bucket prior to creating the build project.


The build project is now configured and ready to use. Builds can be run manually from the console, creating artifacts stored in S3 as defined above.

3. EBS Setup

Now that the code is in CodeCommit, and the artifacts are built using CodeBuild, the final resource needed is a server to deploy the code. That is where Elastic Beanstalk comes in. The EBS is a service that automatically handles provisioning, load balancing, auto-scaling, and more. It is a very powerful tool to help you manage and monitor your application servers.

Let’s assume, for example, my API needs to have four servers due to the amount of requests I am receiving. The EBS makes the scaling of those servers simple with configuration options.

Begin by creating a new webserver environment and give it a name and domain name. This domain name is your AWS domain name; if you have a personal domain name you can point it to this load balancer being created using Route53.

The last step of creating your web server environment is to tell EBS that we want to run Docker and to use the AWS sample application code for now. Later, our code from CodeBuild will replace the AWS sample application.

The server and environment will take several minutes to start. Once complete, navigate to the configuration page of your new EBS environment.

By default the environment has a load balancer installed and auto scales. A scaling trigger can be set to adjust the number of instances to run given certain requirements. For example, I could set my minimum instances to 1 and maximum to 4 and tell the trigger to start a new instance each time CPUUtilization exceeds 75%. The load balancer would then spread requests across the instances currently running.

4. CodePipeline Configuration

This is the final piece of the puzzle, which brings steps 1–4 above together. You will notice that up until now we have had to manually tell CodeBuild to run and then go to the EBS to manually specify the artifact for deployment. Wouldn’t it be great if all this could be done for us?

That is exactly what CodePipeline does. It fully automates the building and provisioning of the project. Once new code is checked in, the system magically takes care of the rest. Here is how to set it up.

Begin by creating a new CodePipeline. In each step, select the repository, build project, and EBS environment created in steps 1–4 above.


Once complete, the CodePipeline will begin monitoring your repository for changes. When a change is detected, it will build the project and deploy it to the available servers in your EBS application. You can monitor the CodePipeline in real time from the pipeline’s detail page.

A Final Word

When configured properly, the CodePipeline is a handy tool for the developer who wants to code more and spend less time on DevOps.

This pipeline gives a developer an easy way to manage an application big or small. It doesn’t take a lot of time or money to set yourself up with a scalable application that uses a quick and efficient build and deployment process.

If you are in need of a solution to build, test, deploy, and scale your application, consider AWS CodePipeline as a great solution to get your project up and running quickly.


BackboneJS with Webpack: A lesson in optimization


Developing a large BackboneJS application presents a unique design problem. As developers, we like to organize our code so it is understandable, logical, and predictable. However, doing so can cause performance issues on the client side.

In this blog I will discuss a handy tool I like to use for this purpose: Webpack. I’ll show it in action, how to use it, and what it is good for. But first, let’s talk about how I came across Webpack.

An Example

On a previous project, I was building an audio and video streaming player. The frontend was developed using BackboneJS, along with libraries such as jQuery and SocketIO. Using the RequireJS shim configuration, I ended up with my dependencies and exports organized as follows.
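The configuration looked roughly like the sketch below (library paths are placeholders, not the original project's):

```javascript
require.config({
    paths: {
        jquery: 'libs/jquery',
        underscore: 'libs/underscore',
        backbone: 'libs/backbone',
        socketio: 'libs/socket.io',
        text: 'libs/text'            // RequireJS text plugin for loading templates
    },
    shim: {
        underscore: { exports: '_' },
        backbone: {
            deps: ['jquery', 'underscore'],
            exports: 'Backbone'
        },
        socketio: { exports: 'io' }
    }
});
```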

This worked great for loading all my libraries. For each of my files, I defined the libraries I wanted to use. Breaking each file down into modules is a great way to organize a large code base. I then used the RequireJS AMD text resource loader plugin to load all template files.

At the time this was a decent solution. My code base was organized, easy to understand, and predictable. However, as my application grew, a performance problem began to develop. For each template that was added, a new call to the server was made. This began to balloon the initial loading time of the application.

Loading of all templates

Wouldn’t it be great if all necessary resources were compacted into a single file, optimizing loading time, while still keeping our code organized?

Developing a BackboneJS app all in a single file would be a frustrating experience to manage. That’s where Webpack comes to the rescue.

What is Webpack?

Webpack is a module bundler that takes your files, compacts them, and generates a static file. Think of it as your own personal secretary there to help keep your life organized. You provide the configuration, it supplies the optimization.

Webpack’s primary goal is to keep initial loading time down. It does this through code splitting and loaders.

Code Splitting

Code splitting is ideal for large applications where it is not efficient to put all code into a single file. Some blocks of code may only be useful for certain pages of the site. Using this opt-in feature allows you to define the split points in your code base, and Webpack will optimize the dependencies required to generate the optimal bundle.
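A minimal sketch of such a split point (module names and paths are placeholders):

```javascript
// entry chunk (output.js): module A is needed right away
var moduleA = require('./moduleA');
moduleA.run();

// modules B and C are only loaded when this block executes, so webpack
// splits them into a separate chunk (1.output.js) fetched on demand
require.ensure(['./moduleB', './moduleC'], function (require) {
    var moduleB = require('./moduleB');
    var moduleC = require('./moduleC');
    moduleB.run();
    moduleC.run();
});
```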

This example uses the CommonJS require.ensure to load resources on demand. The final output would contain two chunked files:

  • output.js – the primary entry point chunk containing
    • chunk loading logic
    • module A
  • 1.output.js – the additional chunk to be loaded on demand containing
    • module B
    • module C

Loaders

Loaders preprocess files as you use the require() method. Using a loader you can easily require() resources such as CSS, images, or compile-to-JS languages (CoffeeScript or JSX).

By default, Webpack knows how to process, minify, and combine your JavaScript files. But it doesn’t really know how to do much else. Loaders are the solution for processing different types of files and turning them into usable resources for your application.

Load the Bootstrap CSS file:
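Assuming the style-loader and css-loader are installed, it is a single require (the path is illustrative):

```javascript
require('bootstrap/dist/css/bootstrap.css');
```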

Create an image element, and set the src to the image resource file:
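Assuming a url-loader or file-loader is configured for images (the path is illustrative):

```javascript
var img = document.createElement('img');
img.src = require('./images/logo.png');
document.body.appendChild(img);
```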

When using the CLI, all loaders can be defined in the webpack.config.js file.
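A sketch of such a configuration, in the webpack 1 style used at the time (entry and output names are assumptions):

```javascript
module.exports = {
    entry: './src/main.js',
    output: {
        path: __dirname + '/build',
        filename: 'output.js'
    },
    module: {
        loaders: [
            { test: /\.css$/, loader: 'style-loader!css-loader' },
            { test: /\.png$/, loader: 'url-loader?limit=100000' }
        ]
    }
};
```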

This example configuration file sets the application entry point, desired output file name for webpack to build, and a list of two loaders (CSS and PNG).

If you want to test this out for yourself, check out this github repo: https://github.com/sixthpoint/webpack-async-code-splitting

How do I use BackboneJS with Webpack?

Above I showed my starter application, which had an initial loading performance issue. Remember all those template calls? Webpack’s async loading and code splitting are going to significantly decrease load times. Let’s assume my application needs only two entry points:

  • #/nowplaying – will be responsible for loading data from socket.io
  • #/schedules – will display all scheduling information

To start, I modified my Webpack config file and, using the ProvidePlugin, added jQuery, Backbone, and Underscore to the global scope of my application. I no longer have to require these libraries throughout my app. This is similar to the shim config above.
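The relevant plugin section of webpack.config.js looks roughly like this:

```javascript
var webpack = require('webpack');

module.exports = {
    // ...entry, output, and loaders as before...
    plugins: [
        new webpack.ProvidePlugin({
            $: 'jquery',
            jQuery: 'jquery',
            Backbone: 'backbone',
            _: 'underscore'
        })
    ]
};
```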

The most important file of this app is the Backbone router. The router defines the code splitting points.
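A sketch of such a router with two split points (view module paths are placeholders):

```javascript
var AppRouter = Backbone.Router.extend({
    routes: {
        'nowplaying': 'nowPlaying',
        'schedules': 'schedules'
    },

    nowPlaying: function () {
        // socket.io and the now playing view are fetched only when this route runs
        require.ensure(['socket.io-client', './views/nowPlayingView'], function (require) {
            var io = require('socket.io-client');
            var NowPlayingView = require('./views/nowPlayingView');
            new NowPlayingView({ socket: io() }).render();
        });
    },

    schedules: function () {
        require.ensure(['./views/scheduleView'], function (require) {
            var ScheduleView = require('./views/scheduleView');
            new ScheduleView().render();
        });
    }
});

new AppRouter();
Backbone.history.start();
```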

Notice that by using require.ensure, I load the socket.io resource only when navigating to the now playing page. This way, if somebody never goes to the now playing page, the resources for that page never have to be loaded. If the user does navigate to the now playing page, the chunk is then cached in case they return, for performance reasons.

So how does Webpack organize this? Simple: both the now playing view (1.output.js) and the schedule view (2.output.js) get their own chunk files since they are loaded asynchronously.

Here is the output of the terminal, as expected:

Final Thoughts

What kind of project is Webpack good for? Webpack is great for any scale of project. It is simple to use and configure. Anyone who is developing a JavaScript application should consider using Webpack for its performance improvements and excellent set of plugins.

The complete source code of the optimized project can be found on github: https://github.com/sixthpoint/webpack-backbonejs-socketIO-client

Using the webpack dev server


A great feature of Webpack is its built-in web server for testing your application. It monitors your files for changes and rebuilds automatically. This is similar to the watch mode that can be enabled during configuration; however, the dev server expands on that by serving the app on localhost port 8080 and automatically refreshing the view when content changes.

First, install the webpack-dev-server globally:
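Assuming npm is available:

```bash
npm install webpack-dev-server -g
```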

To start the server, navigate to your file directory and type the command:
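From the directory containing your webpack.config.js:

```bash
webpack-dev-server
```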

This will start the server; the build output will be printed to the terminal.

Now you can navigate to the running site: http://localhost:8080/webpack-dev-server/


Notice the bar that says “app ready”. This is the status bar that webpack has put into the browser; it is injected into the page using an iframe. At some point you will not want this on your application, but for simple scenarios it is fine.

To remove the status bar, navigate your browser to the base URL (http://localhost:8080/). The downside is that the browser is no longer automatically refreshed when files are modified. To enable watch mode and auto refreshing on the dev server, specify the inline flag:
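For example:

```bash
webpack-dev-server --inline
```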

Now content will automatically refresh without that pesky status bar in the way. Happy web packing!

Inversion of Control (IoC) with JSF


Power of containers

Flash back to the early 2000s and this article would be focused on POJOs and how they were transforming the way we organize our logic. Luckily, it’s 2015 and we don’t need to concern ourselves with managing the state of our objects when developing server-side applications. Most likely you are already using a form of inversion of control in your application without knowing it.

Below is a simple example of JSF 2.2 using CDI for bean injection.
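A minimal sketch of such a bean (class and field names are illustrative, not from the original post):

```java
import java.io.Serializable;

import javax.enterprise.context.RequestScoped;
import javax.enterprise.context.SessionScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@SessionScoped
public class UserSession implements Serializable {

    // survives across JSF views for the duration of the user's session
    private String username;

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}

@Named
@RequestScoped
class LoginController {

    // the container creates and injects the session scoped bean for us
    @Inject
    private UserSession userSession;
}
```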

To understand the key concepts associated with IoC, consider the above example. The @SessionScoped annotation defines how long this container-managed class sticks around. By definition, a session scoped bean maintains its state across more than one JSF view. Since this bean represents the user logged onto the site, it must be accessible for the duration of their time browsing the application. CDI implements the definition of a session scoped bean using the facets of IoC.

There are three core facets of IoC:

  • Manages constructor injection of managed objects – The developer does not need to explicitly instantiate the object. The container uses a default constructor to create the object. It should be noted that overriding the default constructor is possible in IoC, given unique situations.
  • Dependency Handling – Certain objects can depend on each other to function. The container must have the logic to handle cyclical dependencies.
  • Life cycle and Configuration – Customization of the lifecycle must be provided through annotation or configuration (xml).

Inversion of control (IoC) is a concept that has been implemented in various containers/frameworks such as Spring, JSF, Google Guice, and PicoContainer. These frameworks provide abilities similar to the above example. Using a framework eliminates the need to write large amounts of boilerplate code, for example writing your own classes to handle application, session, and view scoped logic.

What would it be like without IoC?

The simple answer is… a large headache. Imagine you have a web application and have to manage a single class that is used by the whole application. Let’s call it our applicationClazz. When each new user accesses the application, we need to store their current application context, and that context would have to be stored in our applicationClazz. Then, to add functionality, let’s assume the site has a login page and stores information in a loginClazz. This login page is specific to each individual user context. So for each user using the application, the applicationClazz would have to maintain a map of all the loginClazz instances and an association to the current user context. To make things even more complicated, consider how difficult it would become to clean up and manage this application map if you had 20, 50, or 100 classes in your application with different lifecycles. This is why we use IoC: to do all the heavy lifting.

CDI or Managed Property with JSF?

Prior to JSF 2.0, the @ManagedProperty annotation was widely used. Now @Named, which comes from Contexts and Dependency Injection (CDI), is mostly used instead. Both support similar life cycles.

The following is a list of the most common CDI scopes, their duration, and an example use case for each.

Session Scoped – The user’s interaction lasts across multiple HTTP requests. Often used to store a logged-in user’s information for the duration of their time on the site.

Request Scoped – The user’s interaction lasts across a single HTTP request. This scope is best suited for pages that require little to no AJAX or form interaction. A simple example would be a page that displays the date and time. If an AJAX request were implemented to refresh the content, a new bean would be created for each AJAX request, since the bean is request scoped.

Application Scoped – Contents are shared across all users interacting with the web application. Let’s assume you have a dropdown list that will always have the same values no matter the user. The best solution would be to put those values into an application scoped bean so that they are always in memory, improving performance.

A Short Summary

The most important thing to take away from this article is this: IoC is your friend. It does a lot of the heavy lifting involved in managing classes. CDI gives you the tools to quickly create applications using session, request, and application scoped beans. Without it, much of your time would be spent managing lifecycles.

Death to the back button in JSF


The browser back button is notorious for being the most hated browser feature among developers. It poses many design challenges and considerations. This article will cover a few approaches to handling the browser back button, as well as highlighting a way to create your own within a JSF application.

Stateful vs Stateless

When laying out your application’s workflow, it is smart to consider how you want the application to flow and look to the end user. In a stateful application you attempt to store as much data as possible in the backing beans, whereas with a stateless approach you load data as pages change. In JSF you have access to different kinds of managed beans, and some types work better for different implementations. Use view scoped and request scoped beans for a more stateless approach, and conversation scoped or session scoped beans for a more stateful approach. Each has its benefits and drawbacks.

Start by determining the application’s purpose; this will help when selecting which type of bean to use. For example, when developing a search feature that spans multiple tables in various databases, it may be inefficient to load the search results all over again if the user presses the back button. Thus, a more stateful scoped bean (conversation scoped or session scoped) would be the smarter choice.

Consider the following workflow:

userSearch.xhtml -> finds a user and clicks -> userDetail.xhtml

In a typical stateful workflow we could manage this entire page navigation using a conversation or session scoped bean in JSF. The data returned from the user search limits the content shown on the user detail page. It only requires one backing bean shared between both pages.

Benefits of a Stateful approach:
  • Additional security requirements are not needed since the user id is hidden.
  • No need to load data again in between views
  • Routes can easily be managed in backing beans using explicit routing
Drawbacks of a Stateful approach:
  • Backing beans can become cluttered
  • Stale data may become an issue
  • Pages are not bookmarkable (important in modern web applications)
  • Relies heavily on POST commands to navigate which is not search engine friendly
  • Memory usage can become an issue with larger / high traffic sites
A better stateless approach

Let’s continue to look at the following workflow, but with a different way to implement it. For this case I am going to assume that the userSearch is efficient.

userSearch.xhtml -> finds a user and clicks -> userDetail.xhtml?id=123

Notice the “?id=123” that has been added to the user detail page. This represents the id of the user that is expected to be loaded. With a stateless implementation, the user search page and the user detail page have no knowledge of each other. They would in fact be completely separate backing beans, most likely view scoped. When the user is shown a list of search results, those links are generated using implicit routing and rendered to the DOM. Hovering over a link shows you the full URL path. There is no need to hit a backing bean to determine routes as in the stateful approach; the route is predetermined. This is one of the huge benefits of creating views that are stateless.

Benefits of a Stateless approach:
  • Pages are bookmarkable
  • Data is never stale
  • Links do not have to rely on backing beans, they can be generated on the page, SEO friendly
  • Less of a memory hog per session
Drawbacks of a Stateless approach:
  • Have to consider security implications when exposing IDs in the URL.
  • Heavy calculations repeated on each request could hurt server performance.
Stateless with the back button

But how do we handle the back button in JSF applications? Designing your application to use stateless beans makes it possible to bring the back button back into your JSF application.

A typical enterprise application built in JSF will break if the back button is pressed. In fact, a lot of developers have gone as far as building their own stateful back button to display on the page. This back button functions just like the browser back button, but has additional knowledge to control the stateful views. All of this is unnecessary if your views are stateless.

It is my opinion that you should never give JSF too much control over the browser. If you have to implement your own back button within your application, do so with stateless views. Stateless views by design should all have unique URLs which you can track. Simply add a preRenderView event to each JSF page which calls this BrowserHistoryController. The controller maintains a stack of all URLs visited and has a small amount of intelligence to handle users switching between an on-page back button and the browser back button.

On any of your XHTML pages that you want tracked:
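A sketch of the registration (the listener method name is an assumption):

```xhtml
<f:metadata>
    <f:event type="preRenderView" listener="#{browserHistoryController.pageLoad}" />
</f:metadata>
```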

Creating your own back link
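An on-page back link backed by the controller could be as simple as this (names are illustrative):

```xhtml
<h:form>
    <h:commandLink value="Back" action="#{browserHistoryController.back}" />
</h:form>
```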

BrowserHistoryController.java
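The original class is not reproduced here; the sketch below shows one way such a controller could be written: a session scoped bean that records each visited URL from the preRenderView event and can redirect back to the previous one.

```java
import java.io.IOException;
import java.io.Serializable;
import java.util.ArrayDeque;
import java.util.Deque;

import javax.enterprise.context.SessionScoped;
import javax.faces.context.FacesContext;
import javax.faces.event.ComponentSystemEvent;
import javax.inject.Named;
import javax.servlet.http.HttpServletRequest;

@Named
@SessionScoped
public class BrowserHistoryController implements Serializable {

    // stack of visited URLs; the current page sits on top
    private final Deque<String> history = new ArrayDeque<>();

    // called by the preRenderView event on each tracked page
    public void pageLoad(ComponentSystemEvent event) {
        HttpServletRequest request = (HttpServletRequest) FacesContext
                .getCurrentInstance().getExternalContext().getRequest();
        String url = request.getRequestURI();
        if (request.getQueryString() != null) {
            url += "?" + request.getQueryString();
        }
        // avoid stacking duplicates when the same view re-renders
        if (history.isEmpty() || !history.peek().equals(url)) {
            history.push(url);
        }
    }

    // action for an on-page back link: redirect to the previously visited URL
    public void back() throws IOException {
        if (history.size() < 2) {
            return;
        }
        history.pop();                       // drop the current page
        String previous = history.peek();    // the page before it
        FacesContext.getCurrentInstance().getExternalContext().redirect(previous);
    }
}
```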

Use this controller in combination with stateless views, and the browser back button should no longer be an issue when coding your application.

Terminate a running CentOS program using xkill


When an application is unresponsive in CentOS, it sometimes requires that the task be terminated. This is a neat trick for closing an application with minimal command line use.

Start by opening your terminal. Then type the following command:
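Assuming the X11 utilities are installed, the command is simply:

```bash
xkill
```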

Now click on the window you want to terminate. The program will automatically get the pid from the application window and terminate the process.

Output after click:


Backup filesystem to Amazon S3


Every server needs to be backed up periodically. The trouble is finding an affordable place to store your filesystem if it contains large amounts of data. Amazon S3 is the solution with reasonably priced standard storage ($0.0300 per GB), as well as reduced redundancy storage ($0.0240 per GB) at the time of writing this article. Updated pricing can be seen at http://aws.amazon.com/s3/pricing/

This short tutorial will show how to back up a server’s filesystem using s3cmd. S3cmd is a command line tool for uploading, retrieving, and managing data in Amazon S3. This implementation will use a cronjob to automate the backup processing. The filesystem will be scheduled to be synced nightly.

How to install s3cmd?

This example assumes you are using CentOS or RHEL. The s3cmd library is included in the default rpm repositories.
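On CentOS/RHEL the install is a single yum command (depending on your setup, the EPEL repository may need to be enabled first):

```bash
sudo yum install s3cmd
```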

After installation the library will be ready to configure.

Configuring s3cmd

An Access Key and Secret Key are required from your AWS account. These credentials can be found on the IAM page.

Start by logging in to AWS and navigating to the Identity & Access Management (IAM) service. Here you will first create a new user. I have excluded my username below.

Create a new user

Next create a group. This group will hold the permission for our user to be able to access all your S3 buckets. Notice under permissions the group has been granted the right to “AmazonS3FullAccess” which means any user in this group can modify any S3 bucket. To grant your new user access to the group click “Add Users to Group” and select your new user from the list.

Create a new group and give it the “AmazonS3FullAccess” permission. Then add your new user to this group.

For s3cmd to connect to AWS, it requires a set of user security credentials. Generate an access key for the new user by navigating back to the user details page. Look to the bottom of the page for the “Security Credentials” tab. Under Access Key, click “Create Access Key”. It will generate an Access Key ID and Secret Access Key, both of which are required for configuring s3cmd.

On the user details page generate a new access key

You now have a user setup with permissions to access the S3 API. Back on your server you need to input your new access key into s3cmd. To begin configuration type:
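The interactive configuration is started with:

```bash
s3cmd --configure
```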

You will be prompted to enter your Access Key ID and Secret Key.

At this point s3cmd is fully configured and ready to push data to S3. The final step is to create your own S3 bucket. This bucket will serve as the storage location for our filesystem.

Setting up your first S3 bucket

Navigate to the AWS S3 service and create a new bucket. You can give the bucket any name you want and pick the region for the data to be stored. This bucket name will be used in the s3cmd command.


Each file pushed to S3 is given a storage category of standard or reduced redundancy storage. This is configurable when syncing files. For the purpose of this tutorial all files will be stored in reduced redundancy storage.

Standard vs Reduced Redundancy Storage

The primary difference between the two options is durability, i.e. how safely your data is stored. Standard storage is designed for higher durability, whereas reduced redundancy storage (RRS) keeps fewer copies of your data and is intended for easily reproducible files. For the use case of this tutorial, all files are stored in RRS. As noted previously, RRS is considerably cheaper than standard storage.

Configuring a simple cronjob

To enter the cronjob editor, simply type:
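Using the current user's crontab:

```bash
crontab -e
```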

Once in the editor, create the cronjob below, which will run Monday through Friday at 3:30 a.m. every morning.
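A sketch of such an entry; the local path and bucket name are placeholders you will need to replace:

```bash
# sync /path/to/backup to S3 at 3:30 a.m., Monday through Friday
30 3 * * 1-5 s3cmd sync --delete-removed --reduced-redundancy /path/to/backup s3://mybucket/
```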

This cronjob calls the s3cmd sync command and loads the default configuration which you entered above. The --delete-removed option tells s3cmd to scan for locally deleted files and remove them from the remote S3 bucket as well. The --reduced-redundancy option places all files in RRS for cost savings. Any folder location can be synced; just change the path to your desired location. Make sure to change mybucket to the name of your S3 bucket.

The server has now been configured to do nightly backups of the filesystem to AWS S3 using the s3cmd library. Enjoy!

Multiple Beans are Eligible for Injection


In some cases you may want to inject a controller (another backing bean) into another controller. In Eclipse it will show a warning: Multiple beans are eligible for injection to the injection point. This may prevent your server from starting.

The solution is to provide CDI with the name of the property to inject. The @Named annotation is shown below:
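A sketch of the disambiguated injection point (bean and field names are illustrative):

```java
import javax.inject.Inject;
import javax.inject.Named;

@Named
public class DashboardController {

    // tell CDI exactly which of the eligible beans to inject, by its name
    @Inject
    @Named("userController")
    private UserController userController;
}
```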

Now CDI will know the proper controller to inject.

(KCDC) Kansas City Developers Conference 2015


The following presentation was given at kcdc.info on June 25, 2015

Topic: Building a more responsive design with JSF + Bootstrap 3

Modern web design has placed an emphasis on lightweight, responsive design. Utilizing libraries such as Bootstrap 3 CSS/JavaScript as well as Font Awesome, one can create elegant designs quickly and efficiently. We’ll dive into some best practices I’ve extracted from solving real-world problems when merging JSF with Bootstrap 3. Areas of emphasis include error handling, responsive modals, and utilizing HTML5 data attributes to make the job easier.

I would like to thank all the people that attended my presentation. It was a great opportunity to be around so many other excellent developers.

Presentation slides can be found online here (KCDC Presentation).


Alternative to p:defaultCommand


The standard JSF h:commandLink has a limitation when you want to submit the form by pressing the Enter key. This solution uses the JavaScript keyCode KeyboardEvent property built into browsers. When keyCode 13 (the Enter key) is detected, the h:commandLink (which renders an <a> tag) is clicked, submitting the form and calling the action attached to the component.
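A sketch of the approach (bean names and component ids are illustrative):

```xhtml
<h:form id="searchForm">
    <h:inputText value="#{searchController.query}"
                 onkeydown="if (event.keyCode === 13) {
                                document.getElementById('searchForm:searchLink').click();
                                return false;
                            }" />

    <h:commandLink id="searchLink" value="Search"
                   action="#{searchController.search}" />
</h:form>
```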

This solution is most useful when the PrimeFaces p:defaultCommand component is not available.