Here at SWARM, our engineering team has agreed on the following goals for deploying applications into the wild.
Deployment should happen automatically. Manually creating and publishing packages is for the dark ages!
The production environment should match the development environment, so that issues in production are easy to reproduce in development.
Services should be easy to scale up or down based on demand.
Zero downtime while scaling or upgrading a service.
Logging & monitoring should be in place to watch for issues.
Our workflow needs to let several people coordinate on pushing out a build.
Developers commit code and update scripts in git. We use gitflow to track commits for our dev / qa / production environments.
Our CI platform of choice, TeamCity, automatically kicks in to create a build, test it, and deploy it to QA. It then notifies our PM & QA teams to verify the latest build.
Once approved, we use a combination of TeamCity & AWS Application Pipeline to generate builds for the production environments.
Deploying backend apps with AWS Application Pipeline
To deploy an application in AWS, let's say Ludlow2, we first set up an application pipeline.
A typical pipeline includes:
One Load Balancer
Several Target Groups (each for one branch: dev, qa, prod)
A Load Balancer listener rule routes requests to the registered targets in each Target Group.
Several ECS services (each for one branch: dev, qa, prod)
An ECS service lets you run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster.
Several ECR repositories (each for one branch: dev, qa, prod)
ECR is a managed AWS Docker registry service which supports private Docker repositories. You can use the Docker CLI to author and manage images.
One EC2 instance
An EC2 instance provides scalable computing capacity in the Amazon Web Services (AWS) cloud.
A site domain name for your application
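As a rough sketch, wiring one branch (qa) of such a pipeline with the AWS CLI might look like the following. All names, ARNs, and ports are placeholders, and in practice much of this is done through the AWS console during pipeline setup.

```shell
# Hypothetical sketch of wiring the qa branch; every name/ARN is a placeholder.

# 1. A Target Group the Load Balancer can route qa traffic to
aws elbv2 create-target-group \
  --name ludlow2-qa \
  --protocol HTTP --port 80 \
  --vpc-id "$VPC_ID" \
  --health-check-path /health

# 2. A listener rule on the shared Load Balancer: host-based routing to qa
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 10 \
  --conditions Field=host-header,Values=qa.ludlow2.example.com \
  --actions Type=forward,TargetGroupArn="$TARGET_GROUP_ARN"

# 3. An ECR repository and an ECS service for the same branch
aws ecr create-repository --repository-name ludlow2-qa
aws ecs create-service \
  --cluster ludlow2 \
  --service-name ludlow2-qa \
  --task-definition ludlow2-qa \
  --desired-count 1 \
  --load-balancers targetGroupArn="$TARGET_GROUP_ARN",containerName=app,containerPort=3000
```

The same commands, with different names and a different listener-rule host, are repeated for the dev and prod branches.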
Application Pipeline Setup
See the detailed operations.
Continuous Integration (CI) with TeamCity and AWS
For CI, we set up a standalone TeamCity server instance as a Docker container for each client and deploy it to AWS. The build agent is also a Docker image, from which multiple on-demand build agents run on AWS.
While the official build agent image from JetBrains is great, most of our projects also need support for Node.js and AWS tools. So we created our own Docker image for the TeamCity build agent, bundling in the Node.js development environment and the AWS CLI tools.
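A minimal Dockerfile sketch of such an agent image; the base tag and tool versions are illustrative, not our exact build:

```dockerfile
# Hypothetical sketch: TeamCity build agent with Node.js and AWS CLI tools.
FROM jetbrains/teamcity-agent:latest

USER root

# Node.js (version is illustrative; pin whatever your projects need)
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | bash - \
    && apt-get install -y nodejs

# AWS CLI plus the ecs-deploy tool used later in the Build phase
RUN apt-get install -y python3-pip \
    && pip3 install awscli ecs-deploy
```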
In this flow we focus on building the backend API application server as another Docker image.
TeamCity is great for running unit tests and nicely displays the test result and code coverage reports. TeamCity supports different test frameworks; we picked Karma, which can be used for backend (Node.js) as well as front-end (Angular) apps. Karma is also TeamCity-friendly through its TeamCity plugin.
Karma has a plugin, karma-coverage-istanbul-reporter, that generates a coverage report as an HTML page, which we zip up as an artifact: coverage.zip. TeamCity recognizes it and creates a Code Coverage tab for you automatically.
In an Angular project, start the tests with: ng test --code-coverage
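The relevant parts of karma.conf.js look roughly like the sketch below; paths and reporter options are illustrative, not our exact config:

```javascript
// Sketch of karma.conf.js settings for TeamCity + istanbul coverage.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine', '@angular-devkit/build-angular'],
    plugins: [
      require('karma-jasmine'),
      require('karma-coverage-istanbul-reporter'),
      require('karma-teamcity-reporter'), // emits TeamCity service messages
    ],
    coverageIstanbulReporter: {
      // the html report under coverage/ is what gets zipped as coverage.zip
      dir: require('path').join(__dirname, 'coverage'),
      reports: ['html', 'lcovonly'],
      fixWebpackSourcePaths: true,
    },
    reporters: ['teamcity', 'coverage-istanbul'],
  });
};
```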
TeamCity can generate a nice statistics report for you based on historical unit test results.
Building Docker Image by Docker Build Runner
Pushing the Docker Image to ECR, then Deploying to ECS
These two Build Steps are the same for every branch; the differences come from the Parameters.
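Concretely, the two steps boil down to something like the following sketch, written with TeamCity %parameter% syntax; the parameter names are our own conventions and the tags are placeholders:

```shell
# Sketch of the two parameterized Build Steps; the same steps serve
# dev/qa/prod, with only the TeamCity %parameter% values changing.

# Build Step 1: build the image (Docker Build runner)
docker build -t %ECR_REPO_URL%:%BUILD_NUMBER% .

# Build Step 2: authenticate against ECR and push the image
aws ecr get-login-password --region %AWS_REGION% \
  | docker login --username AWS --password-stdin %ECR_REPO_URL%
docker push %ECR_REPO_URL%:%BUILD_NUMBER%
```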
We use ECS Deploy for the deployment itself, triggered by the last command in the Build phase:
ecs deploy %ECS_CLUSTER_NAME% %ECS_SERVICE_NAME% %AWS_REGION% %ECS_DEPLOY_OPTIONS%
This tells Amazon ECS to duplicate the current Task Definition and cause the Service to redeploy all running tasks.
The new task starts while the old one is still running. Once the new task successfully registers with the Target Group, the old one stops and unregisters (draining) from the Target Group. From the user's perspective, the application (Ludlow2) is upgraded without any downtime.
All the env files are currently committed to GitHub, which is convenient but insecure. Instead of being put into version control, they should be generated during the TeamCity Build phase.
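A minimal sketch of generating the env file in a Build Step instead; the variable names (LUDLOW2_DB_HOST, LUDLOW2_API_KEY) are hypothetical, and in TeamCity the values would come from protected build parameters rather than the defaults shown here:

```shell
#!/bin/sh
# Hypothetical sketch: write .env during the TeamCity Build phase rather than
# committing it. In TeamCity the values would come from protected build
# parameters (e.g. %env.LUDLOW2_DB_HOST%); the defaults here are illustrative.
LUDLOW2_DB_HOST="${LUDLOW2_DB_HOST:-localhost}"
LUDLOW2_API_KEY="${LUDLOW2_API_KEY:-changeme}"

cat > .env <<EOF
DB_HOST=${LUDLOW2_DB_HOST}
API_KEY=${LUDLOW2_API_KEY}
EOF
```

The generated .env is then baked into (or mounted for) the Docker image, so nothing secret ever lands in version control.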
Integration with IDE
If you are using an IntelliJ-based IDE, e.g. WebStorm, you can install the TeamCity plugin, which lets you take advantage of all the features TeamCity provides as a continuous integration server without leaving the context of the IDE.
The coolest feature is Remote Run, which is similar to git commit. Instead of committing to your GitHub repository, it commits to the TeamCity server and runs a CI cycle on your local changes.
We are using CloudWatch for logging.
When setting up the ECS Task Definitions in the Application Pipeline, we already redirected application output to CloudWatch via the awslogs log driver. This way the application doesn't need to change anything, and console.log messages go straight to CloudWatch.
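That redirection is just a log driver setting in the container definition. A sketch of the relevant task-definition fragment, with illustrative names:

```json
{
  "name": "app",
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/ludlow2-qa",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "ecs"
    }
  }
}
```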
If you want to control the log flow, e.g. log to different streams based on the session, you can use the AWS Logs API, e.g. through winston-cloudwatch.
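A hedged sketch of per-session streams with winston and winston-cloudwatch; the group name and the one-stream-per-session naming are our own illustrative conventions, not a prescribed setup:

```javascript
// Hypothetical sketch: route logs to a CloudWatch stream per session.
const winston = require('winston');
const WinstonCloudWatch = require('winston-cloudwatch');

function loggerForSession(sessionId) {
  return winston.createLogger({
    transports: [
      new WinstonCloudWatch({
        logGroupName: '/ecs/ludlow2-qa',        // illustrative group name
        logStreamName: `session-${sessionId}`,  // one stream per session
        awsRegion: 'us-east-1',
      }),
    ],
  });
}

loggerForSession('abc123').info('user logged in');
```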
CloudWatch has Alarms; you can create one based on built-in or your own custom Metrics. Below we show how to send an email to the development team when an error happens on the production server.
Create a Metric Filter based on a Log Group
Create an Alarm based on the Metric
Define how the alert is triggered.
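The steps above might look like this with the AWS CLI; log group, metric, topic, and address names are all illustrative:

```shell
# Sketch: email the dev team when an error shows up in the production logs.

# 1. Metric Filter on the Log Group: count log lines containing "ERROR"
aws logs put-metric-filter \
  --log-group-name /ecs/ludlow2-prod \
  --filter-name ludlow2-errors \
  --filter-pattern "ERROR" \
  --metric-transformations \
      metricName=Ludlow2ErrorCount,metricNamespace=Ludlow2,metricValue=1

# 2. Alarm on the Metric, sending to an SNS topic
aws cloudwatch put-metric-alarm \
  --alarm-name ludlow2-prod-errors \
  --metric-name Ludlow2ErrorCount \
  --namespace Ludlow2 \
  --statistic Sum \
  --period 300 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions "$SNS_TOPIC_ARN"

# 3. Trigger: the dev team subscribes to the topic by email
aws sns subscribe \
  --topic-arn "$SNS_TOPIC_ARN" \
  --protocol email \
  --notification-endpoint dev-team@example.com
```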