Amazon EC2 Reserved Instances allow you to reserve Amazon EC2 computing capacity for 1 or 3 years, in exchange for a significant discount (up to 75%) compared to On-Demand instance pricing.
Reserved Instances can significantly lower your computing costs for your workloads and provide a capacity reservation so that you can have confidence in your ability to launch the number of instances you have reserved when you need them.
To learn how to buy a Reserved Instance, visit the Amazon EC2 Reserved Instance Getting Started page.
Introduction to Amazon EC2 Reserved Instances
Why Should I Use Reserved Instances?
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing.
Reserved Instances provide a capacity reservation so that you can have confidence in your ability to launch the number of instances you have reserved when you need them.
You have the flexibility to pay all, part, or nothing upfront. The more you pay up front, the more you save. If your requirements change, you can modify or sell your Reserved Instance.
The Reserved Instance hourly rate is applied to your Amazon EC2 instance usage when the attributes of your instance usage match the attributes of your Reserved Instances.
With Reserved Instances, you can choose the type of capacity reservation that best fits your application needs.
- Standard Reserved Instances: These instances are available to launch at any time, 24 hours a day, 7 days a week. This option provides the most flexibility to run the number of instances you have reserved whenever you need them, and is well suited to steady-state workloads.
- Scheduled Reserved Instances: These instances are available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month. For example, if you have a predictable workload such as a monthly financial risk analysis, you can schedule it to run on the first five days of the month. Another example would be scheduling nightly bill processing from 4pm-12am each weekday.
Each Reserved Instance is associated with the following attributes:
- Instance type: Instance types comprise varying combinations of CPU, memory, storage, and networking capacity. For example, m3.xlarge.
- Availability Zone: Amazon EC2 provides you the ability to purchase Reserved Instances within AWS Availability Zones. For example, us-east-1a.
- Platform description: Reserved Instances can be purchased for Amazon EC2 running Linux/UNIX, SUSE Linux, Red Hat Enterprise Linux, Microsoft Windows Server, and Microsoft SQL Server platforms.
- Tenancy: Each instance that you launch has a tenancy attribute. Generally, instances run with a default tenancy (running on multi-tenant hardware) unless you’ve explicitly specified to run your instance with a tenancy of dedicated (single tenant hardware) or host (dedicated physical server).
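These attributes map directly onto parameters of the AWS CLI's describe-reserved-instances-offerings command. As a sketch, the helper below (our own illustrative function, not part of any AWS tooling) assembles such a query so you can review it before running it against your account:

```shell
# ri_offerings_cmd is a hypothetical helper: it assembles (but does not run)
# a Reserved Instance offerings query from the four attributes above.
# Review the printed command, then pipe it to sh when ready.
ri_offerings_cmd() {
  itype=$1; az=$2; platform=$3; tenancy=$4
  printf 'aws ec2 describe-reserved-instances-offerings'
  printf ' --instance-type %s' "$itype"
  printf ' --availability-zone %s' "$az"
  printf ' --product-description "%s"' "$platform"
  printf ' --instance-tenancy %s\n' "$tenancy"
}

ri_offerings_cmd m3.xlarge us-east-1a "Linux/UNIX" default
```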
Standard Reserved Instances
- Term: AWS offers Reserved Instances for 1-year or 3-year terms. Sellers on the Reserved Instance Marketplace also offer Reserved Instances, often with shorter terms.
- Payment Option: You can choose between 3 payment options: All Upfront, Partial Upfront, and No Upfront. If you choose the Partial or No Upfront payment option, the remaining balance will be due in monthly increments over the term.
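One way to compare the three payment options is to convert each to an effective hourly rate: the upfront amount amortized over the hours in the term, plus the recurring hourly charge. The dollar figures below are made up for illustration; actual prices are on the Amazon EC2 Pricing page.

```shell
# Effective hourly rate = upfront / (hours in term) + recurring hourly rate.
# All dollar amounts here are illustrative, not actual AWS prices.
effective_rate() {
  upfront=$1; hourly=$2; years=$3
  awk -v u="$upfront" -v h="$hourly" -v y="$years" \
    'BEGIN { printf "%.4f\n", u / (y * 8760) + h }'
}

effective_rate 1000 0    1   # All Upfront, 1-year term
effective_rate 500  0.05 1   # Partial Upfront
effective_rate 0    0.12 1   # No Upfront
```

Whichever option yields the lowest effective rate for your planning horizon is not the only consideration; All Upfront trades cash flow for the deepest discount.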
Scheduled Reserved Instances
- Term: Scheduled Reserved Instances have a 1 year term commitment.
- Payment Option: Scheduled Reserved Instances accrue charges hourly, billed in monthly increments over the term.
Visit the Amazon EC2 Pricing page to view the Reserved Instance prices offered by AWS, as well as volume discounts.
If you purchase a large number of Reserved Instances in an AWS region, you will automatically receive discounts on your upfront fees and hourly fees for future purchases of Reserved Instances in that AWS region.
Reserved Instances are sold by third-party sellers on the Reserved Instance Marketplace, who occasionally offer even deeper discounts at shorter terms.
The Reserved Instance Marketplace allows other AWS customers to list their Reserved Instances for sale. Third-party Reserved Instances are often listed at lower prices and for shorter terms. These Reserved Instances are no different from Reserved Instances purchased directly from AWS. To learn more about how to buy a Reserved Instance from AWS or from third-party sellers, visit the Amazon EC2 Reserved Instances Getting Started page.
To learn more about selling your Reserved Instances on the Reserved Instance Marketplace, visit the Amazon EC2 Reserved Instance Marketplace page.
Visit the Amazon EC2 Reserved Instance Getting Started page to learn more about how to purchase the right Amazon EC2 Reserved Instance.
Tutorial: Installing a LAMP Web Server on Amazon Linux
The following procedures help you install the Apache web server with PHP and MySQL support on your Amazon Linux instance (sometimes called a LAMP web server or LAMP stack). You can use this server to host a static website or deploy a dynamic PHP application that reads and writes information to a database.
Prerequisites
This tutorial assumes that you have already launched an instance with a public DNS name that is reachable from the Internet. For more information, see Step 1: Launch an Instance. You must also have configured your security group to allow SSH (port 22), HTTP (port 80), and HTTPS (port 443) connections. For more information about these prerequisites, see Setting Up with Amazon EC2.
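If you prefer to script the security group setup, the same three rules can be expressed with the AWS CLI's authorize-security-group-ingress command. The sketch below only prints the commands; sg-xxxxxxxx is a placeholder for your actual security group ID.

```shell
# Print (do not run) one ingress rule per required port. Replace the
# placeholder group ID, review the output, then pipe it to sh.
group_id="sg-xxxxxxxx"   # placeholder security group ID
for port in 22 80 443; do
  printf 'aws ec2 authorize-security-group-ingress --group-id %s --protocol tcp --port %s --cidr 0.0.0.0/0\n' \
    "$group_id" "$port"
done
```

Opening 0.0.0.0/0 on port 22 is convenient for a tutorial but too broad for production; restrict SSH to your own IP range where possible.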
Important
If you are trying to set up a LAMP web server on an Ubuntu instance, this tutorial will not work for you. These procedures are intended for use with Amazon Linux. For more information about other distributions, see their specific documentation. For information about LAMP web servers on Ubuntu, see the Ubuntu community documentation ApacheMySQLPHP topic.
To install and start the LAMP web server on Amazon Linux
- Connect to your instance.
- To ensure that all of your software packages are up to date, perform a quick software update on your instance. This process may take a few minutes, but it is important to make sure you have the latest security updates and bug fixes.
Note
The -y option installs the updates without asking for confirmation. If you would like to examine the updates before installing, you can omit this option.
[ec2-user ~]$ sudo yum update -y
- Now that your instance is current, you can install the Apache web server, MySQL, and PHP software packages. Use the yum install command to install multiple software packages and all related dependencies at the same time.
[ec2-user ~]$ sudo yum install -y httpd24 php56 mysql55-server php56-mysqlnd
- Start the Apache web server.
[ec2-user ~]$ sudo service httpd start
Starting httpd:                                            [  OK  ]
- Use the chkconfig command to configure the Apache web server to start at each system boot.
[ec2-user ~]$ sudo chkconfig httpd on
Tip
The chkconfig command does not provide any confirmation message when you successfully enable a service. You can verify that httpd is on by running the following command.
[ec2-user ~]$ chkconfig --list httpd
httpd           0:off   1:off   2:on    3:on    4:on    5:on    6:off
Here, httpd is on in runlevels 2, 3, 4, and 5 (which is what you want to see).
- Test your web server. In a web browser, enter the public DNS address (or the public IP address) of your instance; you should see the Apache test page. You can get the public DNS for your instance using the Amazon EC2 console (check the Public DNS column; if this column is hidden, choose Show/Hide and select Public DNS).
Tip
If you are unable to see the Apache test page, check that the security group you are using contains a rule to allow HTTP (port 80) traffic. For information about adding an HTTP rule to your security group, see Adding Rules to a Security Group.
Important
If you are not using Amazon Linux, you may also need to configure the firewall on your instance to allow these connections. For more information about how to configure the firewall, see the documentation for your specific distribution.
Note
This test page appears only when there is no content in /var/www/html. When you add content to the document root, your content appears at the public DNS address of your instance instead of this test page.
Apache httpd serves files that are kept in a directory called the Apache document root. The Amazon Linux Apache document root is /var/www/html, which is owned by root by default.
[ec2-user ~]$ ls -l /var/www
total 16
drwxr-xr-x 2 root root 4096 Jul 12 01:00 cgi-bin
drwxr-xr-x 3 root root 4096 Aug 7 00:02 error
drwxr-xr-x 2 root root 4096 Jan 6 2012 html
drwxr-xr-x 3 root root 4096 Aug 7 00:02 icons
To allow ec2-user to manipulate files in this directory, you need to modify the ownership and permissions of the directory. There are many ways to accomplish this task; in this tutorial, you add a www group to your instance, and you give that group ownership of the /var/www directory and add write permissions for the group. Any members of that group will then be able to add, delete, and modify files for the web server.
To set file permissions
- Add the www group to your instance.
[ec2-user ~]$ sudo groupadd www
- Add your user (in this case, ec2-user) to the www group.
[ec2-user ~]$ sudo usermod -a -G www ec2-user
Important
You need to log out and log back in to pick up the new group. You can use the exit command, or close the terminal window.
- Log out and then log back in again, and verify your membership in the www group.
- Log out.
[ec2-user ~]$ exit
- Reconnect to your instance, and then run the following command to verify your membership in the www group.
[ec2-user ~]$ groups
ec2-user wheel www
- Change the group ownership of /var/www and its contents to the www group.
[ec2-user ~]$ sudo chown -R root:www /var/www
- Change the directory permissions of /var/www and its subdirectories to add group write permissions and to set the group ID on future subdirectories.
[ec2-user ~]$ sudo chmod 2775 /var/www
[ec2-user ~]$ find /var/www -type d -exec sudo chmod 2775 {} \;
- Recursively change the file permissions of /var/www and its subdirectories to add group write permissions.
[ec2-user ~]$ find /var/www -type f -exec sudo chmod 0664 {} \;
Now ec2-user (and any future members of the www group) can add, delete, and edit files in the Apache document root. You are ready to add content, such as a static website or a PHP application.
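The leading 2 in the chmod 2775 mode above is the setgid bit. On Linux filesystems, files and subdirectories created inside a setgid directory inherit the directory's group, which is exactly what keeps www ownership propagating as content is added. You can see the effect locally, without sudo, in a throwaway directory:

```shell
# Demonstrate setgid propagation (the "2" in 2775) in a temp directory.
umask 022
workdir=$(mktemp -d)
mkdir "$workdir/docroot"
chmod 2775 "$workdir/docroot"        # rwxrwxr-x plus the setgid bit
mkdir "$workdir/docroot/assets"      # created inside the setgid directory
parent_mode=$(stat -c '%a' "$workdir/docroot")
child_mode=$(stat -c '%a' "$workdir/docroot/assets")
echo "parent: $parent_mode  child: $child_mode"
rm -rf "$workdir"
```

With a 022 umask the new subdirectory comes out as 2755: group write is masked off by the umask, but the inherited leading 2 (setgid) is what keeps group ownership consistent.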
(Optional) Secure your web server
A web server running the HTTP protocol provides no transport security for the data that it sends or receives. When you connect to an HTTP server using a web browser, the URLs that you enter, the content of web pages that you receive, and the contents (including passwords) of any HTML forms that you submit are all visible to eavesdroppers anywhere along the network pathway. The best practice for securing your web server is to install support for HTTPS (HTTP Secure), which protects your data with SSL/TLS encryption.
For information about enabling HTTPS on your server, see Tutorial: Configure Apache Web Server on Amazon Linux to use SSL/TLS.
To test your LAMP web server
If your server is installed and running, and your file permissions are set correctly, your ec2-user account should be able to create a simple PHP file in the /var/www/html directory that will be available from the Internet.
- Create a simple PHP file in the Apache document root.
[ec2-user ~]$ echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
Tip
If you get a "Permission denied" error when trying to run this command, try logging out and logging back in again to pick up the proper group permissions that you configured in To set file permissions.
- In a web browser, enter the URL of the file you just created. This URL is the public DNS address of your instance followed by a forward slash and the file name. For example:
http://my.public.dns.amazonaws.com/phpinfo.php
You should see the PHP information page:
Note
If you do not see this page, verify that the /var/www/html/phpinfo.php file was created properly in the previous step. You can also verify that all of the required packages were installed with the following command (the package versions in the second column do not need to match this example output):
[ec2-user ~]$ sudo yum list installed httpd24 php56 mysql55-server php56-mysqlnd
Loaded plugins: priorities, update-motd, upgrade-helper
959 packages excluded due to repository priority protections
Installed Packages
httpd24.x86_64                     2.4.16-1.62.amzn1      @amzn-main
mysql55-server.x86_64              5.5.45-1.9.amzn1       @amzn-main
php56.x86_64                       5.6.13-1.118.amzn1     @amzn-main
php56-mysqlnd.x86_64               5.6.13-1.118.amzn1     @amzn-main
If any of the required packages are not listed in your output, install them with the sudo yum install package command.
- Delete the phpinfo.php file. Although this information can be useful to you, it should not be broadcast to the Internet for security reasons.
[ec2-user ~]$ rm /var/www/html/phpinfo.php
To secure the MySQL server
The default installation of the MySQL server has several features that are great for testing and development, but they should be disabled or removed for production servers. The mysql_secure_installation command walks you through the process of setting a root password and removing the insecure features from your installation. Even if you are not planning on using the MySQL server, performing this procedure is a good idea.
- Start the MySQL server.
[ec2-user ~]$ sudo service mysqld start
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK
To start mysqld at boot time you have to copy support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
...
Starting mysqld:                                           [  OK  ]
- Run mysql_secure_installation.
[ec2-user ~]$ sudo mysql_secure_installation
- When prompted, enter a password for the root account.
- Enter the current root password. By default, the root account does not have a password set, so press Enter.
- Type Y to set a password, and enter a secure password twice. For more information about creating a secure password, see http://www.pctools.com/guides/password/. Make sure to store this password in a safe place.
Note
Setting a root password for MySQL is only the most basic measure for securing your database. When you build or install a database-driven application, you typically create a database service user for that application and avoid using the root account for anything but database administration.
- Type Y to remove the anonymous user accounts.
- Type Y to disable remote root login.
- Type Y to remove the test database.
- Type Y to reload the privilege tables and save your changes.
- (Optional) Stop the MySQL server if you do not plan to use it right away. You can restart the server when you need it again.
[ec2-user ~]$ sudo service mysqld stop
Stopping mysqld:                                           [  OK  ]
- (Optional) If you want the MySQL server to start at every boot, enter the following command.
[ec2-user ~]$ sudo chkconfig mysqld on
You should now have a fully functional LAMP web server. If you add content to the Apache document root at /var/www/html, you should be able to view that content at the public DNS address for your instance.
(Optional) Install phpMyAdmin
phpMyAdmin is a web-based database management tool that you can use to view and edit the MySQL databases on your EC2 instance. Follow the steps below to install and configure phpMyAdmin on your Amazon Linux instance.
Important
We do not recommend using phpMyAdmin to access a LAMP server unless you have enabled SSL/TLS in Apache; otherwise, your database administrator password and other data will be transmitted insecurely across the Internet. For information about configuring a secure web server on an EC2 instance, see Tutorial: Configure Apache Web Server on Amazon Linux to use SSL/TLS.
- Enable the Extra Packages for Enterprise Linux (EPEL) repository from the Fedora project on your instance.
[ec2-user ~]$ sudo yum-config-manager --enable epel
- Install the phpMyAdmin package.
[ec2-user ~]$ sudo yum install -y phpMyAdmin
Note
Answer y to import the GPG key for the EPEL repository when prompted.
- Configure your phpMyAdmin installation to allow access from your local machine. By default, phpMyAdmin only allows access from the server that it is running on, which is not very useful because Amazon Linux does not include a web browser.
- Find your local IP address by visiting a service such as whatismyip.com.
- Edit the /etc/httpd/conf.d/phpMyAdmin.conf file to replace the server IP address (127.0.0.1) with your local IP address, using the following command. Replace your_ip_address with the local IP address that you identified in the previous step.
[ec2-user ~]$ sudo sed -i -e 's/127.0.0.1/your_ip_address/g' /etc/httpd/conf.d/phpMyAdmin.conf
- Restart the Apache web server to pick up the new configuration.
[ec2-user ~]$ sudo service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]
- Restart the MySQL server to pick up the new configuration.
[ec2-user ~]$ sudo service mysqld restart
Stopping mysqld:                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
- In a web browser, enter the URL of your phpMyAdmin installation. This URL is the public DNS address of your instance followed by a forward slash and phpmyadmin. For example:
http://my.public.dns.amazonaws.com/phpmyadmin
You should see the phpMyAdmin login page:
Note
If you get a 403 Forbidden error, verify that you have set the correct IP address in the /etc/httpd/conf.d/phpMyAdmin.conf file. You can see what IP address the Apache server is actually getting your requests from by viewing the Apache access log with the following command:
[ec2-user ~]$ sudo tail -n 1 /var/log/httpd/access_log | awk '{ print $1 }'
205.251.233.48
Repeat Step 3.b, replacing the incorrect address that you previously entered with the address returned here; for example:
[ec2-user ~]$ sudo sed -i -e 's/previous_ip_address/205.251.233.48/g' /etc/httpd/conf.d/phpMyAdmin.conf
After you've replaced the IP address, restart the httpd service with Step 4.
- Log into your phpMyAdmin installation with the root user name and the MySQL root password you created earlier. For more information about using phpMyAdmin, see the phpMyAdmin User Guide.
Related Topics
For more information on transferring files to your instance or installing a WordPress blog on your web server, see the following topics:
For more information about the commands and software used in this topic, see the following web pages:
- Apache web server: http://httpd.apache.org/
- MySQL database server: http://www.mysql.com/
- PHP programming language: http://php.net/
- The chmod command: https://en.wikipedia.org/wiki/Chmod
- The chown command: https://en.wikipedia.org/wiki/Chown
If you are interested in registering a domain name for your web server, or transferring an existing domain name to this host, see Creating and Migrating Domains and Subdomains to Amazon Route 53 in the Amazon Route 53 Developer Guide.
Linux AMI Virtualization Types
Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The main difference between PV and HVM AMIs is the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance.
For the best performance, we recommend that you use current generation instance types and HVM AMIs when you launch your instances. For more information about current generation instance types, see the Amazon EC2 Instances detail page. If you are using previous generation instance types and would like to upgrade, see Upgrade Paths.
For information about the types of the Amazon Linux AMI recommended for each instance type, see the Amazon Linux AMI Instance Types detail page.
HVM AMIs
HVM AMIs are presented with a fully virtualized set of hardware and boot by executing the master boot record of the root block device of your image. This virtualization type provides the ability to run an operating system directly on top of a virtual machine without any modification, as if it were run on the bare-metal hardware. The Amazon EC2 host system emulates some or all of the underlying hardware that is presented to the guest.
Unlike PV guests, HVM guests can take advantage of hardware extensions that provide fast access to the underlying hardware on the host system. For more information on CPU virtualization extensions available in Amazon EC2, see Intel Virtualization Technology on the Intel website. HVM AMIs are required to take advantage of enhanced networking and GPU processing. In order to pass through instructions to specialized network and GPU devices, the OS needs to be able to have access to the native hardware platform; HVM virtualization provides this access. For more information, see Enhanced Networking and Linux GPU Instances.
All current generation instance types support HVM AMIs. The CC2, CR1, HI1, and HS1 previous generation instance types support HVM AMIs.
To find an HVM AMI, verify that the virtualization type of the AMI is set to hvm, using the console or the describe-images command.
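As a concrete sketch of that describe-images check, the hypothetical helper below assembles a CLI query that filters Amazon-owned AMIs by virtualization type; it only prints the command so you can review it before running it against your account.

```shell
# ami_query_cmd builds (but does not run) a describe-images command that
# filters AMIs by virtualization type: "hvm" or "paravirtual".
ami_query_cmd() {
  printf 'aws ec2 describe-images --owners amazon --filters Name=virtualization-type,Values=%s --query "Images[].[ImageId,Name]" --output text\n' "$1"
}

ami_query_cmd hvm
ami_query_cmd paravirtual
```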
PV AMIs
PV AMIs boot with a special boot loader called PV-GRUB, which starts the boot cycle and then chain loads the kernel specified in the menu.lst file on your image. Paravirtual guests can run on host hardware that does not have explicit support for virtualization, but they cannot take advantage of special hardware extensions such as enhanced networking or GPU processing. Historically, PV guests had better performance than HVM guests in many cases, but because of enhancements in HVM virtualization and the availability of PV drivers for HVM AMIs, this is no longer true. For more information about PV-GRUB and its use in Amazon EC2, see PV-GRUB.
The C3 and M3 current generation instance types support PV AMIs. The C1, HI1, HS1, M1, M2, and T1 previous generation instance types support PV AMIs.
To find a PV AMI, verify that the virtualization type of the AMI is set to paravirtual, using the console or the describe-images command.
PV on HVM
Paravirtual guests traditionally performed better with storage and network operations than HVM guests because they could leverage special drivers for I/O that avoided the overhead of emulating network and disk hardware, whereas HVM guests had to translate these instructions to emulated hardware. Now these PV drivers are available for HVM guests, so operating systems that cannot be ported to run in a paravirtualized environment (such as Windows) can still see performance advantages in storage and network I/O by using them. With these PV on HVM drivers, HVM guests can get the same, or better, performance than paravirtual guests.
EC2 Run Command adds support for more predefined commands and announces open source agent
Posted On: Apr 4, 2016
Today we are excited to announce two new predefined commands for EC2 Run Command for your Windows instances. To learn more about how to use these new commands, please visit the user guide.
- On-demand patching: Using these commands, you will be able to scan to find out which updates are missing, and then install specific or all missing updates.
- Collecting inventory information: The inventory command lets you collect on-instance information such as operating system details, installed programs, and installed Windows Updates.
We are also making the EC2 Run Command Linux agent available as open source. The source code for the Amazon SSM Agent is available on GitHub. We encourage you to submit pull requests for changes that you would like to have included.
We launched EC2 Run Command in October 2015 to provide a simple way of automating common administrative tasks like installing software or patches, running shell commands, performing operating system changes and more. Run Command allows you to execute commands at scale and provides visibility into the results, making it easy to manage your instances.
To learn more, please visit the EC2 Run Command webpage and the user guide for Linux and Windows.
Running Services Using Docker and Amazon EC2 Container Service
In the previous post, we took a detailed look at the architecture underpinning the Amazon EC2 Container Service. Now that we understand important concepts like scheduling, state management, and resource allocations, let’s see these in action by actually running a service in ECS.
The first step in building an application designed to run on ECS is to package the application code into one or more Docker containers. As discussed in the first post, Docker containers are based on images, and images are defined in Dockerfiles. A Dockerfile is a text file that describes how to “build” the image. For example, if we want to run a WordPress application in ECS, the Dockerfile might look like this:
FROM php:5.6-apache
RUN a2enmod rewrite
# install the PHP extensions we need
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd
RUN docker-php-ext-install mysqli
VOLUME /var/www/html
ENV WORDPRESS_VERSION 4.1.1
ENV WORDPRESS_UPSTREAM_VERSION 4.1.1
ENV WORDPRESS_SHA1 15d38fe6c73121a20e63ccd8070153b89b2de6a9
# upstream tarballs include ./wordpress/ so this gives us /usr/src/wordpress
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
    && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
    && tar -xzf wordpress.tar.gz -C /usr/src/ \
    && rm wordpress.tar.gz
COPY docker-entrypoint.sh /entrypoint.sh
# grr, ENTRYPOINT resets CMD now
ENTRYPOINT ["/entrypoint.sh"]
CMD ["apache2-foreground"]
The Dockerfile is used to build an image, which will be stored in a repository. Again, as discussed in the first post, this is accomplished by running the docker build command. In order for ECS to access the image, it must be put in a publicly accessible Docker image repository. All images in DockerHub are available to ECS by default. So, for the purposes of our example using the Dockerfile defined above, we’ll use the WordPress image built from that Dockerfile that lives in the DockerHub repo.
And that’s basically about all there is to packaging up the application code. If it can run in a Docker container, then it can run in ECS. This portability of Docker containers is quite powerful. We can build, test, and debug our code on any machine capable of running Docker (which is any machine with a Linux kernel). When the code is ready, we can package it up into a Docker image by building the image from a Dockerfile and storing it in a repository.
Now that we’ve packaged up our code to run in a Docker container, we need to provide the compute resources required to run containers. In ECS, this is called a cluster, and it consists of EC2 instances called “container instances” that are running the ECS agent. To create an ECS cluster of container instances, we simply launch one or more EC2 instances using the Amazon ECS-Optimized Amazon Linux AMI. Any EC2 instance launched from this AMI will be automatically placed in the “default” ECS cluster — every AWS account will have a “default” ECS cluster for each region the service runs in. If you want to launch the instance into a different ECS cluster, simply create the cluster using the ECS console or the following AWS CLI command (Note: for this post, I’m using the CLI because the console didn’t exist until recently. But all the CLI commands listed here can easily be executed via the GUI in the console):
$ aws ecs create-cluster --cluster-name WordPress
Launching EC2 container instances into the cluster is as simple as launching an instance through the EC2 console. The instance will need to be associated with an IAM role that allows the agent running on the instance to make the necessary API calls to ECS. Details are documented here.
When launching the EC2 container instances into this cluster, include the following user-data script in the “advanced” section of the “instance details” page when launching EC2 instances from the console:
#!/bin/bash
echo ECS_CLUSTER=Wordpress >> /etc/ecs/ecs.config
When the EC2 instances launch, we should see that they are now associated with the “Wordpress” cluster:
$ aws ecs list-container-instances --cluster WordPress
{
    "containerInstanceArns": [
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/1ef890e2-a42f-4ed5-bff5-7b39edd66c9d",
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/315e5dbd-924b-4a86-9fa3-32ca3f7982b3",
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/52b5ef99-add7-4dc4-a7e3-49019e1b7c9e",
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/f00d41ad-043f-46c8-8437-cc4ea22aacf5",
        "arn:aws:ecs:us-west-2:xxxxx:container-instance/f7486c80-4b5f-4ba4-94db-2e238406bcc9"
    ]
}
We now have an ECS cluster capable of running Docker containers. The next step is to tell ECS how to run the containers that comprise our WordPress application. To do this, we use an entity called a "task definition." An ECS task definition can be thought of as a prototype for running an actual task; for any given task definition, there can be zero or more task instances running in the cluster. The task definition allows for one or more containers to be specified. For tasks consisting of more than one container, the dependencies between containers are expressed in the task definition. For example, if we want to run WordPress, we'd need both the WordPress container (described above) as well as a MySQL container. The ECS task definition would look like this:
{
    "containerDefinitions": [
        {
            "name": "wordpress",
            "links": [ "mysql" ],
            "image": "wordpress",
            "essential": true,
            "portMappings": [
                { "containerPort": 80, "hostPort": 80 }
            ],
            "memory": 500,
            "cpu": 10
        },
        {
            "environment": [
                { "name": "MYSQL_ROOT_PASSWORD", "value": "password" }
            ],
            "name": "mysql",
            "image": "mysql",
            "cpu": 10,
            "memory": 500,
            "essential": true
        }
    ],
    "family": "wordpress"
}
The ECS documentation describes in detail all of the task definition parameters. However, the ones to note here are the image and links parameter. This task definition includes two containers, wordpress and mysql. The image parameter is used to specify the name of the image for each container in DockerHub. The links parameter is what tells ECS that the wordpress container has a network dependency on the mysql container. So instead of having to manage the two containers required to run our WordPress application individually, we can instead treat the entire application as a single task definition.
Let’s go ahead and register this new task definition by saving the JSON to a file called ecs-wordpress-task-def.json and running this command:
$ aws ecs register-task-definition --family wordpress --cli-input-json file://./ecs-wordpress-task-def.json
The above task definition is actually all we need to execute our WordPress application in ECS. However, ECS defines another entity called a “service,” which is useful for long-running tasks, like web applications. A service allows multiple instances of a task definition to be run simultaneously. It also provides integration with the Elastic Load Balancing service. For this example, we’re not using ELB because each WordPress service instance contains both the web layer and the database. For a production deployment, the database would be stored on some sort of persistent storage, and shared with all the instances of the web layer behind the ELB. For the purposes of example, though, it still makes sense to schedule our WordPress task definition as a service. But, if we were running a different type of application — perhaps a command-line app that does batch processing — all we would need is the task definition and we could schedule tasks in ECS directly from that.
To launch our WordPress application as an ECS service, we need a service definition like the following:
{
    "cluster": "Wordpress",
    "serviceName": "wordpress",
    "taskDefinition": "wordpress:1",
    "loadBalancers": [],
    "desiredCount": 1
}
Let’s create the service using the following command:
$ aws ecs create-service --cluster WordPress --service-name wordpress --task-definition wordpress:1 --desired-count 1
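The service definition above is small enough to pass as individual flags, as we just did. As a sketch of how one might generate it programmatically (the helper function here is hypothetical, not an AWS API), the same shape can be built and serialized in Python:

```python
import json

def make_service_definition(cluster, service_name, task_definition, desired_count=1):
    """Build a minimal ECS service definition matching the shape used above.

    Hypothetical helper for illustration; ECS itself validates the real request.
    """
    if desired_count < 0:
        raise ValueError("desiredCount must be non-negative")
    return {
        "cluster": cluster,
        "serviceName": service_name,
        "taskDefinition": task_definition,  # "family:revision"
        "loadBalancers": [],  # empty: no ELB in this single-instance example
        "desiredCount": desired_count,
    }

definition = make_service_definition("WordPress", "wordpress", "wordpress:1")
print(json.dumps(definition, indent=2))
```

Writing that JSON to a file and passing it with --cli-input-json is an alternative to spelling out each flag on the command line.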
Again, the service definition parameters are defined in detail in the ECS documentation. But note that the service definition is where load balancers can be specified, which is one of the primary reasons why long-running tasks like web applications should be launched in ECS as a service. In a microservices architecture, each endpoint (or collection of related endpoints) can be defined as an ECS service, each managed independently of the others using different Docker images. In this scenario, ECS provides an extremely convenient way to deploy service endpoints.
If all goes well, we should now have a running instance of our WordPress application:
$ aws ecs list-tasks --cluster WordPress
{
    "taskArns": [
        "arn:aws:ecs:us-west-2:xxxxx:task/7af1a8c0-d199-47af-b05c-9d0496a9d97d"
    ]
}
$ aws ecs describe-tasks --cluster WordPress --tasks 7af1a8c0-d199-47af-b05c-9d0496a9d97d
{
    "failures": [],
    "tasks": [
        {
            "taskArn": "arn:aws:ecs:us-west-2:xxxxx:task/7af1a8c0-d199-47af-b05c-9d0496a9d97d",
            "overrides": {
                "containerOverrides": [
                    { "name": "mysql" },
                    { "name": "wordpress" }
                ]
            },
            "lastStatus": "RUNNING",
            "containerInstanceArn": "arn:aws:ecs:us-west-2:xxxxx:container-instance/1ef890e2-a42f-4ed5-bff5-7b39edd66c9d",
            "clusterArn": "arn:aws:ecs:us-west-2:xxxxx:cluster/WordPress",
            "desiredStatus": "RUNNING",
            "taskDefinitionArn": "arn:aws:ecs:us-west-2:xxxxxx:task-definition/wordpress:1",
            "startedBy": "ecs-svc/9223370607723201507",
            "containers": [
                {
                    "containerArn": "arn:aws:ecs:us-west-2:xxxxx:container/0d2073be-88be-4f54-a8e1-ed27f4daf90d",
                    "taskArn": "arn:aws:ecs:us-west-2:xxxxx:task/7af1a8c0-d199-47af-b05c-9d0496a9d97d",
                    "lastStatus": "RUNNING",
                    "name": "mysql",
                    "networkBindings": []
                },
                {
                    "containerArn": "arn:aws:ecs:us-west-2:xxxxx:container/83a63f47-b1ab-488e-87b7-923463c9072d",
                    "taskArn": "arn:aws:ecs:us-west-2:xxxxx:task/7af1a8c0-d199-47af-b05c-9d0496a9d97d",
                    "lastStatus": "RUNNING",
                    "name": "wordpress",
                    "networkBindings": [
                        {
                            "bindIP": "0.0.0.0",
                            "containerPort": 80,
                            "hostPort": 80
                        }
                    ]
                }
            ]
        }
    ]
}
From the above JSON, we can determine where the task is running by looking at the "containerInstanceArn" parameter. We can use this to determine the specific EC2 instance running our application's containers:
$ aws ecs describe-container-instances --cluster WordPress --container-instances 1ef890e2-a42f-4ed5-bff5-7b39edd66c9d
{
    "failures": [],
    "containerInstances": [
        {
            "status": "ACTIVE",
            "registeredResources": [
                {
                    "integerValue": 4096,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "CPU",
                    "doubleValue": 0.0
                },
                {
                    "integerValue": 7483,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "MEMORY",
                    "doubleValue": 0.0
                },
                {
                    "name": "PORTS",
                    "longValue": 0,
                    "doubleValue": 0.0,
                    "stringSetValue": [ "2376", "22", "51678", "2375" ],
                    "type": "STRINGSET",
                    "integerValue": 0
                }
            ],
            "ec2InstanceId": "i-8224cc75",
            "agentConnected": true,
            "containerInstanceArn": "arn:aws:ecs:us-west-2:xxxxx:container-instance/1ef890e2-a42f-4ed5-bff5-7b39edd66c9d",
            "pendingTasksCount": 0,
            "remainingResources": [
                {
                    "integerValue": 4076,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "CPU",
                    "doubleValue": 0.0
                },
                {
                    "integerValue": 6483,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "MEMORY",
                    "doubleValue": 0.0
                },
                {
                    "name": "PORTS",
                    "longValue": 0,
                    "doubleValue": 0.0,
                    "stringSetValue": [ "2376", "22", "80", "51678", "2375" ],
                    "type": "STRINGSET",
                    "integerValue": 0
                }
            ],
            "runningTasksCount": 1
        }
    ]
}
$ aws ec2 describe-instances --filters Name=instance-id,Values=i-8224cc75 | jq '.Reservations[].Instances[] | {PublicDnsName}'
{
    "PublicDnsName": "ec2-54-149-174-11.us-west-2.compute.amazonaws.com"
}
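The chain of lookups above (task, then container instance, then EC2 instance) is just JSON traversal, so it can be sketched locally. The following Python snippet uses abbreviated sample responses shaped like the CLI output above (the IDs are the ones from this walkthrough, but the data is inlined rather than fetched from AWS):

```python
import json

# Abbreviated sample responses, shaped like the CLI output above.
describe_tasks = json.loads("""
{"tasks": [{"containerInstanceArn":
  "arn:aws:ecs:us-west-2:xxxxx:container-instance/1ef890e2-a42f-4ed5-bff5-7b39edd66c9d"}]}
""")
describe_instances = json.loads("""
{"Reservations": [{"Instances": [
  {"InstanceId": "i-8224cc75",
   "PublicDnsName": "ec2-54-149-174-11.us-west-2.compute.amazonaws.com"}]}]}
""")

# The container-instance ID is the last path segment of the ARN.
arn = describe_tasks["tasks"][0]["containerInstanceArn"]
container_instance_id = arn.rsplit("/", 1)[-1]
print(container_instance_id)

# Given the EC2 instance ID (from describe-container-instances),
# the public DNS name comes from the describe-instances response.
dns = describe_instances["Reservations"][0]["Instances"][0]["PublicDnsName"]
print(dns)
```

In a real script, the two JSON documents would come from the corresponding aws CLI calls (or an SDK) rather than inline literals.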
If we log into the EC2 instance, we should see our running Docker containers:
$ ssh -i ~/.ssh/id_myKeyPair ec2-user@ec2-54-149-174-11.us-west-2.compute.amazonaws.com
[ec2-user@ip-10-0-0-115 ~]$ docker ps
CONTAINER ID        IMAGE                            COMMAND                CREATED             STATUS              PORTS                        NAMES
0f69a8ed2cf1        wordpress:4                      "/entrypoint.sh apac   32 minutes ago      Up 32 minutes       0.0.0.0:80->80/tcp           ecs-wordpress-1-wordpress-94e3ffd5aafdb8df5300
edcd8fe51c21        mysql:5                          "/entrypoint.sh mysq   32 minutes ago      Up 32 minutes       3306/tcp                     ecs-wordpress-1-mysql-ccdff1db88a4bed44b00
278fd30d86e5        amazon/amazon-ecs-agent:latest   "/agent"               2 hours ago         Up 2 hours          127.0.0.1:51678->51678/tcp   ecs-agent
Likewise, if the security group used to launch the EC2 cluster instances is set up to allow inbound access on port 80, we should be able to see our WordPress application running in the browser:

Conclusion
ECS is a sophisticated cluster management service that enables developers to harness the full power of Docker containers. Using ECS, engineers can take full advantage of the efficient development and test cycles made possible by the portability of Docker containers. Complex, distributed microservices architectures benefit from the isolation of the Docker execution environment. ECS allows distributed applications built using these architectures to run in a clustered computing environment under full control of the customer, but with the full benefits of a managed service. In this post, we explored some of the architectural principles of container-based cluster computing. In later posts, we'll explore some of the other advantages of deploying applications in a clustered environment. Until then, we encourage you to explore ECS and Docker further. Have fun!
Nate Slater
Solution Architect