We configure the webapps that run in our ECS cluster via an S3 bucket. Here’s how it works.
Services typically need runtime configuration that doesn’t get checked in to GitHub. This can range from environment-specific values (like URLs or API keys) to actual secrets (passwords).
We configure our container-based services using S3. We store files in special S3 buckets and directories, and when the containers start up they copy those files into themselves. We consistently use dotenv (for both Node and Ruby) so we can use `.env` files to set environment variables our apps will see.
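For instance, a service's `.env` file in the bucket might look like this (every variable name and value here is made up for illustration):

```
# Example .env file -- hypothetical names and values
NODE_ENV=staging
API_HOST=https://api.example.com
DATABASE_PASSWORD_KMS_ENCRYPTED=<base64 KMS ciphertext>
```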
The two buckets are called `cob-digital-apps-staging-config` and `cob-digital-apps-prod-config`. Within each are sub-directories for each of our services. Those sub-directories match the ECS service names.
The service directories can contain config files directly, and they can also have sub-directories with variant-specific configuration. We use variants in our staging environment for cases where we need different configurations. For example, Access Boston has “dev” and “test” variants that correspond to separate integration environments.
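As an illustration, the staging bucket might be laid out like this (the file and directory names below are examples, not a listing of the real bucket):

```
cob-digital-apps-staging-config/
  access-boston/   # matches the ECS service name
    .env           # config shared by every variant
    default/       # overrides for the unnamed default variant
    dev/           # overrides for the "dev" variant
    test/          # overrides for the "test" variant
```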
The buckets are configured to keep old versions of files, so we can recover from bad changes if we need to.
Our container-based apps in the CityOfBoston/digital monorepo have an `ENTRYPOINT` script that syncs in the contents of the appropriate S3 directories before running the server start command.
The script first syncs the service’s main directory (either staging or prod, whichever is appropriate). It then syncs the variant-specific directory, or `default` if the container is running as the unnamed default variant.
In this way, you can override the configuration for a variant on a file-by-file basis.
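That two-step sync can be sketched as a small shell function. This is a simplified sketch, not the actual `ENTRYPOINT` script: the `/app` destination and the argument names are assumptions.

```shell
#!/bin/sh
# Simplified sketch of the ENTRYPOINT config sync. The /app destination
# and the argument names are assumptions, not the real script's interface.

sync_config() {
  bucket="$1"             # cob-digital-apps-staging-config or ...-prod-config
  service="$2"            # matches the ECS service name
  variant="${3:-default}" # "default" when running as the unnamed variant

  # First pull the service's main directory...
  aws s3 sync "s3://${bucket}/${service}/" /app/
  # ...then overlay the variant directory, overriding file by file.
  aws s3 sync "s3://${bucket}/${service}/${variant}/" /app/
}

# In the real entrypoint, something like this runs before the server starts:
#   sync_config "$CONFIG_BUCKET" "$SERVICE_NAME" "$VARIANT"
#   exec "$@"
```

Because the variant sync runs second, any file present in the variant directory wins over the same file in the service's main directory.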
Some values, like database passwords or OAuth secrets, should not be stored in plaintext on disk. Each container-based service automatically gets a KMS keypair that only it has permission to decrypt with.
Our Node-based apps use the `srv-decrypt-env` module to automatically decrypt, at runtime, any environment variables whose names end with the `_KMS_ENCRYPTED` suffix. (We do not have a similar library for Ruby, so we don’t encrypt those passwords for our few Ruby apps.)
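The convention can be illustrated with a shell sketch. This is not the `srv-decrypt-env` implementation, just the general idea: a variable like `DATABASE_PASSWORD_KMS_ENCRYPTED` (a hypothetical name) holds base64-encoded KMS ciphertext, and decrypting it yields a plain `DATABASE_PASSWORD`.

```shell
#!/bin/sh
# Hypothetical illustration of the _KMS_ENCRYPTED convention -- NOT the
# actual srv-decrypt-env implementation. Given a variable name ending in
# _KMS_ENCRYPTED and its base64 ciphertext, decrypt via KMS and export
# the plaintext under the un-suffixed name.

decrypt_var() {
  name="$1"        # e.g. DATABASE_PASSWORD_KMS_ENCRYPTED
  ciphertext="$2"  # base64-encoded KMS ciphertext

  plain_name="${name%_KMS_ENCRYPTED}"  # strip the suffix
  plaintext="$(echo "$ciphertext" | base64 -d | aws kms decrypt \
    --ciphertext-blob fileb:///dev/stdin \
    --query Plaintext --output text | base64 -d)"
  export "${plain_name}=${plaintext}"
}
```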
See the “Encrypting service configuration for S3” guide for how to generate the encrypted values.
To connect, you’ll need an AWS CLI access key (an Access Key ID and Secret Access Key); see the “AWS CLI access key” guide if you don’t have one.
In Cyberduck’s preferences, you’ll need to enable encryption for communicating with S3. Choose “SSE-S3 (AES-256)” from the “Encryption” dropdown in the S3 section.
1. Open Cyberduck, and click the “Open Connection” button.
2. Select “Amazon S3” in the dropdown, and then enter your AWS CLI Access Key ID and Secret Access Key.
The AWS access key/secret pair is not the same thing as the credentials you use to log in to AWS in your web browser.
3. Double-click on either `cob-digital-apps-prod-config` or `cob-digital-apps-staging-config`, depending on whether you want to edit prod or staging.
4. Double-click on the service whose configuration you’d like to change. Some staging services have “variants” (for example, Access Boston has both “dev” and “test” configurations to match the IAM team’s integration environments). Click into a variant if appropriate.
5. Choose “Show Hidden Files” from the “View” menu so that files starting with a `.` (such as `.env`) are visible. Note that this will also show previous versions of files.
6. Press the Refresh button. This is incredibly important. Otherwise you may end up editing an older version of a file and overwrite newer changes.
7. Right-click on the most recent version of `.env` and edit it. When you save in your editor, Cyberduck will update the S3 bucket automatically.
If you’re adding a secret to the config file, see the “Encrypting service configuration for S3” guide.
Don’t forget to always press Refresh before editing! If you mess up and forget, you can usually go back and edit a previous version and re-save it to make it the most recent version, then re-apply your changes.
8. Once you’ve updated the configuration, you’ll need to restart the ECS tasks for the service, since they only get the latest configuration on startup. See the “Restarting an ECS service” guide.
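If you prefer the AWS CLI, forcing a new deployment makes ECS launch fresh tasks, which re-run their startup sync and pick up the new config. A sketch; the cluster and service names below are placeholders, not our real names:

```shell
#!/bin/sh
# Sketch: force ECS to start fresh tasks so they pull the latest config
# from S3 on startup. Cluster and service names are placeholders.

restart_service() {
  aws ecs update-service \
    --cluster "$1" \
    --service "$2" \
    --force-new-deployment
}

# Example (hypothetical names):
#   restart_service apps-staging access-boston
```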