For web services that will run in containers, our system expects the following 2 behaviors:
It listens for TLS on its $PORT
It responds with a 200 to requests for /admin/ok
For static apps that will be served from S3, it’s enough that there’s a script to generate the HTML and JS files.
In general, new services should be written in the CityOfBoston/digital monorepo. This lets us trivially share code, configuration, and conventions that make our apps faster to write and easier to maintain.
We don’t have templates for new services, so your best bet is to copy the most recently written service that is similar to yours. For web applications, choose registry-certs
or access-boston
. For non-UI HTTP services, payment-webhooks
. For a statically-generated site, public-notices
.
Follow the existing conventions for naming the service’s directory and the name
property in package.json
, as the deployment system makes assumptions about where things are located and what they’re named based on this “short name”.
It’s best to do a find-and-replace on the old service’s name in the new directory so that you don’t miss anything.
You’ll need to update our Terraform configuration for the new service. In the CityOfBoston/digital-terraform repo, copy the service_
file that matches the app you copied when setting up the repository and change all the names.
If it’s a while until the public launch, you may want to comment out the production parts of the service.
Besides adding the service, you’ll need to modify terraform.tfvars
to add variants to the service_variants
variable (even if it’s just to put default
in there) and add lines into staging_listener_rule_priorities
for each variant. If you’re setting up production now, you’ll need a prod_listener_rule_priorities
entry as well.
See the Making changes with Terraform guide for more information about opening a PR and using Atlantis.
Once you have a skeleton of an app, it’s time to get it running on the ECS clusters.
You’ll likely need to add an appropriate .env
file to configure your service. See Service Configuration for more info about our S3 configuration buckets. Since the app is new, you’ll need to create the service subdirectory. Keep the name consistent.
The GitHub webhooks and Slack bot will prompt you to deploy if they see changes on a branch that begins with staging/
. To do the first staging deploy, check in your code locally, and then run:
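The exact command isn’t shown here; a minimal sketch, assuming your service’s short name is app-name and the staging/ branch convention described above:

```
# Push the current commit to a staging/<short name> branch to trigger a staging deploy
git push origin HEAD:staging/app-name
```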
If you’re using variants, use the branch name staging/app-name@variant
instead.
Shippy-Toe should chime in on #digital_builds and offer to deploy to staging. Tell her to do it! She’ll kick off a CodeBuild process to do the deployment.
If you’re staging an S3-based service (like Public Notices), you will be able to see it at https://apps.digital-staging.boston.gov/app-name
.
If your service is container-based, you’ll need to update the ECS service to increase the Number of tasks from 0 to 1. Do this via the ECS web console. Once you’re done, your app will be available at app-name.digital-staging.boston.gov
or app-name-variant.digital-staging.boston.gov
. (There’s no additional DNS setup needed.)
When you’re ready for production, don’t forget to create configuration files in the prod config bucket. While you hopefully won’t have the same secrets between staging and prod, if you do you’ll need to re-encrypt them because the staging and production services have separate encryption keys.
Shippy-Toe prompts you to do a production deploy (in the #digital_builds channel in slack) when a Travis run completes on develop
. Shippy-Toe works out what requires deployment by checking for changes to files in a service’s dependencies that are out-of-sync with that service’s production/*
branch.
First time deploy for a new app/service.
The easiest way to do the very first production push for a service is to make a production/app-name
branch off of develop
before you make your last set of changes to the app. Make sure you push it up to GitHub and merge to develop
. (This won’t deploy your app yet.)
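A minimal sketch of creating that branch, assuming app-name is your service’s short name:

```
git checkout develop
git pull
git checkout -b production/app-name
git push -u origin production/app-name
```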
Make your last PR for the service in a new branch (off the develop
branch) and once it is approved, merge it to develop.
Once Travis passes, Shippy-Toe will prompt you for a production push (deploy) for your app/service. This will deploy your app to AWS.
As with staging, you’ll need to increase the number of tasks from 0. For most web services, you’ll bring it up to 2, one for each availability zone.
If you’re hosting your app at a subdirectory of apps.boston.gov
, it will be available there. If it’s on its own subdomain, you’ll need to get a DNS change done to point at the ELB.
This is a new deploy document covering deploy processes for apps started or converted after August 2021.
For web services that will run in containers, our system expects the following 2 behaviors:
It listens for TLS on its $PORT
It responds with a 200 to requests for /admin/ok
Services are deployed into the Elastic Container Services (ECS) environments by cloning a tagged image held in the Elastic Container Register (ECR).
Our images are stored in the AWS US-East-1 ECR.
You will first need to create a new private repository using the AWS console or the AWS CLI. The repository name should follow a convention: the environment (stage/prod) and service (e.g. access-boston etc) that is contained within the image. For example cob-digital-apps-prod/access-boston
.
Repo Creation Notes
The new repository name must start with a letter and can only contain lowercase letters, numbers, hyphens (-), underscores (_), and forward slashes (/).
AWS refers to the cob-digital-apps-prod/ part of the name as a namespace "to group the repository into a category".
The digital monorepo deploy process presently requires one of two namespaces: cob-digital-apps-prod
and cob-digital-apps-stage
. You do not need to follow this convention, but it is useful if your images need to be different for stage and prod.
Using AWS CLI to create a new repo:
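A sketch using a hypothetical service name my-service (substitute your service’s short name and the stage or prod namespace as appropriate):

```
aws ecr create-repository \
  --repository-name cob-digital-apps-stage/my-service \
  --region us-east-1
```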
AWS CLI access key
In Cyberduck’s preferences, you’ll need to enable encryption for communicating with S3. Choose “SSE-S3 (AES-256)” from the “Encryption” dropdown in the S3 section.
1. Open Cyberduck, and click the “Open Connection” button.
2. Select “Amazon S3” in the dropdown, and then enter your AWS CLI Access Key ID and Secret Access Key.
The AWS access key/secret pair is not the same thing as the credentials you use to log into AWS via web browser.
3. Double-click on either cob-digital-apps-prod-config
or cob-digital-apps-staging-config
, depending on whether you want to edit prod or staging.
4. Double click on the service whose configuration you’d like to change. Some staging services have “variants” (for example, Access Boston has both “dev” and “test” configurations to match the IAM team’s integration environments). Click into a variant if appropriate.
5. Choose “Show Hidden Files” from the “View” menu so that files starting with a .
(such as .env
) are visible. Note that this will also show previous versions of files.
6. Press the Refresh button. This is incredibly important. Otherwise you may end up editing an older version of a file and overwrite newer changes.
7. Right-click on the most recent version of .env
and edit it. When you save in your editor, Cyberduck will update the S3 bucket automatically.
If you’re adding a secret to the config file, see the Encrypting service configuration for S3 guide.
Don’t forget to always press Refresh before editing! If you mess up and forget, you can usually go back and edit a previous version and re-save it to make it the most recent version, then re-apply your changes.
8. Once you’ve updated the configuration, you’ll need to restart the ECS tasks for the service, since they only get the latest configuration on startup. See the Restarting an ECS service guide.
This covers deploying (and using) the deploy tool.
The application uses several AWS resources, including Lambda functions and an EventBridge Rule. These resources are defined in the template.yaml
file in the project.
To develop and deploy this application, you will need the Serverless Application Model Command Line Interface (SAM CLI).
The SAM CLI is an extension of the AWS CLI that adds functionality for building and testing Lambda applications
To use the SAM CLI, you need the following tools.
SAM CLI - Install the SAM CLI
Docker - Install Docker community edition
You will also need to:
You can also download and use the AWS Toolkit plug-in in your preferred integrated development environment (IDE). The AWS Toolkit provides extensions to use the SAM CLI, and also adds step-through debugging experience for Lambda function code.
See the following links to get started with your preferred IDE:
To clone the cob_ecrDeploy source code to your local machine you need the following tools.
git
AWS Credentials
To clone the repo you will need to have an SSH Key registered in your AWS User account.
Then the repo can be cloned into any directory with this command:
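The original command isn’t reproduced here; a sketch assuming the repo lives in CodeCommit in us-east-1 under the name cob_ecrDeploy and your SSH key is configured for CodeCommit (the exact repository URL may differ):

```
git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/cob_ecrDeploy
```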
The code contained within the repo needs to be built before it can be run, deployed or tested.
Build the application with the sam build
command from the root folder of the locally cloned cob-ecr-deploy repo.
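For example, from the root folder of the cloned repo:

```
sam build
```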
The SAM CLI installs dependencies defined in ecrDeploy_function/requirements.txt
, creates a deployment package, and saves it in the .aws-sam/build
folder.
The actual work of deploying an ECR image is performed by the lambda_handler()
function in the app.py
file in the ecrDeploy_function/ecrDeploy
folder.
The lambda_handler()
function is where you will likely need to make changes to the function's interaction with ECS: i.e. stopping and starting tasks.
The EventBridge rule which triggers the Lambda function is defined within the template.yaml
file.
You could also just modify EventBridge Rules in the AWS Console, but if you do that then any future deployment of this app may reset those changes.
The IAM permissions for the Lambda function are defined in template.yaml
and can be changed there.
You could also just modify the Lambda (or roles/policies in IAM) in the AWS Console, but if you do that then any future deployment of this app may reset those changes.
Environment variables control the passing of variables into the application (tags/cluster-names etc). These can be changed in the template.yaml
file.
You could also just modify the Lambda in the AWS Console, but if you do that then any future deployment of this app may reset those changes.
After making changes to app.py
or template.yaml
(or any other file in the project) save the files and test (see running and debugging sections).
When satisfied, commit and push the changed code into the repository and then re-build and re-deploy the app using the SAM CLI (see next sections).
Test a single function by invoking it directly with a test event. An event is a JSON document that represents the input that the function receives from the event source. Test events are included in the events
folder in this project.
Run functions locally and invoke them with the sam local invoke
command.
Check the event/event.json file before running the command below - it will cause the image:tag specified to be deployed.
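A sketch; the function’s logical name is assumed to be ecrDeployFunction (matching the build folder mentioned later) and the event file path may differ in your checkout:

```
sam local invoke ecrDeployFunction --event events/event.json
```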
Note: This command will start a lambda-like docker container loaded with an AWS supplied Python image and the built app loaded into it.
The app will automatically run as if the event loaded in event.json
had fired, without any delay or any user input.
With the AWS Toolkit, your IDE can be used to add breakpoints and step through code line by line. See the box-out above for links to installation and debugging instructions for your IDE.
The application template uses AWS Serverless Application Model (AWS SAM) to define application resources. AWS SAM is an extension of AWS CloudFormation with a simpler syntax for configuring common serverless application resources such as functions, triggers, and APIs. For resources not included in the SAM specification, you can use standard AWS CloudFormation resource types.
The application uses several AWS resources, including Lambda functions and an EventBridge Rule. These resources are defined in the template.yaml
file in the project.
To simplify troubleshooting, SAM CLI has a command called sam logs
. sam logs
lets you fetch logs generated by your deployed Lambda function from the command line. In addition to printing the logs on the terminal, this command has several nifty features to help you quickly find the bug.
NOTE
: This command works for all AWS Lambda functions; not just the ones you deploy using SAM.
You can find more information and examples about filtering Lambda function logs in the SAM CLI Documentation.
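For example (the function and stack names are assumptions; substitute the values from your samconfig.toml):

```
sam logs -n ecrDeployFunction --stack-name cob-ecr-deploy --tail
```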
Tests are defined in the tests
folder in this project. Use pip to install pytest and run the unit tests.
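For example:

```
pip3 install pytest
python3 -m pytest tests/ -v
```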
Deployment must be performed using the SAM CLI. SAM CLI will use Docker to run the repo's functions in an Amazon Linux environment that matches Lambda.
The deployment settings are defined in the file samconfig.toml
which is stored in the root of the repository.
To build and deploy your application, run the following in your shell, from the root of the repo:
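Since the deployment settings live in samconfig.toml, a plain build and deploy should pick them up:

```
sam build
sam deploy
```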
To delete the application after you have created it, use the AWS CLI or the Cloud Formation pages in the AWS Console.
AWS CLI - Assuming the samconfig.toml
file has not been used to change the stack-name
, you can run the following:
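A sketch; the stack name here is an assumption, so check samconfig.toml for the actual stack-name value:

```
aws cloudformation delete-stack --stack-name cob-ecr-deploy
```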
From time to time, the python package versions and dependencies should be checked and updated as needed.
To do this, you can specify actual package versions in requirements.txt
and then run
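For example (the requirements path follows the folder mentioned above):

```
pip3 install -r ecrDeploy_function/requirements.txt --upgrade
```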
You can update a specific package with this command:
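For example, with a hypothetical package name:

```
pip3 install --upgrade boto3
```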
You can check dependencies with this command:
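A likely candidate is pip’s built-in dependency checker:

```
pip3 check
```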
Your IDE may also be able to identify packages which should/could be updated. Or you can use this command
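(referenced again below):

```
pip3 list --outdated
```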
Take care with updating packages.
Only packages which appear in the folder .aws-sam/build/ecrDeployFunction
need to be manually managed; there are many other packages pre-loaded within the AWS-supplied container.
Ideally, the pip3 list --outdated
command should be run inside the container to get an accurate picture across all Python dependencies.
See the AWS SAM developer guide for an introduction to SAM specification, the SAM CLI, and serverless application concepts.
Next, you can use AWS Serverless Application Repository to deploy ready to use Apps that go beyond hello world samples and learn how authors developed their applications: AWS Serverless Application Repository main page
See the README.md in CityOfBoston/digital-terraform.
To access the AWS resources (e.g. EC2 devices) you first need to SSH into the AWS environment.
You can access the SSH Bastion from the City Hall network (140.241.0.0/16
) if you have an SSH key on your AWS account and are in the SshAccess
IAM group.
Request an AWS Admin to add you to the SshAccess
IAM group.
From the IAM console, upload a public key for your account
Edit your /etc/hosts
to add the following line: 35.169.164.239 apps-bastion
Initialize your account on the bastion by SSHing without a public key: ssh -o PubkeyAuthentication=no <username>@apps-bastion
Note: your bastion username is the bit before @boston.gov
on your account name.
Control-C out when it asks for a password.
SSH in with your public key: ssh -A <username>@apps-bastion
(the -A forwards the SSH agent, which is important for SSH'ing on to the instances.)
From the Bastion, you can get to the EC2 instances which host the ECS services.
Request that the AWS Admin share the ec2-user private keys and passwords with you via Dashlane. There are two keys, one for production and one for staging. Save whichever you need, or both, into your ~/.ssh
folder.
Ensure the permissions on the private key file/s are set to 600 (chmod 600 xxxx
)
Note the Private IPv4 address
of the EC2 instance from the EC2 instances page in the AWS console - this will be 10.40.15.x
for staging and 10.40.115.x
for production.
There are two production instances; you can use either.
These IPAddresses change after each deployment, so check regularly.
Once you have successfully SSH'd onto the bastion (#6 in Step 1 above), you will be able to ssh onto the instance ssh ec2-user@<ipaddress>
Once you’re on a container instance (#4 step 2 above), you can use docker
commands to inspect containers
. For example, some useful commands are:
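A few standard Docker commands (these are generic; the container IDs/names will be the running task containers on that instance):

```
docker ps                           # list running containers
docker stats --no-stream            # snapshot of CPU/memory per container
docker logs <container-id>          # view a container's recent output
docker exec -it <container-id> sh   # open a shell inside a container
```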
Outside of the containers, the ec2-user
account can use sudo -s
to open up a shell with root access.
The Digital Team uses Terraform to manage the AWS configuration.
Terraform is a CLI utility that synchronizes AWS with scripts. In essence, it uses a series of scripts to detect and make changes to AWS. Terraform commands are run from a terminal session on a machine with Terraform installed. See the Terraform documentation for more detail.
Introduction to how the Digital webapps infrastructure is set up in AWS
The Digital team uses Docker via Amazon’s Elastic Container Service to deploy its webapps. We migrated to AWS from Heroku primarily so we could establish a VPN connection to internal City databases (such as those used for boards and commissions applications and the Registry certificate ordering).
Our production cluster runs two copies of each app, one in each of two AZs. This is more for resilience against AZ-specific failures than for sharing load.
Almost all of our AWS infrastructure is described by and modified using our Terraform configuration.
The webapps that the City has developed so far are extremely small and low-traffic. Docker containers let us pack a few machines with as many webapps as we can; right now we’re limited only by memory. Docker keeps these apps isolated from each other. It also makes it easy to do rolling, zero-downtime deployments of new versions.
The typical limitations of Docker (stable storage is a pain, as is running related processes together, loss of efficiency for high loads) are not concerns for the types of apps we’re building.
Amazon’s ECS, along with its Application Load Balancers, handle restarting crashed jobs and routing traffic to the containers.
Our app containers are run on EC2 instances that live in four private subnets (2 AZs × 2 environments). These instances do not have public IPs and therefore cannot communicate directly with the public internet, which gives us some level of safety through isolation.
These ECS cluster instances receive traffic from Amazon’s ALB load balancers, which live in corresponding public subnets. They can contact public web services through NAT gateways, which also live in the public subnets. The ECS cluster instances also have access to internal City datacenters through the VPN gateway.
The instances are further isolated by having security groups that only allow traffic from the security groups of their corresponding ALBs (and SSH traffic from the bastion instance).
The VPN gateway connects from our VPC to the City datacenter. It has two connections running simultaneously for redundancy. AWS VPNs need to have regular traffic to keep them active, and if they do disconnect they need traffic from outside AWS to cause them to come back online.
We have a SiteScope rule set up with the CoB network team that pings an EC2 instance inside of our VPC. (Currently this EC2 instance does not seem to be created via Terraform.) This rule does a ping every few minutes, which keeps traffic running on the connection and also will bring it back up if it does go down.
Additionally, we have a CloudWatch alarm that fires if one or both of the VPN connections goes down. If one has gone down traffic should still be flowing over the other, and usually it will come back up of its own accord. Contact NOC if there are issues.
In general, you should not need to SSH on to the cluster instances. Definitely not for routine maintenance (do that through an ECS task if you need that kind of thing). It may be necessary to troubleshoot and debug issues, however.
Instructions for how to SSH on to our bastion machine using an SSH key loaded into your IAM account, and from there how to SSH on to a cluster instance, are in the digital-terraform’s README.md file.
How to update the AMI on our ECS cluster instances
The Digital webapps cluster uses the Elastic Container Service on AWS. We have a handful of EC2 instances that actually host the containers.
These instances use a stock Amazon Machine Image (AMI) from Amazon designed for Docker that comes with the ECS agent pre-installed. From time to time, Amazon releases a new version of this “ECS-optimized” image, either to upgrade the ECS agent or the underlying OS.
This process is sometimes referred to as “rolling” the cluster, though it’s more accurate to say we set up a second cluster of machines and migrate to it.
Update the instance_image_id
value for the staging_cluster
module to the new AMI ID from step 1 above. Save/commit the file as a new branch, not directly to the production
branch.
Make a PR which merges the new branch into the production
branch, and assign a person to review the changes.
After viewing the plan, if you need to update the terraform scripts, be sure to save the changes to the new branch.
If committing your changes does not trigger the atlantis plan automatically, you can run it manually by creating a new comment with atlantis plan
.
Keep an eye on the “ECS Instances” tab in the cluster’s UI. You should see the “Running tasks” on the draining instance(s) go down, and go up on the new instances.
Once all the tasks have moved, the old instance(s) will terminate and Terraform will complete. Check a few URLs on staging to make sure that everything’s up-and-running.
Now that Atlantis’s apply has finished, you can merge the staging PR and repeat the process (steps 2-6) for the production cluster.
Ensure your cloned copy of the digital-terraform
repository is on the production
branch, and that the branch is up to date with the origin on GitHub.
Create a new branch from the production
branch.
In your preferred IDE open the /apps/clusters.tf
file and update the instance_image_id
value for the staging_cluster
module to the new AMI ID from step 1 above. Save/commit the file to the new branch (not directly to the production
branch).
In a terminal/shell, from the repo/apps/
folder, run the command:
terraform plan
When the plan is done, inspect the output and expect to see changes to:
- resource "aws_autoscaling_group" "instances"
- resource "aws_cloudwatch_metric_alarm" "low_instance_alarm"
- resource "aws_launch_configuration" "instances" (This last one will have the new AMI guid)
Any other changes the plan identifies should be carefully investigated. Terraform may be proposing to make changes to the AWS environment you don't want, or at least are not expecting.
Keep an eye on the “ECS Instances” tab in the cluster’s UI. You should see the “Running tasks” on the draining instance(s) go down, and go up on the new instances.
Once all the tasks have moved, the old instance(s) will terminate and Terraform will complete. Check a few URLs on staging to make sure that everything’s up-and-running.
Now that terraform's apply is finished, you can repeat the process (steps 2-9) for the production cluster.
Finally you should merge the changes in your new (local) branch into the local production
branch, and then push your local production
branch to the origin in Github.
How to encrypt and use a .env file for a service hosted on S3.
Apps are configured by putting files in a secure configuration bucket on S3. The ENTRYPOINT
script for our apps pulls all files in from the app’s path in the bucket before starting up. This allows an app to be securely configured with a .env
file and, for example, server.crt
and server.key
files for TLS connections.
An AWS CLI login (this is different from the AWS account you can log into via the web)
kms:Encrypt
permissions
Though .env
files are stored encrypted in S3 and only transferred securely, we still encrypt environment variables like passwords so that they are not seen in plain text when editing the .env
files.
Each service is configured to have its own private key in the Amazon KMS keystore. Only the task role may decrypt with that key.
Adding the _KMS_ENCRYPTED
suffix to an environment variable’s name in the .env
file will cause the task to decrypt the variable at runtime, storing it in process.env
after stripping the suffix.
To create an encrypted environment variable value:
Visit the “Customer managed keys” section of the KMS part of the AWS web console.
Look for the Key ID for your service. Save it in the $CONFIG_KEY_ID
environment variable.
Log in to AWS CLI with an account that has kms:Encrypt
permissions for the key.
Run the following command with a leading space so that it doesn’t appear in your command history: aws kms encrypt --output text --key-id $CONFIG_KEY_ID --plaintext STR-TO-BE-ENCRYPTED
Copy the encrypted value (the output up to the whitespace) as the value in .env
: PASSWORD_KMS_ENCRYPTED=…
Note that using --plaintext
on the command line will cause aws kms
to encrypt the ASCII as-is. When using the fileb://
form to reference a file on disk, aws kms
will first Base64 encode the value, which will cause a failure on the app side, which does not expect Base64-encoded values.
Decrypting a variable which was encrypted using the method above is possible using the following commands in a terminal session:
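The original commands aren’t reproduced here; a sketch, assuming a CLI where --ciphertext-blob expects binary input (decode the Base64 value from the .env file first):

```
# Paste the value of PASSWORD_KMS_ENCRYPTED (Base64) into a temp file and decode it
echo "PASTE_ENCRYPTED_VALUE_HERE" | base64 --decode > /tmp/ciphertext.bin

# Decrypt, then decode the returned plaintext
aws kms decrypt --ciphertext-blob fileb:///tmp/ciphertext.bin \
  --output text --query Plaintext | base64 --decode
```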
If you have multiple profiles on your computer, you may use this option in the aws kms decrypt command:
--profile=myprofile
How to restart an ECS service when you change its configuration.
When you update a service’s configuration in S3 you’ll need to manually restart it to pick up the file changes. Because we do rolling ECS updates, you can do this without dropping traffic.
You will need to have an AWS Console account.
First, visit the ECS page on AWS and choose your cluster (AppsStaging or AppsProd).
Then, click the checkbox next to the service you want to restart and press the Update button.
Don’t touch any other settings, but make sure to click the Force new deployment checkbox. That will start up new containers, even though the code hasn’t changed from what’s currently running.
Click Next step through all of the screens, and then click Update service.
Navigate to the service’s “Events” tab and keep an eye on things. You should see it start new tasks and eventually deregister and stop the old tasks. Once it says “…has reached a steady state” again then you know things were successful.
Guide to mounting an s3 bucket via SFTP as a drive on your computer
PREP
Check/Create SSH/RSA Keys
The RSA keys should not have passphrases; create new keys (without a passphrase) if the user's current keys were set up with one
Setup SFTP Account on AWS
If you’re not an Admin, ask one (David, Phill) to create your account
Add the user's SSH/RSA key to their FTP account
Make sure the user you use in your computer is an admin on that computer
We’ll need to run commands under `sudo`
SETUP
Install FUSE
Install SSHFS
Restart the computer
Open the Terminal App, can be found in the Applications Folder under Utilities
Check that sshfs is installed with this command: ```sshfs --help```
Create two directories, `~/mnt` and `~/mnt/patterns`:
mkdir ~/mnt
mkdir ~/mnt/patterns
Locate the SSH/RSA Keys
This is probably at `~/.ssh/`,
Save/copy the user's FTP account username (ask an AWS admin if you don’t have it)
Try connecting/mounting the Drive with this command, replacing RSA Key and Username with the values from the previous two steps
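A sketch of the mount command; the key path, username, and SFTP endpoint are placeholders you must substitute:

```
sshfs -o IdentityFile=/path/to/private_key <username>@<sftp-endpoint>:/ ~/mnt/patterns
```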
This should work; if not, troubleshoot by looking at the logs from the previous command (#1)
Now that you are able to mount, it’s time to create a Bash script that will run when the user logs in.
Using a text or code editor, create a bash file at ```/Library/Startup.sh```
Copy the code below into the file
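The original script isn’t reproduced here; a minimal sketch, reusing the placeholders from the manual mount above:

```
#!/bin/bash
# Mount the SFTP drive at login (substitute the key path, username, and endpoint)
sshfs -o IdentityFile=/path/to/private_key <username>@<sftp-endpoint>:/ ~/mnt/patterns
```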
From the Terminal app, make the file executable: ```chmod +x /Library/Startup.sh```
Locate this file in `Finder`, then right-click on it and select the 'Get Info' option. Under 'Open with:', choose Terminal (it can be found under 'Applications > Utilities') and set the 'Enable' drop-down to 'All Applications'
Open up “System Preferences” and go to “Users & Groups”
Switch to the “Login Items” tab, unlock the ability to edit these settings by clicking the Padlock in the bottom left.
Use the “+” button to add a new action in “Login Items”, this will open up a file browser window.
Use the File Browser to locate the “Startup.sh” file we created in the “Library” and select it.
Use the Apple icon on the top left of the screen to “Log Out”
When you sign in again open up a “Finder” window and check if the drive mounted at ~/mnt/patterns
Debug Tips
Thanks to the way our clusters are set up, updating the cluster EC2 images is a zero-downtime process. Nevertheless, it’s best to run this during the weekly digital maintenance window, and make sure that staging looks good before doing it on production.
Find the latest ID for the ECS-optimized AMI. You can do this on the AWS documentation page for ECS-optimized AMIs.
In a browser, navigate to the CityOfBoston/digital-terraform repository and edit the apps/clusters.tf file.
When you make the PR, GitHub will automatically execute an atlantis plan
process.
When the plan is done, inspect the output and expect to see changes to:
- resource "aws_autoscaling_group" "instances"
- resource "aws_cloudwatch_metric_alarm" "low_instance_alarm"
- resource "aws_launch_configuration" "instances" (This last one will have the new AMI guid)
Any other changes the plan identifies should be carefully investigated.
Terraform may be proposing to make changes to the AWS environment you don't want, or at least are not expecting.
Once the atlantis plan is finished and the PR has been approved, create a new comment: atlantis apply.
This will cause Atlantis to apply changes to AWS. (Atlantis runs a terraform apply
command in a background process).
If you have Terraform installed on your local computer, you can do the update directly from your computer.
Find the latest ID for the ECS-optimized AMI. You can do this on the AWS documentation page for ECS-optimized AMIs.
Once you are happy with the changes that terraform will apply to the AWS environment, you can run the command:
terraform apply
After the production instances are fully up, check that they have roughly equal “Running tasks” numbers. ECS should schedule duplicate tasks on separate machines so that they are split across AZs. If you see a service has both of its tasks on the same instance you can run a force deployment to restart it. (See the Restarting an ECS service guide.)
Terraform is a CLI utility that synchronizes AWS with scripts. In essence, it uses a series of scripts to detect and make changes to AWS. Terraform commands are run from a terminal session on a machine with Terraform installed.
Atlantis provides a GitHub provisioned wrapper for Terraform: it runs terraform plan
and terraform apply
commands from GitHub and posts the results back to GitHub.
Atlantis is a small application which the Digital team has installed in a very small serverless environment in AWS (Fargate). It runs in Fargate because it restarts the staging and production containers and therefore cannot run on any of the main EC2 instances.
The package installed and
Download FUSE & SSHFS from
If this doesn’t work try ```sftp -o IdentityFile=RSAPublicKeyLocation ```
Emotion is a tool for doing CSS-in-JS • https://emotion.sh
Emotion is our styling tool of choice for webapps. Use Emotion if you need CSS styles that aren’t in or appropriate to put in Fleet.
CSS-in-JS is attractive for us because it lets us write CSS that is co-located with our React components. Emotion styles are scoped to the component where they’re used, so it’s very easy to reason about the effects changing the styles would cause.
We picked Emotion over Next.js’s default styled-jsx because the latter is still reliant on user-generated class names to associate styles with components, which means that you a) have to come up with them; b) risk conflicting with other class names; and c) don’t get automatic dead code detection the way you do when you stop referencing an Emotion style.
We have two helpers that make using Emotion better and safer.
The jest-emotion
package interprets Emotion class names in Jest snapshots (such as those generated by StoryShots). It unrolls the CSS so that it gets diffed in the snapshot. Without it, CSS changes only appear in the snapshots as generated class name changes.
We use the eslint-plugin-emotion
package with ESLint to enforce setting the proper /** @jsx jsx */
pragma and @emotion/core
import statements when the Emotion 10 css
prop is used (see below).
Emotion does have two Babel plugins, but we’ve decided against using them. The babel-emotion
plugin generates slightly nicer class names and can make source maps. The @emotion/babel-preset-css-prop
preset includes babel-emotion
and also automatically uses the @emotion/core
jsx
factory for all JSX so you can use css
props.
We like to avoid Babel plugins because they introduce magic that can be hard to track down. We’re much happier with the solution of having eslint-plugin-emotion
add /** @jsx */
pragmas for us, since it’s nearly as automatic but significantly more transparent as to what’s happening.
Emotion released a new version, 10, that makes use of a JSX factory to support a css
prop on React components. When styles are applied via the css
prop, no extra work needs to be done to automatically inject <style>
elements for server-side rendering.
Currently, all of our webapps use the emotion
package, which uses the pre-10 API. With this API, the css()
method returns a string for the style’s auto-generated class name. We use these values in className
attributes in the JSX, often in template literals when they are composed with other class names. These apps also have code in _document.tsx
and _app.tsx
to handle the server-side rendering (and to set up a CacheProvider
for legacy compatibility).
react-fleet
has been updated to use css()
from @emotion/core
, which generates a style object that needs to get applied via the css
prop. It is therefore ready for when we start using the version 10 API in our apps.
It would be nice in general to use version 10 because that lets us remove the awkward SSR support. Given the backwards compatibility interface of the emotion
package, it’s not worth actively converting old apps. New apps, however, should start from the ground-up using the css
prop.
Using Storybook to develop and document UI components
Storybook lets us:
quickly build UI components in isolation
mock out a component’s different states/variations
experiment and test with different props (i.e. what happens if the text for a label is really, really long?)
It also:
provides documentation of our components
is used by Percy to perform its visual diffs
works with Jest to do automatic code snapshot testing (StoryShots addon)
can catch simple a11y violations as soon as they occur (a11y addon)
If you write a Story for every different use case or state of a component, Percy will take a snapshot and catch any visual changes later on.
If you group any Stories (i.e. Group|Folder/Story) in your Storybook, Stories without a defined group will now appear in an automatically-generated "Others" group in the sidebar.
You can have multiple storiesOf
blocks if it helps you stay organized; any blocks that share a name string will be combined in the sidebar as a single collection of Stories.
Good for components that stretch too wide without a container. Using addDecorator
, all of the Stories will be children of the wrapper component.
The Knobs addon lets us change the props passed into our component inside Storybook’s UI. It’s super helpful to quickly test out component appearance against different values.
Documentation: https://github.com/storybooks/storybook/tree/master/addons/knobs
You can also use Knobs to add buttons that trigger a change in UI:
Microservices are (usually API style) services or applications which perform a specific task that other web services or applications can leverage. They are typically middleware type components that enable:
Simplification of a web application (or service) by separating out specific utility tasks into a separate component, and/or
Standardization and provision of the same process that can be used by multiple web applications or other services, and/or
Predefined information or processes to bridge (or be shared between) organizations without direct network connections.
Rollbar is an error-logging service we use through our web apps to view and debug errors in real-time. The implementation is pretty easy and supports a large number of languages. There are two types of implementations for Rollbar, browser
and server
.
Implementing Rollbar generally requires the inclusion of the respective rollbar module (browser/server) and then initializing the rollbar object with account credentials.
Access Token
Browser Token
Environment (Staging/Production)
Payload (data to be analyzed)
The browser implementation tracks a user's actions through the application/webpage (DOM) and reports a significant number of steps before the error occurs, displaying the output (DOM/text) the user sees when the error is triggered. In our web apps we include Rollbar in the module that constructs the page's HTML structure, i.e. services-js/[service-name]/pages/_document.tsx
. A browser implementation looks like this:
Our code base abstracts the server (Hapi.js) implementation on a top layer module at modules-js/hapi-common/src/hapi-common.ts
, here it expands the reporting tool to handle 404 errors as well among other things. Although reporting an error only requires the error message/payload, rollbar.error(error)
going forward we should be more verbose in the errors we log; rollbar.error(e, request.raw.req)
Ex.
Server Usage
Single Page Application (SPA)
Our web apps are Single Page Apps
, SPAs, meaning that each one runs off of the same front-end code even when the URL subpage and parameters change. Since we include Rollbar at the top of the application, it is available through the browser's window
DOM object or by including it as a module in specific sections. DOM window
is sufficient for our needs so the example below covers how to raise an error from within the app.
PHP Implementation
In Composer
Setup
Send an Error and a Message
Above we went through how to set up Rollbar in the codebase, however, we also need to set up a project for each implementation in the Rollbar dashboard.
Log in to the account
Create New Project
Invite team members
Setup Notifications - Determine if you are using Slack or email, etc
Setup Versions and Deploy controls - Setup a way to notify Rollbar of an AWS deployment (Lambda)
Browser Setup
Server Setup
Rollbar has an implementation for PHP; like the others it's straightforward to implement, but it is missing a documented implementation for Drupal. There is a Rollbar module on Drupal.org that is installed with composer
but this will require more insight/research.
Integrate with Source Control Provider - We have a couple of apps (DBConnector) that are hosted in AWS, for this we'll need to set it up using this
The City has a number of data services (e.g. SQL Servers) that reside on the city network. There is no mechanism by which data can be accessed from these services by processes that are not resident on the City Hall network. This includes boston.gov and other cloud-based services.
The DBConnector
micro-service is available at:
The first step is to post to the /v1/auth endpoint, providing username and password credentials.
If authenticated, an Authentication Token, and Refresh Token will be returned.
By default, the Authentication Token (authToken) is valid for 180 seconds, and the Refresh Token for 900 seconds.
When your account was setup, a different lifetime may have been specified for authTokens issued using your credentials.
The authToken inherits role-based permissions that have been assigned to your credentials.
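A sketch of the initial authentication call (the JSON body field names are assumptions; only the endpoint and the username/password requirement are documented above):

```
curl -X POST https://dbconnector.digital-staging.boston.gov/v1/auth \
  -H "Content-Type: application/json" \
  -d '{"username": "<username>", "password": "<password>"}'
```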
The authToken is then passed as a "bearer token" in the "Authorization" header - and can be used multiple times until it expires.
Example request header
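Based on the description above:

```
Authorization: Bearer <authToken>
```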
When the Authentication Token expires, the Refresh Token can be passed to the /v1/auth/refresh endpoint and a new Authentication Token will be returned with a 180 second lifetime. A new Refresh Token will also be generated and returned with a 900 second lifetime. You could also just re-authenticate on the /v1/auth endpoint, but using the refresh token is faster and more efficient in the back-end because the user is not re-validated (using the DBConnector database) -saving database connection overhead.
If you have not saved or been given a connToken (which is a uuid representing a connection string) then the connToken can be obtained from the /v1/connections/:name endpoint.
Note: You will need to pass the authToken in the header.
Depending on the role attached to the authToken (see User Permissions), you can use the /select, /insert, /update, /delete and /query endpoints to interact with data on the remote host defined by a connToken.
Notes:
1. You will need to pass the authToken in the "Authorization" header of your request.
2. You will need to pass the connToken (and other parameters) in the body/payload of your request.
Results from all endpoints will be returned in JSON format.
The ECS requires that an application have an endpoint /admin/ok
which returns an HTTP Status Code of 200 when queried. The ECS container management functions use this to determine if the container is healthy or not. After some time, if that endpoint does not respond or return a 200, the task is stopped and restarted.
GET
This endpoint must return a 200 code when the task (instance of the dbconnector service) is running. A non 200 code will cause the task to be stopped.
POST
https://dbconnector.digital-staging.boston.gov/v1/auth
This endpoint is used to initially authenticate the user, and returns an Authentication Token which must be used in the header of all subsequent endpoint calls.
- The Auth Token has a default lifetime of 180 seconds (3 min) (can be set per username)
- The Refresh Token has an additional validity of 180 seconds (3 min)
POST
https://dbconnector.digital-staging.boston.gov/v1/auth/refresh
Using a valid Refresh Token (provided from /v1/auth endpoint), this endpoint will refresh a current (or expired) Authentication Token. Also generates a new Refresh Token.
The following errors can be raised from calls to the various endpoints.
400: Bad Request
400 errors usually indicate some issue with the body or querystring submitted to the endpoint.
A JSON string is returned with an error node in it. The error description is a short explanation of the issue encountered.
401: Unauthorised
Generally, 401 errors are returned when there is an issue with the AuthToken provided in the header.
403: Forbidden
When a user attempts to perform a task with insufficient permissions, a 403 error is generated. The 403 Errors raised by DBConnector
typically do not provide much information, but errors are logged. Typical 403 errors are:
Authenticating account or using token from an unregistered IPAddress
Account has insufficient permissions to perform requested task (see Permission description for each endpoint in this guide and also User Permissions section)
200:OK - Flooding
If too many requests have been made by a user in a given time period, then the user will be blocked and a 200 response code will be returned with an error message.
GET
https://dbconnector.digital-staging.boston.gov/v1/users
Returns a list of current users.
Permission: ADMIN or OWNER.
If the optional page
& list
parameters are provided, then a paged output will be returned (sorted by UserID)
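For example (a sketch; the parameter values are placeholders):

```
curl "https://dbconnector.digital-staging.boston.gov/v1/users?page=1&list=20" \
  -H "Authorization: Bearer <authToken>"
```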
GET
https://dbconnector.digital-staging.boston.gov/v1/users/:useridentifier
Returns a single user.
Permission: ADMIN, OWNER or the user specified in useridentifier
.
i.e. You can only read your own user record unless you are an ADMIN or OWNER.
GET
https://dbconnector.digital-staging.boston.gov/v1/users/:useridentifier/connections
Provides a list of connection strings that the user has been granted permission to use.
Permission: ADMIN, OWNER or user defined by useridentifier
.
i.e. You can only read your own user record unless you are an ADMIN or OWNER.
POST
https://dbconnector.digital-staging.boston.gov/v1/users
Adds a new user. Permission: ADMIN or OWNER.
PATCH
https://dbconnector.digital-staging.boston.gov/v1/users/:useridentifier
Updates the specified user with information provided in the payload. Permission: ADMIN or OWNER
DELETE
https://dbconnector.digital-staging.boston.gov/v1/users/:userid
Deletes the specified user. Permission: ADMIN or OWNER
A connection string record contains all the information required for a suitable driver to connect to a remote system.
Each connection string record is defined by a unique UUID (the connToken) -and also a unique name.
The connToken is used to refer to the remote system in execution endpoints so that connectivity details do not need to be stored in and passed from the calling system.
The connToken (UUID) should never change once the connections string record is created, and therefore can be safely stored in the calling system.
GET
https://dbconnector.digital-staging.boston.gov/v1/connections
Returns a list of all active remote system connection strings. Permission: ADMIN OR OWNER
GET
https://dbconnector.digital-staging.boston.gov/v1/connections/:token
Returns the remote system connection string defined by the specified Connection Token. Permission: ADMIN, OWNER or by a user who has permission to use the connection.
GET
https://dbconnector.digital-staging.boston.gov/v1/connections/find/:name
Fetch a connection token using the token's name. Permission: All authenticated users.
GET
https://dbconnector.digital-staging.boston.gov/v1/connection/:token/users
Provides a list of users who have been granted permission to use a connection string. Permission: ADMIN or OWNER
POST
https://dbconnector.digital-staging.boston.gov/v1/connection
Saves a connection string. Permission: ADMIN or OWNER.
POST
https://dbconnector.digital-staging.boston.gov/v1/connection/:token/user/:userid
Grants a user permission to use a connection string. Permission: FULL/SUPER, ADMIN or OWNER.
PATCH
https://dbconnector.digital-staging.boston.gov/v1/connections/:token
Updates the remote system connection string defined by the specified Connection Token. Permission: ADMIN or OWNER.
DELETE
https://dbconnector.digital-staging.boston.gov/v1/connections/:token
Deletes the remote system connection string defined by the specified Connection Token. Permission: ADMIN or OWNER
DELETE
https://dbconnector.digital-staging.boston.gov/v1/connection/:token/user/:userid
Revoke an existing permission for a user to use a connection string. Permission: ADMIN or OWNER
Running data commands on a Remote System.
Errors which occur whilst executing a data command on the host server are passed back as cleanly as possible. Except for the /query endpoint, syntax errors should not occur.
Errors which might occur include:
connection string errors (credentials, host DNS/IPAddress etc),
incorrectly named tables and/or incorrectly named fields,
insufficient permissions to perform task on host,
creating duplicate records,
table locks,
foreign key constraints,
... etc ...
Actual error messages depend upon the drivers being used, and the wording of error reporting from the host. Generally a 400 error will be generated with a "cleaned" error message in this general JSON format:
POST
https://dbconnector.digital-staging.boston.gov/v1/query/:driver
Runs a command (or commands) on the remote system. Depending on the command/s, returns a JSON Array (of Arrays) of Objects.
To allow statements to be dynamic/re-used, the statement field may contain "named tokens" which will be substituted into the statement prior to execution. This creates a kind of stored procedure.
For Example:
Expands out to:
Calling Views
You can call views using the /query
endpoint (and also the /select
endpoint).
Treat a view like a table.
POST
https://dbconnector.digital-staging.boston.gov/v1/exec/:driver
Execute a stored procedure on the remote system. Permission: ALTER, FULL, ADMIN or OWNER
POST
https://dbconnector.digital-staging.boston.gov/v1/select/:driver
This runs a command on the remote system which is expected to return a paged data set in a JSON Array of Objects format.
POST
https://dbconnector.digital-staging.boston.gov/v1/insert/:driver
This endpoint creates a new record (or records) in the specified table.
POST
https://dbconnector.digital-staging.boston.gov/v1/update/:driver
This endpoint will update existing records in a table.
POST
https://dbconnector.digital-staging.boston.gov/v1/delete/:driver
Each user account defined in DBConnector has an assigned role.
Generally, the roles are hierarchical, so for example: a FULL-QUERY-USER can perform all the tasks a READ-USER can perform.
In the filter fields for the /select
, /update
and /delete
endpoints, the following directives can be used:
Why use an array of objects and not an object?
Because some filters might define the same field twice, which would not make sense in an object.
e.g. Suppose we wanted to run this: SELECT * FROM MyTable WHERE name ='a' OR name = 'b' or name ='c';
To accommodate the filter as an object we would have
{ "name": "a", "^name": "b", "^name": "c"}
which is not a valid structure.
So we use an array of objects thus:
[ {"name": "a"}, {"^name": "b"}, {"^name": "c"} ]
which is a valid structure.
The endpoint can be tested using some test data pre-loaded into the application.
The credentials to use are:
These credentials can be used with the connToken
Data contained in testTable available from connStr #4
This page contains information for developers wishing to update or maintain the SQL Proxy endpoint on the dbconnector service.
Developer Setup
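The clone command isn’t reproduced here; a sketch assuming the repo lives in CodeCommit (US-East-1) and your AWS/Git credentials are configured:

```
git clone --branch develop \
  ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/cob_dbconnector \
  ~/sources/cob_dbconnector
```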
This command will clone the develop
branch of the cob_dbconnector
application into a folder at ~/sources/cob_dbconnector
. The repo may be cloned anywhere on the local computer; however, this document assumes it will be cloned into ~/sources/cob_dbconnector
and if it is not, then commands given here will need to be modified for the new location.
Once the repo is cloned, then the following command will build the app, build its container locally and tag the container:
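The exact invocation isn’t reproduced here; a sketch assuming the build.sh helper mentioned later is run from the repo root:

```
cd ~/sources/cob_dbconnector
./build.sh
```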
The script will create a docker container, build the necessary files from the local cloned folders and then save and tag the docker image.
The developer can then use docker-compose to start the container as follows:
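A sketch, assuming a docker-compose.yml at the repo root:

```
cd ~/sources/cob_dbconnector
docker-compose up -d
```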
This will build the docker container locally, start it and name it dbconnector
.
Once the steps in "Developer Setup" above have been completed, the developer can go ahead and change the code as needed in the cloned repository.
In this example the code is found in ~/sources/cob_dbconnector/src
.
Best practice is to create a working branch off the develop
branch and then commit changes to the working branch as needed. Once ready to push and merge code back to the repo in CodeCommit, the working branch can be pushed, and then a PR made and merged. For example, to add a period to the end of the README.md file:
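A sketch of those steps (the branch name matches the one referenced below):

```
cd ~/sources/cob_dbconnector
git checkout develop
git checkout -b working_branch
echo "." >> README.md
git add README.md
git commit -m "Add a period to the end of README.md"
git push -u origin working_branch
```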
Now login to AWS and find the cob_dbconnector
repo in CodeCommit. Create a PR from the working_branch
into develop
. Use approval workflows as needed.
Unlike the Drupal deploy process (where the deploy is triggered by commits to the GitHub repo) there are no triggers fired by committing, pushing, tagging or merging to the cob_dbconnector CodeCommit repository.
Deployment is triggered by updates to the ECR repository.
Once changes have been made and saved in the src
folder, the application needs to be re-built and the container re-deployed in order to verify the changes work as expected.
The repository files on the host are NOT mounted into the container. Therefore changes to code on the host machine are not automatically replicated to the container, even if a local build is forced (e.g. running npm run dev
inside the container). In order to "load" changed code into the container, it must be rebuilt.
To re-build and re-deploy the container:
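A sketch, repeating the build and compose steps from the setup above:

```
cd ~/sources/cob_dbconnector
./build.sh
docker-compose down
docker-compose up -d
```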
Or more concisely (ensure you are in the correct folder):
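For example (a sketch):

```
./build.sh && docker-compose up -d --force-recreate
```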
Each time you build the container using build.sh
, a new local image is created and tagged. The old image is not automatically deleted, so over time you will get a lot of images saved on your local computer, taking up disk space.
While this is nice if you want to manually create a new container from a previous image, mostly it's a nuisance and just takes up local disk space.
You can delete all DBConnector images which do not have a tag.
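A generic approach (note this removes all dangling images on the machine, not only DBConnector ones):

```
docker image prune -f
```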
Because the dbconnector is an API designed to be used via a REST endpoint, the best tool for testing is either a custom-written testbed which calls the container, or else a tool like Postman.
To connect to the container for testing, you will need to know the container's IP address. This can be found as follows:
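For example, using the container name from the compose setup above:

```
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' dbconnector
```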
The IPAddress will typically be something like 172.x.0.x
, depending on your docker setup. Locally the service runs on port 3000, and the container has port 3000 exposed.
This should return an empty json message {}
with an HTTP Code of 200. If you get anything else there is an issue.
Using docker you can attach to the running console for the dbconnector container and get a tail of the stdout and stderr in real-time. This is useful because you can add console.log()
commands to the code and then see them appear in the attached console.
Use this command to attach to the console:
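A sketch, using the container name from above (docker logs -f dbconnector gives a similar read-only tail):

```
docker attach dbconnector
```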
Using docker you can open a new bash shell in the container and run commands in the console for that session. You could use this to verify the existence of files, or to check if dependencies are installed etc. This command creates an interactive shell:
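For example (the image may ship with sh rather than bash):

```
docker exec -it dbconnector sh
```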
If you don't need an interactive shell and just want to run a single command, you can use this:
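For example, listing a directory inside the container (the path is a placeholder):

```
docker exec dbconnector ls -la /app
```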
There are a suite of automated tests created for the dbconnector. These tests are run locally by the developer, and the tests can be extended.
The stock tests adequately test all the endpoints for the service, and as new functionality or features are added, the tests should be expanded to ensure they are up to date.
The tests are run using this command:
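A sketch; the exact npm script name may differ (check package.json):

```
npm run test
```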
The tests are defined in the file src/tests/tests.config.js
.
There are two remote environments on AWS, stage
and production
.
To deploy to either, it's a simple matter of creating a stage or production build and then pushing the image to the ECR registry. A watch process monitors the ECR repository, and changes to the container images (uploads or re-tags) automatically prompt deploys to the ECS environment.
For the DBConnector there is a helper script to build, tag and push container images:
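The helper script itself isn’t reproduced here; under the hood, a push to ECR looks roughly like this (the account ID, region, and repository namespace are placeholders/assumptions):

```
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
docker tag dbconnector:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/cob-digital-apps-stage/dbconnector:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/cob-digital-apps-stage/dbconnector:latest
```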
This is all that is needed to initiate a deploy.
This might be the way a developer works:
create a local working branch from develop
branch
commit code locally to working branch
... iterate these steps as necessary
commit code and push to AWS Code Commit
create PR on CodeCommit (follow peer review and other related project steps) and then merge PR (on CodeCommit)
advise external testers
... iterate back through development stage steps as needed.
Add solutions to issues as you encounter them here.
Dashboard to manage application access for employees
This page contains information for developers wishing to update or maintain the pdf endpoint on the dbconnector service.
The non-fillable PDF generator has a maximum PDF version of 1.5. Many PDF generators, including fillable PDF forms, save in version 1.6.
Version 1.6 works fine for the fillable PDF processes, but using v1.6 PDF docs causes the non-fillable PDF process to fail.
To downgrade a PDF from v1.6 to v1.5, use the following ghostscript command on Ubuntu:
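A sketch of such a command (input/output file names are placeholders; the author’s original invocation may have differed):

```
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.5 -dNOPAUSE -dQUIET -dBATCH \
   -sOutputFile=output-v15.pdf input-v16.pdf
```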
You can determine the version using this command:
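One way (the PDF version is stored in the file header):

```
head -c 8 input.pdf    # prints e.g. %PDF-1.6
```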
The requirement was to:
Complete a form PDF inserting data into the fields (rather than just stamping un-editable text on the form), and
Add a barcode and text to an existing PDF form, and output the resultant PDF as a form (the v1 PDFManager using a PHP solution always output a flat non-form PDF, even if a form PDF is used as the input).
This could not be achieved (in 2022) using an open-source PHP module, but there is a well-established and proven Linux CLI app which can be utilized, and which provides a couple of additional features beyond the requirement.
The main Drupal site (served by an Acquia webserver), while running on Linux, is not managed by the City of Boston, and the pdftk libraries are not loaded on that server. Given the short time constraints, pdftk was deployed within the same container as the DBConnector, leveraging the existing endpoint services (node/javascript/express) and some shell scripting.
The dbconnector service was extended to provide the following endpoints:
GET /v1/pdf/heartbeat
GET /v1/pdf/test
Internally calls pdftk and captures the version of the CLI.
POST /v1/pdf/fill
A PDF and data file must be provided. The PDF must be a fillable form PDF and the data file must be in FDF format.
The /v1/pdf/generate_fdf endpoint can be used to generate a blank FDF data file.
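As an illustration only: the exact request shape (form fields, JSON, or query parameters) and the required headers aren't documented on this page, so verify the call below against the deployed service. It simply shows the formfile/datafile inputs from the parameter table below, using the staging base URL.

```bash
curl -X POST "https://dbconnector.digital-staging.boston.gov/v1/pdf/fill" \
  -d "formfile=https://example.com/blank-form.pdf" \
  -d "datafile=https://example.com/data.fdf"
```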
POST /v1/pdf/overlay
POST /v1/pdf/metadata
GET /v1/pdf/decompress
This is a useful utility for when the PDFManager cannot manipulate a PDF because its compression is later than PDF 1.5.
The endpoint first checks to see if it already has a file with the filename specified in the pdf_file
query parameter. If it does, then it just returns that file.
NOTE: restarting the dbconnector task(s) on AWS will empty this cache.
If the del
parameter is "true" then the file is deleted after decompression and downloading. To reduce load on the endpoint, set to "false" if the pdf_file
does not change often and if you expect to call the function frequently.
Returns the decompressed document as an attachment.
The expected headers are:
GET /v1/pdf/fetch
Returns the document as an attachment.
When show=D, the expected headers are:
When show=I, the expected headers are:
The code for the dbconnector is stored in the AWS CodeCommit repository (CoB Digital account).
Typically the code is cloned from that repository onto the developer's local machine. In order to do this, AWS credentials must be set up correctly. See . The AWS CLI is required, so it must be installed locally, and credentials must be set in the ~/.aws/credentials file.
https://<IPAddress>:3000/ should be used as the base URL for testing locally. For example, in Postman run a test GET request on:
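An equivalent curl call, using the /v1/pdf/heartbeat endpoint described above as one example (-k skips verification of the local self-signed certificate):

```bash
curl -k https://<IPAddress>:3000/v1/pdf/heartbeat
```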
Also see SQLProxy
The has a nice interface which utilizes the features of the PDF Toolkit used by the DBConnector. This is particularly useful for extracting FDF data files from fillable forms, which can then be manually edited and fed back into the form for in-dev testing.
As part of the creation, a method was required that would allow the manipulation of fillable (form) PDFs.
Environment | API Base URL |
---|---|
test | https://dbconnector.digital-staging.boston.gov |
prod | https://dbconnector.boston.gov |
Term | Meaning |
---|---|
AuthToken | A token which is generated when a user successfully authenticates against the /v1/auth endpoint and starts a session. This token is used for all subsequent calls to the endpoint during a session. The AuthToken has a lifetime which is typically 180s; after that the AuthToken expires and needs to be refreshed. |
RefreshToken | A token which can be used to generate a new AuthToken without re-authenticating. A new AuthToken with a new lifetime of 180s can be generated at /v1/auth/refresh. The RefreshToken is generated at the same time as the AuthToken and has a lifetime of 900s. After the RefreshToken expires, the only way to generate an AuthToken is to authenticate against the /v1/auth endpoint. The benefit of the RefreshToken is to ease load on the database server, as AuthToken regeneration at the /v1/auth/refresh endpoint does not require a database request. |
User Account | A user account is required to authenticate, and is identified by a username and password. A user account may only allow connections from a specified IPAddress, and will only be allowed to use certain ConnTokens. |
username/userid | Each user account has a unique string and numeric identifier. The username is the string identifier; it must be unique and may be an email address. The userid is a system-generated number which can be used to identify the user in some endpoint operations. |
ConnectionString | The DBConnector connects to remote (database) environments which are either publicly available or are housed within the City of Boston network. To connect to an environment, a connection string is required. Typically the connection string contains the following information about the target: Host, Port, Driver, Credentials. |
ConnToken / ConnectionToken | Each ConnectionString defined within the DBConnector is issued a unique ConnToken when it is saved. Any query requests made via the DBConnector /v1/query or /v1/select endpoints provide the ConnToken (rather than a Connection String), so no Host or Credentials information needs to be stored in the caller system, nor passed across the internet by the caller. If Credentials need to be changed, the change is done once in the DBConnector and all callers will use the new credentials without having to update their ConnTokens. |
Session | A session begins when a user authenticates and receives an AuthToken, and ends when the AuthToken expires. |
Calling System | The originating application which calls endpoints in this API. |
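As a rough sketch of the token flow (the exact request body format isn't documented on this page; JSON is assumed here for illustration, and the staging base URL comes from the environments table above):

```bash
# 1. Authenticate to start a session and receive an authToken (and refreshToken)
curl -X POST "https://dbconnector.digital-staging.boston.gov/v1/auth" \
  -H "Content-Type: application/json" \
  -d '{"username": "someone@boston.gov", "password": "<password>"}'

# 2. Before the refreshToken's 900s lifetime ends, mint a new authToken without re-authenticating
curl -X POST "https://dbconnector.digital-staging.boston.gov/v1/auth/refresh" \
  -H "Authorization: Bearer <authToken>" \
  -H "Content-Type: application/json" \
  -d '{"refresh_token": "<refreshToken>"}'
```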
username
string
A username (either a name or an email) which is registered in the DBConnector
(see /v1/users)
password
string
The password set for the username provided.
Authorization
string
Bearer: A valid authToken. An expired authToken may be used provided it matches the refreshToken and the refreshToken is not expired.
refresh_token
string
A valid refreshToken
page
number
The page number to return. Note: Page numbering starts at zero.
limit
number
The number of users to return for each page.
Authorization
string
Bearer - A valid connToken
useridentifier
integer
The useridentifier may be either of: - userID (numeric): unique user number - username (string): unique username
Authorization
string
Bearer: A valid connToken.
useridentifier
string
The useridentifier may be either of: - userID (numeric): unique user number - username (string): unique username
Authorization
string
Bearer: A valid authToken.
Authorization
string
Bearer: A valid authToken.
username
string
Any unique string to identify this user. Recommended to use email addresses for human users (e.g. "someone@boston.gov") or a meaningful name built around the calling service name (e.g. "cmdb_nightly_update"). Maximum 100 chars.
password
string
A complex password. The longer and more complex the better.
role
number
See User Permissions
enabled
number
1 or 0. Whether this account should be created enabled or disabled (0=disabled).
ipaddresses
string
A comma separated list of IPAddresses the user can make requests from. If this is left blank, then requests are accepted from all IPAddresses. Maximum 150 chars.
ttl
string
The lifetime of authTokens generated for this user. If this is left blank, then 180s will be used. Format is "xxxm/s" (e.g. "90s" for 90 seconds, or "3m" for 3 minutes). Note: Shorter key lifetimes provide better security. Maximum 10m or 600s.
useridentifier
number
The userid. Userid is returned from the /v1/auth request.
Authorization
string
Bearer: A valid authToken
userid*
number
The userid (number). Userid is returned from the /v1/auth request.
Authorization*
string
Bearer: A valid authToken
Authorization
string
Bearer: A valid authToken.
token
string
Bearer: A valid authToken.
Authorization
string
Bearer: A valid authToken
:name
string
The name of a token
Authorization
string
Bearer: A valid authToken
token
string
Bearer: A valid connToken.
Authorization
string
Bearer: A valid authToken
Authorization
string
Bearer: A valid authToken.
connectionString
string
The connection string, usually as a JSON string.
"{
\"host\":\"somewhere.com\",
\"username\":\"sa\",
\"password\": \"asdfasd\"
}"
name
string
A name by which this connection string can be easily referred to.
description
string
The purpose of the connection string. Tip: Include the driver and/or type of connection defined.
enabled
number
1 (enabled) or 0 (disabled). Defaults to 1 (enabled)
token
string
A valid connToken
userid
integer
A userid (number). (The userid is returned from the /v1/auth request)
Authorization
string
Bearer: a valid authToken
token
string
A valid connToken.
Authorization
string
Bearer: A valid authToken
connectionString
string
name
string
description
string
enabled
integer
token
string
A valid connToken
Authorization
string
Bearer: A valid authToken
token
string
A valid connToken.
userid
integer
A userid. UserIds are returned from the /v1/auth request.
Authorization
string
Bearer: A valid authToken
:driver
string
The driver to use to execute the statement on the remote system. At this time, we only have mssql.
Authorization
string
A valid Authentication Token in format: "Bearer xxxxx-xxxxxx-xxxxxx"
token
string
statement
string
A single statement or command that can be executed on the remote system. Multiple statements may be included and should be separated by semi-colons.
args
string
A JSON string containing parameters to be substituted into the statement parameter.
:driver
string
The driver to use to execute the statement on the remote system. At this time, we only have mssql
Authorization
string
A valid Authentication Token in format: "Bearer xxxxx-xxxxxx-xxxxxx"
token
String
A valid connection string token
procname
string
The name of the procedure to execute
params
object
An object containing key:value pairs for parameters to be passed into the stored procedure. Note: Input parameters can be declared in any order
output
object
An object containing name:type pairs for output parameters to be passed into the stored procedure. The type must be one of the following strings: "number" or "varchar". Note: Output parameters can be declared in any order.
:driver
string
The driver to use to execute the statement on the remote system. At this time, we only have mssql.
Authorization
string
A valid Authentication Token in format: "Bearer xxxxx-xxxxxx-xxxxxx"
token
string
A valid connection string Token
table
string
The table to select data from. Note: Can also be a view name
fields
array
Fields to return. If omitted then all fields will be returned.
e.g. [ "ID", "name", "enabled" ]
filter
array
A JSON array of key/value pair objects containing filtering options for the data to be extracted from the table. (see where arrays)
e.g. [ {"ID": 1}, {"enabled": "false"} ]
sort
array
A JSON string array of fields to sort by. Required if limit parameter is provided.
e.g. [ "ID DESC", "name" ]
limit
string
Number of results to return in a page. If omitted, then defaults to 100 if the sort parameter is provided - else all records are returned.
page
string
Page to be returned. If omitted then defaults to page 0 (first page). Note: Page numbering starts at zero.
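A sketch of what a select request might look like, assuming a JSON body posted to the mssql driver (the exact envelope isn't shown on this page, and the table name here is purely illustrative; verify against the service):

```bash
curl -X POST "https://dbconnector.digital-staging.boston.gov/v1/select/mssql" \
  -H "Authorization: Bearer <authToken>" \
  -H "Content-Type: application/json" \
  -d '{
        "token": "<connToken>",
        "table": "users",
        "fields": ["ID", "name", "enabled"],
        "filter": [{"enabled": "true"}, {"name": "david%"}],
        "sort": ["ID DESC"],
        "limit": "25",
        "page": "0"
      }'
```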
:driver
string
The driver to use to execute the statement on the remote system. At this time, we only have mssql.
Authorization
string
A valid Authentication Token in format: "Bearer xxxxx-xxxxxx-xxxxxx"
token
string
A valid connection string token (connToken)
table
string
The table to insert data into
fields
array
An array of fields to add values to. Each array element is a separate record to be added to the table.
e.g. [ "ID", "Name" ]
values
array
An array of arrays. Each array is a record. Each field in the record array is a value to add.
Note: The order of values must match the order of fields.
e.g. [ [ 1, "david" ], [ 2, "mike" ] ]
:driver
string
The driver to use to execute the statement on the remote system. At this time, we only have mssql.
Authorization
string
A valid authToken in the format: "Bearer xxxxx-xxxxxx-xxxxxx"
token
string
A valid connection string token (connToken)
table
string
The table in which to update data.
values
object
Object containing key:value pairs where the key is the fieldname and the value is the fields value.
e.g. { "name":"david", "address": "my house" }
filter
array
A JSON array of key/value pair objects containing filtering options for the data to be extracted from the table. (see where arrays)
e.g. [ {"ID": 1}, {"enabled": "false"} ]
:driver
string
The driver to use to execute the statement on the remote system. At this time, we only have mssql.
Authorization
string
A valid authToken in the format: "Bearer xxxxx-xxxxxx-xxxxxx"
token
string
A valid connToken
table
string
The table to delete data from.
filter
array
A JSON array of key/value pair objects containing filtering options for the data to be extracted from the table. (see where arrays)
e.g. [ {"ID": 1}, {"enabled": "false"} ]
Role | Role# | Description |
---|---|---|
READ USER (NORMAL) | 1 | Can authenticate and then use the /select endpoint |
ALTER USER (SUPER) | 2 | Can authenticate and then use the /select, /update and /insert endpoints |
FULL QUERY USER | 4 | Can authenticate and then use the /select, /update, /insert, /delete and /query endpoints |
ADMIN | 2048 | Can use all query endpoints and can CRUD users and connections and grant user rights to connections |
OWNER | 4096 | Can use all endpoints |
Filter field shorthand | Meaning |
---|---|
{"username": "david"} | Return records where the username is exactly equal to "david". |
{"!username": "david"} | Return records where the username is not equal to "david". (The '!' must be the first char of the string.) |
{"username": "david%"} | Return records where "david" is at the start of the username field. |
{"username": "%david"} | Return records where "david" is at the end of the username field. |
{"username": "%david%"} | Return records where "david" is contained in the username field. |
{"username": ["david", "michael"]} | Return records where the username is "david" or "michael". |
{"^username": "david"} | Return records using an OR join for this filter. Take care, as the AND/OR predicates are applied in the order they occur in the filter array. |
/v1/pdf/fill parameters:
Name | Type | Description |
---|---|---|
formfile* | String | Url to a form PDF |
datafile* | String | Url to a form data file in FDF format |
/v1/pdf/overlay parameters:
Name | Type | Description |
---|---|---|
basefile* | String | A PDF document - can be a URL or a file-reference returned from another endpoint. |
overlayfile* | String | URL to a PDF document |
overwrite | String | Defaults to "true" |
/v1/pdf/metadata parameters:
Name | Type | Description |
---|---|---|
pdf_file* | String | A PDF document - can be a URL or a file-reference returned from another endpoint. |
meta_data* | String | A file in the following format: |
/v1/pdf/decompress parameters:
Name | Type | Description |
---|---|---|
pdf_file* | String | Url to a PDF document |
del | String | Should the file be deleted after it is downloaded. Defaults to "true". |
/v1/pdf/fetch parameters:
Name | Type | Description |
---|---|---|
file* | String | A file-reference from one of the endpoints |
del | String | Delete the file after downloading. Defaults to "false". |
show | String | Download method: D (default) downloads attachment, I downloads and displays in browser (if supported) |
Pre-Marriage information form, used to provide basic personal information prior to an in-person meeting at the Registry department.
This app replaces the prior form that existed on the cityofboston.gov domain. It is used to capture information from both partners prior to meeting in person with the Registry department.
Edit the application configuration files for each environment
Edit the config files in this repository
This app runs a Node server that provides several services, including logging into IdentityIQ/Ping, etc. The main GraphQL endpoint is connected to the identityIq API URL; in addition, the server is extended to support GraphQL using middleware, added to the server as a plugin via the .register(...) format. The endpoint is connected to the identityIq endpoint and uses PingID properties; neither of these lives in the repo. They are located in an AWS S3 bucket that gets mounted onto the container when the service starts. identityIq is an API URL that lives in the environment file in S3, while pingId is a set of properties (token, adp_url, base64 key, etc.) that get loaded from the PINGID_PROPERTIES_FILE file in S3.
Relevant Files
services-js/access-boston/src/pages/_app.tsx
services-js/access-boston/src/server/access-boston.ts
services-js/access-boston/src/server/services/PingId.ts
modules-js/next-client-common/src/next-client-common.ts
services-js/access-boston/src/client/graphql/change-password.ts
services-js/access-boston/src/pages/change-password.tsx
This App is designed to process City of Boston requests for birth, death, and marriage certificates, and marriage intention applications. The Birth, Death, and Marriage requests are paid requests that get processed through the Stripe payment processing platform. Marriage Intention requests currently don't make use of our Stripe processing because at the moment their payments are handled elsewhere. However, they share about 70-80% of the same functionality as the other request types in the WebApp.
The entry point for the Registry Apps is pages/_app.tsx. This is a React.js app that uses the MobX library for state management. The entry point uses the *CertificateRequest classes as PageDependencies in each of the applications/request forms. These *CertificateRequest classes define the data fields and methods the application/forms will use. *CertificateRequests have a few key methods that serialize and persist the application state in the browser session, among other things.
App Structure
Application Controller
[pages/_app.tsx]
[client/store/]
BirthCertificateRequest(Class)
DeathCertificateRequest(Class)
MarriageCertificateRequest(Class)
MarriageIntentionRequest(Class)
The application controller exposes a few methods to be aware of, such as routerlistener, screenreaderSupport, siteAnalytics, orderProvider, etc. These are general-use methods that will be inherited by almost all child applications. Of these, siteAnalytics and orderProvider are the most important. siteAnalytics is self-explanatory; orderProvider is used only by the certificate requests that use Stripe payment. orderProvider handles storing and manipulating the required order information (shipping address, credit card info, etc.) within sessionStorage. When the Application Controller is mounted, the *CertificateRequest fields are merged with the field data from the orderProvider.
Outline which parts of the application each team is responsible for.
Digital Team | Security Team |
---|---|
Presentation Logic (i.e. application design, UI elements, UX/functionality, etc.) | Business Logic (i.e. sorting, password change, session login, changes to application data, etc.) |
Data Requests (from Security Endpoints) | Endpoint/API Access |
Data Formatting (for display) | Data Formatting |
Application Session (store user data after login in the application session, used to hold tokens for password change, etc.) | Password Validation (server side; compare against previously used passwords, etc.) |
UI shows error message when accessing endpoint fails | GENERATE error when authentication fails for any reason |
UI/UX informs user of missing/errors on input fields | GENERATE error when user account not set up properly |
REPORT error generated by endpoint (Rollbar tracking) | GENERATE error when FID not functioning properly |
REPORT error generated by UX to Rollbar | Certification changes, errors |
Outline Services via guides, FAQ, etc. | |
JavaScript
Commissions-App
Commissions-Search
Group-Mgmt.
Internal-Slack-Bot
Payment-Webhooks
Permit-Finder
Registry-Certs
Contact Form -- see: https://app.gitbook.com/@boston/s/digital/projects/contact-form
We configure the webapps that run in our ECS cluster via an S3 bucket. Here’s how it works.
Services typically need runtime configuration that doesn’t get checked in to GitHub. This can range from environment-specific values (like URLs or API keys) to actual secrets (passwords).
We configure our container-based services using S3. We store files in special S3 buckets and directories, and when the containers start up they copy those files into themselves. We consistently use dotenv
(both for Node and Ruby) so we can use .env
files to set environment variables our apps will see.
The two buckets are called cob-digital-apps-staging-config
and cob-digital-apps-prod-config
. Within each are sub-directories for each of our services. Those sub-directories match the ECS service names.
The service directories can contain config files directly, and they can also have sub-directories with variant-specific configuration. We use variants in our staging environment to handle cases where we need to have different configurations. For example, Access Boston has “dev” and “test” variants that correspond to separate integration environments.
The buckets are configured to keep old versions of files, so we can recover from changes if we need to.
Our container-based apps in the CityOfBoston/digital monorepo have an ENTRYPOINT
script that syncs in the contents of the appropriate S3 directories before running the server start command.
The script first syncs the service’s main directory (either staging or prod, whichever is appropriate). It then syncs the variant-specific directory, or default
if the container is running as the unnamed default variant.
In this way, you can override the configuration for a variant on a file-by-file basis.
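Conceptually, the sync step looks something like the sketch below (bucket and service names as described above; the destination directory and exclude handling are assumptions, and the real ENTRYPOINT script in the monorepo is the source of truth):

```bash
# Layer 1: the service's main config directory (skipping variant subdirectories)
aws s3 sync "s3://cob-digital-apps-staging-config/service-name/" . --exclude "*/*"

# Layer 2: the variant's directory ("default" when no variant is set), overriding file-by-file
aws s3 sync "s3://cob-digital-apps-staging-config/service-name/${VARIANT:-default}/" .
```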
Some values, like database passwords or OAuth secrets, should not be stored in plaintext on disk. Each container-based service automatically gets a KMS keypair that only it has permission to decrypt with.
Our Node-based apps use the srv-decrypt-env
module to automatically decrypt any environment variables that end in the _KMS_ENCRYPTED
suffix at runtime. (We do not have a similar library for Ruby, so we don’t encrypt those passwords for our few Ruby apps.)
See the Encrypting service configuration for S3 guide for how to generate the encrypted values.
A description of the how and why of our deployment system for the webapps
Commit your changes locally
Run git push --force --no-verify origin HEAD:staging/service-name
or git push --force --no-verify origin HEAD:staging/service-name@variant
to get the changes on GitHub
Click “Deploy” when prompted by Shippy-Toe in the #digital_builds
channel
Get your PR reviewed and merge it to develop
(see: Git / GitHub)
Travis will do a full test run which, if it succeeds, will notify Shippy-Toe
Click “Deploy” when prompted by Shippy-Toe in the #digital_builds
channel
It’s recommended that engineers turn Slack notifications on for all messages in #digital_builds
, since both deployment prompts and Travis success / failure messages go there.
If internal-slack-bot
comes up in the list of things to deploy, deploy it last, after the others have all succeeded. Otherwise it will lose state about what remains to deploy.
Our staging and production deploys are done through our AppsDigitalDeploy
Amazon CodeBuild project.
If you need to manually re-try a build, or if Slack or just internal-slack-bot
is down, you can deploy by using CodeBuild directly:
In the AWS CodeBuild console, choose “AppsDigitalDeploy”
Click the “Start build” button
In the “Source” section of the dialog that comes up, enter the branch name under “Source version.” This will be either staging/service-name[@variant]
or production/service-name
Click “Start build” at the bottom of the dialog
Our goals for a deployment mechanism are to let us quickly, easily, safely, and securely ship new and updated code to our customers.
To this end, we have a system that gives us the following things:
Code can be deployed with one click, which is both easy and keeps us from making mistakes
Old versions of services are only turned down when the new versions are healthy, so we don’t have downtime
Production code has to pass unit tests before it can be released
Staging can be pushed to on-demand for testing / demoing in a production-like environment
New, isolated staging environments can be created by editing a few variables in our Terraform configuration
Releasing code requires both GitHub and Slack access, limiting the window of unauthorized pushes
We’re signaled when there’s undeployed code on develop
for our services, so production doesn’t stay diverged from the checked-in code
Does not require a code review or tests to pass to deploy
Hosted at service-name.digital-staging.boston.gov for container-based apps, apps.digital-staging.boston.gov/service-name for S3-based apps
Supports “variants” that allow for separate code and/or configuration
Currently a cluster of one machine, since all our apps fit there
Run git push --force --no-verify origin HEAD:staging/service-name[@variant]
to trigger staging deployment.
The DNS for staging is taken care of by a wildcard *.digital-staging.boston.gov
CNAME that points to our ALB, so there’s no per-service configuration that needs to happen outside of our team.
Triggered when changes to a service are merged via PR into develop
and pass tests on Travis
Hosted at subdomain.boston.gov or apps.boston.gov/subpath depending on configuration
Does not support variants
Containers are replicated across 2 AZs for resilience
New boston.gov
subdomains need to be manually added by the network team. See New service setup.
For webapps that need a server implementation, usually in Node but Ruby for some legacy applications, we deploy in a Docker container to our Elastic Container Service cluster in AWS.
Docker containers let us trivially deploy many isolated apps on to a few machines, keeping our costs low, and ECS’s integration with Amazon’s Application Load Balancers gives us automatic health checking and zero-downtime deploys.
Some of our sites are just static HTML, or are static HTML and static JavaScript. The public-notices
app is an example of this.
These apps are served from the cob-digital-apps-prod-static
S3 bucket by an nginx server running in a container in the ECS cluster. This is set up by the proxy_pass_service
module in digital-terraform.
All of the webapp deployments through the monorepo use our internal-slack-bot
service, which appears as Shippy-Toe the squirrel in the #digital_builds
channel.
Deployable changes to staging branches or the develop
branch will trigger prompts in #digital_builds
. Just press the “Deploy” button to release them.