Notes on the way Drupal Entities and Configuration have been utilized in boston.gov and theHub.
You will use Git as the version control system for the City of Boston website and to manage code in the Acquia Cloud environment. If you are not already familiar with Git, you will want to check out this in-depth Git series. The first seven videos will give you most of what you need to know:
Introduction to the Git Series
What is Version Control?
Installing and Configuring Git (Dev Desktop comes with Git)
Getting Help with Git
Git Crash Course
Working with Git Branches and Tags
Moving through Git History
The next fourteen videos are more advanced Git topics, so you might want to save those for a later time.
Find the full Git series at: https://drupalize.me/videos/introduction-git-series?p=1173
You will see the listing of videos in the series in the upper-right panel; there are also a couple of different levels of scrollbar, so it's easy to miss the later videos.
When and if a new environment is set up on Acquia for CoB, the following steps should be followed:
When a new environment is added, it will have a 3-4 character name (e.g. uat or dev2 etc). This checklist refers to this environment short-name as the envname.
This change adds the specified domains to the acquia-purge registry. This means the varnish cache for these domains will be automatically purged. If a sub-domain is attached to an environment and is NOT listed here, then it will not be automatically purged as content is changed.
This change directs the new environment to request images and files from a shared (linked) folder rather than the default sites/default/files folder. The folder is linked to conserve file space, as each environment requires essentially the same set of images and files.
The following steps need to be completed to allow single sign-on via PingFederate.
To use the environment as a Drupal site, you need to attach a branch from the Acquia git repository. For detailed instructions, see the On Demand section.
Run phpcs on your custom modules
PHP CodeSniffer (https://github.com/squizlabs/PHP_CodeSniffer) is already included with our D8 project via Composer. If you run lando composer install, you should have it available at ./vendor/bin/phpcs.
1. You need to specifically download the Drupal coding standards using the coder module. You can do this globally for your computer by running:
2. You need to make sure phpcs knows about your newly installed coding standard (note the path below assumes you're using Ubuntu, yours might be different on a mac):
3. Now you can run this manually against your custom modules:
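Taken together, the three steps above might look like the following sketch. The installed_paths shown is the typical Ubuntu location for a global Composer install, and the module path assumes this project's docroot layout; verify both on your machine.

```shell
# Sketch of steps 1-3. Guarded with command -v so it is safe to paste
# even on a machine where the tools are not yet installed.

if command -v composer >/dev/null; then
  # 1. Download the Drupal coding standards (the drupal/coder package).
  composer global require drupal/coder
fi

if command -v phpcs >/dev/null; then
  # 2. Tell phpcs where the installed standards live (Ubuntu path shown;
  #    on a Mac the Composer home is usually ~/.composer instead).
  phpcs --config-set installed_paths "$HOME/.config/composer/vendor/drupal/coder/coder_sniffer"

  # 3. Run phpcs against the custom modules.
  phpcs --standard=Drupal,DrupalPractice --extensions=php,module,inc,install \
    docroot/modules/custom
fi
```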
If you're looking for more info, here's a good place to get started: https://www.drupal.org/docs/8/modules/code-review-module/installing-coder-sniffer
City of Boston uses Docker containers for local development.
Lando is used by the Drupal development team to manage the docker containers and provide basic tooling for the local development environment.
City of Boston strives to automate the develop, test, package, and deploy process at each step, from local development through to deployment in live production.
This page is out of date and needs review (as of 17 June 2021).
The repository is cloned in a local folder and ready for building.
This entry condition can be achieved:
If you have not yet built the boston.gov website on your local machine, or
If you have cloned or created a new branch that you wish to build, you can run the doit rebuild quick script, or
If you have the repository cloned, but wish to delete it and rebuild a fresh website from a branch on the GitHub repository, you can run doit rebuild full <branch>. If you don't specify a branch, then develop will be used.
The local developer is responsible for creating the local development environment.
The local build process is defined and controlled by Lando when lando start is executed.
The doit scripts serve to prepare the cloned repository prior to running lando start.
Lando
lando start causes the following processes to be run from .lando.yml:
Three standard Linux (Ubuntu) containers are created: one optimized as an appserver with Apache, one optimized as a database server with MySQL, and one with Node.
Install the required/dependent packages and tools -including Phing and Composer.
Create and install XDebug and other Apache/PHP settings files.
Set Apache vhosts and the containers' network configs. (done by Docker via Lando).
Start all 3 containers.
Launch the phing script setup:docker:drupal-local.
Phing
The phing script setup:docker:drupal-local in reporoot/scripts/phing/tasks/setup.xml executes the following:
Download Drupal dependencies into Apache appserver container - including Drush. (done using Composer).
Download confidential settings and copy into Drupal file system (using Git).
Install Drupal by installing a new database on the database container. (using Drush).
Install Drupal modules and load configuration files. (using Drush).
Run Drupal's Update process to load updated-settings from modules. (using Drush).
Modify Drupal settings with localized settings.
Reset the admin password and issue a login URL. (using Drush).
Run Linting Test using PHP Linting. (done by PHP via Phing, launched by Travis).
Run Code Sniffer Test. (done by Squizlabs PHP_CodeSniffer via Phing, launched by Travis).
(coming soon) Run Behat behavioral tests. (done by Behat via Phing, launched by Travis).
(coming soon) Run PHPUnit functional tests. (done by PHPUnit via Phing, launched by Travis).
For local development, the docker container build is controlled by Lando, with Phing being used to build Drupal.
When a Pull Request is created to merge code into the develop branch on GitHub, a test build and some automated testing are run by Travis. Travis is used in place of Lando to initiate and control the build process as described above (i.e. Travis builds Docker containers on GitHub/Travis infrastructure, whereas Lando builds Docker containers on local machines). In both cases the Travis and Lando scripts are very similar in structure and as identical as possible in function. Once the containers are built, both tools use the same Phing scripts to build and initiate Drupal.
(coming soon) Terraform will be used to spin up on-demand test/develop/experiment/demo instances of the containers (i.e. the websites) on AWS infrastructure. In this case Terraform scripts will be used to control the build in place of Lando - but (as with Travis) will be as similar as possible in function. Again, once the containers are built on AWS, the same Phing scripts will be used to build Drupal.
Set up environment for Drupal development on various operating systems.
Select your operating system from below, and follow the instructions to set up your development environment and prepare to install the City of Boston Drupal 8 website.
Tip
You can (re)use an existing key on your development computer, so long as it meets the requirements of GitHub.
How to create SSH keys for github
Be sure you load the public keys you create into GitHub.
Tip
You can (re)use an existing key on your development computer, so long as it meets the requirements of Acquia.
City of Boston recommends the Ubuntu 16.04 or later distribution. While other Linux distributions will operate well, the instructions below assume the use of Ubuntu and, in particular, the apt package manager.
Check Docker pre-requisites.
If using PHPStorm, install Docker-machine
At their core, Mac operating systems are similar to Linux and therefore the same basic steps apply to Macs as they do for Linux.
Git is usually installed; on most operating systems you can verify this by typing the command below at a terminal prompt. This process has the advantage of prompting you to install Git if it's not there.
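The check itself is simply:

```shell
# Prints the installed version; on a Mac this will offer to install the
# Xcode command line tools (which include Git) if Git is missing.
git --version
```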
Enter the command below. This will install a brew-community version of Lando, including docker as explained here.
Using brew is quick and simple and will definitely get you started. If you later find that you have issues with Lando and/or Docker versions, then follow the instructions on this page under the title "Install DMG via direct download" to get the latest versions.
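At the time of writing, the Homebrew install is a single cask command (the cask name is an assumption here - confirm it against the Lando docs referenced above):

```shell
# Guarded so the snippet is a no-op where Homebrew is not installed.
if command -v brew >/dev/null; then
  # Installs the community-packaged Lando, including Docker.
  brew install --cask lando
fi
```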
Because Drupal is most commonly installed on Linux servers, City of Boston DoIT does not recommend using Windows® as a developer machine due to the increased difficulty in emulating the most common Drupal production web server.
However, if you have no alternative, or harbor an unquenchable desire to use Windows® then the following best practices and instructions should get you headed in the right direction.
There are many IDEs capable of being used to write, verify and deploy PHP code. City of Boston does not endorse any particular platform, but has successfully used the following:
Notepad++ (basic text editor)
Sublime Text (improved text editor)
VIM (Linux-based advanced text editor)
Visual Studio Code (full IDE)
Eclipse (full IDE)
PHPStorm (full IDE)
Attach a GitHub branch to an Acquia environment.
On demand instances of the Drupal site (boston.gov) are useful to demonstrate new features or functionality sand-boxed away from the continuous-deployment process.
These on demand versions of boston.gov are designed to be housed on a near-duplicate environment to the production site, and be easily accessible in a normal browser from anywhere by people with the correct link.
Acquia provides six environments to CityOfBoston.
The dev, stage (test) and prod environments are associated with git branches used in the continuous-deploy workflow and cannot be attached to different branches or repository tags without disrupting and potentially breaking the workflow.
The dev2, dev3, ci and uat environments can track any desired branch or tag (even develop-deploy or master-deploy) without disrupting the continuous-deployment workflow.
This process has been decommissioned and some of the processes below are no longer implemented in scripts.
This page is left here only to provide background should COB decide/require to have Drupal in an AWS managed container.
You can push your local repository up to a test instance on our staging cluster on AWS. This will let you show off functionality using data from a staging snapshot of Boston.gov.
You will need a full development environment and Drupal 8 installed on your local machine (refer to earlier notes).
Install the AWS Command Line Interface.
Get a “CLI” IAM user with an access key and secret key.
Use aws configure to log your CLI user in locally. Use us-east-1 as the default region.
Request your CLI IAM user credentials from DoIT.
To create a place to upload your code, follow the instructions in the CityOfBoston/digital-terraform repository to make a “variant” of the Boston.gov staging deployment.
To push your local repository up to the cluster, run:
Where <variant> is the variant name you created in CityOfBoston/digital-terraform.
This will build a container image locally and upload it to ECR. It will then update your staging ECS service to use the new code.
By default, the container startup process will initialize its MySQL database with a snapshot of the staging environment from Acquia.
After the container starts up and is healthy, the doit script will print useful URLs and then quit.
Direct SSH access is not generally available on the ECS cluster. To run drush commands on your test instance, you can visit the webconsole.php page at its domain. This will give you a shell prompt where you can run e.g. drush uli to get a login link.
The webconsole.php shell starts in docroot.
Talk to another DoIT developer to get the webconsole username and password.
NOTE: Each time you deploy code to your test instance it starts with a fresh copy of the Drupal database.
If you want to preserve state between test runs, log in to webconsole.php and run:
(The .. is because webconsole.php starts in the docroot.)
This will take a snapshot of your database and upload it to S3. The next time your test instance starts up, it will start its sync from this database rather than the Acquia staging one.
The database will also be destroyed when the AWS containers are restarted for any reason. It is good practice to stash your DB regularly.
To clear the stash, so that your database starts fresh on the next test instance push, use webconsole.php to run:
Here is a snapshot of the doit script referred to above.
Elsewhere this might be termed spinning up an on-demand instance of the site.
Make sure you have the latest copy of the main Drupal 8 repository cloned to a folder <repo-root-path>.
Checkout the branch develop and make sure the latest commits are pulled (fetch+merge) locally.
Commit your work to a new branch (on-demand-branchname) off the develop branch.
Push that branch to GitHub, but do not create a PR or merge into develop.
Edit the <rep-root-path>/.travis.yml file and make the following additions:
(Note: replace <on-demand-branchname> with on-demand-branchname.)
Edit the <rep-root-path>/scripts/.config.yml file and make the following additions:
(Note: This partial example addition is configured to deploy to the Ci environment on Acquia.)
(Note: replace <on-demand-branchname> with on-demand-branchname.)
Commit the .config.yml and .travis.yml changes to on-demand-branchname and push to GitHub - but do not merge into develop.
Make a small inconsequential change to the code, commit it to the on-demand-branchname branch, and push to GitHub. This will cause the first-time build on Travis, and deploy into the on-demand-branchname-deploy branch in the Acquia repository.
The Travis build can be tracked here in Travis.
Login to the Acquia Cloud console. In the UI, switch the code in the Ci/Uat environment to the on-demand-branchname-deploy branch.
This will cause a deploy on the Acquia server, which will copy across the current stage database and update it with configuration from the on-demand-branchname branch.
The "on-demand" environment is now set. Users may view and interact with the environment as required. See Notes in "gotcha's" box below.
Once you have finished the demo/test/showcase cycle, you can merge the on-demand-branchname branch to develop - provided you wish the code changes to be pushed through the continuous-deploy process to production.
Finally, you can detach the on-demand-branchname branch from the Acquia environment and set it back to the tags/welcome tag.
You can direct users to the URLs below; select the environment you switched to the on-demand-branchname-deploy branch (in step 8) from the table below.
Housekeeping.
When finished with the environment, you should consider rolling back the changes you made to .travis.yml and .config.yml in steps 4 & 5 before finally merging on-demand-branchname to develop.
It is likely that the on-demand instance is no longer required, and it's unnecessary for the on-demand-branchname to be tracked by Travis.
Also, as a courtesy, change the branch on the environment back to tags/WELCOME so it is clear that the environment is available for use by other developers.
Updating: If you push changes to on-demand-branchname in GitHub (which eventually causes Acquia's on-demand-branchname-deploy to be updated), then in Acquia's terminology you are "updating" the code.
Any commits you push to the GitHub on-demand-branchname branch will cause Travis to rebuild and update the code on the Ci/Uat environment, and this will cause Acquia's post-code-update hook script to run.
- That update-hook script will back up your database and apply any new configurations, but will not update or overwrite any content (so changes made by users will be retained).
Deploying: If you switch the code on the Acquia server from on-demand-branchname-deploy to some other branch or tag, and then back again, then in Acquia's terminology each switch of branch is a "deploy" of the code. GitHub is not affected by this change, so nothing will run on Travis, but once each switch is complete, Acquia's post-code-deploy hook script will run.
- That deploy-hook script will sync the database from the stage environment and will overwrite any content in the database. Therefore, any content previously added/changed by users will be lost.
Setting up the Visual Studio Code editor to work well with Drupal
Edit .vscode/launch.json and add the following configuration:
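A typical "Listen for Xdebug" configuration for the PHP Debug extension looks like this (a sketch: the /app path mapping assumes Lando's default in-container mount point, and the port assumes a legacy Xdebug 2 setup - adjust both to your environment):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for Xdebug",
      "type": "php",
      "request": "launch",
      "port": 9000,
      "pathMappings": {
        "/app": "${workspaceFolder}"
      }
    }
  ]
}
```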
Click in the top navbar and navigate to File > Preferences > Settings.
Under Workspace Settings, expand the Extensions option.
Locate PHP CodeSniffer configuration, scroll down to the Standard section, and click the "Edit in settings.json" link.
Add the following configuration to your Workspace Settings:
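A minimal example for the phpcs extension, assuming the project-local binary and the Drupal standard installed earlier, would be:

```json
{
  "phpcs.enable": true,
  "phpcs.executablePath": "./vendor/bin/phpcs",
  "phpcs.standard": "Drupal"
}
```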
General familiarity with the Drupal platform is a baseline requirement for anyone working on the site. We suggest reading through the following user guide to Drupal 8:
Additionally, Acquia's free training program Acquia Academy offers a series of YouTube video tutorials which can be found here, including a Drupal 8 Beginner's Course:
Under Extensions in the left sidebar, search for "PHP Debug" and click "Install"
Under Extensions in the left sidebar, search for "phpcs" and click "Install"
Environment      URL
uat              (public DNS entry)
ci               (public DNS entry)
dev2             (no DNS - make entry in local hosts file)
dev3 (pending)   https://d8-dev3.boston.gov (no DNS - make entry in local hosts file)
Three options for setting up a development environment on Windows.
Because Drupal is most commonly installed on Linux servers, City of Boston DoIT does not recommend using Windows® as a developer machine due to the increased difficulty in emulating the most common Drupal production web server. However, if you have no alternative, or harbor an unquenchable desire to use Windows® then the following best practices and instructions should get you headed in the right direction. There are 3 strategies to choose from:
This is the most complicated solution to setup, but allows the developer to use any windows-based tools desired to manage the Drupal codebase and databases.
The git repo is cloned to a local Windows folder on the Windows host. This repo folder is mounted into a Linux (Ubuntu) Docker Container (like a VM). Docker manages the virtualization and the container contains all the apps and resources required to host and manage the website locally for development purposes. Git commands are run either from the Windows host, or from the container. Lando (a container manager tool) provides a “wrapper” whereby commands (e.g. Docker, Lando, Git, Phing, Drush, Composer, SSH etc) are typed into a console on the Windows host, and Lando executes them inside the container. To be clear, with this strategy:
The container hosts the website
The developer normally changes/adds/removes Drupal files in the Windows folder on the Windows host
Changes to custom Drupal files (i.e. to files in the mounted folder) either on the host or in the container are immediately available to both the host and container without restarting docker or VMs
The developer normally runs dev tools such as Git, Drush, Phing and Composer in the container, using Lando commands
The Windows host does not require tools other than Docker, Lando and VBox or Hyper-V installed on it
Some developers still like to have git installed on the Windows host so their IDE tools (e.g. PHPStorm) can manipulate the repos directly
Developers’ need to interact directly with the container (i.e. via ssh) is minimized, and
This installation creates a developer environment suitable for a Linux-based production deployment.
Due to Lando requirements to use Docker CE (not Docker Toolkit), which in turn requires Hyper-V, you:
NEED to have a Windows 10 64-bit Professional or Enterprise version
CANNOT use Windows 7 or earlier
CANNOT use Windows Home or Home Pro, as Hyper-V is required by Lando and does not ship with Home versions.
These 6 steps are all performed on the host (i.e your Windows®) PC.
This is required to supply a Linux core which is needed by Docker to generate the necessary containers.
Install Windows Subsystem for Linux (preferred method)
These instructions also depend on having a current version of Windows® 10 (later than the Fall Creators Update, preferably build 16215 or later).
To install WSL support, do the following:
Open Windows Powershell as Administrator
Run:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
Restart Windows when prompted
Taken from here
Install Linux Distro
DoIT suggests you install the Linux distribution from the Microsoft Store which most closely matches the Linux distro you will use on your production webservers. If you are unsure, install Ubuntu or Debian.
Install Hyper-V
If Hyper-V was not enabled when the Linux subsystem was installed (check by typing "Hyper-V" in the start menu), then follow these instructions.
If you are not using WSL, then Git for Windows provides a bash terminal for the Windows host. Installing Git for Windows is a convenient way to get this, and also gives the developer the option to directly execute git commands (against the repo) from the Windows host. This step is optional if you use WSL, or if you are confident with some other tool to provide a bash-style console. Get Git for Windows from here. This is a good tutorial to step through installation.
If you are using WSL and have enabled Hyper-V for your virtualization, then use the Docker “community version” from here - this link also guides you through an install.
Download the latest Windows .exe installer from here.
On Windows®, DoIT recommends:
In order to use VS Code for Drupal development, use this guide as a starting point. The editor is highly configurable with many extensions available. You will likely want to customize it further based on your needs.
Pickup from step 3 on the quick install guide.
This solution may be a quick and viable option if you have a powerful Windows machine to use as the host, and are not doing much development that requires extensive use of an IDE. Depending on your setup, there may be issues with IP address routing, requiring complex configurations.
This method is not used by City of Boston DoIT; the preferred solutions on Windows machines are A or B.
For Windows® versions before the 10 Fall Creators Update, we recommend using VirtualBox (free from Oracle).
For later versions, you should enable and use Hyper-V within Windows.
In the VM, install a Linux distro as close as possible to the production distro you will use, and unless you are very comfortable with the Linux CLI, be sure to install a distro with a GUI.
Once the Linux distro is installed, then follow the setup instructions for Linux.
Contact the AWS administrator to get credentials for logging into the AWS console and (if necessary) interacting with AWS via the command line.
Once you have a login to the AWS console: if you wish to use the AWS-CLI, or use any other command line program which connects to AWS (e.g. git for CodeCommit) you will need to register/add an SSH key on your AWS-CLI account.
You can use an existing ssh key, or create a new one.
You need to install the AWS CLI if you, or a tool you use, needs to interact with AWS from the command line - for example:
To use terraform to maintain AWS
To deploy webapps to AWS
To modify AWS objects from the command line
Follow the instructions here.
You want to install the AWS-CLI on your local machine, not inside a container. Follow the Mac, Windows or Linux instructions according to the OS you are using.
Verify the AWS CLI is installed using the LINUX console:
You should see an output something like:
If not then return to the "Install AWS-CLI" section above.
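The check is typically the following (guarded here so it can be pasted as-is):

```shell
if command -v aws >/dev/null; then
  # Prints a version string such as "aws-cli/1.x.x Python/3.x ..." when
  # the CLI is correctly installed.
  aws --version
else
  echo "aws CLI not found - return to the Install AWS-CLI section"
fi
```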
Obtain your secret access keys for AWS from the AWS administrator, and then create the AWS credentials file using the LINUX console:
Alternatively, you could create and edit the ~/.aws/credentials
file using any text editor.
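If editing by hand, the file follows the standard AWS credentials format (placeholder values shown):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```

Note that the default region (us-east-1) is stored separately in ~/.aws/config, which aws configure also writes for you.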
The following is a printout of the console from a typical build following the instructions on Installation Instructions page.
Specifically this output is from the command:
The log above was generated using lando start with this .lando.yml landofile.
The log above was generated using lando start with this config.yml project file.
Creates a Drupal 8 container, a MySQL container and a Node container, and connects them all up.
For more detailed install and usage instructions for various platforms, see "More Help" below.
Ensure you have set up your development environment as described here:
Clone the public boston.gov-d8 repo into a local folder
git clone -b <branchname> git@github.com:CityOfBoston/boston.gov-d8.git
(City of Boston DoIT recommends that the develop branch be used)
On host computer, change directory to the repository root and use lando to create and start containers:
lando start
Depending on the power of the host machine, the Drupal 8 build process for boston.gov can take more than 15-20 minutes. The composer install and site install (esp. config import) tasks can take 5-10 minutes each - with no updates being directed to the console.
-> You can follow the process by inspecting the log files in docroot/setup/ - there are links to these files in the console.
From the repository root (on host):
lando - to view a list of available lando commands
lando phing -l - to view phing tasks
lando drush <command> - to run drush commands
lando ssh - to login to the docker container as www-data
lando ssh --user=root - to ssh and login as root
lando ssh <servicename> - where servicename = appserver / database / node
To reduce typing at the console, you can add the following aliases to your ~/.bashrc, ~/.bash_aliases or ~/.bash_profile files on your development (host) OS.
With these aliases, typing lls <folder> in a console will use lando to run ls -la <folder> in the default container (in our case appserver) and list the files there, whereas ls <folder> will list the folder locally (i.e. on the host) as usual.
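As a sketch of how such a helper might look (written as a shell function rather than a plain alias, so the folder argument can be passed through into the container command):

```shell
# Run `ls -la <folder>` inside the default Lando container
# (appserver for this project).
lls() {
  lando ssh -c "ls -la $1"
}
```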
For more information on installation, usage and administration of the development area, go to the next section.
Using Lando in City of Boston.
For our purposes, Lando is a PHP-based tool-set which does 3 main things:
Provides a pre-packaged local docker-based development environment,
Provides a wrapper for common docker container management processes,
Provides a means to execute development commands inside the container from the host.
Lando curates an appropriate LAMP stack for Drupal development, largely removing the need for this skill in the local development team. The stack is contained within:
Docker images that are maintained by Lando.
A configuration file (landofile) which Lando parses into the necessary dockerfiles and utility scripts
COB uses a landofile which can be found at /[reporoot]/.lando.yml
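For orientation, a minimal landofile for a Drupal 8 project has this shape (illustrative only - not COB's actual file, which defines many more services and tooling commands):

```yaml
name: boston
recipe: drupal8
config:
  webroot: docroot
  php: "7.2"
```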
Lando provides a CLI for tasks developers commonly need to perform on the container.
A full list of defined Lando commands can be obtained by executing:
lando
lando drupal-pull-repo
lando drupal-sync-db
lando drupal-pull-repo --no-sync && lando drupal-sync-db
lando rebuild
or to be completely sure, run these commands from the parent folder of <repo-path>:
lando destroy && rm -rf <repo-path>
git clone -b develop git@github.com:CityOfBoston/boston.gov-d8.git <repo-path>
lando start
If the installation has completed without errors, then you should be able to check the following:
The production/public website is hosted by Acquia and can be accessed at https://www.boston.gov.
The local development version of the public website can be viewed at https://boston.lndo.site. This local copy of the Drupal website is served (by Apache) from the appserver docker container, and its content is stored and retrieved from a MySQL database in the database docker container.
You will find the CityOfBoston/patterns repo cloned into the root/patterns folder on your host dev computer.
The production/public patterns library is hosted by City of Boston from our AWS/S3 infrastructure and can be accessed at https://patterns.boston.gov.
The local development version of the patterns library is hosted by Fleet and can be viewed at https://patterns.lndo.site. This local copy of the Fleet website is served (by Node/Fractal) from the patterns docker container.
The gulp, stencil, fractal and other services running in the patterns docker container will automatically build the local Fleet static website into root/patterns/public from the underlying files in real time as they are changed.
The production/public patterns CDN is hosted by City of Boston from our AWS/S3 infrastructure at https://patterns.boston.gov.
The local development version of the CDN is hosted by Fleet at https://patterns.lndo.site. This local CDN is served (by Node/Fractal) from the patterns docker container.
For some people, working within Lando containers slows down and crashes their environment. To fix this, they can work outside the Lando containers (patterns.lndo.site) and directly with localhost:3030.
The local development version of the CDN is hosted by Fleet at http://localhost:3030. This local CDN is served (by Node/Fractal) from your local environment.
In the context of this document: think of Lando as a wrapper for a LAMP stack constructed in Docker.
The repo that was checked out in Step 1 of the installation is hosted on your dev computer, and is mounted into each of the docker containers. As you make changes to the files on your dev computer, they are instantly updated in all of your local docker containers.
For developers using the PhpStorm IDE: how and where to update your settings/preferences to make debugging and developing in Drupal easier.
Steps 1 - 7 must be completed while the computer is connected to the city network.
Using Windows POWERSHELL (as Administrator):
Launch POWERSHELL as administrator: search powershell
from Windows search
Alternative strategy
This may work without Windows requesting a restart at the end.
Using CMD (console):
To open a CMD console search for cmd
in the Windows search
Alternative strategy:
This may provide a more fault-tolerant WSL environment when switching between the city network and an external network (because we control where the distro is installed, and it's not in the user's profile).
Using LINUX (WSL) console:
To get the Linux console, open a CMD console, type: wsl
@see https://docs.microsoft.com/en-us/windows/wsl/wsl-config
These configuration files tweak the WSL environments to enable a better developer experience based on a standard CoB laptop configuration (i.e. minimum i7 chip, 32GB RAM and an SSD hard disk).
Using a POWERSHELL console from the windows host:
Using a LINUX console (WSL):
Using LINUX console
If you have trouble accessing the internet from WSL, first try RESTARTING the computer.
If that does not work, using a LINUX console try:
=> then restart the computer.
Mount your development folders into WSL using the LINUX console:
Replace c:/Users/xxxx/sources
with the location in the windows host where you plan to keep all development source files.
This is the folder where you will be cloning the CoB repos.
If in doubt, create a sources
folder in your windows home folder, and for the command above just replace xxxx
with your CoB supplied EmployeeID/User Account.
Replace yyyy
with the accountname you used when you installed WSL (you can find this in the LINUX console by running cd ~ && pwd
- the path displayed will be in the format /home/accountname)
Download installer from https://docs.docker.com/desktop/windows/install/
Double click the installer to launch:
+ Click OK to accept the non-Windows app
+ Select WSL2 as the backend (rather than Hyper-V)
Docker desktop does not automatically start after the install, you need to start it the first time from the Start menu.
Restart your computer after this step.
If you do not, and subsequently restart the computer while off the city network, your installation will be broken, and you will have to remove Docker and WSL, and start over.
(see "Docker Fails to Restart" notes below to fix broken/non-functional WSL installs)
Verify AWS is installed using LINUX console:
You should see an output something like:
aws-cli/2.7.4 Python/3.9.11 Linux/5.10.102.1
.....
Obtain your secret access keys for AWS from the AWS administrator, and then create the AWS credentials file using the LINUX console:
Alternatively, you could also create and edit the credentials file using vim
which is installed in the WSL instance (from step 5 above).
Add your ssh keys into your Windows account (typically into a folder on your home drive) and then from a LINUX console:
Replace xxxx with your EmployeeID/User Account from CoB.
Microsoft Visual Studio Code (VSC)
PHP Storm
Using POWERSHELL:
Using POWERSHELL:
Using LINUX console:
Replace xxxx
with your CoB supplied EmployeeID/User Account.
Replace yyyy
with the accountname you used when you installed WSL.
Using LINUX console:
Replace yyyy
with the accountname you used when you installed WSL.
Using LINUX console:
Using Powershell (as Administrator):
From Powershell console reinitialize WSL:
From LINUX (WSL) console reset the nameserver so you can access the internet:
Where X.X.X.X is the IP address: 8.8.8.8 when in the office (confirm whether a different address is required), or 10.241.241.70 when not on the city network but using a VPN.
If, when restarting the computer, Docker fails to start and/or you get the following error when starting WSL:
The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.
To fix this, perform the following steps.
Step 1: Using Powershell (ps) as Admin:
Step 2: Then using a CMD shell (as Admin)
Step 3: Restart Docker for Windows from the start menu.
Command | Explanation |
Starts all 3 lando containers, building them if they don't already exist. |
Stops all 3 containers, but does not delete or destroy them. They can simply be restarted later. |
Will rebuild the container using the values in the
|
Will destroy the container.
|
Command | Explanation |
Opens a bash terminal on the appserver docker container. If the -c switch is used,
then a terminal will be opened, the command provided will be run in the container and then the session will be closed.
|
| Executes a drush cli command in the appserver container:
|
| Executes a Drupal cli command in the appserver container:
|
| Executes a Composer command on the appserver container:
|
| Executes a CoB script which copies the database from the stage environment to the local development environment, and syncs all the configurations etc.
|
| Executes a CoB script which pulls the latest project repository from GitHub and then clones and merges the private -repository. Finally it runs sync tasks to update the DB with any new configurations.
To update the repos without syncing the content, execute:
|
| Locally runs the linting and PHPCode sniffing checks that are run by Travis. |
| Allows you to switch between patterns CDN hosts. |
This is a series of videos around site building and administration tasks. The individual videos in the series are listed in the upper right panel of the screen:
https://www.youtube.com/watch?v=R1ivRsz_urk&list=PLpVC00PAQQxGwyvUD_tYcBbLJqRC1CZ6U
City of Boston use Acquia to host our Drupal website.
Acquia provide a number of different environments for CoB to use. One of those environments is production; the others are non-production, named stage, dev, uat, ci and dev2.
Detail on deployment is covered elsewhere, but in summary we are able to "bind" certain branches of our GitHub repo (CityofBoston/boston.gov-d8) to these Acquia environments, and when changes occur in those branches, a deployment is automatically triggered.
Therefore, the way we branch-off, push-to and merge the "bound" branches is important.
The develop
branch is bound to the Acquia dev environment, and the master
branch to the stage environment. Changes cannot be made directly onto the master
branch, and changes should not be made directly onto the develop
branch - except when hotfixes are needed.
Best Practice is to create a working branch off develop
, then check out that working branch
locally.
Updated code should be committed to the locally checked out copy of the working branch
Updating the local working branch
will update the local containerized website for testing.
Periodically, the local working branch
should be pushed to the remote working branch
in GitHub.
Updating the working branch
in GitHub will not trigger any deploys or update any website.
To start the deploy to the dev environment, a PR is created in GitHub to merge the working branch
in GitHub into the develop
branch in GitHub.
Merging will trigger a build and the website on the dev environment will be updated.
When ready to deploy to the stage environment, a PR is created in GitHub to merge the develop
into the master
branch in GitHub.
Merging will trigger a build and the website on the stage environment will be updated.
To deploy to the production environment, use the Acquia Cloud UI - see continuous deployment notes.
We can bind a branch to the dev2, ci or uat environments so that we can share proposed or interim website changes with stakeholders or other individuals where a local containerized website is not appropriate. These environments can be considered on-demand, and the way to update them is similar to, but slightly different from, the normal deploy pipeline, requiring an extra branch.
Branches attached to environments other than dev, stage and production in Acquia are termed environment branches (see also On-Demand Instances).
Initially, an environment branch
is created from the develop
branch.
This environment branch
is then bound to the desired Acquia environment (dev2, ci or uat).
Developers then create a working branch
off the environment branch
and check out that working branch
locally.
Developers commit their work to the local copy of the working branch
which can be pushed to the remote working branch
in GitHub whenever desired.
Updating the local working branch
will update the local containerized website for testing.
Updating the working branch
in GitHub will not trigger any deploys or update any website.
When ready to update the website on the bound environment, using a PR, the GitHub copy of the working branch
is merged to the environment branch
in GitHub.
Merging will trigger a deploy to the bound Acquia environment (i.e. dev2, uat or ci) and update the website on that environment.
Stakeholders can be directed to the website on the Acquia environment.
Once the project or piece of work is complete, a PR to merge the GitHub environment branch
to the develop
branch is created.
Merging will trigger a deploy to dev and update the website.
To continue to deploy to stage and production environments, follow the notes in Normal Deploy Pipeline above.
Sometimes a picture is worth 1,000 words.
In the above diagram,
Lines with an arrow indicate a merge to the branch in the direction of the arrow.
Lines with a dot connector indicate the creation (or updating) of a branch - and when the line is to a local branch it is a checkout to a local branch.
The master
branch is the production branch and cannot be pushed/merged to directly.
The correct way to update master
is to merge the develop
branch into the master
branch.
At all times the master
branch should be a copy of the code on the production environment. (see continuous deployment)
Green arrows cause a deployment process:
Only if the branch being merged into is bound to an Acquia environment, and
This is controlled/executed by Travis, taking approx 3 mins (uses 30 Travis credits), and
The website hosted on the Acquia Environment is updated during the deploy.
Orange arrows cause a build, test and deployment process:
Only if the branch being merged into is bound to an Acquia environment, and
This is controlled/executed by Travis, taking approx 30 mins (uses 300 Travis credits), and
The website hosted on the Acquia Environment is updated during the deploy.
Travis is configured so that this extended process usually only runs when committing to the develop
branch - triggering a deploy to the Acquia Dev environment as the first step of the deployment pipeline.
Black arrows indicate a simple commit/merge process with no building or deploying:
Best practice requires that a working branch
is not bound to Acquia Environments
Merging does not trigger Travis, there is no deploy and 0 Travis credits are used
Note: A GitHub environment branch
can be bound to one or more Acquia Environments. When this is the case, deploys will occur simultaneously to all bound environments when the GitHub environment branch
is updated.
Travis always controls deploys, but only one set of credits is used per environment branch
merge regardless of how many Acquia environments it is bound to.
Caching considerations for Drupal with Acquia
Memcache is not used on boston.gov (at this time).
You can inspect the headers of requests to a webserver to see if varnish is enabled, and if content was served from the Varnish and/or Drupal caches.
This terminal command will return the headers from a request to a URL:
Examples:
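A minimal sketch using curl (the exact header names returned — Age, X-Cache, X-Varnish, Via — vary by Varnish configuration, so treat them as assumptions):

```shell
# Request only the response headers (-I) quietly (-s), then filter for
# cache-related headers. An "Age" greater than 0 or an "X-Cache: HIT"
# style header suggests the response was served from the Varnish cache.
curl -sI https://www.boston.gov/ | grep -iE '^(age|x-cache|x-varnish|via):'
```

Repeating the request a few times can be informative: the first request typically primes the cache, and subsequent requests should show an increasing Age value.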
Is "passive" caching: Varnish is not aware of the origin of html content it serves/caches.
Is outside of the Acquia load-balancers and is the first cache a user request hits.
Does not cache content for authenticated users.
On boston.gov, the Acquia Purge module is configured to remove entities (pages) from the Varnish cache as they are updated by content editors in Drupal. This invalidation process uses queues in Drupal. The Drupal queue processor is triggered by cron and runs until the queue is exhausted.
On production
cron runs every 5 minutes,
so (if there is no active queue) it could take up to 5 minutes for content changes to appear.
On stage
and develop
cron runs every 15 minutes.
Acquia provide the memcached
libraries on its environments, and will configure special memory allocations for memcache on request.
Memcache modules are not enabled on the City of Boston Drupal 8 environments.
Images:
Static Content: (typically web-pages built from a Drupal content type)
When an entity (bit of content) is updated in Drupal, its tags are invalidated. Pages which use that content (and which are already cached by Drupal) are also invalidated. The next time such a page is requested, a rebuild/regeneration and re-cache occurs within Drupal.
When a page is invalidated in Drupal, Varnish is notified and the page is also invalidated in the Varnish cache.
Because Drupal caching and invalidation is now so effective, the page-expiry for nodes should be set to a large value (> 1 month). This is done in the APE configuration.
Dynamic Content: (typically REST end-points and web-pages built from, or containing Drupal views)
Views can be given a lifetime, and set to expire a certain time after the last time the view's underlying query was run. As I understand it, with time-based caching there is no invalidation of the node, but as the content expires it will be re-cached by Drupal using the traditional (Drupal 7) method. The page containing the view should be set to expire after a relatively short period (in APE) - around the same time value as the view cache expiry. Unless told otherwise, Varnish expires the page after 2 minutes.
REST endpoints should be given an expiry in APE.
The Varnish cache performs 2 functions, one intended and one somewhat unintended.
Reduces load on the application server (i.e. webserver), but also
The cache will continue to serve cached pages even if the application server (webserver) is down or otherwise unavailable. Any cached pages in varnish will continue to be served until the pages expire in the cache. Note: Not all pages are cached, and authenticated sessions are not cached.
The configuration system for Drupal 8 and 9 handles configuration in a unified manner.
By default, Drupal stores configuration data in its (MySQL) database, but configuration data can be exported to YAML files. This enables a site's configuration to be copied from one installation to another (e.g. dev to test) and also allows the configuration to be managed by version control.
TIP: Configuration data (aka settings) includes information on how custom and contributed modules are configured. Think of configuration as the way developers define how the Drupal back-end functions, and what options will be available to content authors.
Configuration is very different to content. Content is information which will be displayed to website viewers in Drupal nodes. Content is also stored in the database, but is not managed by the configuration system.
Drupal has a built in configuration management system, along with drush CLI commands to import and export configurations.
Configurations are saved in a folder (the config sync directory) on the webserver hosting the Drupal website. This folder is defined in the settings array $settings['config_sync_directory']
which is defined in the settings.php
file. This folder is defined relative to the docroot
folder, and is typically outside of the docroot, for example:
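A sketch of the setting (this document elsewhere refers to CoB's sync folder as ../config/default, but confirm the actual path for your environment):

```php
// settings.php
// The path is relative to the docroot, so this folder sits one level above it.
$settings['config_sync_directory'] = '../config/default';
```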
drush cex
exports configurations from the database into the config sync directory.
drush cim
imports configurations from the config sync directory into the database.
Module Exclusions: The Configurations for an entire module can be excluded from both of the drush cim / cex
processes by defining them in the $settings['config_exclude_modules']
array in the settings.php
file. For example:
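A sketch of such an exclusion (the module names here are illustrative, not CoB's actual list):

```php
// settings.php
// Configuration for these modules is excluded from both drush cex and drush cim.
$settings['config_exclude_modules'] = ['devel', 'stage_file_proxy'];
```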
WARNING / CARE: If you add modules into this list, then they will be removed from the core.extension.yml
file during the next config export. This means these modules will be uninstalled/disabled on any environment in which these configs are imported.
As a rule of thumb - only add modules to this array that you wish to be removed for all environments other than the one you are developing on.
The Drush CLI is the main CLI utility and is installed and enabled on the CoB Drupal backend.
config:delete (cdel)
Delete a configuration key, or a whole object.
config:devel-export (cde, cd-em)
Write back configuration to module's config directory.
config:devel-import (cdi, cd-im)
Import configuration from module's config directory to active storage.
config:devel-import-one (cdi1, cd-i1)
Import a single config item into active storage.
config:diff (cfd)
Displays a diff of a config item.
config:different-report (crd)
Displays differing config items.
config:edit (cedit)
Open a config file in a text editor. Edits are imported after closing editor.
config:export (cex)
Export Drupal configuration to a directory.
config:get (cget)
Display a config value, or a whole configuration object.
config:import (cim)
Import config from a config directory.
config:import-missing (cfi)
Imports missing config item.
config:inactive-report (cri)
Displays optional config items.
config:list-types (clt)
Lists config types.
config:missing-report (crm)
Displays missing config items.
config:pull (cpull)
Export and transfer config from one environment to another.
config:revert (cfr)
Reverts a config item.
config:revert-multiple (cfrm)
Reverts multiple config items to extension provided version.
config:set (cset)
Set config value directly. Does not perform a config import.
config:status (cst)
Display status of configuration (differences between the filesystem configuration and database configuration).
Drupal Console (drupal) is an alternative CLI and is installed and enabled on the CoB Drupal backend.
config:delete (cd)
Delete configuration
config:diff (cdi)
Output configuration items that are different in active configuration compared with a directory.
config:edit (ced,cdit)
Change a configuration object with a text editor.
config:export (ce)
Export current application configuration.
config:export:content:type (cect)
Export a specific content type and their fields.
config:export:entity (cee)
Export a specific config entity and their fields.
config:export:single (ces)
Export a single configuration or a list of configurations as yml file(s).
config:export:view (cev)
Export a view in YAML format inside a provided module to reuse in another website.
config:import (ci)
Import configuration to current application.
config:import:single (cis)
Import a single configuration or a list of configurations.
config:override (co)
Override config value in active configuration.
config:validate (cv)
Validate a drupal config against its schema
These are unique to the drupal CLI; rarely needed, but they can be useful for manually creating configs for custom modules.
generate:entity:config (gec)
Generate a new config entity
generate:form:config (gfc)
Generate a new "ConfigFormBase"
generate:theme:setting (gts)
Generate a setting configuration theme
It is possible to override configurations in the php files on the Drupal back end.
Normally the configurations a developer will wish to override will be in a xxx.settings.yml file. This is where settings type configurations are defined and saved by contributed and custom modules.
The strategy to globally override a config setting for the entire Drupal site is to alter the $config
array in the settings.php
file.
Because the main settings.php
file can include different settings files for different environments, we can add global overrides to an environment-specific settings.php file to implement an override for only that environment.
TIP: Code in a settings.php file can be conditional, so the override can be made to be conditional on the value of a local (or environment) variable.
Example 1- Core config override: The system.maintenance.yml
file contains a message
key to control text that appears on the site maintenance page when shown. To override the message
key set in the system.maintenance.yml
file, place this in an appropriate settings file.
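A sketch of the override (the message text is illustrative):

```php
// settings.php, or an environment-specific settings file.
// Overrides the 'message' key of system.maintenance.yml for this environment.
$config['system.maintenance']['message'] = 'Boston.gov is down for scheduled maintenance.';
```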
Example 2- Custom/Contributed Module config override: The salesforce.settings.yml
file supplied by the salesforce
module contains keys used to authenticate against a salesforce.com account in order to sync data. To override the consumer_secret
key set in the salesforce.settings.yml
file, place this in an appropriate settings file.
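A sketch of the override (the value shown is a placeholder, not a real secret):

```php
// settings.php, or an environment-specific settings file.
// Overrides the 'consumer_secret' key of salesforce.settings.yml.
$config['salesforce.settings']['consumer_secret'] = 'REPLACE_WITH_SECRET';
```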
Override/Secrets Best Practice:
It is best practice not to save passwords and other secrets (incl API keys) in configuration files, as these will end up in repositories, and could be made public by accident.
Instead, passwords and other secrets should be stored as Environment variables on the Drupal web server, and then be set in an appropriate settings.php
file.
Example: recaptcha secret key saved as environment variable bos_captcha_secret
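A sketch of how that might look (the config key secret_key is an assumption about the recaptcha module's schema):

```php
// In an appropriate settings.php file.
// getenv() reads the secret from the web server's environment at runtime,
// so the value never appears in the repository or in exported config.
$config['recaptcha.settings']['secret_key'] = getenv('bos_captcha_secret');
```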
This means that passwords and other secrets are saved on the environment to which they apply so there is less (or no) need for environment-specific overrides.
It also means that all secrets are managed the same way, and can be changed on the environment and take effect immediately without needing to redeploy any code.
PHP commands to retrieve current configuration settings are as follows:
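For example (illustrative, using the maintenance message config object):

```php
// Returns the current value, including any overrides applied in settings.php.
$message = \Drupal::config('system.maintenance')->get('message');
```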
These commands will get the original config value, ignoring any overrides:
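For example (same config object as above):

```php
// getEditable() bypasses the override system, so this returns the value
// as stored in the database, ignoring any $config overrides in settings.php.
$message = \Drupal::configFactory()
  ->getEditable('system.maintenance')
  ->get('message');
```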
To assist with configuration management, there are a number of contributed modules.
The contributed modules are generally deployed to help manage situations where different configurations are desired on different environments.
Although this is not a contributed module, the use of .gitignore
provides a way to prevent configurations from making their way into repositories, and replicating upwards from the local development environments to the Acquia dev/stage/prod environments.
Simply add specific config files (and/or wildcards) to the .gitignore
file in the root of the repository.
Provided the files do not already exist in the repository, they will be ignored by git during commits and pushes from the local repository.
Example: .gitignore in repository/project root.
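A sketch of such an entry (the file name is illustrative, not CoB's actual ignore list):

```
# .gitignore (repository/project root)
# No folder prefix, so all occurrences of this exported config file are ignored.
salesforce.settings.yml
```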
TIP: If you don't prefix the entry with any folder paths, then all occurrences of the file will be ignored. This includes files from config exports (drush cex
) and also from config_devel exports (drush cde
- see below.)
This module provides configuration import protection. If you are concerned that importing certain configurations when using drush cim
(which is used during a deploy) will overwrite existing configurations on a site, then config ignore will help prevent this.
Specific files to be ignored during an import can be added to the ignored_config_entities
key of the config_ignore.settings.yml
file. This array can also be overridden/extended by altering the $config['config_ignore.settings']['ignored_config_entities']
array in an appropriate settings file.
The .yml
extension is dropped and wildcards can be used to select entire modules, entities, etc:
ignored_config_entities:
  - salesforce.settings
  - ...
  - 'core.entity_view_display.node.metrolist_development.*'
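The same list can also be extended at runtime from a settings file (the entity name here is illustrative):

```php
// In an appropriate settings.php file.
// Appends an entry to the config_ignore list for this environment only.
$config['config_ignore.settings']['ignored_config_entities'][] = 'salesforce.settings';
```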
Note: This module only provides protection when drush cim
is executed. When drush cex
is executed, the config_ignore settings are not considered and a full set of configs is still exported.
If you can't use $settings['config_exclude_modules']
(because you maybe only want to exclude just the module.settings.yml
file from a module) then use gitignore to stop it being committed to the repo and deployed.
CoB Local Development.
CoB use config_ignore
as a fail-safe protection.
Configurations that are set in the production system at runtime (usually settings) via the UI, and which are therefore different from the config in the ../config/default
folder, are added to config_ignore so that they cannot be imported over the site settings should the files exist in the folder.
This module provides configuration separation. Configurations can be split into different folders and imported/exported independently.
Drush Command Summary:
config-split:activate
Activate a config split.
config-split:deactivate
Deactivate a config split.
config-split:export
Export only split configuration to a directory.
config-split:import
Import only config from a split.
config-split:status-override (csso)
Override the status of a split via state.
Config split can be used to create a number of different configuration sets which can be applied on different environments and/or at different times. This is an ideal way to control which modules are installed on which environments, and even to provide environment-centric settings (for settings controlled via config).
This module provides custom module configuration installation. If you anticipate your custom module will be used as a "contributed" module on another site - or will be enabled or disabled individually - then you will want to save its configuration into an install
folder inside the custom module.
Drush Command Summary:
config:devel-export (cde, cd-em)
Write back configuration to module's config directory.
config:devel-import (cdi, cd-im)
Import configuration from module's config directory to active storage.
config:devel-import-one (cdi1, cd-i1)
Import a single config item into active storage.
City of Boston use Acquia to host all non local (docker) servers on our .
Acquia's servers are contained within an subscription and implement a cache outside the load-balancer, as .
The release of Drupal 8 contains a using "". Drupal 7's cache expired items based on a lifetime for that item. Drupal 8 introduces another option called cache invalidation. This is where you set the cache lifetime to be permanent and invalidate (purge) that cached item when it's no longer relevant. Drupal 8 does this by storing metadata about the cached item. Then, when an event occurs, such as an update on a node, the metadata can be searched to find all cache items that contain computed data about the updated node, and these can be invalidated.
(for the purposes of this summary document) can be considered to be a low-level cache which optimizes caching by saving more dynamic process responses to memory. The principal value is to minimize requests between the Drupal kernel and MySQL for queries that are run multiple times during bootstrap and page requests.
Is fully independent from the Drupal kernel, and therefore is decoupled from Drupal - except for a purge module provided by Acquia which manipulates a Varnish API. (Beware: notes are for Drupal 7.)
Acquia's documentation says that in Acquia Cloud, pages are cached for 2 minutes by default.
Varnish will accept caching instructions from a web page's headers, so we use the APE module in Drupal to send specific cache instructions to Varnish. The default caching time (set by APE) for CoB Drupal pages is 4 weeks (i.e. it overrides the default 2 minutes with 4 weeks!).
Drupal entities are cached using .
Drupal caching is managed by the Drupal kernel and the APE module.
Views by default honor the tag generation and invalidation process, whereby a view is cached with a tag, but the view invalidation model is not very refined (to refine the invalidation of view tags consider the module - but (as of version 8.x.1.1) custom coding is required to implement it). If a view is based upon the entity type node, then any change that invalidates a node tag will also invalidate the view. Although this causes (potentially) unnecessary invalidation of views, it is an effective way to ensure current content is returned from a view. If the view display is a page, then the invalidation of the view does bubble up to Varnish (provided it is using a tag-based cache strategy).
See this .
This information is adapted from this , and contains more advanced techniques and discussion.
Breadcrumbs are an informative device which appear on many pages on the site. Breadcrumbs provide the user a sense of location within the site and a way to logically navigate back to the homepage.
A breadcrumb is an ordered collection of crumbs, with each crumb having a title and a link.
Drupal has a built-in breadcrumbs methodology, which will attempt to build out a pathway based on the URI (e.g. /departments/housing/metrolist
) defined by the page's (i.e. node's) URL Alias.
It does not matter if the URL Alias is set manually or automatically, the value shown in the back-end editor form once the node is saved is used to build out the breadcrumb.
The Drupal core process creates the breadcrumb by scanning the path represented by the URI, and testing if a local page exists for each path element. It stops adding crumbs when a path element does not resolve.
FOR EXAMPLE an article is created with a URI (as defined in its URL Alias):
/departments/housing/boston/housing-information-in-boston.
When the page is rendered, Drupal scans the article's URI and
if we have a breadcrumb setting which stipulates that the homepage should always be shown as the first crumb, then a crumb of home
with a link to https://site
is created, then
checks if /departments
is a valid URI. https://site/departments
is a valid URI, so it creates a crumb of "departments" with a link to https://site/departments
, then
checks if /departments/housing
is a valid URI. https://site/departments/housing
is a valid URI, so it creates a crumb of "housing" with a link to https://site/departments/housing
, then
checks if /departments/housing/boston
is a valid URI. https://site/departments/housing/boston
is NOT a valid URI - there is no page with that name on https://site
so the breadcrumb scanner stops evaluating at this point, but
if we have a breadcrumb setting to display the actual page in the breadcrumb then a final crumb of housing information in boston
is added, with no link (because this is the page showing).
The final breadcrumb in this instance would be HOME > DEPARTMENTS > HOUSING > HOUSING INFORMATION IN BOSTON with links on the first 3 crumbs.
When evaluating if a page exists on the site, Drupal only considers URL Aliases and does not check URL Redirects.
So in the example above, the boston
crumb/link still would not appear in the breadcrumb even if a place_profile
page for Boston existed with the URL Alias of /places/boston
and a URL Redirect for /departments/housing/boston
.
Where Drupal core cannot build out its own breadcrumb trail, there is some additional custom code intended to help make a logical breadcrumb.
The custom breadcrumb code only functions when it determines that Drupal has not built out the entire breadcrumb.
If Drupal has been able to build out all parts of the URI path, then the Drupal breadcrumb is used.
The custom code scans URL redirects as well as URL Aliases when building out the breadcrumbs.
Care: Redirects which are manually made on the page admin/config/search/redirect
are usually considered "external" by default. Breadcrumbs which use an external link may behave unexpectedly when clicked.
Example: the breadcrumb on d8-dev.boston.gov may open a page on www.boston.gov when clicked.
Solution: Do not create redirects for internal (i.e. Drupal hosted) pages on the admin/config/search/redirect
page. Instead create redirects using the redirect function on the "advanced" tab of the editor form for a page.
Some URI paths are hard-coded to build specific breadcrumbs.
For example pages which have a URI path starting with government/cabinet
. The custom code ignores the "government/cabinets" part of the path and then builds the breadcrumb from the remainder of the path.
The custom breadcrumb object is built here: bos_theme/bos_theme.theme::bos_theme_preprocess_breadcrumb()
The breadcrumb is styled here: bos_theme/templates/navigation/breadcrumb.html.twig
Custom theme which presents the front-end UI to all users.
CoB custom modules - usually taxonomy, nodes and paragraphs.
The following development conventions are being followed in developing boston.gov modules.
City of Boston have the following naming and grouping conventions for custom modules:
Templates for the component should be saved in:
To add a customized template, select a suggestion for the base (node, field, region etc), then
Save the template in the folder above
In the module_name_theme()
hook in module_name.module
add the following:
If a new suggestion is needed, then add the following:
Where XXX is the appropriate entity type (node, field, region, etc.) to add a suggestion to.
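A minimal sketch of such a suggestion hook, assuming a module named module_name and a hypothetical landing_page bundle (adjust names to your module):

```php
/**
 * Sketch only - implements hook_theme_suggestions_node_alter().
 * Replace "node" with the appropriate entity type (field, region, etc.).
 */
function module_name_theme_suggestions_node_alter(array &$suggestions, array $variables) {
  $node = $variables['elements']['#node'] ?? NULL;
  if ($node && $node->bundle() === 'landing_page') {
    // Adds a suggestion such as node__landing_page__custom.
    $suggestions[] = 'node__' . $node->bundle() . '__custom';
  }
}
```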
Wherever possible, the styles provided by the patterns library should be used. In practice this means that boston.gov can be styled by a Drupal developer by ensuring that the Twig template files provide HTML structured with the classes that the patterns library expects.
Should the need arise, the patterns library style sheets can be overridden. Typically this is done at the module level, although if multiple modules will use the override, consider placing it in the bos_theme theme.
To add overrides,
Create the style sheet module_name.css and add appropriate markup in the relevant template (see the section above),
Save the stylesheet in:
Update (or create) the module_name.libraries.yml file with the following:
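A sketch of the libraries.yml entry, assuming the stylesheet is saved under a css/ sub-folder (the library and file names are placeholders):

```yaml
# Sketch of module_name.libraries.yml - adjust names and paths to your module.
module_name:
  version: 1.x
  css:
    theme:
      css/module_name.css: {}
```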
Using a module_name_preprocess_HOOK() hook in module_name.module, attach the CSS only where and when it is required. For example:
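A hedged sketch of such a preprocess hook; the module, library, and bundle names are placeholders:

```php
/**
 * Sketch only - implements hook_preprocess_paragraph() (swap "paragraph"
 * for whichever entity your template renders).
 */
function module_name_preprocess_paragraph(&$variables) {
  // Attach the library only when rendering the component that needs it.
  if ($variables['paragraph']->bundle() === 'my_component') {
    $variables['#attached']['library'][] = 'module_name/module_name';
  }
}
```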
Wherever possible, JavaScript should not be used on boston.gov. This is to maintain compatibility with as many browsers as possible, and to maximize accessibility for screen readers etc.
Should the need arise, a JavaScript library can be created and deployed. Typically this is done at the module level, although if multiple modules will use the library, consider placing it in the bos_theme theme.
To add overrides,
Create the JavaScript library module_name.js,
Save the library in:
Update (or create) the bos_modulename.libraries.yml file with the JavaScript directive. For example, you could add the following:
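A sketch of a JavaScript library entry, assuming the file is saved under a js/ sub-folder (names and dependencies are placeholders):

```yaml
# Sketch of bos_modulename.libraries.yml - adjust names and paths to your module.
bos_modulename:
  version: 1.x
  js:
    js/bos_modulename.js: {}
  dependencies:
    - core/drupal
    - core/jquery
```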
Using a bos_modulename_preprocess_HOOK() hook in bos_modulename.module, attach the JavaScript library only where and when it is required. For example:
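A hedged sketch of attaching the library from a preprocess hook; the module, library, and bundle names are placeholders:

```php
/**
 * Sketch only - attaches the JavaScript library on the node types
 * that need it.
 */
function bos_modulename_preprocess_node(&$variables) {
  if ($variables['node']->bundle() === 'my_content_type') {
    $variables['#attached']['library'][] = 'bos_modulename/bos_modulename';
  }
}
```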
Drupal 8 defines settings and configuration in YML files, with the actual "current" settings and configuration stored in the Drupal (MySQL) database.
When the website is deployed or the web server is restarted, configuration is re-read from the database. Reloading the configuration and settings from the YML files requires a manual (usually drush) process to be run by a developer.
Clearing the site's caches causes cached configuration and settings to be replaced with values from the database. Clearing caches does not reload YML files.
YML files in a module's docroot/modules/custom/ ... /module_name/config/install folder will be imported into the database when the module is first installed.
YML files in the docroot/../config/default/ folder will be imported into the database when the configuration is imported via the Drupal UI or the drush command.
Current (run-time) settings and configuration in the database can be exported to the docroot/../config/default/ folder via the Drupal UI or the drush command.
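The drush commands referenced above are, for a Drupal 8 site (a sketch; run via lando on a local build):

```shell
# Import configuration from the config/default folder into the database.
drush config-import      # alias: drush cim

# Export the current (run-time) configuration from the database to files.
drush config-export      # alias: drush cex

# On a lando-based local build, prefix with "lando", e.g.:
lando drush cim -y
```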
If the config_devel module is enabled, then a module's configuration can be exported to the module's config/install folder.
The dependent configurations are defined in the module_name.info.yml file as follows:
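A sketch of the config_devel section of an info.yml file; the listed config names are placeholders:

```yaml
# Sketch of module_name.info.yml (config_devel section only).
config_devel:
  install:
    - field.storage.node.field_example
    - node.type.example_page
```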
To export these configurations to the config/install folder, use the following drush command:
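A sketch of the export command (the module name is a placeholder; a concrete example appears later in these notes):

```shell
# Export the configs listed in the module's info.yml to its config/install folder.
drush config-devel-export module_name

# e.g. on a local lando build:
lando drush config-devel-export bos_cabinet
```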
Modules should try to reuse field.storage.entity-type.field_name configurations wherever possible. field.storage.entity-type.field_name configurations should be:
1. saved in the module's parent module (e.g. bos_components or bos_content) to enable sharing, and
2. added to the parent's config_devel section of the .info.yml file.
Boston.gov uses the Drupal core workflow and moderation modules.
CoB uses the following modules for moderation:
Content Moderation: [core] Provides moderation states for content.
Workflows: [core] Provides UI and API for managing workflows. This module can be used with the Content moderation module to add highly customizable workflows to content.
Moderation Note: [contrib] Provides the ability to notate elements of a moderated Entity.
Moderation Sidebar: [contrib] Provides a frontend sidebar for Content Moderation.
Custom theme which presents the back-end UI to content authors and editors.
We use two custom themes: one which presents the back-end and one which presents the front-end.
Developer notes for content type (node) design and implementation.
Modules can define multiple content types (nodes) grouped by similar function.
A good example module can be found at:
Module naming convention is to call the module module_name. The "module_name" should be indicative of the node(s) contained within the module.
Sub-pages in this section assume an example module named module_name, and therefore the module folder would be:
Notes on bos_admin theme for UX when adding content via admin pages.
To keep a clear and clean editor experience which is uniform across the site, the form display configuration for nodes will contain groups.
There will be a root (parent) group of type tabs. This group will contain child groups of type tab. Each tab group will contain the node's fields.
Recommended Grouping Layout:
1. Required: Create a parent tabs group called group_main (the name is not important).
2. Create child tab groups with the following layout:
- Basic Information: contains custom fields required by the new content type,
- Sidebar Components: a single Entity reference revisions field for sidebar paragraphs,
- Components: a single Entity reference revisions field for main page paragraphs,
... then other tab groups as needed (try to minimise these if possible).
The use of further nested groups is discouraged, except for grouping which occurs within paragraph components that are exposed in Components or Sidebar Components tabs.
If other groups are required to help clarify the form display, they should be details type groups, set to be collapsible, and collapsed by default.
IMPORTANT:
For site consistency, ensure any and all Entity reference revisions fields (i.e. paragraphs) on the node are set to "Paragraphs (EXPERIMENTAL)" in the form display.
The bos_admin theme makes some changes to the node administration forms.
Config settings provided by Drupal core and Drupal contributed modules are moved into a tab called Advanced, and are set as children of the tabs group defined above.
This manipulation is done in the bos_admin_form_alter() hook found in the bos_admin.theme file at themes/custom/bos_admin.
The moderation state, revision log note and save / preview / delete buttons are grouped together in a details group and moved to the right sidebar area of the administration form.
This manipulation is done in the bos_admin_form_alter() hook found in the bos_admin.theme file at themes/custom/bos_admin.
When you create an html.twig file and add it to the templates folder of a custom theme, you are pretty much done (after refreshing caches!). The Drupal theme rendering process detects the template and uses it in preference to any template of the same name from a parent or default theme. You don't really have to do anything more than add the file and refresh the cache.
But if you add a template to a custom module - even if your intent is just to override a theme default template (e.g. field.html.twig) or to provide a suggested template - there are a few extra things you must do.
Using the example of a custom content type (node) called "node_landing_page", the steps below fully implement a template to be used to render the node's full display.
Note: Drupal automatically generates the suggestion node__landing_page__full, which can be used for rendering the "default" (i.e. "full") display.
You can generate other suggestions using the hook_theme_suggestions_HOOK hook.
Create the twig template you wish to use, and give it a name that matches an existing Drupal theme suggestion with ".html.twig" as the extension.
In rare cases you may want to create a new template suggestion. Do this by returning an array of suggestions from a hook_theme_suggestions_HOOK() implementation in your custom module (see the last example below).
Convention is to name the template using an "entity breadcrumb" style, with "--" between entities and no spaces.
Save the template file in a folder called templates in your custom module's root folder - in our example, docroot/modules/custom/node_landing_page/templates.
- You could organize files by creating a sub-folder tree, but if you do, you will then have to specify the path to your template in the hook_theme - see step 3 below.
In the hook_theme of your module you must define your new template. This hook is read by the Drupal core theme engine and loaded into a template cache (aka the theme registry). Whenever a change is made to this hook, you need to clear all caches to load your changes into the registry.
In hook_theme, return an associative array with a key-value pair (nested array) for each template you wish to define.
- The outer keys (template-keys) should be one for each of the templates you are defining. Keep it simple and traceable by setting the template-key name to be the template filename without the ".html.twig". Important: Replace all "-"s with "_"s in the template-key string (in our example the template-key is node__landing_page_full).
- The value for the template-key is an array with a required base_hook and several other optional fields.
The base_hook should define the entity type this template is used to render (in our case node, but other common entities we theme are field, region, block, paragraph, and taxonomy_term).
[optional] The render element defaults to elements if not specified.
[optional] If you wish to use a template file which does not have the same name as the suggestion (with "_"s replaced with "-"s), then you must specify its name in the template field. Omit the ".html.twig" extension. This could be useful if you want two displays to share the same template.
[optional] If you want to use a custom path to the template file (i.e. not the default templates folder), then use the path field.
(see bos_link_collections_theme in boston.gov for an example)
(see "Our Example hook_theme" below for the complete hook)
[optional] Once the cache is cleared, you can then catch pre-process events using hook_preprocess_HOOK - in our example this would be node_landing_page_preprocess_node (to catch all node pre-process events) or node_landing_page_preprocess_node__landing_page__full (to catch only this new template's pre-process events). Notice that the hook uses the template-key defined in the hook_theme array.
[optional] You can also catch template_preprocess_HOOK events (in our example this is template_preprocess_node__landing_page__full).
This hook is commonly used to create a content variable which contains all the rendered (or renderable) elements of the elements array (or whatever the field is named in the template's render element).
Our Example template file:
Our Example hook_theme:
Our Example hook_preprocess_hook (version 1):
Our Example hook_preprocess_hook (version 2):
Our Example template_preprocess_hook:
Our Example hook_theme_suggestions_hook:
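A hedged composite sketch of the hook_theme and suggestion hooks for the node_landing_page example (assuming the names used in this section; the optional fields are shown commented out):

```php
/**
 * Sketch only - implements hook_theme() for the node_landing_page module.
 */
function node_landing_page_theme() {
  return [
    // Template-key: templates/node--landing-page-full.html.twig with
    // "-" replaced by "_".
    'node__landing_page_full' => [
      'base_hook' => 'node',
      // 'render element' => 'elements',           // optional; this is the default.
      // 'template' => 'node--landing-page-full',  // optional; omit ".html.twig".
      // 'path' => 'modules/custom/node_landing_page/templates/sub', // optional.
    ],
  ];
}

/**
 * Sketch only - implements hook_theme_suggestions_node_alter() to add
 * a custom suggestion.
 */
function node_landing_page_theme_suggestions_node_alter(array &$suggestions, array $variables) {
  $node = $variables['elements']['#node'] ?? NULL;
  if ($node && $node->bundle() === 'landing_page') {
    $suggestions[] = 'node__landing_page_full';
  }
}
```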
Modules can contain multiple paragraphs grouped by similar function.
A good example module can be found at:
Module naming convention is to call the module bos_moduleName. The "moduleName" should be indicative of the paragraph(s) contained within the module.
Sub-pages in this section assume an example module named bos_module_name, with the module folder:
Custom nodes deployed in boston.gov have a navigation menu which sits below the introduction text on each page.
The in-page menu requires the node to embed paragraphs, the node--xxxx.html.twig template to contain a <div>, and each embedded paragraph to have a key field.
If the node has components (paragraphs) embedded, then the node will have a field called field_components of type Entity reference revisions. The field will allow only paragraphs, and will specify the paragraph types that are allowed on the node.
To enable in-page navigation, each paragraph must have a (text) field field_short_title, and to reduce confusion for content editors, that field should be labeled "Navigation Title".
To make the menu look nice and work well on mobile devices, content editors and authors should be encouraged to keep the content added to the Navigation Title to 20 chars or less.
To enable the in-page navigation menu, the node's template should include the following:
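A hypothetical placeholder only - the real markup should be copied from an existing node--xxx.html.twig in bos_theme rather than taken from this sketch:

```twig
{# Hypothetical sketch - copy the actual block from an existing node
   template in bos_theme. The menu elements themselves are built by
   bos_theme_preprocess_node() / bos_theme_preprocess_field(). #}
<div class="in-page-menu">
  {{ in_page_menu }}
</div>
```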
This block should ideally be located below the title and intro-text sections.
When there is more than one paragraph embedded in a node's web page, an in-page navigation menu should appear on the page. The menu should be styled from the patterns library.
UX Desktop: When the page first loads, the menu should display above the fold. As the user scrolls down the page, the menu should collapse into a fixed toolbar at the top of the page, below the seal menu with the seal retracted. Theme should come from patterns.
UX Mobile: The menu should appear as a collapsed set of drawers with a chevron icon to expand. CSS from patterns controls the collapse across the responsive page width.
In either UX, when the user clicks on the menu, the page should scroll smoothly down to the correct paragraph display on the webpage.
The Twig template (e.g. node--xxx.html.twig) for the node is responsible for locating the menu on the node. The code required is described above.
On-page menu elements are rendered from the bos_theme_preprocess_node() and bos_theme_preprocess_field() hooks in bos_theme.theme, found in /themes/custom/bos_theme/.
The page click and scrolling is provided by component-navigation.boston.js, which is found in /themes/custom/bos_theme/js/.
To make a paragraph include itself in the in-page navigation menu, it just needs to contain a text field named field_short_title (and that field must be included in the display being used on the node).
Modules can contain a single vocabulary taxonomy.
A good example module can be found at:
Module naming convention is to call the module vocab_moduleName. The "moduleName" should be indicative of the taxonomy contained within the module.
Sub-pages in this section assume an example module named vocab_module_name, with the module folder at:
Converting D7 structures to D8
Log in to the website, go to the paragraphs admin page (/admin/structure/paragraphs_type), and delete the paragraph you want to work on.
Step 1 above may delete some of the field.storage dependencies (field definitions), so re-import all the bos_components module config to make sure you get all the shared config back into the database: lando drush config-import --partial --source=/app/docroot/modules/custom/bos_components/config/install
Create the module scaffolding using drush, for example: lando drush componetize bos_discussion_topic --components=discussion_topic
Add hook_theme() to .module file to connect to the paragraph template
Copy the corresponding paragraph template from boston.gov-d8/docroot/themes/preConversion/component
and put it in the scaffolding that the drush command from step 3 created: docroot/modules/custom/bos_components/modules/bos_discussion_topic/templates
Enable the module: lando drush en bos_discussion_topic
In the Drupal UI, add the new bundle to the field_components
paragraph types list for the Test Component Page content type: /admin/structure/types/manage/test_component_page/fields/node.test_component_page.field_components
Create a test page with the component added to review admin UI and display
Importing a single config file:
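One way to do this (a sketch; drush syntax varies by version) is a partial import from a folder containing just the file you want:

```shell
# Partial import: only the config files present in --source are imported,
# so placing a single YML file in the folder imports just that config.
# The path below is a placeholder.
lando drush config-import --partial --source=/path/to/folder/with/one/file
```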
Exporting database config directly to your module (Important: the config file needs to be referenced in your module's info file under the config_devel key): lando drush config-devel-export bos_cabinet
City of Boston supports development of discrete React (and other JS framework) web apps. Because these services will be hosted on Drupal, there is a custom Drupal web app launcher and some conventions to follow.
Have a stable local build of the Drupal 8 website running on your machine.
Make sure you are “logged in” or have “admin” access to view the CMS and add new content / nodes.
Using Drush: lando drush uli
Using Drupal web login: https://boston.lndo.site/user/login?local
Navigate to Content menu item (make sure you are logged into Drupal to view) https://boston.lndo.site/admin/content
Scroll to the bottom of the page and add a content item by clicking "Add Content". Select the "Listing Page" content type.
Give the new page content a Title. This is required.
Click on the “Components” tab on the left menu
Find the dropdown Select menu to add “new component” and select “Web App” from the list.
Name the Web App something appropriate as it relates to your project. (i.e. Metrolist or My Neighborhood)
Click “Save” near the bottom or side of the page to save and create a new page / node. This will serve as the container page / component for your new web app.
Navigate to the “bos_web_app” directory of the drupal 8 repository that is checked out to your local machine /docroot/modules/custom/bos_components/modules/bos_web_app.
Locate the "apps" folder/directory. If one doesn't exist, create it: /docroot/modules/custom/bos_components/modules/bos_web_app/apps
Inside this "apps" directory create an empty folder and name it the same name you called your Web App in Step 6 of Part 1 above: /docroot/modules/custom/bos_components/modules/bos_web_app/apps/my_neighborhood NOTE: Any spaces in your app name should be replaced with underscores. For example, My Neighborhood would have a folder name of "my_neighborhood".
Locate and open the libraries YML file named "bos_web_app.libraries.yml". This file serves as the pointer that tells Drupal to attach and bundle all the JS and CSS files for your application: /docroot/modules/custom/bos_components/modules/bos_web_app/bos_web_app.libraries.yml
See an example libraries.yml file on GitHub for a project that is currently being developed. https://github.com/CityOfBoston/boston.gov-d8/blob/mnl_12-9-2019/docroot/modules/custom/bos_components/modules/bos_web_app/bos_web_app.libraries.yml Drupal also has good documentation on using libraries and attaching files. https://www.drupal.org/docs/8/creating-custom-modules/adding-stylesheets-css-and-javascript-js-to-a-drupal-8-module
Once you have the libraries file set up, go create the files needed, OR first create the files you'd like and then add them to libraries.yml as laid out in Part 2, Step 5 above.
It's important to note that any time you add a newly attached file to libraries.yml, the Drupal cache will have to be cleared for the changes to take effect. You can clear the cache either through the Drupal CMS or via the drush CLI.
Drupal CMS: navigate to admin/config/development/performance and click the button at the top of the page labeled "Clear all caches".
Using Drush CLI: drush cr
After clearing the cache, you should now see your application load on the Drupal page you created and saved in Part 1, Step 7. NOTE: You will NOT have to clear the Drupal cache every time you make a change to a CSS or JS file; this is only for new items in libraries.yml.
Once you have the libraries file open, add an entry with the name of your application and add/attach the necessary items. For example, the application "My Neighborhood" would have a library entry as such…
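A sketch of such a library entry; the file names below are placeholders - see the linked GitHub example for the real structure:

```yaml
# Sketch of a bos_web_app.libraries.yml entry for the "My Neighborhood" app.
my_neighborhood:
  version: 1.x
  css:
    theme:
      apps/my_neighborhood/main.css: {}
  js:
    apps/my_neighborhood/main.js: {}
```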
Drupal 9 (our current install) uses CKEditor 4; when we move to Drupal 10, it will use CKEditor 5. CKEditor 5 is currently installed in core/modules, but not used.
We currently use two or more versions of CKEditor, plus extensions of the plugin in two other components.
Our current Drupal version (D9) uses CKEditor 4 from the modules/contrib folder.
Once we upgrade to Drupal 10, we will need to move from CKEditor 4 to 5, because Drupal 10 does not support CKEditor 4.
Samples of CKEditor 5 we can explore to integrate/use can be found here:
TestPage (article)
Event
Events Content (admin)
With header image
No header
Listing Page
Listing Page Content (admin)
Landing Page
Landing Page Content (admin)
homepage
Topic Page
Topic Page (Guides) Content (admin)
With Image
Place Profile
Place Profile Content (admin)
With Header Image
Person Profile
Person Profile Content (admin)
Program Initiative Profile
PIP Content (admin)
With Image
No Image
Post
Post Content (admin)
With Image
No Image
How To
How To Content (admin)
With Image
No Image
Article
Article content (admin)
Department Profile
Department Profile Content (admin)
Public Notices
Public Notice Content (admin)
Script Page
Script Page Content (admin)
Entity | Field | min/max resolution & max filesize | View: Style |
Images |
node:department_profile | field_icon | 56x56/++ - 200KB | default: (i) square_icon_56px Article: (i) square_icon_56px Card: (i) square_icon_56px Article: not displayed Published By: (i) square_icon_56px |
node:event | field_intro_image | 1440x396/++ 8 MB | default: (b) intro_image_fields featured_item: (i) Featured Item Thumbnail |
field_thumbnail | 525x230/++ 8 MB | default: (b) thumbnail_event featured_item: (p) thumbnail_event |
node:how_to | field_intro_image | 1440x396/++ 8 MB | default: (b) intro_image_fields [all others (10)] not displayed |
node:listing_page | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields [all others (12)]: not displayed |
node:person_profile | field_person_photo | 350x350/++ 5MB | default: (p) person_photos listing: (p) person_photos embed: (p) person_photos |
node:place_profile | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields Listing: (p) card_images Teaser: not displayed |
node:post | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields featured_item: not displayed Listing: not displayed Listing short: not displayed Teaser: not displayed |
field_thumbnail | 700x700/++ 5MB | default: not displayed featured_item: (p) featured_images Listing: (i) News Item -thumbnail (725x725) Listing short: (i) News Item -thumbnail (725x725) Teaser: (i) News Item -thumbnail (725x725) |
node:program_i_p | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields listing: (b) card_images |
field_program_logo | 800x800/++ 2MB | default: (p) logo_images Listing: not displayed |
node:site_alert | field_icon | 56x56/++ - 200KB | default: (s) n/a svg (square_icon_56px) Embed: (i) square_icon_56px Teaser: not displayed |
node:status_item | field_icon | 65x65/++ - 200KB | default: (s) n/a svg (square_icon_65px) listing: (s) n/a svg (square_icon_65px) teaser: (s) n/a svg (square_icon_65px) |
node:tabbed_content | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields |
node:topic_page | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields featured_topic not displayed listing_long: (b) intro_image_fields listing: (b) card_images |
field_thumbnail | default: not displayed featured_topic (p) featured_images: not displayed listing: not displayed listing_long: not displayed |
para:card | field_thumbnail | 670x235/++ 2MB | default: (b) card_images |
para:columns | field_image | 200x200/++ 2MB | default: (i) Med Small Square (also Person photo a-mobile 1x (110x110)) |
para:fyi | field_icon | 56x56/++ 200KB | default: (s) n/a svg (square_icon_56px) |
para:hero_image | field_image | 1440x800/++ 8 MB | default: (b) Hero fixed image fields Separated Title: not displayed |
para:map | field_image | 1440x800/++ 8 MB | default: (b) Photo Bleed Images |
para:photo | field_image | 1440x800/++ 8 MB | default: (b) Photo Bleed Images |
para:quote | field_person_photo | 350x350/++ 5 MB | default: (i) Person photo a-mobile 1x (110x110) |
para:signup_emergency_alerts | field_icon | n/a svg | default: (s) n/a svg (square_icon_65px) |
para:transactions | field_icon | 180x100/++ - 2MB | default: (i) transaction_icon_180x100 group_of_links: (i) transaction_icon_180x100 |
para:video | field_image | 1440x800/++ 8 MB | default: (b) Photo Bleed Images |
tax:features | field_icon | svg | default: (s) n/a svg (square_icon_56px) sidebar_right: (s) n/a svg (square_icon_56px) |
entity:user | user_picture | 100x100/1024/1024 1 MB | default: (p) person_photos compact: (i) Person photo a-mobile 1x (110x110) |
entity:media.image | image | +++/2400/2400 8 MB | default: (i) original image [all others]: (i) Media Fixed Height (100px) |
Files |
media.document | field_document |
node:procurement | field_document |
para:document | field_document |
Breakpoint | Start width | end width | note |
group: hero |
mobile | 0 | 419 |
tablet | 420 | 767 |
desktop | 768 | 1439 |
large | 1440 | 1919 | Introduced in D8 |
oversize | 1920 | +++ | have a notional max-width of 2400px |
group: card |
mobile | 0 | 419 |
tablet | 420 | 767 |
desktop | 768 | 839 |
desktop | 840 | 1439 |
large | 1440 | 1919 |
oversize | 1920 | +++ | have a notional max-width of 2400px |
group: person |
mobile | 0 | 839 |
tablet | 840 | 979 |
desktop | 980 | 1279 | There is also a breakpoint at 1300 in node:pip |
desktop | 1280 | +++ | have a notional max-width of 2400px |
Breakpoint | responsive Style | style | size |
All Nodes: field_intro_image (excluding node:post) |
hero: mobile (<419px) | intro_image_fields | Intro image a-mobile 1x | 420x115 |
hero: tablet (420-767px) | intro_image_fields | Intro image b-tablet 1x | 768x215 |
hero: desktop (768-1439x) | intro_image_fields | Intro image c-desktop 1x | 1440x396 |
hero: large (1440-1919px) | intro_image_fields | Intro image d-large 1x | 1920x528 |
hero: oversize (>1920px) | intro_image_fields | Intro image e-oversize 1x | 2400x660 |
node:post field_intro_image |
hero: mobile (<419px) | Hero fixed image fields | Hero fixed a-mobile 1x | 420x270 |
hero: tablet (420-767px) | Hero fixed image fields | Hero fixed b-tablet 1x | 768x400 |
hero: desktop (768-1439x) | Hero fixed image fields | Hero fixed c-desktop 1x | 1440x460 |
hero: large (1440-1919px) | Hero fixed image fields | Hero fixed d-large 1x | 1920x460 |
hero: oversize (>1920px) | Hero fixed image fields | Hero fixed e-oversize 1x | 2400x460 |
para:photo field_image para:video field_image para:hero field_image para:map field_image |
hero: mobile (<419px) | Photo Bleed Images | Photo bleed a-mobile 1x | 420x250 |
hero: tablet (420-767px) | Photo Bleed Images | Photo bleed b-tablet 1x | 768x420 |
hero: desktop (768-1439x) | Photo Bleed Images | Photo bleed c-desktop 1x | 1440x800 |
hero: large (1440-1919px) | Photo Bleed Images | Photo bleed d-large 1x | 1920x800 |
hero: oversize (>1920px) | Photo Bleed Images | Photo bleed e-oversize 1x | 2400x800 |
find |
card: mobile (<419px) | Card Images 3w | Card grid vertical a-mobile 1x | 335x117 |
card: tablet (420-767px) | Card Images 3w | Card grid vertical b-tablet 1x | 615x215 |
card: desktop (768-839px) | Card Images 3w | Card grid vertical c-desktop 1x | 670x235 |
card: desktop (840-1439x) | Card Images 3w | Card grid horizontal c-desktop 1x | 382x134 |
card: large (1440-1919px) | Card Images 3w | Card grid horizontal d-large 1x | 382x134 |
card: oversize (>1920px) | Card Images 3w | Card grid horizontal e-oversize 1x | 382x134 |
para:column | this should be a 200x200 circle ?? |
card: mobile (<419px) | Card Images 3w | Photo bleed a-mobile 1x | 335x117 |
card: tablet (420-767px) | Card Images 3w | Photo bleed b-tablet 1x | 615x215 |
card: desktop (768-839px) | Card Images 3w | Photo bleed c-desktop 1x | 670x235 |
card: desktop (840-1439x) | Card Images 3w | Photo bleed c-desktop 1x | 382x134 |
card: large (1440-1919px) | Card Images 3w | Photo bleed d-large 1x | 382x134 |
card: oversize (>1920px) | Card Images 3w | Photo bleed e-oversize 1x | 382x134 |
post:field_thumbnail(feature) |
card: mobile (<419px) | Featured Images | Featured image a-mobile 1x | 335x350 |
card: tablet (420-767px) | Featured Images | Featured image b-tablet 1x | 614x350 |
card: desktop (768-839px) | Featured Images | Featured image c-desktop 1x | 671x388 |
card: desktop (840-1439x) | Featured Images | Featured image d-full 1x | 586x388 |
card: large (1440-1919px) | Featured Images | Featured image d-full 1x | 586x388 |
card: oversize (>1920px) | Featured Images | Featured image d-full 1x | 586x388 |
node:person_profile:field_person_profile user:user_picture |
person: mobile (<839px) | Person Photos | Person Photos a-mobile 1x | 110x110 |
person: tablet (840-979px) | Person Photos | Person Photos b-tablet 1x | 120x120 |
person: desktop (980-1279px) | Person Photos | Person Photos c-desktop 1x | 148x148 |
person: desktop (>1280x) | Person Photos | Person Photos d-full 1x | 173x173 |
node:pip:field_program_logo |
person: mobile (<839px) | Logo Images | logo square a-mobile 1x | 672x672 |
person: tablet (840-979px) | Logo Images | logo square b-tablet 1x | 783x783 |
person: desktop (980-1279px) | Logo Images | logo square c-desktop 1x | 360x360 |
person: desktop (>1280x) | Logo Images | logo square d-full 1x | 360x360 |