Notes on the way Drupal Entities and Configuration have been utilized in boston.gov and theHub.
We build digital experiences designed around the needs of our constituents. We work to make these tools beautiful, welcoming, and highly useful.
Getting started is where you’ll find general onboarding/high-level info about our team, the technologies we use, and how we work together.
Standards and best practices ensure that our code and workflows are consistent across the team.
Guides is a collection of how-tos and references for the technologies we work with. These should not be project-specific.
If you were getting ready to contribute code to a project, “Standards and best practices” will give you information such as what your feature branch name should look like. “Guides” is where you’d find a walkthrough of how to make your pull request.
Projects is where general documentation for a specific project can be found (project-specific technical documentation is better suited for the project README). This section is primarily the domain of product managers. Private information is kept in Google Docs that are linked to from here.
External resources is where to find links to useful information and tools. “Learning resources” are things like tutorials and guides. “Reference links” tend towards specs, references, and tools.
That's a tough question, but thankfully our team is on it. Please bear with us while we investigate.
Yes, after a few months we finally found the answer. Sadly, Mike is on vacation right now, so I'm afraid we are not able to provide the answer at this point.
The regular meetings conducted by the Digital team.
Participants: Entire digital team
Accomplishments or discoveries from the previous day
Priorities for the day
Any roadblocks, questions, or hang-ups?
Monday mornings, required
Participants: Developers, Product Managers, Chief Digital Officer, Quality Assurance, UX, etc.
Format: Talk through accomplishment(s) last week, priority(ies) this week, flags
Thursday morning, required
Participants: All digital team members plus invited guests as appropriate
Format: Each team member shares a “brag,” any “flags”, and a “learn” with the rest of the team.
Friday afternoons, optional but encouraged
Participants: Any/all
Format: Team members may demo something they’ve accomplished during the week. A prompt asking what you’ll demo goes out on Slack on Thursday afternoon.
Participants: Anyone who is part of the project team in question. Typically includes at least a product manager and a developer from our team.
Format: Determined by project team, but typically includes a review of the kanban project board for part of it.
Engineers on the Digital team will treat each other with kindness and respect. We will support each other in learning new things, honing our craft, and building great software for the people of Boston. We do not look down on each other for any reason, including background, area of expertise, or identity.
We value iterative development and continuous deployment. We recognize that delivering changes in small batches means that we go faster with a lower risk of breaking something.
We value communication and shared knowledge. While we may work on separate areas of Digital’s portfolio, we develop and document so that others can step into our shoes as needed. We visualize work to better communicate and hold ourselves accountable. We celebrate each other’s wins.
We value transparency. We are part of a public institution working for the people of Boston, and we are accountable to that. We make our roadmap, work in progress, and results public to all.
We value continuous improvement. We hold retrospectives and experiment with our process in order to learn and work together more effectively.
We value consistency in user experience, and in code.
We value automation and defaults. Out-of-the-box solutions are easier to maintain and teach than custom ones.
We value security and privacy. We are stewards of the public’s data and take that responsibility seriously. We will not build software that can be used for abuse or harassment.
We value maintainability. Because projects move in and out of active development, we need good test coverage to guard against accidental breakages and easy development setup so that others can work on the code.
We value monitoring. We use services to report on exceptions and add alerts so that we know when our services are down. We record analytics for insight into how constituents use our products.
We value diversity of background, experience, and identity. Bringing people with different perspectives together to solve a problem leads to a stronger, more inclusive solution.
We run fairly process-lite, not because we reject process in general, but because we only want to adopt what’s useful for doing the best work. The practices we have adopted are described in Best practices.
The overall prioritization of projects is the responsibility of the Chief Digital Officer, and should be the overall guide for deciding which new work to “pull.” (Glossary: What is a Pull System?) Engineers have some leeway in this area to try and minimize switching cost between projects.
Prioritization of a project’s features is the responsibility of the product manager for that project. Work for a project should generally be pulled in priority order. Use good judgement to pull other work in exceptional cases (for example, someone you need to work with has free cycles).
It is the responsibility of engineering to provide honest information and be a good collaborator in the prioritization process. “Responsibility” for the CDO and PM is not “sole responsibility.” Engineering is expected to be a participant in the prioritization process.
Learn more about using a Kanban board to track prioritization within a project.
These are the browsers we support for development and quality assurance testing. Note: These are based on the City's top browser versions as tracked via the City's analytics traffic. They will be revisited quarterly by the product development team and the quality assurance lead.
Google Chrome 68 and higher
Safari 11 and higher
Internet Explorer 11 and higher
Firefox 61 and higher
Edge 17 and higher
Screen widths for testing: 320px, 840px, 1280px
This guide is Drupal-specific. It will help content editors using Drupal add content to the website, and will also help with troubleshooting issues that may arise.
Adding images/svgs
SVGs - please remove all "id" attributes from the SVG. Exported SVGs often reuse the same id value, and duplicate ids across many inline images/SVGs cause a BIG error when the browser sees them.
Images: Always add a title or alt attribute to an image, and make it meaningful. See the example below.
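For instance, a meaningful alt attribute describes what the image actually shows (the file path here is illustrative):

```html
<!-- Not meaningful: -->
<img src="/files/photo-123.jpg" alt="photo">

<!-- Meaningful: -->
<img src="/files/photo-123.jpg" alt="Boston City Hall Plaza during a summer farmers market">
```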
Editing/Adding Tables
Adding Tables: Always make sure you add a table title and/or summary and/or caption
Adding nofollow and/or hiding pages
Production: https://access.boston.gov
Adding iFrames
Content Editor Guide
Links with the same name should have the same href values when on the same page
Using classes on html elements you want to hide from screen but allow screen readers to access:
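A common sketch of such a class follows; the class name and exact rules are a typical pattern, not necessarily the ones used on boston.gov:

```html
<style>
  /* Visually hidden, but still announced by screen readers */
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    margin: -1px;
    padding: 0;
    overflow: hidden;
    clip: rect(0, 0, 0, 0);
    white-space: nowrap;
    border: 0;
  }
</style>

<a href="/search">
  Search<span class="visually-hidden"> boston.gov</span>
</a>
```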
Articles, links, tools, and other programs to help us understand accessibility.
Web accessibility means that websites, tools, and technologies are designed and developed so that people with disabilities can use them.
What to look for
Keyboard focus on focusable elements
Tab order
Label tags
Title attributes
Keyboard access
All "clickable" elements must be keyboard accessible
Likewise all hover/focusable elements must have alternate keyboard focus matching the mouse hover
Colors and fonts
Text colors should be tested for contrast. Make sure there is good color contrast between the text and the background
A recommended minimum font size is 16 px. Use bold to add emphasis rather than italics or UPPERCASE, but use it sparingly!
Images vs background images
Use background images only if necessary
All images must have an alt tag
Actual images must be used if they give the content more meaning or are the only content.
HTML Structure
Content will be read from left to right; therefore, HTML should be written out in the order the keyboard will tab through the content
Input elements should have labels. Labels can be hidden if they are not part of the design, but they need to be there for screen readers
All HTML should have proper aria attributes
Lists should have the proper role attribute
Titles, captions, and summaries
Iframes should always have a title to explain the content of the iframe
Tables must have summaries explaining the content of the table, and captions for really large tables.
Images should always have meaningful titles for screen readers to read
Anchor tags <a>...</a> should always have a title
Links and buttons
A link should always be a link and not a placeholder
Buttons
Miscellaneous
See boston.gov/digital for a lot of historical write ups on this work.
Write up on accessibility and text to speech completed for Jeniffer Vivar Wong/Office of Language and Communications Access on 4/5/21: https://docs.google.com/document/d/1aa9wCaG3AzPsh6pPC-padNOv4YgazwyUs2VkWglvSHA/edit
Potential ideas we found/brainstormed while writing this:
Tools/things Reilly found via Googling:
https://www.techradar.com/best/best-text-to-speech-software
Things Digital could consider:
Bringing back the ‘accessibility’ header. Can’t remember if it ever got built and we just turned it off but there are designs for it
Drupal modules for text to speech - would need devs to advise
Would want to do some market research with other govs or places before implementing
Lots of reference to Google cloud text to speech or Amazon Polly - unsure if this would be included in free license or we’d need to pay
General Assembly project for general user experience/desire of this or the City spending some money and hiring a consultant for this or a USDR volunteer
What developers should take into consideration when writing code.
Testing
Adding Tables
Add "summary" to the table tag. A summary conveys information about the organization of the data in a table and helps users navigate it.
Add "scope" to tables with headers. The scope attribute can be set to row or col to denote that a header applies to the entire row or column, respectively.
Add "captions". A caption functions like a heading for a table. Most screen readers announce the content of captions.
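Putting those three together, an accessible table might look like the following (the data is illustrative):

```html
<table summary="Trash pickup days listed by neighborhood">
  <caption>Trash pickup schedule</caption>
  <thead>
    <tr>
      <th scope="col">Neighborhood</th>
      <th scope="col">Pickup day</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">Dorchester</th>
      <td>Tuesday</td>
    </tr>
  </tbody>
</table>
```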
Adding SVGs
If it is used as an img, you must add a title attribute to it.
If an svg is used as a button, you must add a tabindex attribute
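For example (the icon markup and labels are illustrative):

```html
<!-- SVG used as an image: give it a title -->
<svg role="img" aria-labelledby="dl-title" viewBox="0 0 24 24">
  <title id="dl-title">Download the application form</title>
  <path d="..." />
</svg>

<!-- SVG used as a button: make it keyboard focusable -->
<svg role="button" tabindex="0" aria-label="Close dialog" viewBox="0 0 24 24">
  <path d="..." />
</svg>
```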
iFrames
iFrames need to have a title attribute
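For example (the URL is illustrative):

```html
<iframe src="https://example.com/polling-map" title="Map of Boston polling locations"></iframe>
```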
Images vs background images
Images must have an alt attribute
Images can also have a title attribute
If an image gives a better understanding of or context to the content, that image should not be a background image but an actual image.
Summary
Developers can emulate links with other elements, such as <div> or <span> elements and JavaScript click listeners. But these kinds of emulated links need care. Developers wishing to emulate links must include the following:
Add tabindex="0" so that the link becomes keyboard focusable
Add role="link" so that assistive technology recognizes the element as a link
Add the styling cursor: pointer so that mouse users will recognize the element as a link.
The same applies to HTML elements used as buttons instead of the actual <button> or <input> element.
For example, the markup for an accessible emulated link might look like the following:
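A sketch of such markup follows; the URL and inline handler are illustrative only:

```html
<span
  tabindex="0"
  role="link"
  style="cursor: pointer; text-decoration: underline;"
  onclick="location.href = '/contact';"
>
  Contact us
</span>
```

Note that a complete implementation also needs a keydown handler so the Enter key activates the link; a plain <a href> gives you all of this behavior for free.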
To avoid needing to implement the above, developers should prefer to use the <a> tag instead.
What are the CSS :focus and :focus-within selectors? How do you know when to use them? Read more about how to use :focus here.
Note: If using role="button" instead of the semantic <button> or <input type="button"> elements, you will need to make the element focusable and define event handlers for click and keydown events, including the Enter and Space keys, in order to process the user's input. See the official WAI-ARIA example code.
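The keydown handling described above can be sketched in plain JavaScript. The function names here are illustrative, not from any City of Boston codebase:

```javascript
// Decide whether a keydown event should activate an element with role="button".
// Native <button> elements respond to Enter and Space; emulated buttons must
// reproduce that behavior themselves.
function isActivationKey(event) {
  return event.key === 'Enter' || event.key === ' ';
}

// Wire an emulated button up so keyboard users can activate it.
// `el` is any element given role="button" and tabindex="0" in the markup.
function makeKeyboardActivatable(el, onActivate) {
  el.addEventListener('click', onActivate);
  el.addEventListener('keydown', (event) => {
    if (isActivationKey(event)) {
      event.preventDefault(); // stop Space from scrolling the page
      onActivate(event);
    }
  });
}
```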
Remember that most browsers will automatically validate correctly written HTML for you. Keep your markup simple, with fewer layers of HTML tags wrapped around it endlessly.
Iterators is a Certified Trusted Tester by the Department of Homeland Security and provides a code inspection-based test approach for determining software and web conformance to Section 508 standards.
Visit them at: https://iteratorstesting.com/services/accessibility-testing
Trusted Tester Tests: The Section 508 Trusted Tester program groups the WCAG requirements into related groupings that can be tested (and developed) at the same time. With the help of Iterators the VPAT documentation for boston.gov was created. You can read the documentation here.
Test #4) Keyboard and Focus
Purpose:
Many different types of users decide to use a keyboard to navigate a web page. Some examples are screen reader users that will tab through an interface. Other users have motor impairments and must control a computer through a switch (such as puffing into a straw, etc.).
People that use a mouse can quickly navigate a website. Additional design features are needed to allow the same level of access to people that use a keyboard to navigate a user interface.
Requirements:
1) Users should be able to access all functionality by using only the keyboard.
a. First, use a mouse to identify all functionality (buttons, links, etc)
b. Second, try to utilize all of the same functions using the keyboard, mainly using the tab and enter keys.
c. This also applies if popup features are implemented, such as an informational dialog, etc.
2) Tabbing through a user interface should occur in a logical order that matches the visual pattern on the screen, such as left to right, top to bottom.
3) Tabbing through a user interface should cycle completely through the interface and not be “trapped” in a cycle.
4) Focus is visible when using a keyboard
a. Sometimes a blue border is used to show the currently active element when using a keyboard. This should be easily visible with all elements.
b. Sometimes using a mouse will also highlight an element or reveal some information. These two methods should be compatible with each other.
c. The focus should always be visible and there should not be any invisible elements when using a keyboard.
Test #9) Repetitive Content
Purpose:
It is easy to skip over content using a mouse. However, keyboard navigation users must traverse many repetitive elements such as menus.
Requirements:
1) Identify all areas of a page with repetitive content.
a. A typical example is the navigation menu at the top of web page.
b. Some web pages have multiple sections of different content, such as “news”, “events”, etc. If there are many sections, then a user might like to skip the “news” section and jump to the next section without traversing all of the elements in that section.
2) Provide a “Skip” link, such as “Skip to Main Content” for each of these repetitive areas. This skip link is typically only visible when a user is using a keyboard to navigate through a web page.
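A typical skip link sketch follows; the class name and styling are an assumed common pattern, visible only when the link receives keyboard focus:

```html
<style>
  /* Hidden until it receives keyboard focus */
  .skip-link {
    position: absolute;
    left: -9999px;
  }
  .skip-link:focus {
    left: 0;
  }
</style>

<a class="skip-link" href="#main-content">Skip to Main Content</a>

<nav><!-- repetitive navigation --></nav>

<main id="main-content">
  <!-- page content -->
</main>
```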
Test #6) Links & Buttons
Purpose:
Keyboard users should be able to determine the context for any link or button. Often users determine the context by visually examining nearby text or graphics. Additionally, some links and buttons cause a change in content on a website which is visually observable and also needs to be accessible to keyboard users.
Requirements:
The major requirement is that accessible and unique descriptions are available for each link and button. For example, a set of links to news events might be labeled as “Read More…”. However, the user may not be able to determine which news event is associated with each link. The link may still be labelled as “Read More…” as long as there is also accessible text that provides more description, such as “Read More about the City of Boston”.
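One way to provide that accessible text is an aria-label, which assistive technology announces in place of the visible label (the URL here is illustrative):

```html
<!-- The visible label stays "Read More…"; screen readers announce the full context -->
<a href="/news/city-update" aria-label="Read More about the City of Boston">
  Read More…
</a>
```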
Any new content that becomes visible with a link or button should be identified with accessible text before the user clicks on the link or button. An example of this is a “Learn More…” button that causes a new modal dialog to be shown. Also, the keyboard navigation should move to the new content.
Environments
Production: https://boston.gov
The Digital team’s main modes of communications.
Our main channel is #digital_team.
The #digital_builds channel reports build and deployment status.
The #digital_monitoring channel reports alerts and exceptions.
Most projects have their own public channel to keep all relevant and/or interested parties in the loop on active project work, or to follow along with questions and answers that would benefit everyone. Feel free to follow projects that you may not be actively working on.
There are some 'culture building' channels. Feel free to add yourself to whichever of these channels you like.
*Note: This is largely used by Digital and Analytics within DoIT. Most others use GChat.
The team communicates internally and externally via Google Group listservs:
digital@boston.gov (entire team) - James Duffy is typically first on deck to respond to these.
feedback@boston.gov (limited folks on Digital) - James Duffy is typically first on deck to respond to these.
digital-dev@boston.gov (developers, product managers)
webmaster@cityofboston.gov (developers, product managers)
:+1::tada: Thank you for being interested in contributing to Boston.gov! :tada::+1:
This is a guideline for contributing to the development of Boston.gov. We're open to improvements, so feel free to send a PR for this document, or create an issue.
Report issues on Boston.gov
Suggest new features
Contribute to development
If you need to submit a bug report for Boston.gov, please follow these guidelines. This will help us and the community better understand your report, reproduce the bug, and find related issues.
Before you submit a bug
Verify that you are able to reproduce it repeatedly. Try multiple browsers, devices, etc. Also, try clearing your cache.
Perform a quick search of our existing issues to see if it has been logged previously.
Submit a bug
Use a clear and descriptive title when creating your issue.
Include a bulleted list of steps to reproduce your issue.
Include the URL of the page that you're seeing the issue on.
Include screenshots if possible. Bonus points if you include an animated GIF of the issue.
Include details about your browser (which one, what version, using ad blockers?).
When filing your issue, assume that the recipient knows nothing about what you're talking about. There is no such thing as too many details when filing your issue.
Bug report template
Have an idea for Boston.gov? If so, create an issue. Prior to submitting your feature request, please do a basic search of existing issues to see if it's already been suggested.
Feature template
To contribute to the development of Boston.gov, you'll need to get a development environment up and running. This section will get you started.
Contributors should first review our [[Development Standards]].
Our process resembles a Gitflow Workflow with the following specifics:
The master branch is always ready for deployment.
All development is performed against a develop branch.
Before release, develop is deployed to our staging environment and tested. It is then merged into master, and master is deployed to production.
Project setup
Each contributor should fork the primary Boston.gov repo. All developers should then check out a local copy of the develop branch to begin work.
For any work, pull requests must be created for individual tasks and submitted for review. Before submitting a pull request, be sure to sync the local branch with the upstream primary branch.
Pull requests should be submitted from the forked repo to the develop branch of the primary repo. Make sure to give your pull request a clear and descriptive title and use the template below.
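The setup and pull request flow above can be sketched with the following commands; the repository paths and branch name are placeholders, not verified URLs:

```shell
# Clone your fork and track the primary repo as "upstream"
git clone git@github.com:<your-username>/boston.gov.git
cd boston.gov
git remote add upstream git@github.com:<primary-org>/boston.gov.git

# Start work on a task branch based on develop
git fetch upstream
git checkout -b my-task-branch upstream/develop

# ...commit your work...

# Sync with the upstream develop branch, then publish your branch
git pull --rebase upstream develop
git push origin my-task-branch
# Finally, open a pull request from my-task-branch against the primary repo's develop branch.
```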
Pull request template
Product managers should work with stakeholders and the project team to determine what we'd like to track/count as success metrics.
Product managers are first on deck to enter tags in tag manager to track this information. Developers are a back up for this.
Ideas for potential metrics include:
Standard high level google analytics, such as page views, unique page views, browser, network, device, time on page, etc.
Conversions, such as sign ups, participation in events (in-person or online), results in City services (i.e. less towing)
Completion rates, i.e. if they visited a page that they needed to interact with something or get through a task did the 'page visits' match the 'completed rates'
Numbers of things, such as orders,
Savings or revenue: time, money, etc.
Code release details will be documented with each code deployment here
DIG-1033 - Price Filter Updates
This enhancement updates the price filter on the Metro Listing page so users have the ability to enter minimum and maximum prices to filter a price range appropriate to the listing type (rental or sales) they are searching for
DIG-1043 - Show either rental units or for sale homes, not both
This enhancement separates rentals and sale properties in the search results so users are not confused by the listing type they are viewing
DIG-1714 - Metrolist price filters don't work on Sale properties
This enhancement shows the user sale properties when they are using the price filters on the Metro Listing page
DIG-1770 - Metrolist Listing Form Error
Code was rolled back to the previous version after the listing form was throwing an error. The cause and solution will be investigated for a permanent fix
DIG-1659 - UI change to Boston.gov contact form
This adds email validation to the contact forms to avoid postmark errors
DIG-1679- Adjust style for document link button in "three column w/ image" component
Update the styling of document link on a three column image component so internal and external button links match
DIG-1720 - PDF Generation may fail if multiple requests occur simultaneously
This fix allows endpoint folders to accept both upper- and lower-case fiscal year names so the form acceptance process is not blocked.
DIG-1564- Add email validation to marriage intention form
This adds email validation to the marriage intention form. A user must enter matching emails before submitting the intention form. If the emails do not match the user will get an onscreen error message. This will ensure we do not receive postmark errors due to incorrect/incomplete email addresses.
DIG-807- Add email validation to registry applications
This adds email validation to the birth, death and marriage certificate request forms. A user must enter matching emails before submitting any of these request forms. If the emails do not match the user will get an onscreen error message. This will ensure we do not receive postmark errors due to incorrect/incomplete email addresses.
Digital Team
DIG-1164 Address Barcode for abatement application through assessing online
This update added code that writes a PDF and adds a barcode to the downloadable exemption information section of the abatement application
DIG-1653 Bug with last name in 'grid of quotes' component
This bug fix corrects the name display on the grid of quotes feature in Drupal. When editors add a person's full name to grid of quotes it will display correctly on Boston.gov
DIG-1687 'Add to calendar' button not working on 'Events'
This bug fixes the error where the Add to Calendar button on the events page was not working properly. When users clicked this button to add the event to their calendar nothing happened. When users click this button now it gives them the option to add the event to their preferred electronic calendar
DIG-1712 - Update Street Numbers on Forms
This fix updates the way addresses appear across assessing forms.
Metrolist
DIG-1561 Fix Styling of Metrolist Grid of Cards
This fix updates the Grid of Cards styling on the Metrolist page to match everywhere else on the site
DIG-1584 Change "Calculate" to estimate on unit details screen
This update changes the text on the Metrolist unit details screen from calculate your eligibility to estimate your eligibility in the Eligibility Section
DIG-1652 Styling of bullets needs to be adjusted
This fixes a bug that was introduced when the bullet styling was updated to make bullets display better in the new tables: bullets were being displayed in the middle of a multi-line sentence instead of in front of the first line. When bullets are used in a list, the user will now see the bullet positioned in front of the first line of each list item
DIG-1509 Group Management Search Order
The search order in the Group Management tool on Access Boston Portal was enhanced so users see their search results in alphabetical order
DIG-1223 Error message in search of Permit Finder website
This fixes a bug where users were seeing a 400 error when using the Permit Finder website
Digital Team
DIG-1545 - Adjust feedback link in navigation for pages where feedback form appears
This adds functionality to the feedback link seen at the top of Boston.gov pages. If a Boston.gov page has the newly created feedback form embedded, clicking the feedback link will take the user directly to the new feedback form at the bottom of the page
If there is no feedback form embedded on the page, when the feedback link is clicked the user will see a contact form that submits an email to 311supervisors@boston.gov
DIG-1307 - Add updated styling to tables on Boston.gov
This updates all the tables on Boston.gov to match the newly designed tables created for the Elections results. These new designs are for both desktop and mobile. These updates were created to align with our commitment to design and brand consistency on Boston.gov
Metrolist
DIG-1547 Make Availability Info Page Required
Removes the check box option on the “Select Building” Page of the listing form that allows the users to skip the “Availability Info” page. This step will now be required in the submission process on the Metrolist Listing Form
Metrolist
DIG-1047 Open Project Pages in New Tab
Fixes a navigation issue for users that want to open a project page from the Build In Boston map. When clicking on a project from the map the project page will open in a new tab allowing users the ability to toggle between the map and project tabs they are viewing, and close the project pages without closing the map.
Digital Team
DIG-1591 - Drupal Updates
Routine maintenance and code release cycle for boston.gov
Digital Team
DIG-1463 Create Boston.gov feedback form
Enhancement in Drupal to create a feedback form that editors have the ability to add to the bottom of any page on Boston.gov
Feedback form works on both desktop and mobile
The feedback form has the following features:
Yes/No check boxes for user experience - Required field
A comment text box for users to share their experience in paragraph form
DIG-1544 'Notes' error when viewing feedback form submissions
Fixes a bug where users were seeing an error when they clicked the notes option on individual Drupal pages
Metrolist
DIG-777 Listing Form: Availability Info Screen
The following updates were made to the Availability Screen on the Metrolist form:
Set default time in Deadline Time field to 11:59PM
Add the “Remove Posting Date” date field so users can add a date to indicate when a posting should be removed so applications cannot be submitted after that date
Updated the name of the “When would you like this posted to Metrolist“ field to “Available On”
DIG-838 Equitable Treatment agreement
Three changes to the notification requiring equitable treatment and non-discriminatory practices agreement:
Moved to bottom of page, just above submit button
Added "I agree" checkbox as part of notification
Disabled "Submit" button until "I agree" has been checked
DIG-1040 Fix Pagination icons to show displayed page
This updates the Metrolist Listing form so the user sees the pagination icon highlighted to indicate which page they are currently on
DIG-1024 View Only Group Management
Enhancement to the Group Management Tool on Access Boston Portal that allows users to search and view employees/contractors list of security groups in view only mode
Users with the following security group SG_AB_GROUPMGMT_SERVICEDESKVIEWONLY will see the Group Management link on their Access Portal page
Users will have the ability to search on users by name or ID
Once the correct user is found and selected their security groups will be displayed in view only mode. No edits can be made to the security groups
DIG-878 'Grid of quotes' component image upload issues
This fix allows users to upload an image by simply clicking the “media add page” in the "Grid of Quotes" component, instead of forcing them to add the media and then search to find it so it displays
This also fixes the display itself so that the image can been seen instead of displaying the file name
DIG-1362 Display unofficial results in the same order as the ballot
This fix allows users to see the unofficial election results data displayed in the same order as it is listed on the official election ballot when it appears on the unofficial elections results page
The election data order should also display in the same order as the ballot in the filtered drop down for searching
DIG-1393 Add field to Drupal to allow election editors / admins to edit disclaimer
This fix gives election editors or admins the ability to update the disclaimer message on the Elections Results page
The editor will see a new field to add or update a disclaimer message in Drupal
DIG-1435 Error on elections file upload crashes upload form
Fixes an issue to prevent upload crashes when an elections file is uploaded
To fix this issue the following solutions were implemented:
Made the form more tolerant to missing or orphaned data in the history object which is dynamically stored in the node_elections config settings.
Added a clear history button to the form so an admin can manage the history
Added logging into the history so that clearing and deleting history is recorded
DIG-1511: Maintenance Updates
Routine contributed module updates for Drupal
DIG- 1519 Content authors / editors unable to see drafts of unpublished content
Fixes a bug where editors were unable to see their draft Drupal pages. When a user saved a “draft” for a new content type, they got a “temporarily unavailable” message
After an update in Drupal, the DateTime module seemed to be less tolerant of formatting a date; the code was updated so that the published date will only be formatted if the node has been published
DIG-62 Unmask a Password - Sign in Screen Access Boston Portal
Added a show password feature to the sign in screen on Access Boston Portal. This allows users to click on the word 'Show' to unmask the password they are typing to make sure it is correct
NOTE: This code was developed by the digital team but released by the IAM team because this page lives on their servers
DIG-1155 Unmask a password - Change Password Screen
Added a show password feature to the Change Password screen on Access Boston Portal. This allows users to click on the word 'Show' to unmask the password they are typing to make sure it is correct
This screen is also used in the create password process for new users/employees
DIG-1156 - Unmask Password - Forgot Password Screen
Added a show password feature to the Forgot Password screen on Access Boston Portal. This allows users to click on the word 'Show' to unmask the password they are typing to make sure it is correct
DIG-1363 - Add disclaimer message to top of unofficial election results
Added a disclaimer message to the top of our unofficial election results election card to clarify the order that results appear for users.
DIG-1374 - Add error validation in election uploads
This addresses an issue with uploading xml files in our new Election Results section in Drupal. It adds further error validation for users uploading problematic files.
DIG-1343 - Adjust styling of tables on mobile
Adjusted the styling of our tables when viewing them on mobile in our patterns library. This change eliminates adding an extra border at the bottom of each cell, and instead adds the bottom border below groups of data on mobile.
DIG-1333 - Adjust text in filtered dropdown for primary elections
Capitalized "rep" and "dem" party descriptors in the race selection dropdown
Background: The unofficial election results website gets updated with new data each election to reflect the current race. The data is currently iframed into a page on Boston.gov from the old cityofboston.gov website; it does not have City of Boston branding and it isn’t mobile-friendly.
Goal: Make unofficial election results mobile-friendly on Boston.gov
UI Design
Related Tickets
DIG-1004 Importing Elections Data into Drupal
Created a new content type, packaged in the node_elections module, allowing the elections department to upload election results into a Drupal page
Created an import page for the elections department to import an election results file that will appear on the elections website
Created an import process where election results file contents are loaded into the new content type to update the election results site
DIG-1093 Create display pages for unofficial results
Front end development work to create display pages for election results that align with approved UI designs and our patterns library
DIG-1206 Drupal Security update 9.4.8
DIG-854 Update patterns library to Node 18
DIG-1060 - Fixes a bug where the Postmark contact form was not automatically adding the correct email address to the CC field, which meant Boston city workers could not click "reply all" without cutting and pasting the email address into the To field. With this fix, users see the correct email address in the To field after clicking reply or reply all.
DIG-881 - This updates the my neighborhood look up tool by swapping out the summer links for the winter links.
DIG-2021 - Routine scheduled updates to Drupal contributed modules
Weekly Maintenance that updates both PROD and the REPO
DIG-1002 Error message in Registry suite of applications
Fixes issue where users are getting a 400 error in Registry Suite of Applications
Issue was due to a malformed cookie
DIG-1009 Internal links considered "external" causes WSD on older pages
Fixes issue where Cabinet page links were broken
Updated code to ignore hard coded part of the URL and read the remainder of the path to display the correct page
DIG-993 Contact forms on Boston.gov failing to send
Fixes issue where users were unable to send emails via the "mail to" links on Boston.gov
A class was not registered for the email sending process via postmark
DIG-872 Add 'last updated' to 'updated' date in 'posts'
This adds "Last Updated" before dates on a published Boston.gov page so users know when the page was last updated
DIG-949 Verify pages to be scanned by Percy
Added a representative selection of pages from Boston.gov for Percy testing
DIG-1005 Internal links considered "external" causes WSD on older pages
Fixes issue where a URL link cannot resolve because Drupal considers it external
Fix is to check if the link is external before trying to load the associated node
DIG-905 Apply button error in 'commission summary' component
Fixes a 400 error users see when they click the Apply Online button on a Boards and Commission page
Issue was due to a malformed cookie
DIG-67 - Caching issue displaying incorrect breadcrumbs
This fixed an issue where the breadcrumbs at the top of a boston.gov page were not consistent with the user's navigation
The solution is to specify that the breadcrumb block be cached per URL and not per content type
The breadcrumbs now appear consistent with the user's navigation path
DIG-831 (metrolist)- Clarify "Minimum income" is annual income
Added a tooltip to the “Minimum Income” field explaining that the amount entered in this field should be annual income
DIG-925 (metrolist) - Email Language Errors
Updated confirmation email with correct language for the user
DIG-989 - Fix subdomain redirect in configuration for rentsmart.boston.gov
Updated our config file to point the rentsmart.boston.gov redirect to www.boston.gov/rentsmart
DIG-31 Custom 500 Error
This will activate the 500 error page on Boston.gov.
When a user gets a 500 error they will see the following:
Text: Sorry! Looks like something went wrong on our end. We're currently working to fix the issue. You can try re-loading the page in a few minutes, or email feedback@boston.gov with any questions or concerns.
DIG-853 Re-integrates Percy
This work re-enables Percy on Boston.gov
Percy will assist in automated testing by comparing screenshots to ensure new code does not break anything on Boston.gov
Percy tests must pass in order to merge new PRs
DIG-876 Resolves internal link WSD
This fixes a validation error for broken internal links in components
If a user enters an internal URL the page will resolve itself and navigate to the correct URL when saved
When users try to save a draft page with a broken internal link they will get an error message.
DIG-542(Metrolist) Calendar events do not show physical location
This will update calendar events to pull in the physical address of the event so users see the correct address on the site
This update will also accommodate events that will be held virtually - users will see language on screen saying the event will be virtual
DIG-738(Metrolist) Stop re-submission of form once submitted
Update to add language to the Metrolist listing form informing users that the unique link they receive to submit a listing should only be used once. The following language has been added to these screens/emails:
Listing Form Request - Important: If you need to submit listings in multiple properties, please request a new form for each one.
Email communication - Important: Do not reuse link. If you need to submit listings for additional properties, please request a new form
Listing form request on screen notification- Important: If you need to submit multiple listings, please request a new form for each building.
DIG-776(Metrolist) Submission Confirmation Email
Send a "Submission successfully completed" email notice to the contact associated with the listing when a submission is completed
DIG-775(Metrolist) Update Admin email alert
Update the Admin email recipient list, add meaningful info, and include a link to the Salesforce Development. Email will contain:
Submitted on: [Date/time stamp] Submitted By: [Listing Contact Name] Listing Contact Company: [Listing Contact Company] Contact Email: [Listing Contact Email] Contact Phone: [Listing Contact Phone]
Property Name: [Development Name] Property Address: [Street, City, Zip]
DIG-781(Metrolist) Apply Button on listings gets 404 Not Found
Fixed bug where Apply button on MetroList Listing page was getting a 404 page error
DIG-809(Metrolist) Issues accessing Metrolist from external devices
Fixed bug where Metrolist search page was not loading on some devices
In the listing date code, \T was being read as a timezone; updating this to \\T fixed the issue
DIG-433 (Metrolist) Telephone number (& date) format on metrolist_listing webform
Fixed bug where user sees a formatting error after entering a phone number
DIG-824 Update BOS311 API for Chinese translation
Chinese translation was not appearing in BOS:311 App
API was updated to send correct language code so alerts show in Simplified Chinese
DIG-865 Fix styling of events component in how-to page
Fixed styling on the events and notices component so event boxes are not pushed to the right of the screen and appear centered on the site
DIG-317: How-to pages broken components
Fix styling issues with the How-To content type from the patterns library
DIG-318: Node landing page Full
Minor edit to the wrapper around the main items
DIG-319: Node Listing Page
Took out unnecessary code to make sure listing pages look correct
DIG-438: Add email re-verification field.
Added an email verification field to the Postmark contact form so users have to confirm their email address before submitting a question, etc.
DIG-757: Event calendar bleeding into the bottom module
Fixed a bug where the calendar button was bleeding onto the components section in events; the button is now aligned correctly
DIG-733: Features sidebar svg
Fixed SVG icon files appearing too large in the features section on the ArtsBoston Calendar. They now appear at normal size
DIG-781: Apply button on listings gets 404
Fix the apply button on Metrolist listing so when clicked user can apply to any listing
DIG-65 - Group Management URL Security Fix
Fixes a security issue that allowed users to access group management by copying a URL directly into the browser
Users that do not have access to group management will be directed back to the Access Boston Portal screen
Guidance for updating the boston.gov Drupal website with accessibility in mind: what editors need to keep in mind for screen readers and people with disabilities when adding or editing HTML content.
Adding Images
When adding images to content, you must add caption, title, and/or alt tags. This is especially important because screen readers read them.
The editor we are using, CKEditor, provides at least one field to enter either a caption, title, or alt tag. If all three fields are provided, please enter content for all three.
Please see the "How to guide" section on the best ways to add and edit an image.
Adding Tables
Table summaries must be added to each table created. Editors can use CKEditor to add summaries to each table. The summary must explain exactly what the table content is about.
See the "How to guide" section to learn more about adding table summaries to tables
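For reference, a table summary typically ends up as a summary attribute on the <table> tag. The sketch below is purely illustrative (the neighborhood data is invented):

```html
<table summary="Trash and recycling pickup days for each neighborhood.">
  <caption>Pickup schedule</caption>
  <tr>
    <th scope="col">Neighborhood</th>
    <th scope="col">Pickup day</th>
  </tr>
  <tr>
    <td>Allston</td>
    <td>Monday</td>
  </tr>
</table>
```

A screen reader can announce the summary before reading the table, so the user knows what the data means before hearing the cells.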
Adding Buttons and Links
Just like images, links should always have a title if the link is used as a placeholder or anchor tag, that is, if it has no content within the <a></a> tag.
Tip for buttons: When using <div>...</div>, <span>...</span>, or <a>...</a> as buttons, always add a role="button" attribute to the HTML tag. The role should be used for clickable elements that trigger a response when activated by the user. Adding role="button" will make an element appear as a button control to a screen reader. This role can be used in combination with the aria-pressed attribute to create toggle buttons.
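For example (the label is illustrative; tabindex="0" puts the element into the keyboard focus order):

```html
<div tabindex="0" role="button" aria-pressed="false">
  Mute audio
</div>
```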
The above example creates a simple button which is first in the focus order.
Audio & Video
Youtube videos:
HTML videos:
Other video types:
Our philosophy on how to conduct effective code reviews.
Code reviews don’t need to be – nor should they be – only about finding bugs and errors. They give us the opportunity to improve the quality of our codebase by providing feedback that helps move the code forward. Sure, that could be a fresh set of eyes catching a typo... but it’s also about learning new things, asking questions and talking through decisions, and giving praise when you see awesome code!
Our codebase is a team effort and it’s important to remember that we’re reviewing code, not each other. When you’ve spent hours/days working on a piece of code, sometimes it can be easy to take feedback personally. As reviewers, we can minimize the risk of misinterpretation (as well as provide better, more constructive feedback) by keeping a few points in mind:
Don’t address the author directly (“you”), and use the passive voice in your feedback
When requesting a change, also explain the reasoning behind it, and perhaps even suggest an alternative
Remember that you can also use comments to ask questions or start conversations
For a deeper dive, we highly recommend reading How to Do Code Reviews Like a Human. Many of us have found it quite valuable!
Occasionally development work in our repos will be done by a partner external to the Digital team. Outlined below is how we will engage with these partners.
All developers will be oriented to the digital development standards
All developers will be briefed on shared resources (e.g. patterns library)
Development approach will be approved by the CDO
Approved approach will be scoped and specified to the satisfaction of all parties, with minimum standards so the team can test that the code is what was requested
Product owner/Project Manager on the Digital Team will oversee communication and management of the project, utilizing JIRA as an organizing tool
Regular communication (e.g. weekly meetings) or stand-ups as necessary between partners to understand project timelines and dependencies
assigning tickets between teams
Digital team needs to peer review all code prior to deployment
JIRA tickets should have adequate solution and documentation for updating release notes
Developer standards should be adhered to for code development
Digital team conducts the deployment into our deployment pipeline
Partner is responsible for documentation, and the Digital team is responsible for integrating that documentation into the Digital Team's GitBook
Developer-to-developer handover of code for future maintenance
Digital Team will update release notes when code is released into the PROD code base
Tips on how to use GitBook for documentation.
See GitBook’s documentation at https://docs.gitbook.com/content-editing.
If you’re documenting how to perform a task, organize it as a Guide. Most of the docs in our GitBook space will be Guides. (e.g. How do I start a new project?)
“Standards and best practices” is where to document things like What files need to be included in every project? What library should I be using?
In order to add new pages, you need to switch into editing mode first. It’s easy to forget!
GitBook will automatically create a list of hyperlinks from H1s and H2s as the page “contents” block in the upper-right of the page. Although it isn’t semantically correct, use H1s for all section titles within a page, and H2s for subheadings.
When you are in editing mode, any changes you make will be autosaved as one draft: you can click through and edit multiple pages without needing to save before leaving a page. When you’ve finished, make sure to save the draft! Note that the “cancel” button will discard all changes you’ve made, not just the page you are currently on. Your changes will not be live until you merge in your draft.
If you add a new file in GitHub, you must also add it to the summary for it to appear in GitBook.
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
Examples of behavior that contributes to creating a positive environment include:
Using welcoming and inclusive language
Being respectful of differing viewpoints and experiences
Gracefully accepting constructive criticism
Focusing on what is best for the community
Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
The use of sexualized language or imagery and unwelcome sexual attention or advances
Trolling, insulting/derogatory comments, and personal or political attacks
Public or private harassment
Publishing others' private information, such as a physical or electronic address, without explicit permission
Other conduct which could reasonably be considered inappropriate in a professional setting
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at digital@boston.gov. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4.
This project is in the worldwide public domain. As stated in LICENSE:
This project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication.
All contributions to this project will be released under the CC0 dedication. By submitting a pull request, you are agreeing to comply with this waiver of copyright interest.
The automated tests we run in our projects.
DoIT uses a set of PHPUnit-driven linting tests to check that coding standards are being met.
Linting tests can be run locally by developers using lint utilities in their development environments. Developers can also execute a mirror of the tests run during CI (by Travis) using Phing tasks locally.
Jest
Storybook
ESLint
Prettier
As part of the code acceptance in the CI workflow, Travis runs linting tests via Phing. Code changes cannot be pulled and merged into development branches until these tests are passed.
Unless a repo is static content or an app with a very limited lifespan (e.g. private Amazon bid site) it must have some tests, which must be easily runnable by Travis.
Test failures on the default branch must be addressed immediately.
In general, tests must pass before merging a PR, though if the change will not affect the tests (such as a documentation or script change) it’s okay to override that requirement so you can move on.
Travis should handle all packaging for deployment for maximum repeatability. For security reasons, it’s generally best to have a second process for actually executing the deploy, however.
Flaky tests are not tolerated.
All apps with UI should be set up to submit to Percy.io as part of every PR.
For React-based apps, this typically means using Storybook and is a good reason to make state-lite UI components that are fully exercised by Storybook.
The Percy.io configuration should be set for a mobile width, the smallest desktop width, and a large desktop width. (See “Testing Breakpoints” above.)
Percy.io changes must be approved before merge. Make sure there are no surprises.
Non-deterministic UI, such as Mapbox or a random icon, should be either made deterministic or removed when run by Percy. Flaky UI is not tolerated.
We use BrowserStack to live test our website and mobile apps across mobile devices and platforms. BrowserStack’s device cloud carries an exhaustive list of devices that lets us easily/speedily switch between multiple breakpoints to test our site’s responsiveness.
We are currently experimenting with running browser integration tests on top of BrowserStack.
Code tells you how; comments tell you why.
Code comments can save a lot of time and stress for the next person working in the codebase (spoiler: it’s probably future you) by reducing the effort needed to understand a piece of code, or the intent behind it. Tracking down the original author to ask is costly; a well-placed explanatory comment can reduce or prevent the need, and allow a developer to continue on with their initial task uninterrupted.
Standards we follow as developers.
All new software is checked in to GitHub.
Legacy software that needs maintenance gets moved to GitHub.
Absent a very compelling reason, repos are public from day one and source code is released under the public domain license.
We’ve moved to a monorepo structure; this repository can be found at CityOfBoston/digital. develop is its main branch and is only updated by pull request/merge, never by committing directly. All new webapps should be created under /services-js. Our internal modules are found at /modules-js. We are working on migrating older apps into the monorepo over time.
Use Yarn instead of NPM when installing packages.
Set up your editor to automatically run Prettier on save. See the guides.
Create Storybook stories to provide documentation for your components, and to allow for visual regression testing with Percy.
Set up automatic linting with PHPUnit
DoIT uses a set of PHPUnit-driven linting tests to check that coding standards are being met. Local testing - linting tests can be run locally by developers using lint utilities in their development environments. Developers can also execute a mirror of the tests run during CI (by Travis) using Phing tasks locally.
Boston.gov subdomains are assigned on a case-by-case basis. Typically, anything that is a core city service will qualify for a subdomain.
If the website qualifies for a subdomain, it should adhere to the following conventions:
Example: one subdomain fulfills service requests for the Mayor’s 24-hour hotline
Example: another subdomain is the data hub for the City
One of our sites does not use talent.cityofboston.gov because “Talent” could be interpreted as “Employees”, thus misleading users searching for the site.
For an explanation on what DI is and why it's a good idea to use it, refer to the Drupal docs.
When you're using DI, you're asking the Service Container, which Drupal borrows from Symfony, to pass the correct object into your class so that you can interact with it in some way. See the Drupal docs for a full list of services provided by Drupal core.
The main thing to keep in mind is that any time you are writing a class in Drupal 8, you should be accessing external services through the service container (SC). If you use \Drupal::someMethod() within a class, that is a red flag, as it's an opportunity to use DI instead.
Controllers (extending ControllerBase), blocks (extending BlockBase), and forms (extending FormBase) have special access to the SC: they can access it directly via their create() method. The example below gets access to the database and request_stack services, and then passes those into the __construct() method so that they can be used later on in the class. These services could also be accessed statically (\Drupal::database(), \Drupal::service('database'), \Drupal::request(), \Drupal::service('request_stack')), but since we're writing a class, that's not the recommended approach.
In the other methods of our class, we can now access our services like this: $this->db->someMethod() or $this->requestStack->someMethod().
If you're writing a class with functionality that will be (or could be) used multiple other places in the application, consider making it a service. For instance, if you're interacting with a 3rd party API, you may want to write a class to make common interactions easier (connect, fetch, etc).
In this case, refer to the Drupal docs for creating a service. The part of the .services.yml file relevant to this article comes in the arguments line:
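A hedged sketch of such an entry (the module, service name, and class are placeholders):

```yaml
services:
  mymodule.example_service:
    class: Drupal\mymodule\ExampleService
    # Each @-prefixed argument is a service ID the container will inject.
    arguments: ['@database', '@request_stack']
```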
This instructs Drupal to reach into the SC and pass the database and request_stack services into your class. This replaces the create() method in the example above. You can then add your services to your constructor like this:
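Along these lines (the class name is hypothetical; the constructor argument order matches the arguments line):

```php
<?php

namespace Drupal\mymodule;

use Drupal\Core\Database\Connection;
use Symfony\Component\HttpFoundation\RequestStack;

class ExampleService {

  protected $db;

  protected $requestStack;

  public function __construct(Connection $db, RequestStack $request_stack) {
    // The container injects these in the order given by the arguments line.
    $this->db = $db;
    $this->requestStack = $request_stack;
  }

}
```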
Now the other methods of your class can reference these services like this: $this->db->someMethod() or $this->requestStack->someMethod().
Descriptions of GitHub accounts we’ve needed to make that don’t belong to people.
These accounts should all be members of the (secret) Digital Service Accounts GitHub team so we can keep track of them.
cob-deployer (Twicki) — This is a legacy service account that appears not to be currently used. It has 2-factor auth on but no registered SSH keys, so it doesn’t seem like any automated process can be using it. (The cloning of boston.settings as part of the Travis build comes from a “deploy key” specifically added to that repo.) As of 5/22 this account has no special permissions, and probably can be deleted if we can go a few weeks of deploys with nothing breaking.
cob-digital-atlantis (Atlantis) — Account for the Atlantis app used in Terraform deployment. Needs permission to the private digital-terraform repo so that it can see the Terraform templates and post its comments about plan and apply success and failure. (See: CityOfBoston/digital-terraform)
cob-digital-bot (Shippy-Toe) — Used by the internal-slack-bot tool as part of Digital monorepo deployments. Needs write access to the digital repo so it can update production branches as part of deployment. (See: Digital Webapp Deployment)
cob-heroku (City of Boston Heroku Deployer) — Used for connecting GitHub and our dwindling number of Heroku apps. At this point just needs read/write access to patterns so it can make apps for each PR and report the deployment back. (Admin access is needed only when first connecting the account to Heroku so it can set up webhooks.)
State and Local GitHub Reps/Team/Sales Development Team @ GitHub (don't know much more)
Tanner Hogan, githogan@github.com
Eric Johnson, elstudio@github.com
Michaela Yamamoto michaelayamamoto@github.com
Since we are using Prettier to enforce basic code style, there’s no need to go into detail on those bits of syntax (e.g. always using semicolons, avoiding extraneous whitespace, etc).
In nearly all cases, there should be only one React component per file. The file name should match the component name.
A component’s story file (or test file) should be colocated with the component.
File/component name should be descriptive.
Use the .tsx extension for React components.
Storybook files should be named ComponentName.stories.tsx.
Unit test files should be named ComponentName.test.ts.
Interface and type names should be in PascalCase.
Constant value names should be in ALL_CAPS.
Prefer the <></> shorthand over <React.Fragment></React.Fragment>.
Avoid var; use const by default, or let when necessary.
Use JSDoc style when documenting classes and functions, and standard // comments elsewhere in the code.
Remember: TypeScript errors are for our benefit as developers, so we should always take the time to resolve type errors properly. Override only as a last resort!
Declare defaultProps as a field on the component’s class definition, and set its type as a Partial of the component’s Props interface.
See https://reactjs.org/docs/refs-and-the-dom.html for detailed information. When referencing a Ref in your code, don’t forget that you need to use the current attribute to access the actual DOM element!
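A sketch pulling these conventions together; the component, prop names, and class names are hypothetical:

```tsx
import React from 'react';

interface Props {
  label: string;
  color: string;
}

export default class FancyButton extends React.Component<Props> {
  // defaultProps typed as a Partial of the Props interface.
  static defaultProps: Partial<Props> = { color: 'blue' };

  private buttonRef = React.createRef<HTMLButtonElement>();

  focus() {
    // Remember: go through .current to reach the actual DOM element.
    this.buttonRef.current?.focus();
  }

  render() {
    const { label, color } = this.props;
    return (
      <button ref={this.buttonRef} className={`btn btn--${color}`}>
        {label}
      </button>
    );
  }
}
```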
See the Emotion guide.
The DoIT Digital team is currently working on a number of projects to improve the digital experience for both City of Boston constituents and workers. We also have a backlog of projects slated to begin over the coming months.
As the project/product management team on the Digital Team grows, it will help immensely to implement some common baseline project management practices across projects so we can track digital projects from kick off to implementation.
This means we will use the same tools, processes, and language across all projects so anyone on the digital team will understand where we are on any given project and what team members are working on at any given time. This will help us accurately report on project timelines internally and out to stakeholders.
We are not going to introduce any new project management tools. We want to leverage the tools we currently use to help establish these procedures. We will continue to use Google suite and GitHub. As we establish some best practices across the Digital Team, there will always be room for improvement and modifications as we try to figure out what works best.
We have been exploring how to use GitHub to help us track projects. GitHub recently introduced a beta version of Projects we can leverage to give us a dashboard for ongoing project work.
We created a Digital Project Board on GitHub, and anyone can link issues from other projects to this main project to track what is in progress. We are continually trying to improve how we view projects on this board. Below we will outline and define how Digital team members should use the best practices we are putting in place.
By definition projects are a sequence of tasks that must be completed to attain a certain outcome. Digital team projects can vary in size and scope based on the requirements. A project team should be made up of the following:
Product Owner/Manager
Project Manager
Stakeholder(s)
Developer(s)
UI designer(s)
We use GitHub to keep track of the Digital team’s development work. You can find the repositories at https://github.com/CityOfBoston. New projects should be set up under Projects > Projects (Beta). Issues in these projects can be linked to the Digital Project Board; linking the high-level issues associated with a project allows us to see what work is in progress at a glance.
You can find the steps to set up a new project and how to link issues to the digital project board here: Project Set Up in GitHub
Project Managers should set up a GitHub Project for each of their projects. Project Managers can set up and run their project in a way that works for their team. However, using some of the same conventions outlined in the slides above will help establish common practices.
Project Managers will be assigned to approved projects from the backlog when they are ready to kickoff. They will be responsible for managing projects from start to finish.
The Product Owner/Manager defines the vision for the product and works closely with the project manager on project timelines. Product Owners are responsible for triaging bugs, new project requests and prioritizing the product backlog.
Project Managers are responsible for the following:
Scheduling Kickoff meeting - invite stakeholders, Devs, UI Team
Project Set up in GitHub
Gathering requirements
Documentation
Scheduling regular project team meetings (scrums, check ins etc.)
Communication between stakeholders and project team
Tracking the project in GitHub (creating tasks, issues)
Writing acceptance criteria
Linking appropriate project issue tickets to Digital Project Board for tracking
We have a shared Digital Google Drive where you can find the Digital Team Projects folder. This folder contains a specific project folder as well as a Templates folder. The templates folder contains a Project Overview Document template that Project Managers can use to document key project information. Anyone on the Digital team can add tools they find helpful into this folder for anyone to utilize.
We also have a GitHub project called Digital Completed Projects - (Maintenance). This project is reserved for issues that arise on projects that have already been completed: bug fixes or small enhancements that do not rise to the level of a new project.
You can find the steps for setting up a new project in GitHub here: Project Set Up in GitHub. As stated above, each PM can set up and run their projects in a way that works for their team; below are some suggested common practices you can follow.
Once a project is set up in GitHub, it is easy to track issues using the following statuses/swim lanes:
To Do/Backlog
Triaged/Sized
In Development/In Progress
Ready for Review/QAT
Ready for Production
Done
Created by the PM and/or Lead Developer and added to a project
Issues contain tasks or acceptance criteria for the development team
Prioritized by the team in the backlog based on sizing and timing
Moved through the swim lanes or statuses by any team member as the work is completed
Closed by the PM or Lead Dev once tested and considered complete
Once an issue is given QAT status, the PM should review the work in a test environment against the acceptance criteria in the ticket
If the project involves extensive user-facing components, the UI team member(s) should also review the stories and sign off as complete
PMs or UI team members should add any comments or questions directly to the ticket for the developer to address
Some larger projects may need to involve outside vendors for comprehensive testing
PMs should work with the Dev team to create test scenarios and use cases for testing
All issues/tickets have been tested and deemed done
Code is successfully deployed to Prod
We have a number of open positions at DoIT Digital to oversee and work on project development. When we are fully staffed, the development teams will be split into two areas:
New work team - this team will work on larger development projects that require new development and feature work. This team will work with project managers who are overseeing those larger projects.
Maintenance team - this team will work on maintenance and smaller bug fixes/issues that have not escalated into projects. It can take on smaller projects if it has capacity. Maintenance teams will not typically need project management oversight; they will be managed by the lead developer.
We set up a Digital Team Project Board in GitHub to capture active project work across the Digital Team. We have a list of projects in the backlog swim lane. New projects will be vetted and approved by leaders on the Digital Team, based on available resources and budget. When a project is ready to begin, it will be assigned a Project Manager, Dev, and UI Team. The project will be added to the Digital Project Board. The Project Manager will be responsible for its progress.
Project Managers can link appropriate project issues to this board and are responsible for moving them to the applicable swim lane based on project status.
Once you have a project set up, you can create Draft Tasks. These are not assignable or attached to a repository, but they can be given a status. Project Managers or others can use these tasks at their discretion.
For example, tasks can be created to capture initial development work and acceptance criteria before being converted to issues.
Draft Tasks can be converted to Issues. Issues need to be associated with a repository. Issues can be assigned to multiple projects and people. Issues can be moved between swimlanes or given a status. Project managers can use issues to track the work on their project done by anyone on the team.
All issues should be assigned a project. If there is no project associated with a specific issue, it can be assigned to the following project in GitHub: Boston.gov - Non Project Issues
Project managers should create a high level issue(s) to represent project work, which may have its own project board, and link it to the digital project board so the team can track ongoing projects.
Repositories are where our code base lives for each project. Issues need to be linked to a repository. We are in the process of cleaning up and organizing the repositories. The Digital Team works in a handful of repositories:
Boston.gov - d8
Digital
Patterns
Digital Documentation
Cityofboston.gov - this is specific to any migration projects from the old City of Boston site to Boston.gov
If you have a question about where an issue should be assigned, please ask the lead developer on your project. We may configure these differently in the future based on how projects will be managed going forward.
Project meetings are up to the Project Manager’s discretion. A project kickoff meeting is recommended so everyone on the team understands the Project and the roles of all team members.
Regularly scheduled project meetings are a helpful way of keeping the lines of communication open as well as tracking and moving the project forward
Suggested Meetings:
Dev Projects:
Project (scrum) meetings with devs on an as needed basis for a project - attendees to be determined by the team
Project retrospectives on an as needed basis for a project with PM's, Product Manager and devs (CDO optional)
Digital Team:
Weekly group (content, social media, design, dev) meeting on Mondays to report the previous week’s progress and agree on the group’s priorities for the week
Weekly formal digital meeting, held the day after the DoIT Direct Reports meeting, where information from the CDO is passed to the team and questions can be raised with the CDO
Weekly demo meeting where work can be showcased and brags and learns shared with the group; SMEs from other teams can be invited to present digital-relevant information as appropriate (e.g. Daniel showing ArcGIS capabilities)
Monthly one-on-ones between each individual and the CDO, with the individual’s team leader present
The Project Manager should be the main point of communication for a project. The team can decide on how they want to communicate with each other and stakeholders throughout the project. Setting up a project team space in google chat might be helpful for quick check-ins and questions during the project.
Project communication should be established at the kickoff meeting.
A backlog is a prioritized list of work for a project that is derived from the project requirements. This can include tasks and issues for anyone on the team. The Project Manager should manage the backlog for projects with input from the team. Keeping your backlog up to date will help when reporting out blockers and project timelines.
Each repository has its own set of labels. The Digital Team will be meeting to create a list of 10-15 labels we can use across repositories. We can use labels to filter work on the Digital Project Board.
We are using this Google Doc to pare down the list: Labels
If you need to add a label to a repository, the team should approve it first so we can make sure it is added to all repositories.
Milestones are associated with repositories. Currently we are using Milestones in each repository to identify Low, Medium and High priority for issues not related to projects. We can group these issues into these categories for easy viewing on project board list views.
We have created a project intake form that people can fill out if they are interested in working with the Digital Team on a project. It can be shared with anyone who wishes to work with the team: Digital Team Project Request Form
If you want to add this to your email signature here is some suggested phrasing:
Have an idea for a digital project or application? Submit your ideas here!
If someone on the Digital Team receives a request from another City of Boston department to report a bug or discuss an enhancement for a previously completed Boston.gov project, we use the following procedure to triage and prioritize these issues:
A GitHub issue/ticket is created in one of the following projects and assigned to James Duffy
The Boston.gov Product Owner - James Duffy - will triage the ticket
The Product Owner will connect with the original reporter of the issue and gather any requirements
The Product Owner will discuss the issue with the Development Manager, David Upton; together they will decide whether the issue is a bug or a potential new project
The Product Owner will prioritize the work
Bugs: The Dev Manager and Product Owner will allocate to the appropriate developer
Projects: Product Owner will hand it off to a Project Manager to start the new project process detailed in this document
Our S3 and Node-based apps can be pushed to production at any time.
We push changes to production as soon as they’re ready. This keeps us from piling up inventory. Shipping small batch sizes is also safer.
We’ve built a deployment system that has zero-downtime, so it’s always safe to push.
See each repo’s documentation for instructions on how to deploy it.
Boston.gov and the Hub are deployed on a fixed schedule rather than continuously, owing to the disruption of pushing new Drupal changes.
Changes are made against a `develop` branch rather than `production`.
The `develop` branch is pushed to a staging environment on a weekly schedule.
Issues in the “Inventory” Kanban column get moved to the “Staging” column.
The issue creator / feature owner validates fixes on staging.
Code gets merged to the `production` branch and pushed to production the following week.
Issues move from the “Staging” column to “Closed” after pushing to production.
Possible change: Adding a “Staging” lane to differentiate with “Inventory” (not yet pushed to staging).
Practices and workflows for using Git and GitHub on the Digital team.
Changes are made on feature branches and merged via Pull Request, regardless of whether those PRs are reviewed.
Preferred naming convention for a feature branch is `service-name/feature`.
Never commit directly to the `develop` or `production` branches.
Rebase to a single commit before making a PR.
Commit messages should be written in present tense.
See our code review guidelines.
PRs should be merged by the assignee, who is typically the person who made the PR.
Assignee should delete the branch once it has been merged into `develop`.
Code reviews are required if more than one engineer has interest in the repo, regardless of the relative experience between the engineers.
Code reviews take priority over feature work, and must be responded to quickly.
Reviewer is responsible for approving changes detected by Percy.
For Drupal see:
One possible workflow / set of tips for using Git
Git has a reputation for being complicated, and part of that is because the problem that it addresses — distributed version control — is a complicated problem.
Here’s a (hopefully) short way of thinking about Git that might help you avoid getting your repo in a messy state, or at least guide you out of it when it happens.
To be successful with Git, it’s important to understand its model: starting from nothing, a series of patches, called “commits,” that, over time, build up a repository.
Commits each have an identifier, known as a “SHA” because they’re generated by taking a SHA hash of the commit’s data. Canonically they’re a 40-character hex number, but are usually referred to by the first 8 characters because those are probably unique in the repo.
Sometimes these commits follow one after another. Each has a single parent and modifies the repo in some way, like steps in a LEGO instruction book.
Other times, commits “merge” the changes from two different “parent” commits together back into one, like two roommates coming back together after the holidays. If the commits changed different parts of the repository — say, everyone got new clothes for their own closets — this merge happens easily and automatically. But, if they both brought posters that they wanted to hang in the same place in the common room, that’s a conflict that needs to be resolved.
Part of good shared Git repository use is communicating with your teammates and merging small changes frequently. The longer you spend collecting bric-a-brac on your own, the more likely it is your roommate has moved the shelf you were planning on using when you got back. And, if you have a large change planned, like refactoring a lot of code, it’s best to tell your co-workers not to pick out curtains until after the metaphoric common room has been repainted.
In concrete terms:
Regularly update your local repo with the latest version of the code by using `git fetch` or `git pull`, and merge / rebase those changes into your in-development branch.
Keep your changes focused when you can, adding single features or modifying just a few files at a time.
When a necessary change is large, let your pals know what parts of the code base you have designs on so they can avoid making their own changes in those areas until you’re done.
This will minimize the number of conflicting changes, which will reduce the number of times your Git repo gets in a rough state.
A lot of attention when using Git is focused just on branches, but it’s helpful to think of them in terms of commits, since that’s how Git thinks about them.
A branch is nothing more or less than a name that refers to a particular commit. What makes branches particularly useful (and distinct from Git tags) is that they can be changed to point to a different commit.
It’s worth underlining that the history of changes that lead from nothing to a code base is kept not in the branches, but in the commits. The `develop` branch just points to a particular commit, and it is that commit — not the `develop` branch — that is a diff against one or more parent commits, all the way back to 0.
In order to be convenient and useful, the `git` tool will update branches — pointing them at different commits — automatically. But a key to understanding Git is to distinguish between the role the commit has and the role the branch has.
For example, `git commit` is used to make a new commit. It takes all the staged changes in your working directory and a commit message, and makes a new commit with your directory’s current `HEAD` commit as its parent. It then considers the new commit the current `HEAD` of the directory.
This works even when you’re not using branches, what Git refers to as “detached HEAD state.” You can still create commits. You’ll need to use their SHAs to reference them, but they’re still there.
Now, if your repo is currently checked out to a branch (because you did `git checkout my-branch`), the above description of `git commit` all still applies. The `git` tool just does the added behavior of updating the name `my-branch` to point to the new `HEAD` commit you just made.
Once you think in these terms, Git commands like `reset` make a little more sense. `reset` takes the current branch and points it at a different commit.

So, to undo a `git commit`, you can run `git reset HEAD^` (`HEAD` is a special name for “what’s checked out now” and `^` means “the commit before”). `reset` by default does not change the files on disk, so you didn’t lose any work. But, since you’ve changed what `my-branch` is pointing at, the changes that were previously committed now appear as uncommitted. (If you check your terminal history and find the SHA from the commit, you could run `git reset <SHA>` to re-do the commit by pointing the branch back to it.)
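A quick, self-contained demonstration of this undo/redo behavior. The setup lines only build a throwaway scratch repository; the undo and redo are the `git reset` commands at the end:

```shell
# Demonstrate undoing and re-doing a commit with git reset.
set -e
scratch=$(mktemp -d) && cd "$scratch"
git init -q work && cd work
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial"
echo "hello" > greeting.txt && git add -A && git commit -qm "add greeting"
sha=$(git rev-parse HEAD)        # remember the commit's SHA

git reset HEAD^                  # undo: branch points at "initial" again...
git status --short               # ...but greeting.txt is still on disk
git reset "$sha"                 # redo: point the branch back at the commit
```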
An important caveat: commits that are not either pointed to directly by a branch or in the branch commit’s ancestry are eligible to be garbage-collected by Git. They tend to last in your local repository for about 30 days, and can be referred to by their SHA during that time.
Make sure your shell prompt is Git-aware. It should show you what branch you’re on and ideally if there are uncommitted changes in your directory.
Use an editor that is Git-aware, in particular one that shows you diffs (preferably side-by-side) and lets you edit in the diff. This is invaluable for seeing your work and doing pre-push editing passes on the code before putting it up as a PR.
Visual Studio Code is very good at this, with the caveat that if you diff staged changes against the latest commit you can’t edit them with the diff tool. You can edit unstaged changes, however. You can `git reset HEAD` to unstage everything if you need to, or `git reset HEAD^` to un-commit your previous commit (while preserving its changes in your working directory).
You can add “aliases” to your `~/.gitconfig` file that mean you don’t have to type as much on the command line. These make `git co` short for `git checkout` and `git st` short for `git status`.
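The original alias listing did not survive here, so the following is a reconstruction: a minimal `~/.gitconfig` fragment that provides the `co` and `st` shorthands described above, plus one possible spelling of the `amend` and `delete-merged` aliases mentioned below. The exact definitions the team used may differ.

```ini
[alias]
    co = checkout
    st = status
    # Assumed definitions for the aliases discussed later on this page:
    amend = commit --amend --no-edit
    delete-merged = "!git branch --merged | grep -v '^\\*' | xargs -n 1 git branch -d"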
`git amend` is useful for adding the currently staged changes to the previous commit. This is very useful in development to pre-squash your commit and avoid a chain of “tmp” commits in your history. (Use `git commit --amend` if you want to add to the previous commit but edit the commit message.)
`git delete-merged` is a cute little housecleaning command that removes all local branches that point at commits that are ancestors of the current branch. So if you run `git fetch`, `git co origin/develop`, and then `git delete-merged`, it will remove any local branch that you’ve already merged into `develop`, keeping the output of `git branch` cleaner.
If you’re working on a new feature or bug fix, you’ll want to make a particular branch for it so it can be code-reviewed as a PR. (See our GitHub working agreement for proper behavior.) Since you’re intending to merge it back into the `develop` branch, it’s best to start by branching off of `develop`. But not just anywhere: start from the latest version of `develop` that your coworkers / roommates have committed.
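The code block for this recipe did not survive extraction; here is a sketch of what it likely looked like. The first six lines only build a scratch repository standing in for a real checkout of our GitHub remote (the branch name and all other names are placeholders); the recipe itself is the last three commands:

```shell
# Demo setup: a scratch repo standing in for GitHub's "origin".
set -e
scratch=$(mktemp -d) && cd "$scratch"
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null && cd work
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial" && git push -q origin HEAD:develop

git fetch                                # update our cache of origin
git checkout origin/develop              # latest develop, in detached HEAD state
git checkout -b service-name/my-feature  # name the new branch
```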
This first uses `fetch` to update our local cache of the default `origin` remote (i.e. the GitHub repository) to the absolute latest. It then checks out the commit that `origin`’s `develop` branch is pointing to, in “detached HEAD” state. Finally it creates a new branch with your new name. (You can use `git co` if you’ve put in the aliases from above.)
Following this recipe will ensure that you’re starting new work from the absolute latest that has been checked in to GitHub, and will keep you from accidentally committing things to a local `develop` branch and getting mixed up.
Sometimes you’re working on one thing in your working directory but then realize that you need to put it aside and make a quick bug fix or something.
While Git provides a “stash” feature for saving things, it can be easy to lose or forget about the contents of your stash. Better to make a new commit.
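The code block for this recipe is missing here; a sketch of it follows. The setup lines only build a scratch repository with some in-progress work for demonstration; the recipe itself is the last two commands:

```shell
# Demo setup: a scratch repo with half-finished work in the working directory.
set -e
scratch=$(mktemp -d) && cd "$scratch"
git init -q work && cd work
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial"
echo "half-finished" > feature.txt   # some in-progress work

git add -A                           # stage everything, incl. new/deleted files
git commit -m tmp                    # park it all in a throwaway commit
```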
This adds all changes, including new files and deleted files, to the index and then commits them all with a “tmp” commit message.
Then, follow the above “starting a new change” recipe to get yourself set up for the fix you need to make.
Use this recipe to get back to what you were working on, with a `git reset HEAD^` to get rid of the “tmp” commit while still preserving the changes from it in your working directory.
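The "get back to it" recipe itself is missing from this page; a sketch of it follows. The setup lines only build a scratch repository with a parked "tmp" commit (all names are placeholders); the recipe is the last two commands:

```shell
# Demo setup: a branch with parked work in a "tmp" commit.
set -e
scratch=$(mktemp -d) && cd "$scratch"
git init -q work && cd work
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial"
git checkout -q -b my-feature
echo "half-finished" > feature.txt
git add -A && git commit -qm tmp     # the parked work
git checkout -q --detach HEAD        # pretend we went off to fix a bug

git checkout my-feature              # back to the parked branch
git reset HEAD^                      # drop the "tmp" commit, keep its changes
```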
It’s a good habit to make sure your branch is based on the latest changes from the branch you’ll be merging into (probably `develop`). You also sometimes need to do this if your change is going to rely on something that got merged after you started working on it (often because of the above context switch).
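The commands for this recipe are missing here; a sketch follows. The setup lines only build a scratch repository where `develop` has moved ahead of our feature branch; the recipe itself is the last two commands:

```shell
# Demo setup: a feature branch that has fallen behind origin/develop.
set -e
scratch=$(mktemp -d) && cd "$scratch"
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null && cd work
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial" && git push -q origin HEAD:develop
git fetch -q
git checkout -q -b my-feature
echo "feature work" > feature.txt && git add -A && git commit -qm "add feature"
git checkout -q --detach origin/develop            # simulate a teammate...
echo "upstream" > upstream.txt && git add -A && git commit -qm "upstream change"
git push -q origin HEAD:develop                    # ...landing a change on develop
git checkout -q my-feature

git fetch                       # update our cache of origin
git rebase origin/develop       # replay our commits onto the latest develop
```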
This method updates our cache of `origin` as before, and then runs `git rebase`. Rebasing is a powerful capability of Git that can also get you into a mess of trouble. The first thing `rebase` does is the equivalent of a `reset --hard` to make both your current branch and the contents of your working directory match `origin/develop`. It then re-commits every change from where your branch and `origin/develop` diverged. (The equivalent of repeated `git cherry-pick`.)
If your changes are in conflict with ones from the latest version of `develop`, you’ll have to follow the instructions to resolve those.
In general, `rebase` is preferred because it leaves your Git history nice and linear and clean. It’s as if someone time-travelled back to when you were first working on the feature and gave you a different place to start from.
But, there are caveats that can blow up in your face, which is why we call it the “fingers-crossed” method. `git` is going to apply your commits individually, and in order. If any of them have conflicts with the new updates, resolving them can be tricky. For example, say it took you a few tries to get a bit of code right, and you have commits “tmp”, “tmp 2”, and “fixed” that all change the same code. Remember that commits are diffs against their parent, so “tmp 2” knows how to apply changes against “tmp”, and likewise “fixed” expects the part of the repo that it affects to look like “tmp 2”.

If you rebase and there’s a conflict when applying “tmp”, you’ll need to resolve it before `git rebase` can proceed. It can be tempting, since you’re now editing the code, to make things look like how they ended up with “fixed”, but that’s not what you need to do. You need to resolve things so that “tmp 2” can be applied correctly.
This is one reason why you should squash your changes as you develop. Rather than make “tmp 2” and then “fixed”, use `git commit --amend` to re-do the “tmp” commit. There’s no value in preserving those intermediate broken states forever in your repo’s history, after all. Rebasing just one commit is always going to go a lot more smoothly than multiple commits, since you don’t have to worry about correcting things to those intermediate states.
If you end up in a bad, confusing place, just type `git rebase --abort` to cancel the rebase operation and try again with the “merge/reset” method below.
If `git rebase` gave you trouble and you also don’t mind squashing your commits and re-writing the commit message, you can use this method.
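The recipe itself is missing here; a reconstructed sketch follows (the commit message is a placeholder, and `--no-edit` is only there to skip the editor prompt in a non-interactive demo). The setup lines build a scratch repository where `develop` has moved ahead of our feature branch; the recipe is the last five commands:

```shell
# Demo setup: a feature branch that has fallen behind origin/develop.
set -e
scratch=$(mktemp -d) && cd "$scratch"
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null && cd work
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial" && git push -q origin HEAD:develop
git fetch -q
git checkout -q -b my-feature
echo "feature work" > feature.txt && git add -A && git commit -qm "tmp"
git checkout -q --detach origin/develop
echo "upstream" > upstream.txt && git add -A && git commit -qm "upstream change"
git push -q origin HEAD:develop && git checkout -q my-feature

git fetch                            # the easy-to-forget first step
git merge --no-edit origin/develop   # resolve any conflicts here
git reset origin/develop             # drop the merge commit, keep the files
git add -A
git commit -m "Add my feature"       # one clean commit on top of develop
```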
This starts with the easy-to-forget `git fetch` to get us up-to-date. It then merges the latest from the `develop` branch into our current branch. Unlike `rebase`, `merge` does not rewrite history and re-apply your commits. This means that when you resolve conflicts you only need to do it between the latest version of the `develop` branch and your own. You don’t have to do the intermediate-state stuff that made `rebase` painful.
When used in this way, however, `git merge` will typically leave behind a “merge commit” that points to the two parents (whatever commits `origin/develop` and your branch were on when it was run) and contains any diff necessary to resolve conflicts. This is an untidy artifact.
To excise the merge commit from our branch’s history so that it’s neat, we do a `git reset origin/develop` to point our branch at `develop`’s latest commit. Since `reset` without `--hard` doesn’t change the working directory at all, all of our changes, including the conflict resolutions from the `git merge`, are still there. `git add -A` followed by `git commit` bundles them all up and commits them.
This is a combination of “Context switching” and “Updating to the latest.” The scenario is that you’re knee-deep in work on a branch when you realize that part of what you’re doing is distinct enough that it belongs in its own PR. Two small PRs is kinder and safer than one big PR.
You should start this by following “updating to the latest” so that your branch is up-to-date with `develop`.
Ready?
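The recipe itself is missing here; a reconstructed sketch follows (file names, branch name, and commit messages are all placeholders). The setup lines build a scratch repository with two files’ worth of in-progress work on a feature branch that is already up to date with `origin/develop`; the recipe is everything after the blank line:

```shell
# Demo setup: in-progress work containing a change that deserves its own PR.
set -e
scratch=$(mktemp -d) && cd "$scratch"
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null && cd work
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial" && git push -q origin HEAD:develop
git checkout -q -b big-feature
echo "standalone fix" > fix.txt      # ...belongs in its own PR
echo "feature work" > feature.txt    # ...still in progress
git add -A && git commit -qm tmp

git fetch
git reset origin/develop             # everything back to uncommitted
git add fix.txt                      # selectively stage the extracted change
git commit -m "Standalone fix"       # proper message; note this commit's SHA
sha=$(git rev-parse HEAD)
git add -A && git commit -m tmp      # park the rest on top of it
git checkout "$sha"                  # detached HEAD at the extracted change
git checkout -b standalone-fix       # give it a name...
git push origin standalone-fix       # ...and send it up for review
```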
This starts with a `git reset` so that all of the changes we’ve been working on are out-in-the-open as uncommitted. (This is why making sure we’re up-to-date with `origin/develop` is so important before starting.)
Then, use `git add <filename>` to selectively add the changes that you want to put up for review now. If some files contain a mix of changes where you only want a few, you can often use your editor or `git add -p` to stage just what you need.
Once that’s done, do a `git commit` and use a proper commit message to describe these changes. Keep note of the SHA of this new commit. You’ll need it.
Next we add everything else and commit it, which you might recognize from the context-switching recipe. Note that this commit has your intermediate commit as its parent, which means that your future changes will probably merge cleanly after the intermediate commit is merged.
`git co <sha>` with the SHA of the intermediate commit will check it out as a detached HEAD. This removes any of the later changes from the working directory, where they could mess up pre-push tests and such. Then it’s a `git co -b` to give these changes a name and a `git push` to send them up.
Sometimes, either because pre-push tests fail or because you got PR feedback to make changes, the version of your extracted change that got committed is different from the one your later work is based on.
This can be especially tricky if you were tidy and squashed your changes, since the repo now has two slightly different commits that make essentially the same changes. And “slightly” is more than enough to give them different SHAs, which makes them completely different to Git.
You might have some luck with the “merge/reset” method of updating after the extracted changes get merged to `origin/develop`, but you can often rebase your way out of it as well.
After the standard `git fetch`, use `git log` to see how many commits you’ve made since extracting the other PR. If you haven’t done anything since, or have been using `--amend` to squash changes, there will only be one. If there are more, you’ll need a `^` after `HEAD` for each change.
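The exact original command is missing here, but the description of "all changes since HEAD^ replayed onto origin/develop" matches `git rebase --onto`; the following is a reconstruction. The setup lines build a scratch repository where the extracted fix landed on `develop` as a slightly different commit; the recipe is the last two commands:

```shell
# Demo setup: extracted fix merged upstream with a different SHA.
set -e
scratch=$(mktemp -d) && cd "$scratch"
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null && cd work
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial" && git push -q origin HEAD:develop
git fetch -q
git checkout -q -b big-feature
echo "standalone fix" > fix.txt && git add -A && git commit -qm "Standalone fix"
echo "feature work" > feature.txt && git add -A && git commit -qm tmp
git checkout -q --detach origin/develop        # simulate the PR landing as a
echo "standalone fix" > fix.txt && git add -A  # new commit: same change, but a
git commit -qm "Standalone fix (reviewed)"     # different SHA
git push -q origin HEAD:develop && git checkout -q big-feature

git fetch
git rebase --onto origin/develop HEAD^   # replay only the commits after HEAD^
```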
This use of `rebase` takes all changes since `HEAD^` (meaning: the commit before the current one) and replays them on to `origin/develop`. Depending on whether or not the test/PR fixes you made intersect with your later work, this will be either annoying to resolve or automatic.
Anything you’ve committed locally is saved in your local repository for 30 days, even if no branches are referring to it.
Before you do anything risky to your repo, it can be good to just run a `git add -A` / `git commit -m tmp` to get it stored. You can then note down the SHA of that commit, or even run `git branch saved-branch-name` to checkpoint it with a more permanent name, before running `git reset HEAD^` to undo the commit.
If you’ve been committing fairly regularly, you can typically always get yourself back to a good state. If you lost a SHA or need to otherwise undo, you can run `git reflog` to print out a history of all of the commit/checkout–type changes you’ve made in your repository. That can often help you find a place you want to get back to.
`git reset --hard <sha>` is a useful tool to get your branch and working directory to a specific commit, particularly after a merge or rebase goes south and you want to try again.
“Squash” your branches down to a single commit before pushing them up for review. The local checkpoints you made along the way to finishing your feature are not relevant for the future.
It’s ok to force-push. Doing a `git push --force` has historically been super-dangerous because Git would update all branches on the remote repo to what they look like locally, destroying other folks’ work. This has given it a bad reputation. But force-pushing can be useful for keeping the commit history clean, such as when implementing changes from a PR. The rule of thumb is: don’t force-push branches that other people are using. That gets messy because you’re rewriting history on someone and they have to take special actions to accommodate you. But typically you shouldn’t be working from other people’s branches. Get them committed to develop!
Give your main branch a better name than “master.” It’s a word that has not-great connotations while also being wholly undescriptive. Is your main branch where development work happens? Call it “develop”. Does it automatically reflect production? Call it “production”.
Don’t use a local branch called `develop`. In fact, run `git branch -D develop` to delete it. It’s too easy to start working from `develop` without realizing that it’s long out-of-date relative to GitHub, or to make commits on your local `develop` branch that you then can’t find anymore. Use `git fetch` and `git checkout origin/develop` to get your working directory to the latest from GitHub.
The main City of Boston website is a CMS built with Drupal (a PHP/MySQL framework) hosted on Linux/Apache (LAMP) in the cloud. It was implemented in Drupal 7 in 2016, migrated to Drupal 8 in 2019, and then to Drupal 9 in March 2022.
Legacy website and applications (pre-Drupal) are hosted 'on-prem' at the City. Nearly all of these live on zpcobweb01 with SQL databases -- mainly ZPDMZSQL01 and ZPCOBSQL22. Most of these are .asp or .aspx files.
Our web app stack includes React, NextJS, Node.js/Hapi, and you can check out a more complete list by following the link below.
Some of these are attached to vsql22 for data.
Digital is in charge of maintaining, helping support, and acting as technical advisors on nearly all of the mobile apps procured by the City.
We use Terraform for describing our infrastructure. See the repo.
Do not make one-off changes in the UI or with the command line. Everything should be updated through Terraform so we have transparency about what we’re running and why.
Terraform changes should be made through .
Acquia externally hosts our main website, boston.gov in their cloud instance.
Heroku is appropriate for very one-off apps (such as the 311 crowdsource app) where we don’t mind not being on a boston.gov URL or waiting for the dyno to spin up after inactivity.
We only want to use free tier dynos going forward.
We need to get off all paid Heroku services, migrating to AWS/S3 as appropriate. Staging apps, however, should generally deploy to AWS to match production-like environments.
We should never be surprised by an outage. CloudWatch and/or Updown.io should be monitoring our apps and the services they depend on.
If an alarm goes off regularly, either its root cause should be fixed or the alarm should be adjusted or removed.
We have a handful of CloudWatch alarms to monitor our instances, VPN, and services.
Exceptions we cannot eliminate should be silenced on the Rollbar side or prevented from throwing in the first place. If necessary, they should be converted to logs that can be monitored.
Yes, parking tickets goes down every night and it’s annoying.
How we document code and workflows.
If you’re working on a task that’s missing documentation (or you think of something randomly) but you don’t have time to write it up, add a card for it to the documentation project board:
This (GitBook) is the place to document technical information that isn’t specific to a particular project. We should be liberal with documenting our workflows and standards, and aim for the standard that any new developer could independently find everything they needed to start working on any of our technical tasks.
Any technical information that’s specific to a particular project should be in a README.md
at the root directory of the project’s source files. Examples include: how a new developer could install the project locally, a list of npm/Yarn tasks... basically, anything that “should be in a README”. is a great reference for the types of information that should be included.
Also see project specific information further down in Gitbook. The project’s page in the “Project” section of this GitBook should include a link to its README on GitHub.
Documentation comments are super-useful! See the dedicated page for more guidance.
Attach a GitHub branch to an Acquia environment.
On-demand instances of the Drupal site (boston.gov) are useful for demonstrating new features or functionality sandboxed away from the production site.
These on-demand versions of boston.gov are designed to be housed in a near-duplicate environment to the production site, and to be viewable in a normal browser from anywhere by people with the correct link.
Acquia provides six environments to CityOfBoston.
The `dev`, `stage` (test), and `prod` environments are associated with the git branches used in the deployment workflow and cannot be attached to different branches or repository tags without disrupting and potentially breaking that workflow.
The `dev2`, `dev3`, `ci`, and `uat` environments can track any desired branch or tag (even `develop-deploy` or `master-deploy`) without disrupting the deployment workflow.
This process has been decommissioned and some of the processes below are no longer implemented in scripts.
This page is left here only to provide background, should COB decide to run Drupal in an AWS-managed container.
You can push your local repository up to a test instance on our staging cluster on AWS. This will let you show off functionality using data from a staging snapshot of Boston.gov.
You will need a full development environment and Drupal 8 installed on your local machine (refer to earlier notes).
Get a “CLI” IAM user with an access key and secret key.
Use aws configure
to log your CLI user in locally. Use us-east-1
as the
default region.
Request your CLI IAM user credentials from DoIT.
To push your local repository up to the cluster, run:
Where <variant>
is the variant name you created in CityOfBoston/digital-terraform
.
This will build a container image locally and upload it to ECR. It will then update your staging ECS service to use the new code.
By default, the container startup process will initialize its MySQL database with a snapshot of the staging environment from Acquia.
After the container starts up and is healthy, the doit
script will print useful URLs and then quit.
Direct SSH access is not generally available on the ECS cluster. To run drush
commands on your test instance, you can visit the webconsole.php
page at its domain. This will give you a shell prompt where you can run e.g. drush uli
to get a login link.
The webconsole.php
shell starts in docroot
.
Talk to another DoIT developer to get the webconsole username and password.
NOTE: Each time you deploy code to your test instance it starts with a fresh copy of the Drupal database.
If you want to preserve state between test runs, log in to webconsole.php
and run:
(The ..
is because webconsole.php
starts in the docroot
.)
This will take a snapshot of your database and upload it to S3. The next time your test instance starts up, it will start its sync from this database rather than the Acquia staging one.
The database will also be destroyed when the AWS containers are restarted for any reason. It is good practice to stash your DB regularly.
To clear the stash, so that your database starts fresh on the next test instance push, use webconsole.php
to run:
Here is a snapshot of the doit script referred to above.
Elsewhere this might be termed spinning up an on-demand instance of the site.
Make sure you have the latest copy of the main Drupal 8 repository cloned to a folder <repo-root-path>.
Checkout the branch develop
and make sure the latest commits are pulled (fetch+merged) locally.
Commit your work to a new branch (on-demand-branchname
) off the develop
branch.
Push that branch to GitHub, but do not create a PR or merge into develop
.
Edit the <repo-root-path>/.travis.yml
file and make the following additions:
(Note: replace <on-demand-branchname>
with on-demand-branchname
.)
Edit the <repo-root-path>/scripts/.config.yml
file and make the following additions:
(Note: This partial example addition is configured to deploy to the ci environment on Acquia.)
(Note: replace <on-demand-branchname>
with on-demand-branchname
.)
Commit the .config.yml and .travis.yml
changes to on-demand-branchname
and push to GitHub - but do not merge into develop
.
Make a small inconsequential change to the code and commit to the on-demand-branchname
branch, and push to GitHub. This will cause the first-time build on Travis, and deploy into the on-demand-branchname-deploy
branch in the Acquia Repository.
The "on-demand" environment is now set. Users may view and interact with the environment as required. See Notes in "gotcha's" box below.
Once you have finished the demo/test/showcase cycle, you can merge the on-demand-branchname
branch to develop
- provided you wish the code changes to be pushed through the continuous-deploy process to production
.
Finally you can detach the on-demand-branchname
branch from the Acquia environment, and set it back to the tags/welcome
tag.
You can direct users to the URLs below: select the environment you switched to the on-demand-branchname-deploy
branch (in step 8) from the table below.
Housekeeping.
When finished with the environment, you should consider rolling-back the changes you made to .travis.yml
and .config.yml
in steps 4 & 5 before finally merging on-demand-branchname
to develop.
It is likely that the on-demand instance is no longer required, and it's unnecessary for the on-demand-branchname
branch to be tracked by Travis.
Also as a courtesy, change the branch on the environment back to tags/WELCOME
so it is clear that the environment is available for use by other developers.
Deploying: If you switch the code on the Acquia server from on-demand-branchname-deploy
to some other branch or tag, and then back again, then in Acquia's terminology each switch of branch is a "deploy" of the code. GitHub is not affected by this change, so nothing will run on Travis, but once each switch is complete, Acquia's post-code-deploy
hook script will run.
- That deploy-hook script will sync the database from the stage
environment and will overwrite any content in the database. Therefore, any content previously added/changed by users will be lost.
Information on which team members have access to these can be found here:
These alarms are sent to the digital-dev@boston.gov email address and posted in the #digital_monitoring Slack channel via a custom .
We use to track server-side and client-side exceptions.
We use for blackbox monitoring of our sites’ availability.
Install the .
To create a place to upload your code, follow the instructions in the repository to make a “variant” of the Boston.gov staging deployment.
The Travis build can be .
Log in to the Acquia Cloud console. In the UI, switch the code in the Ci/Uat environment to the on-demand-branchname-deploy
branch.
This will cause a deploy, which will copy across the current stage
database and update with configuration from the on-demand-branchname
branch.
Updating: If you push changes to on-demand-branchname
in GitHub (which eventually causes Acquia's on-demand-branchname-deploy
to be updated), then in Acquia's terminology you are "updating" the code.
Any commits you push to the GitHub on-demand-branchname
branch will update the code on the Acquia environment, and this will cause Acquia's post-code-update
hook script to run.
- That update-hook script will back up your database and apply any new configuration, but will not update or overwrite any content (so changes made by users will be retained).
Library
Purpose
Can transform and polyfill advanced JavaScript syntax to be supported on older browsers
CSS-in-JS library
Static code analysis for JavaScript
React library for building forms
Query language for APIs; GraphQL client
Node.js web server framework
JavaScript unit testing library
Monorepo management library
Provides server-side rendering for React
Code style enforcement tool
Front end UI library
A module bundler for JavaScript
Node package manager
Module
Purpose
Jest preset for projects that use Babel in their build process
Jest preset for pure TypeScript projects. Loads files with ts-jest-babel-7
TypeScript configuration files that can be used in other packages so that we have a default set of TypeScript configurations
Utilities for type-safe GraphQL resolvers
Common React components
| Environment | URL |
| --- | --- |
| uat | (public DNS entry) |
| ci | (public DNS entry) |
| dev2 | (no DNS - make an entry in your local hosts file) |
| dev3 (pending) | https://d8-dev3.boston.gov (no DNS - make an entry in your local hosts file) |
City of Boston strives to automate the develop, test, package and deploy process at each step, from local development to deployment in live production.
This page is out of date and needs review (as of 17 June 2021).
The repository is cloned in a local folder and ready for building.
This entry condition can be achieved:
If you have not yet built the boston.gov website on your local machine, or
If you have cloned a new branch or created a new branch that you wish to build, you can run the doit rebuild quick
script, or
If you have the repository cloned, but wish to delete it and rebuild a fresh website from a branch on the GitHub repository, you can run doit rebuild full <branch>.
If you don't specify a branch, then develop
will be used.
The local developer is responsible for creating the local development environment.
The local build process is defined and controlled by Lando when lando start
is executed.
The doit
scripts serve to prepare the cloned repository prior to running lando start
Lando
lando start
causes the following processes to be run from the .lando.yml landofile:
Three standard Linux (Ubuntu) containers are created: one optimized as an app server with Apache, one optimized as a database server with MySQL, and one with Node.
Install the required/dependent packages and tools -including Phing and Composer.
Create and install XDebug and other Apache/PHP settings files.
Set Apache vhosts and the containers' network configs. (done by Docker via Lando).
Start all 3 containers.
Launch the phing script setup:docker:drupal-local
.
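The landofile that drives these steps looks roughly like this (a heavily abridged, illustrative sketch; the names and values here are assumptions, not the real file, which defines many more build steps and tooling):

```yaml
# Illustrative sketch only - not the real landofile.
name: boston
recipe: drupal8            # provides the Apache appserver and MySQL database services
config:
  webroot: docroot
services:
  node:                    # third container, used for front-end tooling
    type: node
events:
  post-start:
    # hand off to the Phing build script described in the next section
    - appserver: phing setup:docker:drupal-local
```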
Phing
The phing script setup:docker:drupal-local
in reporoot/scripts/phing/tasks/setup.xml
executes the following:
Download Drupal dependencies into Apache appserver container - including Drush. (done using Composer).
Download confidential settings and copy into Drupal file system (using Git).
Install Drupal by installing a new database on the database container. (using Drush).
Install Drupal modules and load configuration files. (using Drush).
Run Drupal's Update process to load updated-settings from modules. (using Drush).
Modify Drupal settings with localized settings.
Reset the admin password and issue login url. (using Drush).
Run Linting Test using PHP Linting. (done by PHP via Phing, launched by Travis).
Run Code Sniffer Test. (done by Squizlabs PHP_CodeSniffer via Phing, launched by Travis).
(coming soon) Run Behat behavioral tests. (done by Behat via Phing, launched by Travis).
(coming soon) Run PHPUnit functional tests. (done by PHPUnit via Phing, launched by Travis).
For local development, the docker container build is controlled by Lando, with Phing being used to build Drupal.
When a Pull Request is created to merge code into the develop branch on GitHub, a test build and some automated testing are run by Travis. Travis is used in place of Lando to initiate and control the build process as described above (i.e. Travis builds docker containers on GitHub/Travis infrastructure, whereas Lando builds docker containers on local machines). The Travis and Lando scripts are very similar in structure and as identical as possible in function. Once the containers are built, both tools use the same Phing scripts to build and initiate Drupal.
(coming soon) Terraform will be used to spin up on-demand test/develop/experiment/demo instances of the containers (i.e. the websites) on AWS infrastructure. In this case Terraform scripts will control the build in place of Lando, but (as with Travis) will be as similar as possible in function. Again, once the containers are built on AWS, the same Phing scripts will be used to build Drupal.
When and if a new environment is set up on Acquia for CoB, the following steps should be followed:
When a new environment is added, it will have a 3-4 character name (e.g. uat
or dev2
etc). This checklist refers to this environment short-name as the envname.
This change adds the specified domains to the acquia-purge registry. This means the varnish cache for these domains will be automatically purged. If a sub-domain is attached to an environment and is NOT listed here, then it will not be automatically purged as content is changed.
This change directs the new environment to request images and files from a shared (linked) folder rather than the default sites/default/files
folder. The folder is linked to conserve file space, as each environment basically requires the same sets of images and files.
The following steps need to be completed to allow single sign-on via PingFederate.
To use the environment as a Drupal site, you need to attach a branch from the Acquia git repository. For detailed instructions see On Demand section.
City of Boston uses Docker containers for local development.
Lando is used by the Drupal development team to manage the Docker containers and provide basic tooling for the local development environment.
Set up environment for Drupal development on various operating systems.
Select your operating system below, and follow the instructions to set up your development environment and prepare to install the City of Boston Drupal 8 website.
Tip
You can (re)use an existing key on your development computer, so long as it meets the requirements of GitHub.
How to create SSH keys for github
Be sure you load the public keys you create into GitHub.
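For example, on Linux or macOS (the email comment and file name below are placeholders; GitHub currently accepts ed25519 keys):

```shell
# Create ~/.ssh if needed, then generate an ed25519 key pair.
# The comment (-C) and file path (-f) are examples - use your own.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -C "you@boston.gov" -f ~/.ssh/id_ed25519_github -N ""
# Print the public key; paste this into GitHub > Settings > SSH and GPG keys.
cat ~/.ssh/id_ed25519_github.pub
```

Only the `.pub` file goes to GitHub; the private key stays on your machine.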
Tip
You can (re)use an existing key on your development computer, so long as it meets the requirements of Acquia.
City of Boston recommends the Ubuntu 16.04 or later distribution. While other Linux distributions will work well, the instructions below assume the use of Ubuntu and, in particular, the apt
package manager.
Check Docker pre-requisites.
If using PHPStorm, install Docker-machine
At their core, Mac operating systems are similar to Linux and therefore the same basic steps apply to Macs as they do for Linux.
Git is usually installed; on most operating systems you can verify this by typing the command below at a terminal prompt. This has the advantage of prompting you to install git if it's not there.
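The usual check is:

```shell
# Prints the installed git version; on macOS this will instead prompt you
# to install the command line tools if git is missing.
git --version
```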
Enter the command below. This will install a brew-community version of Lando, including Docker, as explained here.
Using brew is quick and simple and will definitely get you started. If you later find you have issues with Lando and/or Docker versions, then follow the instructions on this page under the title "Install DMG via direct download" to get the latest versions.
Because Drupal is most commonly installed on Linux servers, City of Boston DoIT does not recommend using Windows® as a developer machine due to the increased difficulty in emulating the most common Drupal production web server.
However, if you have no alternative, or harbor an unquenchable desire to use Windows® then the following best practices and instructions should get you headed in the right direction.
There are many IDEs capable of being used to write, verify and deploy PHP code. City of Boston does not endorse any particular platform, but has successfully used the following:
Notepad++ (basic text editor)
Sublime Text (improved text editor)
VIM (Linux-based advanced text editor)
Visual Studio Code (full IDE)
Eclipse (full IDE)
PHPStorm (full IDE)
Tool
Purpose
AgilePoint
Form generator/manager
App hosting
BoldChat
Live chat platform used by 311; paid for by DoIT
Browserstack
Allows testing on different devices
Version control / code management
GChat
Realtime messaging for those not on Slack
Google Analytics and Google Tag Manager
Web traffic analytics
Google Forms
Form generator/manager
Google Meet
City's main form of video conferencing
Google Optimize
A/B testing
Invoice Cloud
Main payment processor for the City; managed by Enterprise. We link out to it, and there is often confusion about whether an issue is a website problem or a problem with this platform.
Lando
Containerized build tool/utility
Visual regression testing
PHPStorm
JetBrains IDE for PHP (for Drupal)
Sending transactional emails
Server logging
Quality Assurance and behavior mapping
Realtime messaging
Powers global search for boston.gov
Stripe
Payment processor used for Registry suite
Terraform
AWS Infrastructure Scripting
Continuous integration
Upaknee
Uptime monitoring
Visual Studio Code
Developer IDE for JavaScript etc.
Webex
City's main platform for large video presentations and conferencing (if Google Meet doesn't accommodate)
General familiarity with the Drupal platform is a baseline requirement for anyone working on the site. We suggest reading through the following user guide to Drupal 8:
Additionally, Acquia's free training program, Acquia Academy, offers a series of YouTube video tutorials which can be found here, including a Drupal 8 Beginner's Course:
Creates a Drupal 8 container, a MySQL container and a Node container, and connects them all up.
For more detailed install and usage instructions for various platforms, see "More Help" below.
Ensure you have set up your development environment as described here:
Clone the public repository into a local folder:
git clone -b <branchname> git@github.com:CityOfBoston/boston.gov-d8.git
(City of Boston DoIT recommends that the develop branch be used)
On host computer, change directory to the repository root and use lando to create and start containers:
lando start
Depending on the power of the host machine, the Drupal 8 build process for boston.gov can take more than 15-20 minutes. The composer install and site install (especially config import) tasks can take 5-10 minutes each, with no updates being written to the console.
-> You can follow the progress by inspecting the log files in docroot/setup/
; there are links to these files in the console.
From the repository root (on host):
lando - view a list of available Lando commands
lando phing -l - view Phing tasks
lando drush <command> - run Drush commands
lando ssh - log in to the Docker container as www-data
lando ssh --user=root - ssh in and log in as root
lando ssh <servicename> - where servicename = appserver / database / node
To reduce typing at the console, you can add the following aliases to your ~/.bashrc
, ~/.bash_aliases
or ~/.bash_profile
file on your development (host) OS.
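One possible set (an illustrative sketch, since the exact definitions are a matter of preference; these are written as shell functions rather than plain aliases so the folder argument can be passed through to the command that runs inside the container):

```shell
# Sketch of lando helper functions for ~/.bash_aliases (or similar).
lls() { lando ssh -c "ls -la $1"; }   # list a folder inside the appserver container
ldr() { lando drush "$@"; }           # run drush inside the container
lph() { lando phing "$@"; }           # run phing inside the container
```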
With these aliases, typing (in a console) lls <folder>
will use lando to run ls -la <folder>
in the default container (in our case appserver) and list files there. Whereas, ls <folder>
will list the folder locally (i.e. on the host) as usual.
For more information on installation, usage and administration of the development area, go to the next section.
Run phpcs on your custom modules
PHP CodeSniffer (https://github.com/squizlabs/PHP_CodeSniffer) is already included with our D8 project via Composer. If you run lando composer install
, you should have it available at ./vendor/bin/phpcs
1. You need to specifically download the Drupal coding standards using the coder module. You can do this globally for your computer by running:
2. You need to make sure phpcs knows about your newly installed coding standard (note: the path below assumes you're using Ubuntu; yours might be different on a Mac):
3. Now you can run this manually against your custom modules:
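A sketch of that manual run, wrapped in a function you could drop into your shell profile (the standards list and the docroot/modules/custom path are assumptions for a typical D8 layout; run it from the repository root):

```shell
# Run PHP CodeSniffer with the Drupal standards against custom modules.
run_phpcs() {
  ./vendor/bin/phpcs \
    --standard=Drupal,DrupalPractice \
    --extensions=php,module,inc,install,theme \
    docroot/modules/custom "$@"
}
```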
If you're looking for more info, here's a good place to get started:
You will use Git as the version control system for the City of Boston website and to manage code in the Acquia Cloud environment. If you are not already familiar with Git, you will want to check out this in-depth Git series. The first seven videos will give you most of what you need to know:
Introduction to the Git Series
What is Version Control?
Installing and Configuring Git (Dev Desktop comes with Git)
Getting Help with Git
Git Crash Course
Working with Git Branches and Tags
Moving through Git History
The next fourteen videos are more advanced Git topics, so you might want to save those for a later time.
Find the full Git series at:
You will see the listing of videos in the series in the upper-right panel; there are also a couple of different levels of scrollbar, so it's easy to miss the later videos.
For some people, working within Lando containers slows down and crashes their environment. To fix this, they can work outside the Lando containers (patterns.lndo.site) and directly against localhost:3030.
The local development version of the CDN is hosted by Fleet at http://localhost:3030. This local CDN is served (by Node/Fractal) from your local environment.
Three options for setting up a development environment on Windows.
Because Drupal is most commonly installed on Linux servers, City of Boston DoIT does not recommend using Windows® as a developer machine due to the increased difficulty in emulating the most common Drupal production web server. However, if you have no alternative, or harbor an unquenchable desire to use Windows® then the following best practices and instructions should get you headed in the right direction. There are 3 strategies to choose from:
This is the most complicated solution to setup, but allows the developer to use any windows-based tools desired to manage the Drupal codebase and databases.
The git repo is cloned to a local Windows folder on the Windows host. This repo folder is mounted into a Linux (Ubuntu) Docker Container (like a VM). Docker manages the virtualization and the container contains all the apps and resources required to host and manage the website locally for development purposes. Git commands are run either from the Windows host, or from the container. Lando (a container manager tool) provides a “wrapper” whereby commands (e.g. Docker, Lando, Git, Phing, Drush, Composer, SSH etc) are typed into a console on the Windows host, and Lando executes them inside the container. To be clear, with this strategy:
The container hosts the website
The developer normally changes/adds/removes Drupal files in the Windows folder on the Windows host
Changes to custom Drupal files (i.e. to files in the mounted folder) either on the host or in the container are immediately available to both the host and container without restarting docker or VMs
The developer normally runs dev tools such as Git, Drush, Phing and Composer in the container, using Lando commands
The Windows host does not need any tools other than Docker, Lando and VBox or Hyper-V installed on it
Some developers still like to have git installed on the Windows host so their IDE tools (e.g. PHPStorm) can manipulate the repos directly
Developers’ need to interact directly with the container (i.e. via ssh) is minimized, and
This installation creates a developer environment suitable for a Linux-based production deployment.
Due to Lando's requirement to use Docker CE (not Docker Toolbox), which in turn requires Hyper-V, you:
NEED to have a Windows 10 64-bit Professional or Enterprise version
CANNOT use Windows 7 or earlier
CANNOT use Windows Home or Home Pro, as Hyper-V is required by Lando and does not ship with Home versions.
These 6 steps are all performed on the host PC (i.e. your Windows® machine).
This is required to supply a Linux core which is needed by Docker to generate the necessary containers.
Install Windows Subsystem for Linux (preferred method)
These instructions also depend on having a current version of Windows® 10 (later than the Fall Creators Update, and preferably build 16215 or later).
To install WSL support, do the following:
Open Windows Powershell as Administrator
Run:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
Restart Windows when prompted
Taken from here
Install Linux Distro
DoIT suggests you install the Linux distribution from the Microsoft Store which most closely matches the Linux distro you will use on your production webservers. If you are unsure, install Ubuntu or Debian.
Install Hyper-V
If Hyper-V is not enabled when the Linux subsystem was installed (check by typing “Hyper-V” in the start menu), then follow these instructions.
If you are not using WSL, then Git for Windows provides a bash terminal for the Windows host. Installing Git for Windows is a convenient way to get this, and also gives the developer the option to execute git commands (against the repo) directly from the Windows host. This step is optional if you use WSL, or if you are confident with some other tool that provides a bash-style console. Get Git for Windows from here. This is a good tutorial to step through the installation.
If you are using WSL and have enabled Hyper-V for your virtualization, then use the Docker “community version” from here - this link also guides you through an install.
Download the latest Windows .exe installer from here.
On Windows®, DoIT recommends:
In order to use VS Code for Drupal development, use this guide as a starting point. The editor is highly configurable with many extensions available. You will likely want to customize it further based on your needs.
Pick up from step 3 of the quick install guide.
This solution may be a quick and viable option if you have a powerful Windows machine to use as the host and are not doing much development that requires extensive use of an IDE. Depending on your setup, there may be issues with IP address routing, requiring complex configurations.
This method is not used by City of Boston DoIT; the preferred solutions on Windows machines are A or B.
For Windows® versions before the 10 Fall Creators Update, we recommend using VirtualBox (free from Oracle).
For later versions, you should enable and use Hyper-V within Windows.
In the VM, install a Linux distro as close as possible to the production distro you will use, and unless you are very comfortable with the Linux CLI, be sure to install a distro with a GUI.
Once the Linux distro is installed, then follow the setup instructions for Linux.
Steps 1 - 7 must be completed while the computer is connected to the city network.
Using Windows POWERSHELL (as Administrator):
Launch POWERSHELL as administrator: search powershell
from Windows search
Alternative strategy
This may work without Windows requesting a restart at the end.
Using CMD (console):
To open a CMD console search for cmd
in the Windows search
Alternative strategy:
This may provide a more fault-tolerant WSL environment when switching from the City network to an external network (because we are controlling where the distro is installed, and it's not on the user's profile).
Using LINUX (WSL) console:
To get the Linux console, open a CMD console, type: wsl
@see https://docs.microsoft.com/en-us/windows/wsl/wsl-config
These configuration files tweak the WSL environments to enable a better developer experience based on a standard CoB laptop configuration (i.e. minimum i7 chip, 32GB RAM and an SSD hard disk).
Using a POWERSHELL console from the windows host:
Using a LINUX console (WSL):
Using LINUX console
If you have trouble accessing the internet from WSL, first try RESTARTING the computer.
If that does not work, using a LINUX console try:
=> then restart the computer.
Mount your development folders into WSL using the LINUX console:
Replace c:/Users/xxxx/sources
with the location in the windows host where you plan to keep all development source files.
This is the folder where you will be cloning the CoB repos.
If in doubt, create a sources
folder in your windows home folder, and for the command above just replace xxxx
with your CoB supplied EmployeeID/User Account.
Replace yyyy
with the accountname you used when you installed WSL (you can find this in the LINUX console by running cd ~ && pwd
- the path displayed will be in the format /home/accountname).
Download installer from https://docs.docker.com/desktop/windows/install/
Double-click the installer to launch: click OK to accept the non-Windows app, and select WSL 2 as the backend (rather than Hyper-V).
Docker Desktop does not automatically start after the install; you need to start it the first time from the Start menu.
Restart your computer after this step.
If you do not, and subsequently restart the computer while off the city network, your installation will be broken, and you will have to remove Docker and WSL, and start over.
(see "Docker Fails to Restart" notes below to fix broken/non-functional WSL installs)
Verify AWS is installed using LINUX console:
You should see an output something like:
aws-cli/2.7.4 Python/3.9.11 Linux/5.10.102.1
.....
Obtain your secret access keys for AWS from the AWS administrator, and then create the AWS credentials file using the LINUX console:
Alternatively, you could also create and edit the credentials file using vim
which is installed in the WSL instance (from step 5 above).
Add your SSH keys into your Windows account (typically into a Windows folder on your home drive) and then, from a LINUX console:
Replace xxxx with your CoB-supplied EmployeeID/User Account.
Microsoft Visual Studio Code (VSC)
PHP Storm
Using POWERSHELL:
Using POWERSHELL:
Using LINUX console:
Replace xxxx
with your CoB supplied EmployeeID/User Account.
Replace yyyy
with the accountname you used when you installed WSL.
Using LINUX console:
Replace yyyy
with the accountname you used when you installed WSL.
Using LINUX console:
Using Powershell (as Administrator):
From Powershell console reinitialize WSL:
From LINUX (WSL) console reset the nameserver so you can access the internet:
Where X.X.X.X is the IP address: 8.8.8.8 when in the office (confirm if there should be a different address), or 10.241.241.70 when not on the city network but using a VPN.
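Concretely, the WSL file involved would end up looking like this (a sketch; to make the change persist across restarts you may also need generateResolvConf=false under [network] in /etc/wsl.conf):

```
# /etc/resolv.conf inside WSL
# In the office (confirm the correct address):
nameserver 8.8.8.8
# Off the city network, on VPN, use this instead:
# nameserver 10.241.241.70
```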
If, when restarting the computer, Docker fails to start and/or you get the following error when starting WSL:
The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.
To fix this, perform the following steps.
Step 1: Using Powershell (ps) as Admin:
Step 2: Then using a CMD shell (as Admin)
Step 3: Restart Docker for Windows from the start menu.
Contact the AWS administrator to get credentials for logging into the AWS console and (if necessary) interacting with AWS via the command line.
Once you have a login to the AWS console: if you wish to use the AWS-CLI, or use any other command line program which connects to AWS (e.g. git for CodeCommit) you will need to register/add an SSH key on your AWS-CLI account.
You can use an existing ssh key, or create a new one.
You need to install the AWS CLI if you, or a tool you use, need to interact with AWS from the command line - for example:
To use Terraform to maintain AWS
To deploy webapps to AWS
To modify AWS objects from the command line
Follow the instructions here.
You want to install the AWS-CLI on your local machine, not inside a container. Follow the Mac, Windows or Linux instructions according to the OS you are using.
Verify AWS is installed using LINUX console:
You should see an output something like:
If not then return to the "Install AWS-CLI" section above.
Obtain your secret access keys for AWS from the AWS administrator, and then create the AWS credentials file using the LINUX console:
Alternatively, you could create and edit the ~/.aws/credentials
file using any text editor.
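The file has this shape (placeholder values, not real keys):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```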
The following is a printout of the console from a typical build following the instructions on the Installation Instructions page.
Specifically this output is from the command:
The log above was generated using lando start with this .lando.yml landofile.
The log above was generated using lando start with this config.yml project file.
Setting up the Visual Studio Code editor to work well with Drupal
Under Extensions in the left sidebar, search for "PHP Debug" and click "Install"
Edit .vscode/launch.json
Add the following configuration:
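A typical launch.json configuration for the PHP Debug extension might look like this (a sketch: the port and path mapping are assumptions - Xdebug 3 listens on port 9003 by default, and Lando mounts the repo at /app):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for Xdebug",
      "type": "php",
      "request": "launch",
      "port": 9003,
      "pathMappings": {
        "/app": "${workspaceFolder}"
      }
    }
  ]
}
```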
In the top navbar, navigate to File > Preferences > Settings.
Under Workspace Settings, expand the Extensions option.
Locate the PHP CodeSniffer configuration, scroll down to the Standard section, and click the "Edit in settings.json" link.
Add the following configuration to your Workspace Settings:
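Assuming the phpcs extension's Standard setting, the workspace configuration would look something like:

```json
{
  "phpcs.standard": "Drupal"
}
```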
This is a series of videos around site building and administration tasks. The individual videos in the series are listed in the upper right panel of the screen:
Under Extensions in the left sidebar, search for "phpcs" and click "Install"
The configuration system for Drupal 8 and 9 handles configuration in a unified manner.
By default, Drupal stores configuration data in its (MySQL) database, but configuration data can be exported to YAML files. This enables a site's configuration to be copied from one installation to another (e.g. dev to test) and also allows the configuration to be managed by version control.
TIP: Configuration data (aka settings) includes information on how custom and contributed modules are configured. Think of configuration as the way developers define how the Drupal back-end functions, and what options will be available to content authors.
Configuration is very different from content. Content is information which will be displayed to website viewers in Drupal nodes. Content is also stored in the database, but is not managed by the configuration system.
See this Drupal Framework Elements Overview.
Drupal has a built in configuration management system, along with drush CLI commands to import and export configurations.
Configurations are saved in a folder (the config sync directory) on the webserver hosting the Drupal website. This folder is defined in the settings array $settings['config_sync_directory'] in the settings.php file. The folder is defined relative to the docroot folder, typically outside of the docroot, for example:
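A sketch of the setting (the path shown matches the ../config/default folder referenced later on this page, but confirm against your own settings.php):

```php
// settings.php - config sync directory, relative to the docroot.
$settings['config_sync_directory'] = '../config/default';
```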
drush cex exports configurations from the database into the config sync directory.
drush cim imports configurations from the config sync directory into the database.
Module Exclusions: The configurations for an entire module can be excluded from both of the drush cim / cex processes by defining them in the $settings['config_exclude_modules'] array in the settings.php file. For example:
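A sketch of the setting (the module names are illustrative):

```php
// settings.php - exclude these modules' config from drush cex / cim.
$settings['config_exclude_modules'] = ['devel', 'stage_file_proxy'];
```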
WARNING / CARE: If you add modules to this list, then they will be removed from the core.extension.yml file during the next config export. This means these modules will be uninstalled/disabled on any environment in which these configs are imported.
As a rule of thumb - only add modules to this array that you wish to be removed for all environments other than the one you are developing on.
The Drush CLI is the main CLI utility and is installed and enabled on the CoB Drupal backend.
config:delete (cdel): Delete a configuration key, or a whole object.
config:devel-export (cde, cd-em): Write back configuration to a module's config directory.
config:devel-import (cdi, cd-im): Import configuration from a module's config directory to active storage.
config:devel-import-one (cdi1, cd-i1): Import a single config item into active storage.
config:diff (cfd): Display a diff of a config item.
config:different-report (crd): Display differing config items.
config:edit (cedit): Open a config file in a text editor. Edits are imported after closing the editor.
config:export (cex): Export Drupal configuration to a directory.
config:get (cget): Display a config value, or a whole configuration object.
config:import (cim): Import config from a config directory.
config:import-missing (cfi): Import missing config items.
config:inactive-report (cri): Display optional config items.
config:list-types (clt): List config types.
config:missing-report (crm): Display missing config items.
config:pull (cpull): Export and transfer config from one environment to another.
config:revert (cfr): Revert a config item.
config:revert-multiple (cfrm): Revert multiple config items to the extension-provided version.
config:set (cset): Set a config value directly. Does not perform a config import.
config:status (cst): Display the status of configuration (differences between the filesystem configuration and database configuration).
Drupal Console (drupal) is an alternative CLI and is installed and enabled on the CoB Drupal backend.
config:delete (cd): Delete configuration.
config:diff (cdi): Output configuration items that are different in active configuration compared with a directory.
config:edit (ced, cdit): Change a configuration object with a text editor.
config:export (ce): Export current application configuration.
config:export:content:type (cect): Export a specific content type and its fields.
config:export:entity (cee): Export a specific config entity and its fields.
config:export:single (ces): Export a single configuration or a list of configurations as yml file(s).
config:export:view (cev): Export a view in YAML format inside a provided module to reuse in another website.
config:import (ci): Import configuration to current application.
config:import:single (cis): Import a single configuration or a list of configurations.
config:override (co): Override a config value in active configuration.
config:validate (cv): Validate a Drupal config against its schema.
These are unique to the drupal CLI; they are rarely needed but can be useful for manually creating configs for custom modules.
generate:entity:config (gec): Generate a new config entity.
generate:form:config (gfc): Generate a new "ConfigFormBase".
generate:theme:setting (gts): Generate a theme setting configuration.
It is possible to override configurations in the php files on the Drupal back end.
Normally the configurations a developer will wish to override are found in an xxx.settings.yml file. This is where settings-type configurations are defined and saved by contributed and custom modules.
The strategy to globally override a config setting for the entire Drupal site is to alter the $config array in the settings.php file.
Because the main settings.php file can include different settings files for different environments, we can add global overrides to an environment-specific settings.php file to implement an override for only that environment.
TIP: Code in a settings.php file can be conditional, so the override can be made to be conditional on the value of a local (or environment) variable.
Example 1 - Core config override: The system.maintenance.yml file contains a message key to control text that appears on the site maintenance page when shown. To override the message key set in the system.maintenance.yml file, place this in an appropriate settings file.
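A sketch of the override (the message text is illustrative):

```php
// settings.php - override the maintenance-page message site-wide.
$config['system.maintenance']['message'] = 'Boston.gov is offline for maintenance. Please check back shortly.';
```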
Example 2 - Custom/Contributed Module config override: The salesforce.settings.yml file supplied by the salesforce module contains keys to authenticate against a salesforce.com account in order to sync data. To override the consumer_secret key set in the salesforce.settings.yml file, place this in an appropriate settings file.
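A sketch of the override (the value shown is a placeholder):

```php
// settings.php - override the Salesforce consumer secret.
$config['salesforce.settings']['consumer_secret'] = 'REPLACE_WITH_SECRET';
```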
Override/Secrets Best Practice:
It is best practice not to save passwords and other secrets (incl. API keys) in configuration files, as these will end up in repositories, and could be made public by accident.
Instead, passwords and other secrets should be stored as environment variables on the Drupal web server, and then be set in an appropriate settings.php file.
Example: recaptcha secret key saved as environment variable bos_captcha_secret
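A sketch of this pattern, assuming the recaptcha module's secret_key setting:

```php
// settings.php - read the secret from the environment at runtime,
// so it never appears in the repository.
$config['recaptcha.settings']['secret_key'] = getenv('bos_captcha_secret');
```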
This means that passwords and other secrets are saved on the environment to which they apply so there is less (or no) need for environment-specific overrides.
It also means that all secrets are managed the same way, and can be changed on the environment and take effect immediately without needing to redeploy any code.
The PHP commands to retrieve current configuration settings are as follows:
These commands will get the original config value, ignoring any overrides:
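Sketches of the two calls (the config name and key are illustrative):

```php
// Get a config value, including any overrides from settings.php:
$message = \Drupal::config('system.maintenance')->get('message');

// Get the original stored value, ignoring overrides:
$original = \Drupal::config('system.maintenance')->getOriginal('message', FALSE);
```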
This information is adapted from this Drupal Resource, and contains more advanced techniques and discussion.
To assist with configuration management, there are a number of contributed modules.
The contributed modules are generally deployed to help manage situations where different configurations are desired on different environments.
Although this is not a contributed module, the use of .gitignore provides a way to prevent configurations from making their way into repositories, and from replicating upwards from the local development environments to the Acquia dev/stage/prod environments.
Simply add specific config files (and/or wildcards) to the .gitignore file in the root of the repository.
Provided the files do not already exist in the repository, they will be ignored by git during commits and pushes from the local repository.
Example: .gitignore in repository/project root.
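A sketch of what the entries might look like (file names are illustrative):

```
# .gitignore in the repository/project root
salesforce.settings.yml
*.devel.yml
```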
TIP: If you don't prefix the entry with any folder paths, then all occurrences of the file will be ignored. This includes files from config exports (drush cex) and also from config_devel exports (drush cde - see below).
This module provides configuration import protection. If you are concerned that importing certain configurations using drush cim (which is used during a deploy) will overwrite existing configurations on a site, then config_ignore will help prevent this.
Specific files to be ignored during an import can be added to the ignored_config_entities key of the config_ignore.settings.yml file. This array can also be overridden/extended by altering the $config['config_ignore.settings']['ignored_config_entities'] array in an appropriate settings file.
The .yml extension is dropped, and wildcards can be used to select entire modules, entities, etc:

ignored_config_entities:
  - salesforce.settings
  - ...
  - 'core.entity_view_display.node.metrolist_development.*'
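For example, a settings-file sketch of extending that array for one environment:

```php
// settings.php - add an entry to config_ignore's list for this environment.
$config['config_ignore.settings']['ignored_config_entities'][] = 'salesforce.settings';
```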
Note: This module only provides protection when drush cim is executed. When drush cex is executed, the config_ignore settings are not considered and a full set of configs is still exported.
If you can't use $settings['config_exclude_modules'] (because, say, you only want to exclude the module.settings.yml file from a module), then use .gitignore to stop it being committed to the repo and deployed.
CoB Local Development.
CoB use config_ignore as a fail-safe protection. Configurations that are set in the production system at runtime (usually settings) via the UI, and are therefore different from the config in the ../config/default folder, are added to config_ignore so that they cannot be imported over the site settings should the files exist in the folder.
This module provides configuration separation. Configurations can be split into different folders and imported/exported independently.
Drush Command Summary:
config-split:activate: Activate a config split.
config-split:deactivate: Deactivate a config split.
config-split:export: Export only split configuration to a directory.
config-split:import: Import only config from a split.
config-split:status-override (csso): Override the status of a split via state.
Config split can be used to create a number of different configuration sets which can be applied on different environments and/or at different times. This is an ideal way to control which modules are installed on which environments, and even to provide environment-centric settings (for settings controlled via config).
This module provides custom module configuration installation. If you anticipate your custom module will be used as a "contributed" module on another site - or will be enabled or disabled individually - then you will want to save its configuration into an install folder inside the custom module.
Drush Command Summary:
config:devel-export (cde, cd-em): Write back configuration to a module's config directory.
config:devel-import (cdi, cd-im): Import configuration from a module's config directory to active storage.
config:devel-import-one (cdi1, cd-i1): Import a single config item into active storage.
For developers using the PhpStorm IDE: how and where to update your settings/preferences to make debugging and developing in Drupal easier.
Caching considerations for Drupal with Acquia
City of Boston use Acquia to host all non-local (i.e. non-docker) servers in our deployment workflow.
Acquia's servers are contained within an Acquia Cloud subscription and implement a Varnish cache outside the load-balancer, as described here.
The release of Drupal 8 contains a rebuilt cache strategy using "tags". Drupal 7's cache expired items based on a lifetime for each item. Drupal 8 introduces another option called cache invalidation: the cache lifetime is set to permanent, and the cached item is invalidated (purged) when it is no longer relevant. Drupal 8 does this by storing metadata about the cached item. Then, when an event occurs, such as an update to a node, the metadata can be searched to find all cache items that contain computed data about the updated node, and those items can be invalidated.
Memcache (for the purposes of this summary document) can be considered to be a low-level cache which optimizes caching by saving more dynamic process responses to memory. The principal value is to minimize requests between the Drupal kernel and MySQL for queries that are run multiple times during bootstrap and page requests.
Memcache is not used on boston.gov (at this time).
You can inspect the headers of responses from a webserver to see if Varnish is enabled, and whether content was served from the Varnish and/or Drupal caches.
This terminal command will return the headers from a request to a URL:
Examples:
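A typical check might look like this (a sketch; the exact cache header names vary by setup, but Acquia's Varnish layer usually sets an X-Cache header):

```shell
curl -I https://www.boston.gov/
# Look for headers such as:
#   X-Cache: HIT    (served from the Varnish cache)
#   X-Cache: MISS   (served by Drupal/Apache)
```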
Is "passive" caching: Varnish is not aware of the origin of html content it serves/caches.
Is outside of the Acquia load-balancers and is the first cache a user request hits.
Does not cache content for authenticated users.
Is fully independent from the Drupal kernel, and therefore is decoupled from Drupal - except for a purge module provided by Acquia which manipulates a Varnish API. See https://docs.acquia.com/resource/caching/purge/ (beware: notes are for Drupal 7) and https://www.drupal.org/project/acquia_purge
Drupal documentation says in Acquia Cloud, pages are cached for 2 minutes by default.
Varnish will accept caching instructions from a web page's headers, so we use Advanced Page Expiry (APE) in Drupal to send specific cache instructions to Varnish. The default caching time (set by APE) for CoB Drupal pages is 4 weeks (i.e. it overrides the default 2 minutes with 4 weeks!).
On boston.gov, the Acquia Purge module is configured to remove entities (pages) from the Varnish cache as they are updated by content editors in Drupal. This invalidation process uses queues in Drupal. The Drupal queue processor is triggered by cron and runs until the queue is exhausted.
On production, cron runs every 5 minutes, so (if there is no active queue) it could take up to 5 minutes for content changes to appear. On stage and develop, cron runs every 15 minutes.
Acquia provide the memcached libraries on its environments, and will configure special memory allocations for memcache on request.
Memcache modules are not enabled on the City of Boston Drupal 8 environments.
Images:
Static Content: (typically web-pages built from a Drupal content type)
Drupal entities are cached using tags.
Drupal caching is managed by the Drupal kernel and the advanced_page_expiration module (APE).
When an entity (a piece of content) is updated in Drupal, its tags are invalidated. Pages which use that content (and which are already cached by Drupal) are also invalidated. The next time such a page is requested, a rebuild/regeneration and re-cache occurs within Drupal.
When a page is invalidated in Drupal, Varnish is notified and the page is also invalidated in the Varnish cache.
Because Drupal caching and invalidation is now so effective, the page-expiry for nodes should be set to a large value (> 1 month). This is done in the APE configuration.
Dynamic Content: (typically REST end-points and web-pages built from, or containing Drupal views)
Views by default honor the tag generation and invalidation process, whereby a view is cached with a tag. However, the view invalidation model is not very refined (to refine the invalidation of view tags, consider the views_custom_cache_tags module - but, as of version 8.x-1.1, custom coding is required to implement it). If a view is based upon the entity type node, then any change that invalidates a node tag will also invalidate the view. Although this causes (potentially) unnecessary invalidation of views, it is an effective way to ensure current content is returned from a view. If the view display is a page, then the invalidation of the view does bubble up to Varnish (provided it is using a tag-based cache strategy).
Views can be given a lifetime, and set to expire a certain time after the last time the view's underlying query was run. As I understand it, with time-based caching there is no invalidation of the node; as the content expires, it will be re-cached by Drupal using the traditional (Drupal 7) method. The page containing the view should be set to expire after a relatively short period (in APE) - around the same value as the view cache expiry. Unless told otherwise, Varnish expires the page after 2 minutes.
REST endpoints should be given an expiry in APE.
The Varnish cache performs 2 functions, one intended and one somewhat unintended.
Reduces load on the application server (i.e. webserver), but also
The cache will continue to serve cached pages even if the application server (webserver) is down or otherwise unavailable. Any pages cached in Varnish will continue to be served until they expire from the cache. Note: not all pages are cached, and authenticated sessions are not cached.
If the installation has completed without errors, then you should be able to check the following:
The repo that was checked out in Step 1 of the installation instructions is hosted on your dev computer, and is mounted into each of the docker containers. As you make changes to the files on your dev computer, they are instantly updated in all of your local docker containers.
The production/public website is hosted by Acquia and can be accessed at https://www.boston.gov.
The local development version of the public website can be viewed at https://boston.lndo.site. This local copy of the Drupal website is served (by Apache) from the appserver docker container, and its content is stored and retrieved from a MySQL database in the database docker container.
You will find the CityOfBoston/patterns repo cloned into the root/patterns folder on your host dev computer.
The production/public patterns library is hosted by City of Boston from our AWS/S3 infrastructure and can be accessed at https://patterns.boston.gov.
The local development version of the patterns library is hosted by Fleet and can be viewed at https://patterns.lndo.site. This local copy of the Fleet website is served (by Node/Fractal) from the patterns docker container.
The gulp, stencil, fractal and other services running in the patterns docker container will automatically build the local fleet static website into root/patterns/public from the underlying files in real-time as they are changed.
The production/public patterns CDN is hosted by City of Boston from our AWS/S3 infrastructure at https://patterns.boston.gov.
The local development version of the CDN is hosted by Fleet at https://patterns.lndo.site. This local CDN is served (by Node/Fractal) from the patterns docker container.
Custom theme which presents the front-end UI to all users.
Breadcrumbs are an informative device which appear on many pages on the site. Breadcrumbs provide the user a sense of location within the site and a way to logically navigate back to the homepage.
A breadcrumb is an ordered collection of crumbs, with each crumb having a title and a link.
Drupal has a built-in breadcrumbs methodology, which will attempt to build out a pathway based on the URI (e.g. /departments/housing/metrolist) defined by the page's (i.e. node's) URL Alias.
It does not matter if the URL Alias is set manually or automatically; the value shown in the back-end editor form once the node is saved is used to build out the breadcrumb.
The Drupal core process creates the breadcrumb by scanning the path represented by the URI, and testing if a local page exists for each path element. It stops adding crumbs when a path element does not resolve.
FOR EXAMPLE an article is created with a URI (as defined in its URL Alias): /departments/housing/boston/housing-information-in-boston.
When the page is rendered, Drupal scans the article's URI and:
if we have a breadcrumb setting which stipulates that the homepage should always be shown as the first crumb, then a crumb of home with a link to https://site is created, then
checks if /departments is a valid URI. https://site/departments is a valid URI, so it creates a crumb of "departments" with a link to https://site/departments, then
checks if /departments/housing is a valid URI. https://site/departments/housing is a valid URI, so it creates a crumb of "housing" with a link to https://site/departments/housing, then
checks if /departments/housing/boston is a valid URI. https://site/departments/housing/boston is NOT a valid URI - there is no page with that name on https://site - so the breadcrumb scanner stops evaluating at this point, but
if we have a breadcrumb setting to display the actual page in the breadcrumb, then a final crumb of housing information in boston is added, with no link (because this is the page showing).
The final breadcrumb in this instance would be HOME > DEPARTMENTS > HOUSING > HOUSING INFORMATION IN BOSTON, with links on the first 3 crumbs.
When evaluating if a page exists on the site, Drupal only considers URL Aliases and does not check URL Redirects.
So in the example above, the boston crumb/link still would not appear in the breadcrumb even if a place_profile page for Boston existed with the URL Alias of /places/boston and a URL Redirect for /departments/housing/boston.
Where Drupal core cannot build out its own breadcrumb trail, there is some additional custom code intended to help make a logical breadcrumb.
The custom breadcrumb code only functions when it determines that Drupal has not built out the entire breadcrumb.
If Drupal has been able to build out all parts of the URI path, then the Drupal breadcrumb is used.
The custom code scans URL redirects as well as URL Aliases when building out the breadcrumbs.
Care: Redirects which are manually made on the admin/config/search/redirect page are usually considered "external" by default. Breadcrumbs which use an external link may behave unexpectedly when clicked.
Example: the breadcrumb on d8-dev.boston.gov may open a page on www.boston.gov when clicked.
Solution: Do not create redirects for internal (i.e. Drupal-hosted) pages on the admin/config/search/redirect page. Instead create redirects using the redirect function on the "advanced" tab of the editor form for a page.
Some URI paths are hard-coded to build specific breadcrumbs.
For example, pages which have a URI path starting with government/cabinets: the custom code ignores the "government/cabinets" part of the path and then builds the breadcrumb from the remainder of the path.
The custom breadcrumb object is built here: bos_theme/bos_theme.theme::bos_theme_preprocess_breadcrumb()
The breadcrumb is styled here: bos_theme/templates/navigation/breadcrumb.html.twig
City of Boston use Acquia to host our Drupal website.
Acquia provide a number of different environments for COB to use. One of these environments is production; the others are non-production, named stage, dev, uat, ci and dev2.
Detail on deployment is covered elsewhere, but in summary we are able to "bind" certain branches of our GitHub repo (CityofBoston/boston.gov-d8) to these Acquia environments, and when changes occur in those branches, a deployment is automatically triggered.
Therefore, the way we branch-off, push-to and merge the "bound" branches is important.
The develop branch is bound to the Acquia dev environment, and the master branch to the stage environment. Changes cannot be made directly onto the master branch, and changes should not be made directly onto the develop branch - except when hotfixes are needed.
Best Practice is to create a working branch off develop, then check out that working branch locally.
Updated code should be committed to the locally checked-out copy of the working branch. Updating the local working branch will update the local containerized website for testing.
Periodically, the local working branch should be pushed to the remote working branch in GitHub.
Updating the working branch in GitHub will not trigger any deploys or update any website.
To start the deploy to the dev environment, a PR is created in GitHub to merge the working branch into the develop branch in GitHub.
Merging will trigger a build and the website on the dev environment will be updated.
When ready to deploy to the stage environment, a PR is created in GitHub to merge the develop branch into the master branch in GitHub.
Merging will trigger a build and the website on the stage environment will be updated.
To deploy to the production environment, use the Acquia Cloud UI - see continuous deployment notes.
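The branching steps above can be sketched as follows (the branch name is illustrative; the merges into develop and master are done via PRs in GitHub, not locally):

```shell
# Branch off develop and work locally.
git checkout develop && git pull
git checkout -b working/DIG-123-my-feature

# ...commit changes, then push the working branch to GitHub.
git push -u origin working/DIG-123-my-feature

# Deploys are then triggered by merging PRs in GitHub:
#   working branch -> develop   (deploys to the Acquia dev environment)
#   develop -> master           (deploys to the Acquia stage environment)
```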
We can bind a branch to the dev2, ci or uat environments so that we can share proposed or interim website changes with stakeholders or other individuals where a local containerized website is not appropriate. These environments can be considered on-demand, and the way to update them is similar to, but slightly different from, the normal deploy pipeline, requiring an extra branch.
Branches attached to environments other than dev, stage and production in Acquia are termed environment branches (see also On-Demand Instances).
Initially, an environment branch is created from the develop branch.
This environment branch is then bound to the desired Acquia environment (dev2, ci or uat).
Developers then create a working branch off the environment branch and check out that working branch locally.
Developers commit their work to the local copy of the working branch, which can be pushed to the remote working branch in GitHub whenever desired. Updating the local working branch will update the local containerized website for testing.
Updating the working branch in GitHub will not trigger any deploys or update any website.
When ready to update the website on the bound environment, the GitHub copy of the working branch is merged (via a PR) into the environment branch in GitHub.
Merging will trigger a deploy to the bound Acquia environment (i.e. dev2, uat or ci) and update the website on that environment.
Stakeholders can be directed to the website on the Acquia environment.
Once the project or piece of work is complete, a PR is created to merge the GitHub environment branch into the develop branch.
Merging will trigger a deploy to dev and update the website.
To continue to deploy to stage and production environments, follow the notes in Normal Deploy Pipeline above.
Sometimes a picture is worth 1,000 words.
In the above diagram,
Lines with an arrow indicate a merge to the branch in the direction of the arrow.
Lines with a dot connector indicate the creation (or updating) of a branch - and when the line is to a local branch it is a checkout to a local branch.
The master branch is the production branch and cannot be pushed/merged to directly.
The correct way to update master is to merge the develop branch into the master branch.
At all times the master branch should be a copy of the code on the production environment. (see continuous deployment)
Green arrows cause a deployment process:
Only if the branch being merged into is bound to an Acquia environment, and
This is controlled/executed by Travis, taking approx 3 mins (uses 30 Travis credits), and
The website hosted on the Acquia Environment is updated during the deploy.
Orange arrows cause a build, test and deployment process:
Only if the branch being merged into is bound to an Acquia environment, and
This is controlled/executed by Travis, taking approx 30 mins (uses 300 Travis credits), and
The website hosted on the Acquia Environment is updated during the deploy.
Travis is configured so that this extended process usually only runs when committing to the develop branch - triggering a deploy to the Acquia Dev environment as the first step of the deployment pipeline.
Black arrows indicate a simple commit/merge process with no building or deploying:
Best practice requires that a working branch is not bound to any Acquia environment.
Merging does not trigger Travis; there is no deploy, and 0 Travis credits are used.
Note: A GitHub environment branch can be bound to one or more Acquia environments. When this is the case, deploys will occur simultaneously to all bound environments when the GitHub environment branch is updated.
Travis always controls deploys, but only one set of credits is used per environment branch merge, regardless of how many Acquia environments it is bound to.
Using Lando in City of Boston.
For our purposes, Lando is a PHP-based tool-set which does 3 main things:
Provides a pre-packaged local docker-based development environment,
Provides a wrapper for common docker container management processes,
Provides a means to execute development commands inside the container from the host.
Lando curates an appropriate LAMP stack for Drupal development, largely removing the need for this skill in the local development team. The stack is contained within:
Docker images that are maintained by Lando.
A configuration file (landofile) which Lando parses into the necessary dockerfiles and utility scripts.
COB uses a landofile which can be found at /[reporoot]/.lando.yml
Lando provides a CLI for tasks developers commonly need to perform on the container.
A full list of defined Lando commands can be obtained by executing: lando

lando start: Starts all 3 lando containers, building them if they don't already exist.
lando stop: Stops all 3 containers, but does not delete or destroy them. They can simply be restarted later.
lando rebuild: Rebuilds the containers using the values in the .lando.yml and .config.yml files. If the containers have persistent images, these will be reused. Any content in the database will be lost, but project files cloned/managed by git will be left intact.
lando destroy: Destroys the containers. If the containers have persistent images, these will be retained. Any content in the database will be lost, but project files cloned/managed by git will be left intact.
Lando provides a CLI for tasks developers commonly need to perform in the container.

lando ssh: Opens a bash terminal on the appserver docker container. If the -c switch is used, lando ssh -c "<command>", then a terminal will be opened, the command provided will be run in the container, and the session will then be closed. eg: lando ssh -c "ls -la /app/docroot/modules/custom"
lando drush: Executes a drush CLI command in the appserver container: lando drush <command>, eg lando drush status. Note: a drush alias can be passed like this: lando drush @alias <command>, eg: lando drush @bostond8.prod en dblog
lando drupal: Executes a Drupal Console command in the appserver container: lando drupal <command>
lando composer: Executes a Composer command in the appserver container, eg: lando composer require drupal/paragraphs:^1.3
lando drupal-sync-db: Executes a CoB script which copies the database from the stage environment to the local development environment, and syncs all the configurations etc.
lando drupal-pull-repo: Executes a CoB script which pulls the latest project repository from GitHub and then clones and merges the private repository. Finally, it runs sync tasks to update the DB with any new configurations. To update the repos without syncing the content, execute: lando drupal-pull-repo --no-sync
lando validate: Locally runs the linting and PHP code-sniffing checks that are run by Travis.
lando switch-patterns: Allows you to switch between patterns CDN hosts. lando switch-patterns 2 switches to the local CDN in the patterns container; lando switch-patterns 3 switches to the production CDN; lando switch-patterns 4 switches to the stage patterns CDN.
A full list of defined Lando commands can be obtained by executing:
lando
lando drupal-pull-repo
lando drupal-sync-db
lando drupal-pull-repo --no-sync &&
lando drupal-sync-db
lando rebuild
or, to be completely sure, run these commands from the directory containing the repo:
lando destroy &&
rm -rf <repo-path>
git clone -b develop git@github.com:CityOfBoston/boston.gov-d8.git <repo-path>
lando start
We use 2 custom themes: one presents the back-end and one presents the front-end.
When you create an html.twig
file and add it to the templates folder of a custom theme, you are pretty much done (after refreshing caches!). The Drupal theme rendering process detects the template and uses it in preference to any template of the same name from a parent or default theme. You don't have to do anything more than add the file and refresh the cache.
But if you add a template to a custom module - even if your intent is just to override a theme default template (e.g. field.html.twig
) or to provide a suggested template - there are a few extra things you must do.
Using the example of a custom content type (node) called "node_landing_page", the steps below fully implement a template used to render the node's full
display.
Note: Drupal automatically generates the suggestion node__landing_page__full
which can be used for rendering the "default" (i.e. "full") display.
You can generate other suggestions using the hook_theme_suggestions_HOOK
hook.
Create the twig template you wish to use, and give it a name that matches an existing Drupal theme suggestion with ".html.twig" as the extension.
In rare cases you may want to create a new template suggestion. Do this by returning an array of suggestions from a hook_theme_suggestions_HOOK()
in your custom module (see last example below).
Convention is to name the template using an "entity breadcrumb" style, with "--"s between entities and no spaces.
Save the template file in a folder called templates
in your custom module's root folder. In our example docroot/modules/custom/node_landing_page/templates
.
- You could organize files by creating a sub-folder tree - but if you do, you will then have to specify the path
to your template in the hook_theme
- see step 3 below.
In the hook_theme
of your module you must define your new template. This hook is read by the Drupal core theme engine and loaded into a template cache (aka registry). Whenever a change is made to this hook you need to clear all caches to load your changes into the cache.
In hook_theme
return an associative array with key-value pair nested arrays for each template you wish to define.
- The outer keys (template-keys) should be one for each of the templates you are defining. Keep it simple and traceable by setting the template-key name to be the template filename without the ".html.twig". Important: replace all "-"s with "_"s in the template-key string (in our example the template-key is node__landing_page__full
).
- The value for each template-key is an array with a required base_hook
and several other optional fields.
The base_hook
should define the entity type this template is used to render (in our case node
, but other common entities we theme are field, region, block, paragraph and taxonomy_term
).
[optional] The render element
defaults to elements
if not specified.
[optional] If you wish to use a template file whose name is not the same as the suggestion (with "_"s replaced with "-"s), then you must specify its name in the template
field. Omit the ".html.twig" extension. This could be useful if you want two displays to share the same template.
[optional] If you want to use a custom path to the template file (i.e. not the default templates folder) then use the path
field.
(see bos_link_collections_theme
in boston.gov for an example)
(see "Our Example hook_theme" below for the complete hook)
[optional] Once the cache is cleared you can then catch pre-process events using hook_preprocess_HOOK
- in our example this would be node_landing_page_preprocess_node
(to catch all node pre-process events) or node_landing_page_preprocess_node__landing_page__full
(to catch only this new template's pre-process events) - notice that the hook uses the template-key
defined in the hook_theme
array.
[optional] You can also catch template_preprocess_HOOK
events (in our example this is template_preprocess_node__landing_page__full
).
This hook is commonly used to create a content
variable which contains all the rendered (or renderable) elements of the elements
array (or whatever the field is named in the template's render element
).
Our Example template file:
Our Example hook_theme:
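The original example block appears to be missing from this page. A minimal sketch consistent with the steps above might look like this (the module name node_landing_page comes from the example; the commented-out optional fields are illustrative):

```php
<?php

/**
 * Implements hook_theme().
 *
 * Registers the node__landing_page__full template with the theme registry.
 * Remember to clear all caches after changing this hook.
 */
function node_landing_page_theme($existing, $type, $theme, $path) {
  return [
    'node__landing_page__full' => [
      // Required: the entity type this template renders.
      'base hook' => 'node',
      // Optional: only needed if the file name differs from the key
      // (with "_"s replaced by "-"s). Omit the ".html.twig" extension.
      // 'template' => 'node--landing-page--full',
      // Optional: only needed if the file is not in /templates.
      // 'path' => $path . '/templates',
    ],
  ];
}
```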
Our Example hook_preprocess_hook (version 1):
Our Example hook_preprocess_hook (version 2):
Our Example template_preprocess_hook:
Our Example hook_theme_suggestions_hook:
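The original example block appears to be missing here as well. A sketch of a suggestions hook per the note in step 1 (the extra suggestion name node__landing_page__custom is invented for illustration):

```php
<?php

/**
 * Implements hook_theme_suggestions_node().
 *
 * Returns an array of additional template suggestions for nodes.
 */
function node_landing_page_theme_suggestions_node(array $variables) {
  $suggestions = [];
  $node = $variables['elements']['#node'];
  if ($node->bundle() === 'landing_page') {
    // Hypothetical suggestion: picked up if
    // node--landing-page--custom.html.twig exists.
    $suggestions[] = 'node__landing_page__custom';
  }
  return $suggestions;
}
```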
CoB custom modules - usually taxonomy, nodes and paragraphs.
The following development conventions are followed when developing boston.gov modules.
The City of Boston has the following naming and grouping conventions for custom modules:
Templates for the component should be saved in:
To add a customized template, select a suggestion for the base (node, field, region, etc.), then:
Save the template in the folder above.
In the module_name_theme()
hook in module_name.module
add the following:
If a new suggestion is needed, then add the following:
Where XXX is the appropriate entity type (node, field, region, etc.) to add a suggestion to.
Wherever possible, the styles provided by the patterns library should be used. In practice this means that boston.gov can be styled by a Drupal developer, by ensuring that the twig template files provide HTML structured with the classes that the patterns library expects.
Should the need arise, the patterns library style sheets can be overridden. Typically this is done at the module level, although if multiple modules will use the override, consider placing it in the bos_theme
theme.
To add overrides,
Create the style sheet module_name.css
and appropriate markup in the relevant template (see above section),
Save the stylesheet in:
Update (or create) the module_name.libraries.yml
file with the following:
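The snippet referenced above is missing from this page; a typical libraries.yml entry for a module stylesheet (the library name and file path are illustrative) would be:

```yaml
# module_name.libraries.yml
module_name_css:
  css:
    theme:
      css/module_name.css: {}
```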
Using a module_name_preprocess_HOOK()
hook in module_name.module
attach the css where and only when it is required. For example:
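The example referenced above is missing; a minimal sketch of conditionally attaching the library in a preprocess hook (the bundle name some_bundle is hypothetical, and this assumes the library defined in module_name.libraries.yml above):

```php
<?php

/**
 * Implements hook_preprocess_HOOK() for node templates.
 */
function module_name_preprocess_node(&$variables) {
  // Attach the stylesheet only when the bundle that needs it is rendered.
  if ($variables['node']->bundle() === 'some_bundle') {
    $variables['#attached']['library'][] = 'module_name/module_name_css';
  }
}
```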
Wherever possible, JavaScript should not be used on boston.gov. This is to maintain compatibility with as many browsers as possible, and to maximize accessibility for screen readers etc.
Should the need arise, then a JavaScript library can be created and deployed. Typically this is done at the module level, although if multiple modules will use the override, consider placing it in the bos_theme
theme.
To add overrides,
Create the JavaScript library module_name.js
,
Save the library in:
Update (or create) the bos_modulename.libraries.yml
with the JavaScript directive - for example you could add the following:
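The directive referenced above is missing from this page; a plausible entry (library name and file path are illustrative) would be:

```yaml
# bos_modulename.libraries.yml
bos_modulename_js:
  js:
    js/bos_modulename.js: {}
  dependencies:
    - core/drupal
    - core/jquery
```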
Using a bos_modulename_preprocess_HOOK()
hook in bos_modulename.module
attach the JavaScript library where and only when it is required. For example:
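The example referenced above is missing; a sketch mirroring the CSS case (bundle name some_bundle is hypothetical, library name assumed from the libraries.yml sketch above):

```php
<?php

/**
 * Implements hook_preprocess_HOOK() for paragraph templates.
 */
function bos_modulename_preprocess_paragraph(&$variables) {
  // Attach the JS library only for the paragraph bundle that needs it.
  if ($variables['paragraph']->bundle() === 'some_bundle') {
    $variables['#attached']['library'][] = 'bos_modulename/bos_modulename_js';
  }
}
```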
Drupal 8 defines settings and configuration in YML files, with the actual "current" settings and configuration stored in the Drupal (MySQL) database.
When the website is deployed or the web server is restarted, configuration is re-read from the database. Reloading the configuration and settings from yml files requires a manual (usually drush
) process to be run by a developer.
Clearing the site's caches causes cached configuration and settings to be replaced with values from the database. Clearing caches does not reload yml files.
YML files in a module's docroot/modules/custom/ ... /module_name/config/install
folder will be imported into the database when the module is first installed.
YML files in the docroot/../config/default/
folder will be imported into the database when the configuration is imported via the Drupal UI, or the drush config:import (cim)
command.
Current (run-time) settings and configuration in the database can be exported to the docroot/../config/default/
folder via the Drupal UI, or the drush config:export (cex)
command.
If the config_devel
module is enabled then a module's configuration can be exported to the module's config/install
folder.
The dependent configurations are defined in the module_name.info.yml
file as follows:
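The fragment referenced above is missing from this page; config_devel reads a config_devel section from the module's info.yml. A sketch (the listed config names are illustrative):

```yaml
# module_name.info.yml (config_devel section only; config names illustrative)
config_devel:
  install:
    - field.field.node.module_name.field_example
    - core.entity_form_display.node.module_name.default
```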
To export these configurations to the config/install
folder, use the config_devel export command: drush config-devel-export module_name (alias: drush cde module_name)
Modules should try to reuse field.storage.entity-type.field_name
configurations wherever possible.
field.storage.entity-type.field_name
configurations should be:
1. saved in the module's parent module (e.g. bos_components
or bos_content
) to enable sharing, and
2. added to the parent's config_devel
section of the .info.yml
file.
Custom theme which presents the back-end UI to content authors and editors.
Modules can contain multiple paragraphs grouped by similar function.
A good example module can be found at:
Module naming convention is to call the module bos_moduleName
. The "moduleName" should be indicative of the paragraph(s) contained within the module.
Sub-pages in this section assume an example module named bos_module_name
- with the module folder:
Developer notes for content type (node) design and implementation.
Modules can define multiple content types (nodes) grouped by similar function.
A good example module can be found at:
Module naming convention is to call the module module_name
. The "module_name" should be indicative of the node(s) contained within the module.
Sub-pages in this section assume an example module named module_name
, and therefore the module folder would be:
Entity | Field | Min/max resolution & max filesize | View: Style

Images:
node:department_profile | field_icon | 56x56/++ - 200KB | default: (i) square_icon_56px; Article: (i) square_icon_56px; Card: (i) square_icon_56px; Article: not displayed; Published By: (i) square_icon_56px
node:event | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields; featured_item: (i) Featured Item Thumbnail
node:event | field_thumbnail | 525x230/++ 8MB | default: (b) thumbnail_event; featured_item: (p) thumbnail_event
node:how_to | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields; [all others (10)]: not displayed
node:listing_page | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields; [all others (12)]: not displayed
node:person_profile | field_person_photo | 350x350/++ 5MB | default: (p) person_photos; listing: (p) person_photos; embed: (p) person_photos
node:place_profile | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields; Listing: (p) card_images; Teaser: not displayed
node:post | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields; featured_item: not displayed; Listing: not displayed; Listing short: not displayed; Teaser: not displayed
node:post | field_thumbnail | 700x700/++ 5MB | default: not displayed; featured_item: (p) featured_images; Listing: (i) News Item - thumbnail (725x725); Listing short: (i) News Item - thumbnail (725x725); Teaser: (i) News Item - thumbnail (725x725)
node:program_i_p | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields; listing: (b) card_images
node:program_i_p | field_program_logo | 800x800/++ 2MB | default: (p) logo_images; Listing: not displayed
node:site_alert | field_icon | 56x56/++ - 200KB | default: (s) n/a svg (square_icon_56px); Embed: (i) square_icon_56px; Teaser: not displayed
node:status_item | field_icon | 65x65/++ - 200KB | default: (s) n/a svg (square_icon_65px); listing: (s) n/a svg (square_icon_65px); teaser: (s) n/a svg (square_icon_65px)
node:tabbed_content | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields
node:topic_page | field_intro_image | 1440x396/++ 8MB | default: (b) intro_image_fields; featured_topic: not displayed; listing_long: (b) intro_image_fields; listing: (b) card_images
node:topic_page | field_thumbnail | | default: not displayed; featured_topic: (p) featured_images; listing: not displayed; listing_long: not displayed
para:card | field_thumbnail | 670x235/++ 2MB | default: (b) card_images
para:columns | field_image | 200x200/++ 2MB | default: (i) Med Small Square (also Person photo a-mobile 1x (110x110))
para:fyi | field_icon | 56x56/++ 200KB | default: (s) n/a svg (square_icon_56px)
para:hero_image | field_image | 1440x800/++ 8MB | default: (b) Hero fixed image fields; Separated Title: not displayed
para:map | field_image | 1440x800/++ 8MB | default: (b) Photo Bleed Images
para:photo | field_image | 1440x800/++ 8MB | default: (b) Photo Bleed Images
para:quote | field_person_photo | 350x350/++ 5MB | default: (i) Person photo a-mobile 1x (110x110)
para:signup_emergency_alerts | field_icon | n/a svg | default: (s) n/a svg (square_icon_65px)
para:transactions | field_icon | 180x100/++ - 2MB | default: (i) transaction_icon_180x100; group_of_links: (i) transaction_icon_180x100
para:video | field_image | 1440x800/++ 8MB | default: (b) Photo Bleed Images
tax:features | field_icon | svg | default: (s) n/a svg (square_icon_56px); sidebar_right: (s) n/a svg (square_icon_56px)
entity:user | user_picture | 100x100/1024/1024 1MB | default: (p) person_photos; compact: (i) Person photo a-mobile 1x (110x110)
entity:media.image | image | +++/2400/2400 8MB | default: (i) original image; [all others]: (i) Media Fixed Height (100px)

Files:
media.document | field_document
node:procurement | field_document
para:document | field_document

Key: ++ = not specified (unlimited); (b) = background, responsive; (p) = HTML5 Picture, responsive; (i) = Image, svg or picture, non-responsive.
The following breakpoint groups and breakpoints are defined in D8:

Breakpoint | Start width | End width | Note

group: hero
mobile | 0 | 419 |
tablet | 420 | 767 |
desktop | 768 | 1439 |
large | 1440 | 1919 | Introduced in D8
oversize | 1920 | +++ | have a notional max-width of 2400px

group: card
mobile | 0 | 419 |
tablet | 420 | 767 |
desktop | 768 | 839 |
desktop | 840 | 1439 |
large | 1440 | 1919 |
oversize | 1920 | +++ | have a notional max-width of 2400px

group: person
mobile | 0 | 839 |
tablet | 840 | 979 |
desktop | 980 | 1279 | There is also a breakpoint at 1300 in node:pip
desktop | 1280 | +++ | have a notional max-width of 2400px
Breakpoint | Responsive style | Style | Size

All Nodes: field_intro_image (excluding node:post)
hero: mobile (<419px) | intro_image_fields | Intro image a-mobile 1x | 420x115
hero: tablet (420-767px) | intro_image_fields | Intro image b-tablet 1x | 768x215
hero: desktop (768-1439px) | intro_image_fields | Intro image c-desktop 1x | 1440x396
hero: large (1440-1919px) | intro_image_fields | Intro image d-large 1x | 1920x528
hero: oversize (>1920px) | intro_image_fields | Intro image e-oversize 1x | 2400x660

node:post field_intro_image
hero: mobile (<419px) | Hero fixed image fields | Hero fixed a-mobile 1x | 420x270
hero: tablet (420-767px) | Hero fixed image fields | Hero fixed b-tablet 1x | 768x400
hero: desktop (768-1439px) | Hero fixed image fields | Hero fixed c-desktop 1x | 1440x460
hero: large (1440-1919px) | Hero fixed image fields | Hero fixed d-large 1x | 1920x460
hero: oversize (>1920px) | Hero fixed image fields | Hero fixed e-oversize 1x | 2400x460

para:photo field_image; para:video field_image; para:hero field_image; para:map field_image
hero: mobile (<419px) | Photo Bleed Images | Photo bleed a-mobile 1x | 420x250
hero: tablet (420-767px) | Photo Bleed Images | Photo bleed b-tablet 1x | 768x420
hero: desktop (768-1439px) | Photo Bleed Images | Photo bleed c-desktop 1x | 1440x800
hero: large (1440-1919px) | Photo Bleed Images | Photo bleed d-large 1x | 1920x800
hero: oversize (>1920px) | Photo Bleed Images | Photo bleed e-oversize 1x | 2400x800

card: mobile (<419px) | Card Images 3w | Card grid vertical a-mobile 1x | 335x117
card: tablet (420-767px) | Card Images 3w | Card grid vertical b-tablet 1x | 615x215
card: desktop (768-839px) | Card Images 3w | Card grid vertical c-desktop 1x | 670x235
card: desktop (840-1439px) | Card Images 3w | Card grid horizontal c-desktop 1x | 382x134
card: large (1440-1919px) | Card Images 3w | Card grid horizontal d-large 1x | 382x134
card: oversize (>1920px) | Card Images 3w | Card grid horizontal e-oversize 1x | 382x134

para:column (this should be a 200x200 circle ??)
card: mobile (<419px) | Card Images 3w | Photo bleed a-mobile 1x | 335x117
card: tablet (420-767px) | Card Images 3w | Photo bleed b-tablet 1x | 615x215
card: desktop (768-839px) | Card Images 3w | Photo bleed c-desktop 1x | 670x235
card: desktop (840-1439px) | Card Images 3w | Photo bleed c-desktop 1x | 382x134
card: large (1440-1919px) | Card Images 3w | Photo bleed d-large 1x | 382x134
card: oversize (>1920px) | Card Images 3w | Photo bleed e-oversize 1x | 382x134

post:field_thumbnail (feature)
card: mobile (<419px) | Featured Images | Featured image a-mobile 1x | 335x350
card: tablet (420-767px) | Featured Images | Featured image b-tablet 1x | 614x350
card: desktop (768-839px) | Featured Images | Featured image c-desktop 1x | 671x388
card: desktop (840-1439px) | Featured Images | Featured image d-full 1x | 586x388
card: large (1440-1919px) | Featured Images | Featured image d-full 1x | 586x388
card: oversize (>1920px) | Featured Images | Featured image d-full 1x | 586x388

node:person_profile:field_person_profile; user:user_picture
person: mobile (<839px) | Person Photos | Person Photos a-mobile 1x | 110x110
person: tablet (840-979px) | Person Photos | Person Photos b-tablet 1x | 120x120
person: desktop (980-1279px) | Person Photos | Person Photos c-desktop 1x | 148x148
person: desktop (>1280px) | Person Photos | Person Photos d-full 1x | 173x173

node:pip:field_program_logo
person: mobile (<839px) | Logo Images | logo square a-mobile 1x | 672x672
person: tablet (840-979px) | Logo Images | logo square b-tablet 1x | 783x783
person: desktop (980-1279px) | Logo Images | logo square c-desktop 1x | 360x360
person: desktop (>1280px) | Logo Images | logo square d-full 1x | 360x360
Modules can contain a single vocabulary taxonomy.
A good example module can be found at:
Module naming convention is to call the module vocab_moduleName
. The "moduleName" should be indicative of the taxonomy contained within the module.
Sub-pages in this section assume an example module named vocab_module_name
- with module folder at:
Custom nodes deployed in boston.gov have a navigation menu which sits below the introduction text on each page.
The in-page menu requires the node to embed paragraphs, the node--xxxx.html.twig to contain a <div>, and each embedded paragraph to have a key field.
If the node has components (paragraphs) embedded, then the node will have a field called field_components
and this field will be of type Entity reference revisions
. The field will allow only paragraphs, and will specify the paragraph types that are allowed on the node.
To enable in-page navigation, each paragraph must have a (text field) field_short_title
, and to reduce confusion for content editors, that field should be labeled "Navigation Title".
To make the menu look nice and work well on mobile devices, content editors and authors should be encouraged to keep the Navigation Title content to 20 characters or fewer.
To enable the in-page navigation menu, the node's template should include the following:
This block should ideally be located below the title and intro-text sections.
When there is more than one paragraph embedded in a node's web page, an in-page navigation menu should appear on the page. The menu should be styled from the patterns library.
UX Desktop: When the page first loads, the menu should display above the fold. As the user scrolls down the page, the menu should collapse into a fixed toolbar at the top of the page, below the seal menu with the seal retracted. Theme should come from patterns.
UX Mobile: The menu should appear as a collapsed set of drawers with a chevron icon to expand. CSS from patterns controls the collapse across the responsive page width.
In either UX, when the user clicks on the menu, the page should scroll smoothly down to the correct paragraph display on the webpage.
The twig template (e.g. node--xxx.html.twig
) for the node is responsible for locating the menu on the node. The code required is described above.
On-page menu elements are rendered from the bos_theme_preprocess_node()
and bos_theme_preprocess_field()
hooks in bos_theme.theme
found in /themes/custom/bos_theme/
.
The page click and scrolling is provided by component-navigation.boston.js
which is found in /themes/custom/bos_theme/js/
.
To make a paragraph include itself in the in-page navigation menu, it just needs to contain a text field named field_short_title
(and that field must be included in the display being used on the node).
From the building housing landing page a user can click and open a map showing all the current projects. The map and sidebar list are both generated via Building Housing View.
Entry point: /buildinghousing (click show map)
Custom CSS:
docroot/modules/custom/bos_content/modules/node_buildinghousing/css/node_bh_landing_page.css
docroot/modules/custom/bos_content/modules/node_buildinghousing/css/views_bh_listings.css
Views Template: docroot/modules/custom/bos_content/modules/node_buildinghousing/templates/views-view--bhmaps--maplist.html.twig
Views Functions: docroot/modules/custom/bos_content/modules/node_buildinghousing/node_buildinghousing.views.inc
Custom Markers: docroot/modules/custom/bos_content/modules/node_buildinghousing/images
Boston.gov uses the Drupal core workflow and moderation modules.
CoB uses the following modules for moderation:
Content Moderation: [core] Provides moderation states for content.
Workflows: [core] Provides UI and API for managing workflows. This module can be used with the Content moderation module to add highly customizable workflows to content.
Moderation Note: [contrib] Provides the ability to notate elements of a moderated Entity.
Moderation Sidebar: [contrib] Provides a frontend sidebar for Content Moderation.
This page contains a list of sample nodes for content verification.
TestPage (article)
Event
With header image
No header
Listing Page
Landing Page
homepage
Topic Page
With Image
Place Profile
With Header Image
Person Profile
Program Initiative Profile
With Image
No Image
Post
With Image
No Image
How To
With Image
No Image
Article
Department Profile
Public Notices
Script Page
The City of Boston supports development of discrete React (and other JS framework) web apps. Because these services are hosted on Drupal, there is a custom Drupal web app launcher and some conventions to follow.
Have stable local build of Drupal 8 website running on your machine.
Make sure you are “logged in” or have “admin” access to view the CMS and add new content / nodes.
Using Drush: lando drush uli
Using Drupal web login: https://boston.lndo.site/user/login?local
Navigate to Content menu item (make sure you are logged into Drupal to view) https://boston.lndo.site/admin/content
Scroll to the bottom of the page and add a content item by clicking “Add Content”. Select the “Listing Page” content type.
Give new Page content a Title. This is required.
Click on the “Components” tab on the left menu
Find the dropdown Select menu to add “new component” and select “Web App” from the list.
Name the Web App something appropriate as it relates to your project. (i.e. Metrolist or My Neighborhood)
Click “Save” near the bottom or side of the page to save and create a new page / node. This will serve as the container page / component for your new web app.
Navigate to the “bos_web_app” directory of the Drupal 8 repository that is checked out to your local machine: /docroot/modules/custom/bos_components/modules/bos_web_app
Locate the “apps” folder / directory. If one doesn’t exist, please create it: /docroot/modules/custom/bos_components/modules/bos_web_app/apps
Inside this “apps” directory create an empty folder and give it the same name you called your Web App in Step 6 of Part 1 above: /docroot/modules/custom/bos_components/modules/bos_web_app/apps/my_neighborhood NOTE: Any spaces in your app name should be replaced with underscores. For example, My Neighborhood would have a folder name of “my_neighborhood”.
Locate and open the libraries yml file named “bos_web_app.libraries.yml”. This file serves as the pointer and compiler that tells Drupal to attach and bundle all your JS and CSS files for your application: /docroot/modules/custom/bos_components/modules/bos_web_app/bos_web_app.libraries.yml
See an example libraries.yml file on GitHub for a project that is currently being developed. https://github.com/CityOfBoston/boston.gov-d8/blob/mnl_12-9-2019/docroot/modules/custom/bos_components/modules/bos_web_app/bos_web_app.libraries.yml Drupal also has good documentation on using libraries and attaching files. https://www.drupal.org/docs/8/creating-custom-modules/adding-stylesheets-css-and-javascript-js-to-a-drupal-8-module
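As a rough illustration of what such an entry can look like (the app name my_neighborhood comes from the example above, but the file paths and version are hypothetical - check the linked GitHub example for the real structure):

```yaml
# bos_web_app.libraries.yml (illustrative entry; paths are assumptions)
my_neighborhood:
  version: 1.x
  js:
    apps/my_neighborhood/dist/app.js: {}
  css:
    theme:
      apps/my_neighborhood/dist/app.css: {}
```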
Once you have the libraries file set up, go create the files needed, OR first create the files you’d like and then add them to the libraries.yml as laid out in Part 2 - Step 5 above.
It’s important to note that any time you add a new attached file to libraries.yml, the Drupal cache will have to be cleared for changes to take effect. You can clear the cache either through the Drupal CMS or via the drush CLI.
Drupal CMS: navigate to admin/config/development/performance, and click button at top of the page labeled “clear all caches”
Using Drush CLI: drush cr
After clearing the cache, you should now see your application load on the Drupal page you created and saved in Part 1 - Step 7. NOTE: You will NOT have to clear the Drupal cache every time you make a change to a CSS or JS file. This is only for new items in the libraries.yml file.
Notes on bos_admin theme for UX when adding content via admin pages.
To keep a clear and clean editor experience which is uniform across the site, the form display configuration for nodes will contain groups.
There will be a root (parent) group of type tabs
. This group will contain child groups of type tab
. Each tab group will contain the nodes fields.
Recommended Grouping Layout:
1. Required: Create a parent tabs
group called group_main
(the name is not important).
2. Create child tab
groups with the following layout:
- Basic Information: Contains custom fields required by the new content-type,
- Sidebar Components: Single Entity reference revisions
field for sidebar paragraphs,
- Components: Single Entity reference revisions
field for main page paragraphs,
.. then add other tab
groups as needed (try to minimize if possible).
The use of further nested groups is discouraged, except for grouping which occurs within paragraph components that are exposed in Components or Sidebar Components tabs.
If other groups are required to help clarify the form display, they should be details
type groups, and should be set to be collapsible, and be collapsed by default.
IMPORTANT:
For site consistency, ensure any and all Entity reference revisions
(i.e. paragraphs) on the node are set to "Paragraphs (EXPERIMENTAL)"
in the form display.
The bos_admin
theme makes some changes to the node administration forms.
Config settings provided by drupal core and drupal contributed modules are moved into a tab
called advanced, and are set as children of the tabs
group as defined above.
This manipulation is done in the bos_admin_form_alter()
hook found in the bos_admin.theme
file at themes/custom/bos_admin
.
The moderation state, revision log note and save / preview / delete buttons are grouped together in a details group and moved to the right sidebar area of the administration form.
This manipulation is done in the bos_admin_form_alter()
hook found in the bos_admin.theme
file at themes/custom/bos_admin
.
Drupal 9 (our current install) uses CKEditor 4; when we move to Drupal 10, it will use CKEditor 5. CKEditor 5 is currently installed in core/modules, but not used.
We currently have 2 or more versions of CKEditor in use, plus extensions of the plugin in 2 other components.
Our current Drupal version (D9) uses CKEditor 4 from the modules/contrib folder.
Once we upgrade to Drupal 10, we will need to move from CKEditor 4 to 5, because Drupal 10 does not support CKEditor 4.
Samples of CKEditor 5 we can explore to integrate/use can be found here:
Building Housing allows constituents to see a full inventory of projects and parcels managed by the city of Boston.
Entry point: /buildinghousing (click show map)
URL pattern: /buildinghousing/{ProjectName}
Custom CSS: docroot/modules/custom/bos_content/modules/node_buildinghousing/css/node_bh_project.css
Field Templates: docroot/modules/custom/bos_content/modules/node_buildinghousing/templates/snippets
Field Formatters: docroot/modules/custom/bos_content/modules/node_buildinghousing/src/Plugin/Field/FieldFormatter
Helper Functions (Pre-process, alters, fields):
docroot/modules/custom/bos_content/modules/node_buildinghousing/node_buildinghousing.module
docroot/modules/custom/bos_content/modules/node_buildinghousing/src/BuildingHousingUtils.php
Parcel Map - settings
Uses BuildingHousingUtils->setParcelGeoPolyData() and ArcGIS to set the polygon geo data for the parcels
Photo gallery - settings
Building Housing Project Map (same as above Map feature)
Project information
Developer Information - Custom field in node_buildinghousing.module
Project Goals
Project Timeline - Altered field to combine other fields
docroot/modules/custom/bos_content/modules/node_buildinghousing/src/Plugin/Field/FieldFormatter/EntityReferenceTaxonomyTermBSPublicStageFormatter.php
docroot/modules/custom/bos_content/modules/node_buildinghousing/src/BuildingHousingUtils.php
Project Type - Custom field in node_buildinghousing.module
Contact information - Custom field in node_buildinghousing.module
Email sign-up - Custom field in node_buildinghousing.module
Feedback form - settings
The largest on-page component for a Building Housing Project record in Drupal is its timeline. The timeline visually displays a Project's past and projected events & information in a time-ordered list.
The main code is contained here:
docroot/modules/custom/bos_content/modules/node_buildinghousing/src/Plugin/Field/FieldFormatter/EntityReferenceTaxonomyTermBSPublicStageFormatter.php
Various templates are defined to create each timeline article, one template per timeline type.
Timeline items are gathered from various places in the system.
- text posts are items in the field_bh_text_updates
field of the bh_update
entity. Tiles are inserted using the messages create date for ordering. If the message is updated in SF, it continues to use the original created date. If the message is deleted in SF it should be deleted from the timeline. Posts are sourced from Salesforce Chatter and imported using .
- documents are file objects which are referenced as items in the field_bh_attachment
field of the bh_project
. Tiles are inserted using the create date of the attachment for ordering. If Drupal detects an update to the attachment in SF, it continues to use the original created date for ordering. Documents are sourced from Salesforce and imported using .
- A single RFP tile is created if the field_bh_rfp_issued_date field is not empty. The tile is inserted using the date in that field. If the date is changed (or deleted) in SF, the RFP tile is moved on, or removed from, the timeline. The RFP date is sourced from Salesforce during the scheduled sync.
- Meetings are bh_meeting objects referenced as items in the field_bh_update_ref field of the bh_update entity. Tiles are inserted using the meeting's start date. If the meeting is updated in SF, it moves accordingly in the timeline. Meetings are sourced from Salesforce and imported during the scheduled sync.
- Stages are defined as items in the bh_public_stage taxonomy. Timeline icons are controlled by CSS and are defined in the getStageIcon() function in EntityReferenceTaxonomyTermBSPublicStageFormatter.php.
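The gathering-and-ordering rules above can be sketched as follows. This is illustrative JavaScript only; the real implementation is the PHP field formatter, and all function and field names here are simplified stand-ins.

```javascript
// Sketch of timeline assembly: each source contributes tiles with an
// ordering date, then everything is sorted into one chronological list.
function buildTimeline({ textUpdates = [], attachments = [], rfpIssuedDate = null, meetings = [] }) {
  const tiles = [];
  // Chatter text posts: ordered by their original created date,
  // even if the message is later edited in Salesforce.
  for (const post of textUpdates) tiles.push({ type: 'text', date: post.created });
  // Document attachments: also keep the original created date on update.
  for (const doc of attachments) tiles.push({ type: 'document', date: doc.created });
  // A single RFP tile, only when an issued date exists.
  if (rfpIssuedDate) tiles.push({ type: 'rfp', date: rfpIssuedDate });
  // Meetings: ordered by start date, so an updated meeting moves in the list.
  for (const m of meetings) tiles.push({ type: 'meeting', date: m.start });
  return tiles.sort((a, b) => new Date(a.date) - new Date(b.date));
}
```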
Converting D7 structures to D8
1. Log in to the website, go to the paragraphs admin page (/admin/structure/paragraphs_type), and delete the paragraph you want to work on.
2. Step 1 may delete some of the field.storage dependencies (field definitions), so re-import all the bos_components module config to make sure all the shared config is back in the database: lando drush config-import --partial --source=/app/docroot/modules/custom/bos_components/config/install
3. Create the module scaffolding using drush, for example: lando drush componetize bos_discussion_topic --components=discussion_topic
4. Add hook_theme() to the .module file to connect to the paragraph template.
5. Copy the corresponding paragraph template from boston.gov-d8/docroot/themes/preConversion/component and put it in the scaffolding that the drush command from step 3 created: docroot/modules/custom/bos_components/modules/bos_discussion_topic/templates
6. Enable the module: lando drush en bos_discussion_topic
7. In the Drupal UI, add the new bundle to the field_components paragraph types list for the Test Component Page content type: /admin/structure/types/manage/test_component_page/fields/node.test_component_page.field_components
8. Create a test page with the component added to review the admin UI and display.
Importing a single config file:
Exporting database config directly to your module (important: the config file must be referenced in your module's info file under the config-devel key): lando drush config-devel-export bos_cabinet
Once you have the libraries file open, add an entry with the name of your application and add / attach necessary items to your application. For example, the application “My Neighborhood” would have a library entry as such…
A DND Development Officer can create a Meeting object in Salesforce, with all the meeting information, and attach it to a Project in Salesforce. When cron runs on Drupal, it syncs any new or updated Meetings from Salesforce into a Drupal BH Meeting. After the new meeting is created in Drupal, we also create a Drupal Event so that the meeting is listed on the Boston.gov Events page. The Meeting is then also displayed on the corresponding Drupal BH Project.
BH Meeting Content Type: /admin/structure/types/manage/bh_meeting
Salesforce Mappings: /admin/structure/salesforce/mappings/manage/bh_community_meeting_event
Templates:
docroot/modules/custom/bos_content/modules/node_buildinghousing/templates/snippets/bh-project-meeting-notice.html.twig
docroot/modules/custom/bos_content/modules/node_buildinghousing/templates/snippets/bh-project-timeline-meeting.html.twig
Helper Functions (Pre-process, alters):
docroot/modules/custom/bos_content/modules/node_buildinghousing/node_buildinghousing.module
docroot/modules/custom/bos_content/modules/node_buildinghousing/src/BuildingHousingUtils.php
This feature allows Drupal entities to sync back and forth with Salesforce objects via the Drupal Salesforce module. It is primarily used by DND so that the data and access already on DND's Salesforce server sync automatically with the Boston.gov Drupal site. This is controlled by field-mapping configurations in the Drupal Salesforce module. Currently, all syncing happens on the Drupal cron run, every 5 minutes, and only for updated objects.
Salesforce Mappings:
Building Housing - Projects (/admin/structure/salesforce/mappings/manage/building_housing_projects/fields)
bh_project --> Project__c
Building Housing - Website Update (/admin/structure/salesforce/mappings/manage/bh_website_update/fields)
bh_update --> Website_Update__c
Building Housing - Project Update (/admin/structure/salesforce/mappings/manage/building_housing_project_update/fields)
bh_update --> Update__c
BH Community Meeting Event (/admin/structure/salesforce/mappings/manage/bh_community_meeting_event/fields)
bh_meeting --> Community_Meeting_Event__c
Building Housing - Parcels (/admin/structure/salesforce/mappings/manage/building_housing_parcels/fields)
bh_parcel --> Parcel__c
Building Housing - Parcels-Project Assoc (/admin/structure/salesforce/mappings/manage/bh_parcel_project_assoc/fields)
bh_parcel_project_assoc --> ParcelProject_Association__c
Salesforce Settings:
Building Housing - Projects (/admin/structure/salesforce/mappings/manage/building_housing_projects)
Building Housing - Website Update (/admin/structure/salesforce/mappings/manage/bh_website_update)
Building Housing - Project Update (/admin/structure/salesforce/mappings/manage/building_housing_project_update)
BH Community Meeting Event (/admin/structure/salesforce/mappings/manage/bh_community_meeting_event)
Building Housing - Parcels (/admin/structure/salesforce/mappings/manage/building_housing_parcels)
Building Housing - Parcels-Project Assoc (/admin/structure/salesforce/mappings/manage/bh_parcel_project_assoc)
Troubleshoot Salesforce connection issues
If Drupal and Salesforce are not connecting or syncing, check the authorization from Drupal to Salesforce (/admin/config/salesforce/authorize/list). You may need to re-auth, or even make a new connection if you need to connect to a lower development or testing environment in Salesforce. If you need access to an instance, contact DND's Salesforce developer/administrator.
If a single item is not syncing, or if you need info about the Drupal-to-Salesforce connection, you can view the list on this admin page. If you edit the instance, you then have the option to force-pull or force-push the Drupal entity against the Salesforce object. If there is an issue, you should see an error message in the response. You can also find other useful info, such as timestamps and record IDs.
A generic form that is attached to email addresses found on boston.gov and handles sending emails to those addresses.
This app uses the bos_email
provided services as described here.
A contact us form template is maintained (within script tags) in bos_theme/templates/snippets/contactFormTemplate.html.twig
and is included on every boston.gov (Drupal) page.
The patterns library contact form javascript function start()
(in scripts/components/contact.js
) is executed when a boston.gov (Drupal) page loads.
The start()
function scans the completed page looking for email addresses anywhere in the html being served. Essentially, it:
- replaces the default mailto
directive for each email address with a click event listener which will trigger the handleEmailClick()
function, and
- attaches a click event listener to the forms' submit button and calls the handleFormSubmit()
function when the user clicks the submit button on the contact form.
TODO: The email-address scan is run once, when the page has finished loading.
It should be extended to also run when AJAX events return data to the page (since email addresses can be served by XHR/AJAX as well as the initial document load).
@see DIG-910
When an email address is clicked on the page, handleEmailClick()
- copies the template form from the script tags,
- inserts the correct email recipient to a hidden field,
- inserts all this onto the page and displays the contact us form, and
- in the background makes an ajax request to /rest/email_token/create,
generating and saving a unique "session" token in the form.
When the submit button is clicked, handleFormSubmit()
validates the form, and then submits, along with an authorization token to /rest/email_session/contactform
.
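The two AJAX steps above can be sketched as below. The endpoint paths come from this page; the helper names, token shape, and authorization header are assumptions for illustration, not the real bos_email implementation.

```javascript
// "Visually valid" email pattern check, as described later in this page.
const EMAIL_PATTERN = /^[^@\s]+@[^@\s]+\.[^@\s]+$/;

// Step run when the form opens: request a one-time session token
// in the background while the contact form is displayed.
async function createSessionToken(fetchFn) {
  const res = await fetchFn('/rest/email_token/create', { method: 'POST' });
  const { token } = await res.json();
  return token;
}

// Step run on submit: validate, then post the form along with the token.
async function submitContactForm(form, token, fetchFn) {
  if (!EMAIL_PATTERN.test(form.sender)) throw new Error('Invalid sender address');
  return fetchFn('/rest/email_session/contactform', {
    method: 'POST',
    headers: { Authorization: `Token ${token}` }, // assumed header shape
    body: JSON.stringify(form),
  });
}
```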
The PostmarkAPI.php
in the bos_email
module provides a "guaranteed delivery" style service. It tries to send the email via the Postmark API and, if that fails for some reason, queues the email for later delivery in a Drupal queue.
Drupal will retry until the email is accepted by the Postmark service.
Email tracking within Drupal was discontinued with the move to the Postmark service; all logging can be obtained from the Postmark UI/console.
Emails which fail to send can be viewed in the email_contactform queue.
This requirement could be obsolete, a holdover from earlier versions of the form. We could consider removing this "feature" and reverting to having the sender be the email address provided by the constituent; that way, the CoB employee/recipient could simply reply to the email.
Emails are sent from an email address that is generated for each email sent. The format of the email address is:
{random_string}@contactform.boston.gov
We don't send the email from the original sender's email address as that could be a vector for an email spoofing attack.
The reply_to header of the email is set to the constituent's email first and the unique contact-form address second. When someone replies to the email, the to address is therefore the constituent's email first and the unique address second. This delivers two copies of the reply: one goes directly to the constituent, and one to the contact form API. We log the response time using the copy sent to the contact form API. Once the reply is delivered to the constituent, further replies are direct between the constituent and the city.
Diagram of the overall flow (needs updating)
The contact-us email sent from Postmark to the CoB recipient is a plain-text email.
The PostmarkAPI is capable of generating HTML emails; a nicer experience for CoB staff would be to receive an HTML email.
As well as the tokens etc., we could consider introducing an IP lock of, say, 60 seconds after a contact form submission is made (the timer should be managed server-side, inside the endpoint). It should not affect genuine users, but it would minimize the impact (and success) of flood attacks and relay exploits on the endpoint. If a rapid second submission arrived from the same IP address, we would flash a warning back to the user to try again after 60 seconds.
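A minimal sketch of the proposed lock, assuming a per-IP timestamp map held server-side. This illustrates the idea only; it is not code from the real endpoint.

```javascript
const LOCK_SECONDS = 60;
const lastSubmission = new Map(); // ip -> epoch seconds of last accepted post

// Returns true if the submission is allowed, false if the IP is still
// inside the cooldown window (flash "try again in 60 seconds" to the user).
function allowSubmission(ip, nowSeconds) {
  const last = lastSubmission.get(ip);
  if (last !== undefined && nowSeconds - last < LOCK_SECONDS) {
    return false;
  }
  lastSubmission.set(ip, nowSeconds);
  return true;
}
```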
At the moment, there is no confirmation of a successful submission other than the on-screen notification. We have the submitter's email address, so we could send an email confirming the submission (see next enhancement).
There is little in the way of validation of the email the submitter provides as a contact for responses.
The JavaScript validation process checks that a "visually" valid email pattern is entered (i.e. it is an email-pattern string) but does not validate that the email address exists.
In 2022, ticket DIG-438 added a second email address box to try to prevent typos in the email address. While it helped a lot, it is not 100% effective, and some emails from innocent errors and malicious actors (e.g. spammers) still slip through.
In 2023, DIG-1675 identified modules that can check for DNS validity, disposable emails, and email blacklists. These would further reduce the number of invalid sender addresses provided.
If we send email confirmations, we could use that process to determine whether the email address is active and free of temporary errors (mailbox full, etc.). Since we are using AJAX, it would be simple to send the confirmation email, wait a period of time (say 10 seconds), and then query Postmark to see whether the email was delivered. If it was, return success; if not, flag it to the submitter on the form. This would not be 100% effective, because some errors take time to be reported and we cannot wait too long during validation. The other two approaches above should still be implemented, to filter out malicious actions and to detect innocent errors before consuming email server resources.
A Drupal module that displays various pieces of information about the entered address.
The My Neighborhood application is a Drupal component that can be added to any page on boston.gov. Currently, it lives here: https://www.boston.gov/my-neighborhood
There are two Drupal endpoints associated with this application:
One for receiving the updated records on a nightly basis (updates): https://www.boston.gov/rest/mnl/update
One for receiving the full load of records: https://www.boston.gov/rest/mnl/import
This page has information on the status of Drupal import scripts that run nightly and once a month.
The data used in this application come from a variety of GIS data sources. This spreadsheet lists them all in addition to the workflow that brings each one into Civis.
These datasources are combined with the SAM address dataset in Civis. This workflow is the one that combines all the datasets.
Every night, the My Neighborhood workflow runs and sends any records that have been updated or changed to boston.gov. Once a month, on the 1st, the workflow sends the entire load of records to Drupal.
NOTES: Two cards show hard-coded data that does not sync with Civis: the mayor's name in the "YOUR MAYOR" card (in the mnl_config.js file) and the "YOUR AT-LARGE CITY COUNCILORS" card (in the Representation.js file).
In addition to the hard-coded items, there is also a data dependency on ReCollect. We built an endpoint in Drupal ("rest/recollect", in the bos_mnl module) that queries the ReCollect API with the user's entered address and returns the next trash and recycling dates for that specific address.
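A hypothetical client of that endpoint. The "rest/recollect" path is from this page; the query parameter name and the response shape are assumptions for illustration.

```javascript
// Query the Drupal ReCollect proxy for the next collection dates at an address.
async function nextCollection(address, fetchFn) {
  const url = '/rest/recollect?address=' + encodeURIComponent(address);
  const res = await fetchFn(url);
  const data = await res.json();
  // Assumed response shape: { trash: 'YYYY-MM-DD', recycling: 'YYYY-MM-DD' }
  return { trash: data.trash, recycling: data.recycling };
}
```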
Jonathan Porter (Analytics division of DoIT) is the best point of contact for the Civis portion. Matt McGowan (Digital division of DoIT) is the best point of contact for the Drupal REST information.
Product
Product requirements (historical and current)
Historical work that *potentially* isn't reflected above: https://drive.google.com/drive/u/0/folders/1QVSDn6CgJiMY7dbYEiKKDRRX1cYOEHmu
This document outlines the process for getting the Budget Fiscal Year website together. It has been shared with OBM (Office of Budget Management).
Drupal Building Housing records are synchronized from MOH Salesforce on a schedule. Salesforce is the authoritative source, and data should not be added or changed in Drupal.
There are 6 synchronizations with Salesforce, which run in the following order on every cron run (so every 5 minutes). The order is important because Projects must be created before Attachments and Website Updates, which in turn must exist before Meetings and Chatter postings.
Building Housing - Projects
bh_project
Project__c
Building Housing - Website Update
bh_update
Website_Update__c
Building Housing - Project Update
bh_update
Update__c
BH Community Meeting Event
bh_meeting
Community_Meeting_Event__c
Building Housing - Parcels
bh_parcel
Parcel__c
Building Housing - Parcels-Project Assoc
bh_parcel_project_assoc
ParcelProject_Association__c
Each synchronization process does the following: a Drupal application runs a Salesforce API object query to identify any records in the SF object which have been deleted, or whose last-updated date is after the last-updated date Drupal has stored for that SF object. The identified records are then added, updated, or deleted in Drupal. At the end of the process, Drupal updates its stored last-updated date for that object with the latest SF updated date found in the import. This date is then used as a high-water mark for the next import cycle.
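The high-water-mark cycle can be sketched as follows. The real implementation is PHP inside the Drupal Salesforce module; the function and field names here are illustrative only.

```javascript
// One sync pass for a single SF object type: apply deletions and records
// modified after the stored mark, then return the new high-water mark.
function syncObject(sfRecords, highWaterMark, applyToDrupal) {
  let newMark = highWaterMark;
  for (const rec of sfRecords) {
    // Pick up records deleted in SF, or modified after the stored mark.
    if (rec.deleted || rec.lastModified > highWaterMark) {
      applyToDrupal(rec);
      if (!rec.deleted && rec.lastModified > newMark) newMark = rec.lastModified;
    }
  }
  // The latest SF modification date becomes the mark for the next cron run.
  return newMark;
}
```

ISO-8601 timestamps compare correctly as strings, which keeps the sketch simple.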
This synchronization imports Project records from the Salesforce Project__c object into Drupal's bh_project entity.
This synchronization manages project stages, documents, and messages that appear on the timeline. It extends and replaces the functionality of the Update__c object, which is imported for legacy reasons by Building Housing - Project Update.
There is only ever 1 Website Update (Website_Update__c
) record per Project (Project__c
) record in Salesforce.
There is a rule in Salesforce to stop multiple records which would potentially create confusion for project stages etc.
If multiple Website Update records do exist for a Project in Salesforce, then all records will be imported into Drupal, but ONLY the last (when ordered by createdDate) will be used in the Project Timeline.
There should be no new Update__c records being created in SF. However, there are legacy records containing data which must be included in Drupal. Even though we do not normally expect the sync to process these objects, the code is important if the data is to be recreated accurately and completely (for example if a Salesforce purge is performed).
This handles legacy TextMessages (now use chatter) and document attachments (now use Website Update Attachments).
This synchronization imports Community Meeting event records from the Salesforce Community_Meeting_Event__c object into Drupal's bh_meeting entity.
This is a simple mapping, and the import does little except clean up URLs and address fields.
The bh_meeting record holds a reference to its parent bh_update, which is linked to the bh_project.
If a Meeting event is updated, or deleted in SF, then the associated record will be updated in Drupal, and if necessary will move on the timeline.
-- OUT OF DATE Feb 2023 --
Digital team is responsible for showing unofficial election results on Boston.gov.
On election night, an HTML file of partial results is generated during tabulation and copied to the cityofboston.gov’s web server. The contents of this file are inserted into the Boston.gov unofficial election results page via a client-side AJAX request.
Historically, election night traffic for these results has been more than cityofboston.gov was comfortably able to handle. We previously ran a job on Heroku that copied the page from cityofboston.gov and put it on an S3 bucket, and the Boston.gov page referenced that S3 version.
As of January 2019, we’ve modified both the source file and our Incapsula settings so that Incapsula caches all requests for the file for 60s. This let us turn off the Heroku job.
Building Housing allows constituents to see a full inventory of projects and parcels managed by the city of Boston.
This Drupal app allows residents to browse all active housing, open space, commercial, and to-be-decided (TBD) projects. It also provides information on city-owned land for sale.
Residents can search for a specific project and/or view a map of all projects. Drilling down into a project page displays goals, a timeline, photos, meetings, and more.
Search for Projects on the Building Housing Map
View details of a Project
Auto-create Community Meeting Events from Salesforce on cron (every 5 minutes)
Auto-create and update Projects from Salesforce on cron (every 5 minutes)
Drupal
Salesforce
Google Maps API
node_buildinghousing (docroot/modules/custom/bos_content/modules/node_buildinghousing)
Views
Webforms
Geolocation
Salesforce
The following Drupal Entities are created to warehouse/cache data which originates in Salesforce.
The following nodes have records which are populated (add, update and delete) by mappings between Salesforce and Drupal which are run each cron cycle.
The following taxonomies have been created, and their list items are maintained manually by Drupal developers. Taxonomy items can be added and deleted as needed, but doing so usually requires adjustments to code to work as required.
node
bh_project
The primary content_type for a Building Housing Property. Contains metadata about the Project and links to updates, attachments, parcels, etc.
node
bh_update
Contains information about updates to a project. This includes certain status changes, attached documents, links to community meeting records and comments from CoB Project Managers to insert into the timeline.
node
bh_meeting
Contains information about Community Meetings held by CoB with residents regarding Building Housing Properties.
node
bh_parcel
The official parcel number and top-level info - with GIS coordinates for the parcel.
node
bh_parcel_project_assoc
Possibly deprecated? The parcel number now appears to be saved in the field_bh_parcel_id field of bh_project.
node
bh_contact
Deprecated (no data)
node
bh_account
Deprecated (no data)
taxonomy
bh_project_stage
The overall project stage. (usually when project status = active).
Linked directly from bh_project
.
taxonomy
bh_project_status
The status of the Project.
Linked directly from bh_project
.
taxonomy
bh_funding_stage
The funding stage for the Project.
Linked directly from bh_project
.
taxonomy
bh_project_type
The broad project type for the Project.
Linked directly from bh_project
.
taxonomy
bh_project_update_type
Update type for a bh_update
.
Linked directly from bh_update
.
taxonomy
bh_property_type
The property type.
Linked directly from bh_parcel
.
taxonomy
bh_public_stage
The Project stage as used in the timeline.
Linked directly from bh_project
.
taxonomy
bh_neighborhood
taxonomy
bh_record_type
taxonomy
bh_disposition_type
List of disposition types - for use in map.
view
building_housing
view
bh_maps
view
building_housing_updates
page
buildinghousing/[propertyname]
This is the landing page for information about a property.
This is a customized page for the node bh_project
and contains a timeline and information about parcels.
As of March 2023
The Google reCAPTCHA v3 returns a score for each request without user friction. The score is based on interactions with your site and enables you to take an appropriate action for your site.
Install Google reCAPTCHA
Make sure to register reCAPTCHA v3 keys (Secret and Site) here.
Follow the installation and configuration instructions here to add reCAPTCHA to your Drupal site: https://www.drupal.org/docs/8/modules/recaptcha-v3/installation-and-configuration
Finally, navigate to your site and set up the form(s) that need reCAPTCHA added.
reCAPTCHA analytics
URL to view scores: https://www.google.com/recaptcha/admin/site/430165885
Environment variables
All variables are in Acquia: https://cloud.acquia.com/a/develop/applications/
To update environment variables properly, make sure to update the private repo with the current variable name and then add it to Acquia. Only add the API key to Acquia.
Make sure "recaptcha_v3.settings" is added to .gitignore.
Metrolist allows Boston residents to search for affordable housing. The Search and AMI Estimator experiences are built in React (this repository). The rest of the app is built in Drupal, with the underlying data layer provided by Salesforce. The core UX is composed of the following:
Homepage: links to Search, AMI Estimator, and introductory information. Controlled by Drupal.
Search: lists housing opportunities in a paginated fashion and allows the user to filter by various criteria. Controlled by React. APIs in use: Developments API.
AMI Estimator: takes the user's household income and household size and calculates a recommendation for which housing opportunities to look at. Sub-routes:
/metrolist/ami-estimator/household-income
/metrolist/ami-estimator/disclosure
/metrolist/ami-estimator/result
Controlled by React. APIs in use: AMI API.
Property Pages: route /metrolist/search/housing/[property]?[parameters]. Controlled by Drupal.
Developments API: lists housing opportunities as a JSON object.
AMI API: lists income qualification brackets as a JSON object, taken from HUD (Department of Housing and Urban Development) data.
Prerequisites:
Node.js
Yarn or NPM (These docs use yarn
but it can be substituted for npm
if you prefer.)
Git
Read/write access to CityOfBoston
GitHub
⚠️ Warning: These docs were written for a standalone installation of the Metrolist React codebase, which outputs JavaScript files that can be committed to the Drupal monorepo separately. However, the React codebase has since been subsumed into the monorepo, rendering certain build instructions herein out-of-date. Please refer to the Boston.gov documentation for further instruction.
yarn start
runs:
ipconfig getifaddr en6
(or ipconfig getifaddr en0
if en6
isn’t found), which determines which LAN IP to bind to. This allows testing on mobile devices connected to the same network.
webpack-dev-server
. This compiles the ES6+ JavaScript and starts an HTTP server on port 8080 at the address found in the previous step.
Note: The ipconfig
command has only been tested on a Mac, and it also may not work if your connection isn’t located at en6
or en0
.
This runs webpack-dev-server
without launching a new browser window automatically.
There are Node.js scripts available under _scripts/
to aid development efforts.
Located at _scripts/component.js
, this facilitates CRUD-style operations on components.
This copies everything under _templates/components/Component
to src/components/Widget
and does a case-sensitive find-and-replace on the term “component”, replacing it with your new component’s name. For instance, this index.js
template:
…becomes this:
Subcomponents can also be added. These are useful if you want to encapsulate some functionality inside of a larger component, but this smaller component isn’t useful elsewhere in the app.
This creates the directory src/components/Widget/_WidgetGadget
containing this index.js
:
This renames the directory and does a find-and-replace on its contents.
⚠️ Known issue: The component renaming algorithm does not fully find/replace on subcomponents.
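The renaming rule the script applies can be sketched as below. This is a hypothetical standalone helper; the real logic lives in _scripts/component.js.

```javascript
// Case-sensitive find-and-replace used when stamping a new component
// out of the template: "Component" becomes the new display name,
// "component" becomes its lowercased form (CSS classes, variables).
function renameComponentSource(templateSource, newName) {
  const lower = newName.charAt(0).toLowerCase() + newName.slice(1);
  return templateSource
    .replace(/Component/g, newName)
    .replace(/component/g, lower);
}
```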
The domain from which this data is fetched can be specified with the following environment IDs:
Acquia environment
etc.
The default value is ci
, as that should have the most recent data set in most cases.
Sets the version number for Metrolist in Drupal’s libraries.yml
file and this project’s package.json
file.
Prefer readability for other developers over less typing for yourself.
HTML/CSS:
JavaScript:
Use functional programming principles as often as possible, to aid maintainability and predictability. The basic idea is for every function to produce the same output for a given set of inputs, regardless of when/where/how often it is called. This means a preference for functions taking their values from explicit parameters rather than reading variables from the surrounding scope. Additionally, a function should not produce side effects, e.g. by changing the value of a variable in the surrounding scope.
metrolist/
__mocks__/
: Mocked functions for unit/integration tests.
_scripts/
: CLI tools
_templates/
: Stubbed files for project scaffolding. Used by CLI tools.
coverage/
: Code coverage report. Auto-generated. (.gitignore
’d)
dist/
: Build output. Auto-generated. (.gitignore
’d)
public/
: Static files such as images, favicon, etc. These files are not used by Drupal, which uses its own templating; they are only used in development. Thus, images have to be copied to the appropriate directory prior to deployment.
src/
: React source.
components/
: React components.
globals/
: SASS variables, mixins, etc. which are used cross-component.
util/
: Utility functions.
index.js
: React entrypoint.
index.scss
: App-wide styles. (Use sparingly; prefer component-scoped styles.)
serviceWorker.js
: Service Worker code from Create React App; not currently used.
setupTests.js
: Jest configuration.
_redirects
: Netlify redirects.
DEVNOTES.md
: Notes taken during development.
package.json
: Project metadata/NPM dependencies.
README.md
: Project documentation (this file).
yarn.lock
/package-lock.json
: Yarn/NPM dependency lock file.
Every React component consists of the following structure:
Component/
__tests__
: Integration tests (optional)
Component.scss
: SASS styling
Component.test.js
: Unit test
index.js
: React component
methods.js
: Any methods that don’t need to go in the render function, for tidiness. (optional)
All classes are namespaced with ml- (for Metrolist) to avoid collisions with the main Boston.gov site and/or third-party libraries.
Vanilla BEM (Block-Element-Modifier):
Blocks: Lowercase name (block
)
Elements: two underscores appended to block (block__element
)
Modifiers: two dashes appended to block or element (block--modifier
, block__element--modifier
).
When writing modifiers, ensure the base class is also present; modifiers should not mean anything on their own. This also gives modifiers higher specificity than regular classes, which helps ensure that they actually get applied.
An exception to this would be for mixin classes that are intended to be used broadly. For example, responsive utilities to show/hide elements at different breakpoints:
Don’t reflect the expected DOM structure in class names, as this expectation is likely to break as projects evolve. Only indicate which block owns the element. This allows components to be transposable and avoids extremely long class names.
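As an illustration of the convention, here is a small helper (hypothetical, not part of the codebase) that assembles BEM class strings and always emits the base class alongside any modifier, as the rule above requires:

```javascript
// Build a BEM class string from block, optional element, and modifiers.
// Modifiers never appear alone: the base class is always included too.
function bem(block, element, ...modifiers) {
  const base = element ? `${block}__${element}` : block;
  return [base, ...modifiers.map((m) => `${base}--${m}`)].join(' ');
}
```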
Always include parentheses when calling mixins, even if they have no arguments.
Don’t declare margins directly on components, only in wrappers.
Currently this is used for previewing on Netlify, to get a live URL up without going through the lengthy Travis and Acquia build process.
To make asset URLs work both locally and on Drupal, all references to /images/
get find-and-replaced to https://assets.boston.gov/icons/metrolist/
when building for production. Note that this requires assets to be uploaded to assets.boston.gov
first, by someone with appropriate access. If you want to look at a production build without uploading to assets.boston.gov
first, you can run a staging build instead.
This is identical to the production build, except Webpack replaces references to /images/
with /modules/custom/bos_components/modules/bos_web_app/apps/metrolist/images/
. This is where images normally wind up when running yarn copy:drupal
.
Aliases exist to avoid long pathnames, e.g. import '@components/Foo'
instead of import '../../../components/Foo'
. Any time an alias is added or removed, three configuration files have to be updated: webpack.config.js
for compilation, jest.config.js
for testing, and .eslintrc.js
for linting. Each one has a slightly different syntax but they all boil down to JSON key-value pairs of the form [alias] → [full path]. Here are the same aliases defined across all three configs:
webpack.config.js
:
jest.config.js
:
.eslintrc.js
:
All mailto:
links require the class hide-form
to be set, otherwise they will trigger the generic feedback form.
Every component should have its own unit test in the same directory. This is enforced by the Component test stub (_templates/components/Component/Component.test.js
), which contains the following:
So when running yarn component add
, you automatically generate a test that fails by default. You have to manually uncomment the call to render
(and ideally write more specific tests) in order to pass. This is designed to be annoying so it isn’t neglected.
When testing interactions between two or more components, or for utility functions (src/util
), put tests in a nested __tests__
directory.
One example of this is the Search component, which contains a separate test file for every FiltersPanel + ResultsPanel interaction:
You have to run a browser without CORS restrictions enabled. For Chrome on macOS, you can add this to your ~/.bash_profile
, ~/.zshrc
, or equivalent for convenience:
This will prevent you from running your normal Chrome profile. To run both simultaneously, install an alternate Chrome such as Canary or Chromium. For Canary you would use this command instead:
Then in a terminal, just type chrome-insecure and you will get a separate window with no security and no user profile attached. Sometimes Google changes the commands necessary to disable security, so check around online if this command doesn’t work for you. Unfortunately, no extensions will be installed for this profile, and if you install any they will only exist for that session, since your data directory is under /tmp/.
1. Change base.href to the Google Translate iframe domain,
2. Perform the navigation,
3. Change base.href back to boston.gov immediately afterward to make sure normal links and assets don’t break.
To do this automatically, there is a custom Metrolist Link component which wraps the React Router Link and attaches a click handler with the workaround logic. So any time you want to use React Router’s Link, you need to import and use @components/Link instead. This is the technique used by the Search component to link to the different pages of results.
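A minimal sketch of that click-handler logic follows. The function name, signature, and the way the Translate origin is obtained are all illustrative, not the real @components/Link API:

```javascript
// Hypothetical sketch of the base.href workaround performed on click.
function navigateWithBaseHrefWorkaround( history, path, translateOrigin ) {
  const base = document.querySelector( 'base' );
  const originalHref = base.href;

  // 1. Point base.href at the Google Translate iframe domain so the
  //    navigation is treated as same-origin inside the iframe.
  base.href = translateOrigin;

  // 2. Perform the navigation.
  history.push( path );

  // 3. Restore base.href immediately so normal links and assets keep
  //    resolving against boston.gov.
  base.href = originalHref;
}
```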
If you instead want to use React Router’s history.push (or the browser-native history.pushState) manually, you can import these helper functions individually:
This is the technique used by the AMI Estimator component to navigate between the different steps in the form.
As you can see, the hierarchical relationship between Widget and Gadget is reflected in the naming. The React display name is WidgetGadget, and the CSS class name uses an element gadget belonging to the widget block, i.e. widget__gadget.
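The convention can be expressed as a tiny helper. This is purely illustrative; the codebase may build these names by hand rather than with functions like these:

```javascript
// BEM class for a subcomponent: the element belongs to its parent’s block.
function bemClass( block, element ) {
  return element ? `${block}__${element}` : block;
}

// React display name concatenates the PascalCase hierarchy.
function displayName( parent, child ) {
  return `${parent}${child}`;
}
```

So bemClass( 'widget', 'gadget' ) produces 'widget__gadget', and displayName( 'Widget', 'Gadget' ) produces 'WidgetGadget'.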
Due to , the AMI API is not fetched live from the AMI Estimator. Instead, it is fetched at compile time using this script, which caches it as a local JSON file at src/components/AmiEstimator/ami-definitions.json.
www or prod →
ci →
dev2 →
Consistent and readable JavaScript formatting is enforced by + an ESLint auto-formatter of your choice, such as .
.env, .env.development, .env.production: configuration (environment variables).
.eslintrc.js: configuration.
.travis.yml: configuration.
babel.config.js: configuration.
postcss.config.js: configuration. Used to postprocess CSS output.
webpack.config.js, webpack.production.js, webpack.staging.js: configurations for different environments.
Avoid parent selectors when constructing BEM classes. This keeps the full selector searchable in IDEs. (Though there is a VS Code extension, , that solves this problem, we can’t assume everyone will have it or VS Code installed.)
is installed to enable the same CSS helper functions that are used on Patterns, such as font-size: responsive 16px 24px.
This first runs a production Webpack build (referencing webpack.config.js), then copies the result of that build to ../boston.gov-d8/docroot/modules/custom/bos_components/modules/bos_web_app/apps/metrolist/, replacing whatever was there beforehand. This requires you to have the boston.gov-d8 repo checked out and up to date one directory up from the project root.
We’re using + to ensure that future development doesn’t break existing functionality.
We’re using React Router for routing, which provides a Link component to use in place of an a element. Link uses history.pushState under the hood, but this will fail inside the Google Translate iframe due to cross-domain security features in the browser. (For an in-depth technical explanation of why this happens, see .) So in order to make app navigation work again, we have to hack around the issue like so:
Option         Description
-m, --major    Sets the left version part, e.g. 2.x.x. If omitted, major will be taken from the existing Metrolist version.
-n, --minor    Sets the middle version part, e.g. x.5.x. If omitted, minor will be a hash of index.bundle.js for cache-busting.
-p, --patch    Sets the right version part, e.g. x.x.3289. If omitted while minor is set, patch will be a hash of index.bundle.js for cache-busting. If omitted while minor is not set, patch will not be set.
-f, --force    Allow downgrading of the Metrolist version.
--help         This screen.