Decoupled Drupal: POWDR’s Front End Architecture Build

August 24, 2017

This is the last installment in our series on the decoupled Drupal project we’ve been working on with Elevated Third and Hoorooh Digital. The project we’re documenting was one we worked on for POWDR, one of the largest ski operators in North America.

The first installment in the series was A Deep Dive into a Decoupled Drupal 8 Project. Part two offered a radical change of altitude, from Andy Mead, Drupal Developer at Elevated Third: Decoupled Drupal: A 10,000-foot View. Part three covered Decoupled Drupal Technologies and Techniques.

In this final installment, Denny Cunningham, Lead Front End Developer at Hoorooh Digital, discusses the three main areas that needed to be addressed during the build of POWDR’s front end architecture: Routing & Syncing with the API, Component Driven Content, and the Build Process & Tools.

Introduction

For a front end developer, there’s no shortage of tools available. Every day there are new tools emerging, existing tools changing, and old tools being deprecated.

One of the biggest challenges is keeping your arsenal up to date and making decisions that ensure your clients stay at the forefront of tech. Many times I’m presented with designs and a tight deadline long before a back end technology has even been considered. In the past, the best technique was to forge ahead building the front end, then set aside time later to integrate the front end code into any number of CMS or ecommerce solutions.

It’s rare that this process would go smoothly. More often than not, the front end would end up an inefficient hodgepodge of what was originally written. In addition, the integrated front end, and all front end code written moving forward, would be intimately tied to the constraints of the chosen backend technology.

With the emergence of APIs driving just about every type of data and third-party tool you can think of, why not manage content and ecommerce data the same way? This allows the front end to do what it does best — render content in a clear and concise manner, with smooth interactions and transitions, all within a pleasant UI experience.

In this article we’ll discuss the three main areas that needed to be addressed during the build of POWDR’s front end architecture:

  • Routing & Syncing with the API
  • Component Driven Content
  • Build Process & Tools

Routing & Syncing with the API

Regardless of the front end tools chosen, it’s important to figure out a plan for routing, and how the chosen routes will sync with the API.

In our case, since we would be using Angular, Angular UI Router was an easy choice. UI Router lets you set up simple routing with ease, or take it to the next level and create complex nested routing of your application’s controllers and templates.

An easy way to determine a plan for your routing is to analyze a project’s UX. In our case, we determined that the application would require up to four levels of routing: site-section, main-page, sub-content, and content-trays.

My personal preference is to count the homepage as “level 0,” and load homepage content when rendering the wrapper of the site. This approach allows us to nest all other levels under our homepage. It also makes the homepage easily accessible even when a user loads the site from a deep link elsewhere within the website.
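
To make that structure concrete, here is a minimal UI Router sketch of those levels (the state names, URLs, and templates are illustrative, not POWDR’s actual configuration):

    import * as angular from 'angular';
    import 'angular-ui-router';

    angular.module('app', ['ui.router'])
      .config(['$stateProvider', ($stateProvider: any) => {
        $stateProvider
          // Level 0: the homepage doubles as the persistent site wrapper.
          .state('home', { url: '/', templateUrl: 'home.html' })
          // Level 1: site-section.
          .state('home.section', { url: 'section/:sectionId', templateUrl: 'section.html' })
          // Level 2: main-page.
          .state('home.section.page', { url: '/:pageId', templateUrl: 'page.html' })
          // Level 3: sub-content.
          .state('home.section.page.sub', { url: '/:subId', templateUrl: 'sub.html' })
          // Level 4: content-tray, opened over the page.
          .state('home.section.page.sub.tray', { url: '/tray/:trayId', templateUrl: 'tray.html' });
      }]);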

By nesting the deeper layers within each other, you can create shared content spanning any number of pages, and lessen the load when transitioning between states. Since our information architecture needed to be maintained by the CMS, we worked with Elevated Third to come up with an initial call from the API on site load, which gathers a tree structure of routes, persistent wrapper content, and our homepage content.*

Sticking with the decoupled philosophy, we decided that the tool used in Drupal for managing the information architecture and content would not be the front end team’s concern as long as we were in agreement with the backend team on the JSON structure of the API calls. In our case, we iterated over the nested tree structure to configure our routing and generate our navigation structure for the entire application. This set the stage for all the interactions between page states.
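
As a rough illustration, the registration step might walk a tree like this (the node shape and field names are a hypothetical payload, not the actual JSON we agreed on with the back end team):

    // Hypothetical shape of one node in the route tree returned by the initial API call.
    interface RouteNode {
      name: string;        // becomes one segment of the ui-router state name
      url: string;
      component: string;   // which front end component renders this level
      children?: RouteNode[];
    }

    // Recursively register each node, nesting children under their parent state.
    function registerRoutes($stateProvider: any, nodes: RouteNode[], parent = ''): void {
      for (const node of nodes) {
        const name = parent ? `${parent}.${node.name}` : node.name;
        $stateProvider.state(name, {
          url: node.url,
          template: `<${node.component}></${node.component}>`,
        });
        if (node.children) {
          registerRoutes($stateProvider, node.children, name);
        }
      }
    }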

With all of the routing configured, each state’s levels are able to share logic for determining which API call to make next in order to request the needed content. Next, we worked with Elevated Third to map out the nuances of the level structure so that as users navigate between routes, they never have to request content that has already been requested.

With a mix of tracking which levels are being navigated and Angular’s caching, we were able to cleanly transition between states and only request the minimal amount of content needed for the user’s latest interaction.
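
A simplified version of that idea is a content service that consults an in-memory cache, keyed by route, before making a request (the service name and endpoint are invented for illustration):

    import * as angular from 'angular';

    // Illustrative content service: a level's content is requested only once,
    // then served from the in-memory cache on subsequent visits. ($http's own
    // cache: true option is a complementary, built-in mechanism.)
    angular.module('app').factory('contentService', ['$http', '$q',
      ($http: any, $q: any) => {
        const cache: Record<string, any> = {};
        return {
          getContent(routePath: string) {
            if (cache[routePath]) {
              return $q.resolve(cache[routePath]); // no new request needed
            }
            return $http.get('/api/content', { params: { path: routePath } })
              .then((response: any) => (cache[routePath] = response.data));
          },
        };
      }]);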

Component Driven Content

With other decoupled systems I’ve architected, when it came to making API requests for page content, we were primarily requesting large chunks of markup or markdown to render within the level structure. However, with the release of Drupal 8, and with components being one of the biggest new features in Angular 2, we decided to take the opportunity and challenge ourselves to create a component-based architecture.

This approach produced highly customizable components, which opened our eyes to the possibilities for future development. Again, I’m not going to spend much time on the back end development, other than to say that we used paragraph types in Drupal that map to our front end components.

After Elevated Third taught us how to build the first few paragraph types by customizing the field settings, our front end team was able to build out the remaining paragraph types with little support from the back end team, and tying our front end components to the CMS was easy.
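
For example, a single paragraph arriving from the API might look something like this (the field names and values are invented for illustration); the front end only needs to know which component the paragraph type maps to:

    // Hypothetical JSON for one paragraph as delivered by the Drupal API.
    const paragraph = {
      type: 'hero_banner',                                    // Drupal paragraph type
      settings: { theme: 'dark', fullWidth: true },           // field settings
      data: { heading: 'Ski POWDR', image: '/img/hero.jpg' }, // field content
      children: [],                                           // nested child paragraphs
    };
    // On the front end, 'hero_banner' maps to a <hero-banner> component.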

Automating the relationship of the back end data to the components was, along with routing, one of the most important aspects of this project. Components were not the only challenge, but they were the first where we knew what we wanted to do, but not exactly how we would accomplish it.

When we started this project, Angular 2 was still in beta, but there was a stable release of Angular 1.5, which made Angular 2-style components available for use. This release allowed us to build with the knowledge that the new tools were coming over the horizon, and to keep up with where the leading frameworks were heading.
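
In Angular 1.5 terms, each of those building blocks is registered with the then-new .component() helper. A minimal example, with illustrative names and bindings:

    import * as angular from 'angular';

    // A minimal Angular 1.5 component: the front end counterpart of a
    // Drupal paragraph type (names and bindings are illustrative).
    angular.module('app').component('heroBanner', {
      bindings: { settings: '<', data: '<' },
      template: `
        <section class="hero" ng-class="$ctrl.settings.theme">
          <h1>{{ $ctrl.data.heading }}</h1>
        </section>`,
    });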

We knew we would be triggering components based on their naming, just like directives in Angular 1. In past projects, we had embedded markup in the CMS with Angular directives, so I knew that compiling the content coming through from Drupal would be key. However, triggering a compiled bit of markup versus triggering the generation of a component based solely on a name passed by the CMS had us perplexed.

We came up with something we called the “component injection service.” Using Angular’s dependency injection, we made the service available anywhere we needed to inject components within the application. The service would accept a component or collection of components from the CMS, along with a collection of settings, data, and, in some cases, groups of nested child components.

The service would then convert the name of the Drupal paragraph type to its associated component name. Finally, the service would create a new Angular scope merged with the component’s settings via data binding, generate the HTML element and attributes of the component, compile it, and render it on the page.**
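
A stripped-down sketch of that flow, assuming the illustrative payload shape shown earlier, might look like this (the idea, not the production service):

    import * as angular from 'angular';

    // Stripped-down sketch of the component injection service idea.
    angular.module('app').factory('componentInjector', ['$compile', '$rootScope',
      ($compile: any, $rootScope: any) => ({
        inject(host: any, paragraph: any) {
          // Convert the paragraph type to an element name: 'hero_banner' -> 'hero-banner'.
          const tag = paragraph.type.replace(/_/g, '-');
          // Create a fresh scope and merge in the component's settings and data.
          const scope = $rootScope.$new(true);
          angular.extend(scope, { settings: paragraph.settings, data: paragraph.data });
          // Generate the element and attributes, compile against the scope, and render.
          const el = angular.element(`<${tag} settings="settings" data="data"></${tag}>`);
          host.append($compile(el)(scope));
          return scope; // kept by the caller so it can be destroyed later (see second footnote)
        },
      })]);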

In the past, we’ve built projects using Ember, Backbone, and custom frameworks. However, this was the first project where we utilized components, and the benefits were hugely apparent. We have yet to use React on a project, but having spun up a couple of React scaffolds, I could see a similar approach working with React.

Build Process & Tools

Note: As I began writing this segment, I started going through the plethora of tools that make up the front end application. I soon realized it would be far too long and could be an entire article in itself. Instead, I decided to write a more general overview.

Because POWDR’s front end architecture requires many websites sharing the same codebase, we created a development and distribution process. When the developer runs the application locally, they must specify a flag for the client property they would like to build, and the environment they would like to point to. For example: serve (with or without :dist) --property=propertyname --env=dev.

The build process then looks at the config files for that property and determines which scripts, plug-ins, and components to include in the build. It then gathers the assets and styles for the site and processes them for the build. The big difference in the process is whether the developer is running locally or packaging a site to be deployed to the Acquia server.
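
In practice, that kind of per-property switching can be as simple as parsing the flags and loading a matching config file. A rough sketch, with an invented file layout:

    // Rough sketch of flag-driven property builds, e.g.
    //   serve --property=propertyname --env=dev
    import * as fs from 'fs';

    const args = new Map(process.argv.slice(2)
      .map(arg => arg.replace(/^--/, '').split('=') as [string, string]));

    const property = args.get('property') || 'default';
    const env = args.get('env') || 'dev';

    // Each property's config lists the scripts, plug-ins, and components to include.
    const config = JSON.parse(
      fs.readFileSync(`properties/${property}/config.${env}.json`, 'utf8'));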

For example, when running locally, tools like the CSS preprocessor and TypeScript compiler watch for changes, so that when the developer edits a file the change is processed and the browser automatically reloads for testing.

When running the dist step, the preprocessed styles and TypeScript are compiled to CSS and JavaScript, minified, cache-busted, and prepared for deployment. The entire suite of websites is built from a single index.html template file that is customized based on the property being built. This step is also where the proper Google Analytics and Google Tag Manager accounts are configured, along with tools that help older search engines that can’t handle single-page applications index all the pages available via Angular’s routes.
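
Continuing the sketch above, the dist step might stamp those property-specific values into the shared template roughly like this (the template tokens are invented):

    import * as fs from 'fs';

    // Continuing the sketch above: at dist time, stamp property-specific
    // values into the shared index.html template (tokens are invented).
    const html = fs.readFileSync('src/index.html', 'utf8')
      .replace('{{GA_ID}}', config.analytics.googleAnalyticsId)
      .replace('{{GTM_ID}}', config.analytics.googleTagManagerId);
    fs.writeFileSync(`dist/${property}/index.html`, html);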

In Conclusion

In the past 15 years or so, I’ve worked in ColdFusion, WAMP, LAMP, Flash/ActionScript, VB.NET, C#, Ruby on Rails, and Node environments. I’ve built sites in Drupal, WordPress, Joomla, Sitecore, Contentful, and more proprietary CMSs than I can remember.

But when it comes down to it, I’m a front end developer and that’s what I’m most passionate about. I’ve always enjoyed working with back end developers to create the best system possible for our clients.

Yet it never fails: the backend can have huge impacts on how I do what I do best. With decoupled systems, I can prototype an API using Node so my front end developers can get started as soon as the UX and designs hit our inbox, while backend technologies are still being scrutinized. The chosen backend technology is not going to change the fact that the front end will be built with the latest version of HTML/CSS/JavaScript.

The separation of concerns provided by decoupling allows dev-ops and sys-ops to focus on platform, data, and business requirements, while the front end team focuses its energy on building the best possible UI experience.

Should you build a decoupled Drupal front end? That’s still a hard question to answer. Decoupled systems are a fairly new concept, presenting new challenges while solving a lot of the problems caused by a front end that’s heavily integrated into the backend system.

From our experience, the most immediate noticeable benefit of going decoupled occurs when you have a network of front end sites and apps, similar or dissimilar, that need to share a front end codebase and share content and data.

Since POWDR has a dozen disparate websites and systems, and that number grows every year, moving everything to a shared platform was an easy choice. POWDR’s resorts and their Woodward camps have a plethora of tools, processes, and potential digital systems that could benefit from sharing data and content between their websites and apps.

An API-driven CMS was clearly a superior solution.

Thanks for reading!

* We’re currently working on an optimization to cache the initial call’s data as a part of the front end build for a more efficient load.

** Post launch, I realized that, because our component injection service was binding and appending data in the DOM, when the user moved to a new section of the site, Angular wasn’t releasing the newly generated scopes for each injected component; only the DOM elements were being released. The memory leak wasn’t noticeable until quite a few page changes, but it was noticeable enough to warrant a refactor. The component service was originally set up so that components had their own scope and no parent scope, in an attempt to maintain the independence of components. However, the easiest way to keep track of the generated scopes was to pass in the scope of whatever was going to host a particular set of components, which served as a handle for garbage collection, releasing each component’s scope upon navigating to another section of the site. Thanks to the front end’s caching layer, performance wasn’t affected, because the components didn’t have to be re-injected upon revisiting a page.
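
In code terms, the fix amounts to keeping a handle on every generated scope and destroying those scopes on navigation. A sketch, reusing the illustrative componentInjector from earlier:

    // Sketch of the post-launch fix: keep a handle on every scope the
    // injector creates, then destroy those scopes when leaving a section.
    const injectedScopes: any[] = [];

    function injectAll(host: any, paragraphs: any[]): void {
      for (const p of paragraphs) {
        injectedScopes.push(componentInjector.inject(host, p));
      }
    }

    function releaseSection(): void {
      // $destroy() releases each scope's watchers and bindings;
      // removing only the DOM elements leaves the scopes leaking.
      injectedScopes.forEach(scope => scope.$destroy());
      injectedScopes.length = 0;
    }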

Special thanks to Chris Cruz for a careful reading of this post.
