
Ongoing Optimizations
Amon Carter

Following Phase Two, we engaged with the client to optimize sections of the website and make it more accessible. Along with basic improvements, we took deep dives into the site's infrastructure, which included migrating all artwork imagery to an AWS S3 bucket. We also dug into the code itself, optimizing it and changing techniques where we found opportunities to squeeze as much performance out of Drupal as we can.

AWS S3 File System

For the site's overall size, we had far more media than our hosting was suited for. The collection holds close to 100k objects, each with multiple images, on top of the normal images throughout the site. To handle this, we enabled the S3 File System module to serve most images across the site. We did not have it simply take over the public file system; we used it as a separate file stream.
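As a rough sketch of that setup (the bucket name, region, and environment variables below are placeholders, not the client's real values), the S3 File System module can be configured from settings.php while deliberately leaving the public file system alone:

```php
<?php
// settings.php — hedged example; credentials and bucket are placeholders.
// Keep the AWS keys out of exported config by setting them here.
$settings['s3fs.access_key'] = getenv('AWS_ACCESS_KEY_ID');
$settings['s3fs.secret_key'] = getenv('AWS_SECRET_ACCESS_KEY');

// Bucket and region live in the s3fs.settings config object.
$config['s3fs.settings']['bucket'] = 'example-artwork-bucket';
$config['s3fs.settings']['region'] = 'us-east-1';

// Deliberately left FALSE: s3:// is used as a separate stream wrapper
// for artwork imagery instead of taking over public:// files.
$settings['s3fs.use_s3_for_public'] = FALSE;
$settings['s3fs.use_s3_for_private'] = FALSE;
```

With this in place, individual image fields can select the S3 scheme while everything else stays on the default public file system.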

After another stage of performance enhancements, we also added CloudFront in front of the S3 bucket, providing an extra layer of performance.

Performance Enhancement Discovery

After taking care of some basic maintenance, the client came back to us asking, "How can we make the site faster?" As with a lot of Drupal sites, the site was lagging a bit. For this stage, we ran a number of basic tests and evaluations to get started.

  • HTML Validation
  • Chrome Lighthouse
  • Webpage Speed Tests
  • New Relic Monitoring

By evaluating the results from all of these and setting up ongoing monitoring, we were able to create a solid list of tasks, big and small. After prioritizing the list with the client, we have been working through it and steadily improving the site.

Code Optimizations, part 1

New Relic - and some simple code clean up

Since we are hosting on Pantheon, we also have access to New Relic monitoring. Overall it is a pretty handy tool and I'd recommend it. Evaluating the logs, I noticed a specific View that was being called far more often than it should have been, and it was not a simple view either. So I dove into the code.

We quickly found an issue in the theme's node preprocess hook: for every display mode of artwork nodes, we were calling this view. We had not properly scoped when that view was called. Once we found it, we were able to quickly clean that up, along with a few smaller things.
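The shape of the fix was to gate the expensive view behind the one display mode that actually renders it. A simplified sketch, where the theme name, view name, and display IDs are hypothetical stand-ins rather than the production code:

```php
<?php
// THEMENAME_preprocess_node() — simplified sketch, not the actual site code.
// Before: the related-artworks view ran for *every* artwork display mode.
// After: it only runs for the full display that actually shows the results.
function THEMENAME_preprocess_node(array &$variables) {
  /** @var \Drupal\node\NodeInterface $node */
  $node = $variables['node'];

  // Scope by bundle AND view mode so teasers and blocks skip the query.
  if ($node->bundle() !== 'artwork' || $variables['view_mode'] !== 'full') {
    return;
  }

  // Hypothetical view and display names, for illustration only.
  $variables['related_artworks'] = views_embed_view('related_artworks', 'block_1', $node->id());
}
```

The early return is the whole optimization: the view's query simply never fires for the display modes that never used its output.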

Ajax specific content

There were a few components on the site whose content we decided to load via AJAX. This lets us cache individual pages longer. For example, we have a block in the header displaying the Museum Hours. This content is on every page and changes every 24 hours. By AJAXing that content in, we can cache the rendered page much longer, and only the small AJAX call fetching the hours busts its cache regularly.
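A minimal sketch of that pattern (the module, route, and cache lifetime here are illustrative assumptions, not the site's actual code) is a small controller that returns only the hours, which the header block then fetches client-side:

```php
<?php
// src/Controller/HoursController.php — hedged sketch of an AJAX endpoint.
// The full page stays in the page cache; only this response is short-lived.

namespace Drupal\example_hours\Controller;

use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\JsonResponse;

class HoursController extends ControllerBase {

  /**
   * Returns today's museum hours as JSON for the header block to fetch.
   */
  public function today(): JsonResponse {
    // On the real site the hours come from content; hard-coded for the sketch.
    $hours = ['open' => '10:00', 'close' => '17:00'];

    $response = new JsonResponse($hours);
    // Let this small response expire daily while full pages cache far longer.
    $response->setMaxAge(86400);
    return $response;
  }

}
```

The header block then just renders a placeholder element, and a few lines of JavaScript fill it in from this route on page load.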

Code Optimizations, part 2

Amazon CloudFront CDN

Since all of our images are stored in an Amazon S3 bucket, we added Amazon CloudFront as a CDN in front of it to help with performance.

Caching a Specific Display Mode

We use a teaser or block display mode heavily for showing grids of artworks. Since artwork data really only changes on sync, these teaser views can be cached for a very long time. Using the Permanent Cache Bin module, I tried creating a permanent cache that won't be busted by Drupal's normal cache clear. Now, when an object is synced, I cache the rendered view so it's ready to go when it's needed.
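Roughly, that means declaring a cache bin backed by the module's permanent backend and writing the rendered teaser into it during the sync. The bin name, function name, and cache ID below are hypothetical, and this is a sketch of the approach rather than the production code:

```php
<?php
// Hedged sketch: cache a rendered artwork teaser in a permanent bin on sync.
// 'artwork_teasers' is a hypothetical bin assumed to be wired (via a
// services.yml cache.bin tag) to Permanent Cache Bin's database backend,
// so a normal cache rebuild won't empty it.

use Drupal\Core\Cache\CacheBackendInterface;

function example_cache_artwork_teaser(int $nid): void {
  $entity_type_manager = \Drupal::entityTypeManager();
  $node = $entity_type_manager->getStorage('node')->load($nid);

  // Render the teaser once, right after the object syncs.
  $build = $entity_type_manager->getViewBuilder('node')->view($node, 'teaser');
  $html = \Drupal::service('renderer')->renderPlain($build);

  // CACHE_PERMANENT: only an explicit delete (or the module's own flush)
  // removes the entry, not Drupal's regular cache clear.
  \Drupal::cache('artwork_teasers')
    ->set('artwork_teaser:' . $nid, $html, CacheBackendInterface::CACHE_PERMANENT);
}
```

The trade-off is that invalidation becomes your job: the sync process has to overwrite or delete the entry itself, since drush cr no longer will.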