Client Work Hacks

July 27, 2023

Practical tips for imperfect agency work.

If your work is perfect this doesn’t apply to you.

I was recently walking through some work I did for a web agency, and I ended up demonstrating some tools and services I use to try and look smart.

I’m not afraid of looking dull, and I finally welcome mistakes as part of the process, but I want to find and fix my blunders quickly. Accepting mistakes is one thing, but expecting others to notice or live with them is apathetic and unprofessional when it’s in my power to do better.

I want to make new and exciting mistakes, not repeat old ones.

If your work is motivated by caring for people, you already look for ways to support what they do, and most of this will occur to you naturally if it hasn’t already.

But I’m constantly exploring the obvious, so here are some little meditations I’ve not written down before, in chronological order!

Consider the Job

I’d like to riff on an idea best articulated by Michael E. Gerber in The E-Myth Revisited: if you think the job is building a website, you’re wrong.

The client hires you because they need a website, but that site needs to do something more than exist as a website.

It may not be the thing they approach you with, either.

They may need it to sell t-shirts, but secretly the job is to free up designers or developers and give marketers more control.

Or it may be that internal, big-company politics prevent a project from getting off the ground and you need to be a diplomat who tackles the messy parts out of view in order to deliver a vital proof of concept.

Much of my work came from clients who hired a slick design agency that hyped an underperforming or unfinished project, leaving the client to salvage what they could and find someone attentive and careful to see things through. My job was often to reassure them I’d be able to finish the job, then do it.

Your most ingenious contraption is worthless until all the stakeholders are on board, content editors are confident with it, and someone has arranged to continue caring for it.

When you know exactly what the job is, you’ll be able to ask questions that make sure your work is doing its job.

Leave Notes

I know you’re smart (looks around, lowers voice), and I know you make mistakes.

If you take the time to explain what you were doing, I’ll be able to appreciate how thoughtful you were, both in doing the thing and in writing about it, and I’ll have a much better chance of recognizing potential mistakes.

I’ll do the same thing for you!

Bonus: anyone else who stumbles onto our writing will know how great we are and get a chance to spot our mistakes.

That could be the client, our coworkers, future developers who expand or inherit the project, or a yet-to-be-imagined audience.

When you take the time to write plainly, you make your work accessible.

Someone may understand a directly-relevant business goal or support thread better than you, and they get to follow closely even if they’re not a developer. Keeping things inaccessible is gatekeeping, and I have a hard time imagining a scenario where gatekeeping leads to the best outcome for everyone.

Here are some offenses to humanity:

  • A project without a clear scope of work and expectations.
    Clear expectations demonstrate expertise, account for unknowns, and put everyone at ease: be specific wherever you can, and offer a process for tackling the squishy parts.
  • Tasks, issues, and PRs without descriptions.
    The only person living in your head is you. Help everybody else by explaining what you intended to do and why. Pretend for just a moment that you might have made a mistake, or that what you did is not obvious to anyone who looks at it, and that you’re saving someone a lot of time and guessing. (Could even be future you, so treat yourself!)
  • Building a thing without showing anyone how to use it.
    First, you’re shortchanging yourself if you don’t bother to see if your work makes sense to whoever will use it. Second, you’re not done when the code’s done and the tests pass; you’re done when the thing is serving its purpose in the world. Write documentation, record a video, or hop on a screen share. Pass the baton in stride—don’t make someone chase you for it.1

Take the time to document your work and your intentions, whatever it is you’re doing. If the thought makes you nauseous, find a weirdo like me that would love to help you with the writing.

If you think this doesn’t apply to you, it extra special applies to you.

Test Before Production

I’ve probably broken a global search widget on every damn site I’ve built.

Not because I’m doing bold and daring things with it, but because once it’s working I move on until someone has a question or a problem with it. An index goes sideways, a bundle is mangled or mis-cached, and it dies quietly in plain sight. Nobody knows until some poor soul pokes at it.

I can quickly prevent some human suffering with a test that reads like instructions for a person: go to the homepage, click the search field, enter a keyword, and look for a specific result to appear.

Gone are the days of pushing something to staging and frantically clicking through the things I can remember biting me earlier, because I have Playwright to smoke test the front end.

Any time I inadvertently break something a user could’ve clicked through and been sad about, I fix it and write a test so it’s harder to repeat the same mistake.

Not only does this save me time, but Playwright (and other front-end testing suites) can run the same test in multiple browsers, under different network conditions, at different viewports, etc.—whatever I take the time to configure.
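
Browsers and viewports live in the Playwright config as “projects.” Here’s a minimal sketch of what that can look like; the baseURL and device choices are placeholders for whatever your project actually needs:

playwright.config.ts
import { defineConfig, devices } from "@playwright/test"

export default defineConfig({
  // Relative page.goto() calls in the tests resolve against this
  use: { baseURL: "http://localhost:3000" },
  projects: [
    // Each project re-runs the whole suite in a different browser or viewport
    { name: "chromium", use: { ...devices["Desktop Chrome"] } },
    { name: "webkit", use: { ...devices["Desktop Safari"] } },
    { name: "mobile", use: { ...devices["iPhone 13"] } },
  ],
})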

Here are my smoke tests for this little site:

tests/features.spec.ts
import { test, expect } from "@playwright/test"

test("search works", async ({ page }) => {
  await page.goto("/search")
  await expect(page).toHaveTitle(/Search/)
  await page.getByTitle("Search keyword(s)").fill("fascinating")
  await expect(page.locator(".search-results")).toContainText(
    "Martian Time-Slip"
  )
})

test("dark mode toggle works", async ({ page }) => {
  await page.goto("/")
  await page.getByTitle("Switch to dark mode").click()
  await expect(page.locator("html")).toHaveClass("dark")
  await page.getByTitle("Switch to light mode").click()
  await expect(page.locator("html")).not.toHaveClass("dark")
})

They were quick to write, they should be straightforward to read even if you’ve never used Playwright, and they run quickly via GitHub Actions on every commit.

You can run checks for accessibility targets, performance budgets, and whatever else you can think of!
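
Accessibility scans, for example, can piggyback on the same suite. Here’s a minimal sketch using the @axe-core/playwright package, following the pattern from Playwright’s accessibility docs; the file name is hypothetical:

tests/accessibility.spec.ts
import { test, expect } from "@playwright/test"
import AxeBuilder from "@axe-core/playwright"

test("homepage has no detectable accessibility violations", async ({ page }) => {
  await page.goto("/")
  // Scan the rendered page with axe and fail on any reported violation
  const results = await new AxeBuilder({ page }).analyze()
  expect(results.violations).toEqual([])
})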

Smoke testing is only the tip of this iceberg, and you already know that testing is a whole thing regardless of what your relationship with it is like.

When I’m building custom server-side tools, like a Craft CMS plugin or module that does something important for the site, I’ll write tests with Pest or Codeception that ensure my code behaves like I intend. Not only are those safeguards, but the tests themselves are a form of documentation that demonstrates how I expect the code to work.

I wrote a little side project that slurps reading notes off a Kindle, and it includes some simple string-handling methods I could very easily screw up. So I started with these tests:

<?php

use mattstein\dekindler\StringHelper;

test('generates expected slugs', function () {
    expect(StringHelper::slugify('Hello World'))->toEqual('hello-world');
    expect(StringHelper::slugify('Hello World', '_'))->toEqual('hello_world');
    expect(StringHelper::slugify('Let’s try this: use some interesting characters?'))
        ->toEqual('lets-try-this-use-some-interesting-characters');
});

test('normalizes author names', function() {
    expect(StringHelper::normalizeAuthorName('Watts, Alan W.'))
        ->toEqual('Alan W. Watts');
    expect(StringHelper::normalizeAuthorName('Alan W. Watts'))
        ->toEqual('Alan W. Watts');
});

Again, these should be easy to read; you don’t need to look at my project or the class I’m testing to understand what it should do, or even to add your own tests.

I did the same thing while thinking through and trying different clumps of text the parser handles, and when I discover an unexpected twist I can update the thing and continually improve it without sliding backwards.

It may be a lot to ask you to familiarize yourself with all the code I wrote, but you can surely look at my tests to see how I expect things to work and maybe even spot cases I didn’t consider or flat out got wrong.

You might even say “wow Matt you don’t know how to write a parser!” and I can agree with you, refactor the whole thing, and use those tests to confirm I didn’t break it.

I smoke test and sometimes write tests first, but I don’t worry much about testing methodology or code coverage. I don’t discount the value of having a coherent approach to testing, especially with a team, but I’m only motivated to use tests to improve my thinking and protect me from my own real or imagined mistakes.

If you can readily think of things you’ve biffed before that you’d like to catch sooner, there is probably a way to introduce tests that can help you with it.

Monitor Production

At least one person in your timeline has said “I test in production and so do you.”

Shipping something off to production without knowing how it performs isn’t testing; that’s neglect.

To test, you need some visibility into how things are going.

There are lots of tools for monitoring uptime and server vitals. I mostly use HetrixTools, but I’ve been through services like Pingdom, UptimeRobot, and StatusCake that can do the same job. This depends a lot on how you host, but I frequently use virtual servers and want to know what their CPU, memory, and disk usage look like. It’s important to watch usage trends to separate random internet nonsense from steadily-growing problems that warrant investigation. Whenever I can get a glimpse of analytics, I like to compare traffic and performance trends.
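
If you’re curious what those services boil down to, it’s roughly this sketch: request a URL on a schedule and complain when the response isn’t healthy. (example.com stands in for your site; a real service adds scheduling, retries, alerting, and history.)

import process from "node:process"

// A deliberately bare-bones uptime check; a monitoring service runs
// something like this from many locations, every minute or so.
const url = "https://example.com/"

try {
  const response = await fetch(url)
  if (!response.ok) {
    console.error(`${url} responded with HTTP ${response.status}`)
    process.exitCode = 1
  }
} catch (error) {
  console.error(`${url} is unreachable:`, error)
  process.exitCode = 1
}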

I also use Sentry, Bugsnag, or Rollbar to capture errors and exceptions in production. (Also curious about Highlight but haven’t tried it yet.)

Not only can exception handlers expose problems you may not have encountered locally, but the tools include an entire system for managing the issues: filtering out noise, discerning frequency and urgency, assigning ownership of an issue or discussing it, and creating smooth links with support systems and source control. Each report typically includes a stack trace and session details that can skip all kinds of back-and-forth without being an affront to the victim’s privacy.
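
Wiring one of these up is typically just a few lines. Here’s a minimal sketch with Sentry’s browser SDK; the DSN is a placeholder for the one your own Sentry project issues:

import * as Sentry from "@sentry/browser"

Sentry.init({
  // Placeholder DSN; Sentry gives each project a real one
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  environment: "production",
})

// Unhandled exceptions are reported automatically; handled ones
// can be sent along explicitly:
try {
  JSON.parse("definitely not JSON")
} catch (error) {
  Sentry.captureException(error)
}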

You can monitor user experience, too!

I’ve helped with some A/B testing efforts and gleaned insights from analytics, and one of my favorite tools for observing visitor behavior is FullStory. It offers complete visual playback of a visitor session, so you can see where someone hesitates, moves their cursor, or falls down a hole with something that’s broken. You can also zoom out to get a look at trends, my favorite of which are dead clicks: elements users repeatedly click on that don’t do anything. These are obvious opportunities for improvement.

I hesitate to recommend FullStory because I’m also a fan of privacy and not collecting information you don’t call out in a privacy policy and intend to use in a meaningful way—but that’s a topic for another post. Intentional design is about functionality and not just appearance, so tools like this can be used to evaluate design decisions and support improvements.

tl;dr

Work with people you care about and try to make sure your work is doing its job.


p.s.

Since it’s 2023, I feel obligated to point out I’ve read or used each thing I mentioned here—no affiliate links or undisclosed incentives. I’m also an organic human person writing with neurons, typos, and spirit and not farming it out to AI.

Footnotes

  1. This is an ownership mindset, which is the opposite of “not my problem” thinking.