Why the customer should be doing user acceptance testing (UAT)

Written by Peter Ward | Feb 25, 2026 11:41:20 PM

Most software doesn’t fail because it’s technically wrong. It fails because it behaves slightly differently from how people expect—usually at exactly the moment they’re busy, distracted, or under pressure.

This table exists to catch those moments before they become business problems.

Use the custom application exactly as you would on a normal day. Don’t try to be clever. Don’t try to be thorough. Just try to get your work done.

  • When something works, note it.
  • When something feels confusing, note that too.
  • And when something breaks, definitely note it.

Each row represents one real task you attempted. Short, plain language is not only acceptable—it’s preferred. Precision is less important than honesty.

Think of this less as “testing software” and more as observing reality.

Your feedback here reduces risk, prevents surprises, and saves everyone time later—especially you.

Where do you start?

Instructions to Testers

A short instruction block explains how to use the form:

  • Testers should use the application as they normally would
  • Each row equals one task or scenario
  • Notes should be short and written in plain language

Feedback Table Structure

The core of the document is a table designed to standardize feedback, with these columns:

  • Scenario / Task – what feature or workflow was tested
  • What Were You Trying to Do? – the user’s intent
  • What You Expected to Happen – expected system behavior
  • What Actually Happened – observed behavior
  • Result (Works / Confusing / Broken) – quick classification
  • Comments / Issues – additional context or problems
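For teams that end up collecting this feedback in a spreadsheet export or a small script rather than the form itself, the columns above map naturally onto a simple record. The sketch below is a hypothetical illustration, not part of the actual form; the field names and example values are mine:

```python
from dataclasses import dataclass

# Hypothetical record mirroring the feedback-table columns.
@dataclass
class FeedbackRow:
    scenario: str       # Scenario / Task
    intent: str         # What Were You Trying to Do?
    expected: str       # What You Expected to Happen
    actual: str         # What Actually Happened
    result: str         # Result: "Works", "Confusing", or "Broken"
    comments: str = ""  # Comments / Issues (optional extra context)

# One filled-in row, in the spirit of the "Create request" example.
row = FeedbackRow(
    scenario="Create request",
    intent="Submit a new purchase request",
    expected="Form saves and shows a confirmation",
    actual="Form saved and confirmation appeared",
    result="Works",
)
print(row.result)
```

Keeping the Result column to three fixed values is what makes the feedback easy to sort and count later.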


Example Row

An example entry (“Create request”) demonstrates:

  • How detailed each column should be
  • What “good” feedback looks like
  • How to mark a task as Works when behavior matches expectations


Blank Rows for Tester Input

The remaining rows are intentionally empty, providing space for:

  • Multiple test scenarios
  • Repeated usage over time
  • Aggregated feedback from one tester or multiple testers
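Because the Result column is limited to three fixed values, aggregating rows from one or many testers reduces to a simple tally. A minimal sketch, assuming rows have been collected as dictionaries (the scenario names here are invented for illustration):

```python
from collections import Counter

# Hypothetical rows gathered from several testers; only the fields
# needed for the tally are shown.
rows = [
    {"scenario": "Create request", "result": "Works"},
    {"scenario": "Edit request", "result": "Confusing"},
    {"scenario": "Delete request", "result": "Broken"},
    {"scenario": "Create request", "result": "Works"},
]

# Count how many tasks landed in each classification.
summary = Counter(r["result"] for r in rows)
for outcome in ("Works", "Confusing", "Broken"):
    print(f"{outcome}: {summary[outcome]}")
```

A tally like this is usually enough to decide where to look first: anything marked Broken, then anything marked Confusing.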


Final Few Words

Test because you don’t know what matters

Most organizations overestimate how well they understand user behavior. Sutherland argues that small, seemingly trivial changes can have outsized effects, and the only reliable way to discover them is to test rather than theorize.

Don’t just test what’s easy to measure

I'm critical of testing that focuses only on obvious, rational metrics (speed, cost, efficiency) while ignoring psychological factors like reassurance, trust, perceived effort, or status.

Good testing should ask:

  • Does this feel easier?
  • Does this reduce anxiety?
  • Does this increase confidence or clarity?


Run many small tests, not one big “proof”

I favor cheap, fast, low‑risk experiments over heavyweight, “scientific” tests that take months and must justify themselves politically.

  • Small tests encourage curiosity
  • Big tests encourage defensiveness
  • Frequent testing lowers the cost of being wrong

The goal isn’t certainty—it’s learning velocity.


Expect surprises—and treat them as assets

If a test result feels obvious in hindsight, that doesn’t mean the test was pointless. Sutherland often points out that the value of testing is in discovering things no one would have confidently predicted.

If your tests always confirm expectations, you’re probably testing too cautiously, or testing the wrong things.