
AI is evolving faster than we think—but will it ever truly get us?

[Image: an AI supervisor in a hard hat and high-vis vest oversees people tumbling on stairs during a fire drill; a sign reads "FIRE DRILL TODAY."]
When AI takes 'learning through experience' a little too literally...

Moore’s Law says the number of transistors on a chip (and, roughly, computing power) doubles every two years or so. But is that still the right yardstick in the age of AI? I looked into it, and what I found blew me away:

Mo Gawdat (former Chief Business Officer of Google X, so he knows a thing or two about AI) suggests that AI’s capabilities are doubling every 5.7 months.

5.7 months?! Just let that sink in.

That’s not just faster processors or bigger models; it’s a shift in capability. And it raises a huge question for anyone working in innovation, experimentation, or digital strategy:


At what point does AI stop mimicking us and start understanding us?



The animal pictures idea no user testing would have revealed


In a post, John Sills shared a brilliant example of human insight:

No focus group, no customer interview, no behavioral data would’ve led you to the idea of putting animal pictures in stairwells in skyscrapers to encourage people to keep their heads up and reduce the risk of tripping during fire drills.


It’s the kind of idea born from empathy, intuition, and playfulness... but not, evidently, from AI.

When I asked ChatGPT for a solution to that problem, one of its suggestions was to simulate what happens when people trip during a fire drill, claiming “people remember what they feel more than what they’re told.”


Good luck getting anyone to volunteer for that particular drill.


Because here’s the thing:


AI is still not great at the stuff that feels deeply human: the instinct that tells you, “This will tap into something deeper and change people's behaviour.”



From automation to agentic experimentation


We’re entering a new phase in experimentation where AI isn’t just powering automation, but actually running simulations on its own. Think: agentic A/B testing, where AI tests variant ideas against itself before you ever show them to users.
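
To make that concrete, here’s a minimal sketch of what an agentic A/B pre-test could look like. Everything in it is illustrative: the names (synthetic_user, pre_test) are hypothetical, and the synthetic users are simple probability models standing in for real LLM-driven personas; no actual AI service is called.

```python
import random
from statistics import NormalDist

# Hypothetical sketch: "agentic" A/B pre-testing against synthetic users.
# The personas are toy probabilistic models, not real LLM-driven users.

def synthetic_user(persona_bias: float):
    """Return a 'user' that converts with a probability shaped by its persona bias."""
    def respond(variant_appeal: float) -> bool:
        return random.random() < min(1.0, variant_appeal * persona_bias)
    return respond

def pre_test(variants: dict[str, float], n_users: int = 5_000) -> dict[str, float]:
    """Show every variant to the same synthetic panel; return conversion rates."""
    panel = [synthetic_user(random.uniform(0.8, 1.2)) for _ in range(n_users)]
    return {
        name: sum(user(appeal) for user in panel) / n_users
        for name, appeal in variants.items()
    }

def z_test(p_a: float, p_b: float, n: int) -> float:
    """Two-proportion z-test p-value, so a 'win' isn't just simulation noise."""
    p_pool = (p_a + p_b) / 2
    se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
    z = abs(p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(z))

if __name__ == "__main__":
    # In a real pipeline the agent would generate these variants and score
    # their appeal; here the numbers are made up for illustration.
    rates = pre_test({"control": 0.10, "variant_b": 0.12})
    print(rates, z_test(rates["control"], rates["variant_b"], 5_000))
```

In a real pipeline, the agent would propose the variants and an LLM would play each persona, but the statistical guardrail at the end matters either way: a synthetic “win” is still only a prediction.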


Paired with synthetic user testing, this is unlocking new possibilities:


  • Smaller sites can pre-validate ideas without needing live traffic

  • Larger orgs can de-risk expensive experiments ahead of deployment

  • Teams can iterate more creatively and at lower cost


But let’s not overstate it—synthetic users aren’t real humans. They’re predictive models. Which means some use cases are better fits than others:


✅ Good fit: usability issues, flow logic, and basic heuristics

🚫 Poor fit: emotional resonance, tone of voice, or novel persuasion techniques


As the Nielsen Norman Group notes, synthetic users work best when your goal is to detect friction, not to deeply understand motivation.


Still, it’s a step forward. A way to vet ideas earlier, cheaper, and with a level of pattern-driven “intuition” that’s getting more sophisticated with every cycle.


And it raises a bigger question: If synthetic users can behave like humans… how long until they think like them too?


👇 I’m genuinely curious:


  • Have you tried synthetic testing in your work yet? What was the result?

  • What’s the most human (or hilariously un-human) idea an AI has ever given you?

  • Do you think we’ll ever trust AI to create the next “animal picture” idea?


Leave a comment to share your experience with any of the synthetic testing platforms and services proliferating across the industry.

Subscribe to my monthly newsletter

Stay ahead in the world of CRO and experimentation with my monthly newsletter. Each issue is packed full of practical insights, expert perspectives, and real-world strategies to help you refine and scale your optimisation efforts.

📩 Sign up now and get fresh ideas delivered straight to your inbox—no fluff, just actionable value.

