The P.E.A.S.E. Blog

Why We Started Writing This Book

April 23, 2025

And why 14 practitioners want to be named as co-creators

Peter Clarke, Sarah Snook, Adam Ososki, Tom Walsh, Kelly Couldrick, Jonny Humphrey, Malcolm Brown, Ollie Lewis, Tom Green (Assoc CIPD), Neil Prentice (TAP.dip MIoL), Lucy Bloomfield, Stephen Ridgway

TL;DR:

Peter Pease is writing a new book on rethinking evaluation in L&D, co-created with 33 practitioners from the L&D Improvement Analyst programme and based on over 100 real-world evaluation research projects. The book rejects traditional models in favour of a usable five-step method grounded in how L&D actually works. Fourteen participants are named co-creators. This first blog explains why the book was needed and how it came about. The approach will be previewed at Peter’s ATD25 session on 20 May.


Rethinking Evaluation

When we launched the Learning and Development Improvement Analyst (LaDIA) programme in 2023, I went looking for a suitable course text. Several books addressed parts of the challenge, but none offered what we needed. A few were built around familiar models. Others were credible but too abstract, too academic, or too bound to their own frameworks.

One of the participants suggested we write our own. Another, less diplomatically, suggested a title including an expletive, based on his frustration with the existing models. That suggestion did not make it past the first edit, but the sentiment behind it was widely shared.

By that point, I had already been thinking along similar lines. During the development of the programme, I kept circling back to two persistent problems.

The first was the surplus of L&D evaluation models. Most are built around levels, categories, or checklists. They offer structure but little guidance. Many are presented with the tone of settled science but rarely help practitioners make better decisions. Another model, however well-intentioned, was not going to change that.

The second problem was the false choice between simplicity and complexity. As Mencken observed, there is a well-known solution to every human problem: neat, plausible, and wrong. I came across several books that were stronger than most, but each was overwhelmed by its own complexity. They offered thoroughness without usability. The profession does not need more sophistication. It needs something practical that respects complexity without being consumed by it.

There is clearly an appetite for something better. According to recent research, 95 percent of L&D professionals believe they need to improve evaluation. Meanwhile, 70 percent of CFOs do not believe L&D delivers value for money. That gap is not just about measurement. It is about credibility. Solving it is not a matter of applying Kirkpatrick’s Level 4 more rigorously. It requires a shift in how L&D thinks, acts, and communicates.

LaDIA was our response to that gap. We did not frame it as evaluation training. Instead, we asked a different question: how can L&D solve real problems using evidence? That idea—combining practical problem solving with analytical rigour—quickly resonated. The programme attracted not just analysts or data specialists, but working practitioners: learning designers, digital leads, delivery managers, and internal consultants. These were people under pressure, dealing with internal politics, and looking for ways to deliver more value with fewer resources. They were not looking for a framework. They were looking for clarity.

Since launch, 35 practitioners have enrolled. Between them, they support over half a million learners. This is not a sample of interested academics. It is a reflection of what learning looks like in practice. The project topics varied, but the themes were familiar: justifying leadership programmes, improving platform adoption, rationalising portfolios, reducing waste, and addressing fatigue around mandatory training. In almost every case, the same unease appeared: the existing models were not helping.

What began to emerge from their work was not a new framework. It was a method. No one sat down to invent it. It took shape gradually through use. People shared what they were trying, tested ideas in context, made improvements, and began to build something that worked.

The method is grounded in the realities people face. It supports a wide range of goals: defending a programme, improving an intervention, or shifting behaviour in practice. It does not require perfect data or pretend every problem can be reduced to ROI. It respects that different questions demand different forms of evidence, and that the value of evaluation lies not in its outputs, but in the thinking it enables.

This is not a thought leader’s provocation. It is not a theory repackaged in new language. It is a practitioner-led response to a longstanding problem, shaped by people doing the work.

The book will be published later this year. The thinking behind it began much earlier—in real organisations, under pressure, among people who were no longer willing to pretend that evaluation had to mean what it used to mean.

It may not be a new idea. But it is long overdue.

#LearningAndDevelopment #EvaluationMatters #EvidenceBasedLD #LDEvaluation #LearningEffectiveness #OrganisationalLearning #AIinLearning #ATD2025 #LaDIA