When the business plan becomes a performance

Under conventional assessment models in entrepreneurship, continuation of a venture is rewarded and optimism reads as competence, write Ian Solway and Jolyon Nott. But students should be required to demonstrate judgement, not projected success.
1 May 2026

A student put it more clearly than any marking rubric ever has. “We know how to write the business plan,” she said. “We just don’t know if any of it is real.”

On paper, the module was successful. Pass rates were strong. Student work was polished, fluent and professionally presented. Business plans looked credible, even investor-ready. External examiners were satisfied.

And yet something essential was missing.

When co-author Ian Solway returned to teaching entrepreneurship after several years away, he became uneasy about exactly what students were learning. They could narrate opportunity convincingly, construct growth projections and defend speculative assumptions with confidence. What he was less sure about was whether they were learning how to make judgements amid uncertainty or how to recognise when an idea should not continue.

Over time, the business plan had drifted from being a decision-making tool to becoming a performance of plausibility. Students learned how to sound credible. What they were rarely encouraged to do was interrogate their own assumptions in ways that carried academic weight. This is not a failure of individual teaching practice so much as a consequence of assessment cultures that reward fluency and coherence over judgement.

These cultures do not emerge from bad intentions. They accumulate through iteration: marking criteria designed for consistency, external examiner expectations shaped by convention, and module outcomes written to be demonstrable. Each decision is reasonable in isolation. The aggregate quietly narrows what counts as good work.

Under conventional assessment models, continuation of the venture is rewarded. Optimism reads as competence. A student who writes “Based on competitor analysis, we project 15 per cent market share within 18 months” is safer than one who admits that they “don’t yet know whether customers will pay for this”. This is not because staff believe the first claim more than the second, but because assessment structures privilege projection over judgement.

What students increasingly produce is not a plan but a form of academic bricolage: a plausible assemblage of frameworks, metrics and market language. A bubble tea shop pitched as a platform business. A story that feels right because it sounds right. This is optimisation in the wrong direction.

The implications became harder to ignore when one of us began working as an academic conduct officer investigating the misuse of generative AI. These tools are not introducing a new problem so much as amplifying an existing one. Students use them to produce more fluent, more confident versions of already speculative claims. If an assessment collapses when plausible text becomes cheap, then it is testing performance rather than thinking.

It is useful to distinguish two related issues. The teaching problem concerns emphasis: what skills are modelled, which questions are asked, and whether students are required to challenge their own ideas. The assessment problem is structural: what the criteria reward, and therefore what students rationally optimise for. Both matter, but the second determines the first.

In practice, businesses rarely fail because of weak plans. They fail because founders do not recognise when assumptions have stopped holding, because sunk-cost thinking replaces judgement, and because no one has been taught when to stop.

Universities teach beginnings well. Endings are largely invisible. This is not about encouraging failure. It is about recognising that professional competence includes knowing when continuation with a business idea becomes unjustifiable, and that this is a decision-making skill that can be taught.

There is a tendency to refine existing assessments by adding reflective elements or extra stages. These adjustments can help, but they leave the underlying reward structure intact. The more substantive shift is to redefine what counts as evidence of competence.

The redesign we are implementing begins with a simple inversion: students are assessed on demonstrated judgement, not projected success.

Rather than speculative five-year plans, students develop an initial business concept, conduct a structured failure analysis of their own idea, and design a responsible exit scenario should early assumptions prove false. The failure analysis asks what would have to be true for the idea to work, what evidence would challenge those assumptions, and at what point continuation would become irrational rather than brave.

One student, working on a mobile coffee van concept, concluded that operating costs, permits and input prices compressed margins beyond viability, and that the required volume could not be sustained. He recommended against proceeding. It was among the strongest pieces of work submitted.

Assessment criteria shift accordingly towards the quality of evidence used to test assumptions, clarity around decision points, honesty about constraints and coherence of exit reasoning. A student who convincingly argues that their idea is not viable can receive full marks. One who ignores warning signs in pursuit of an optimistic narrative cannot.

Crucially, this structure makes generative shortcuts largely counterproductive. Judgement about the fatal flaws of one’s own idea cannot be outsourced. The reasoning has to belong to the student.

None of this is straightforward to implement. Established expectations, both within modules and from external examiners, shape what is recognised as valid work. The practical approach is to begin at module level, document outcomes and allow evidence of student learning to inform wider adoption.

This argument is not really about entrepreneurship. It is about what happens when academic assessment drifts away from professional competence.

Generative tools have accelerated this reckoning not because they make students lazy, but because they expose which learning outcomes were always performative. Judgement under constraint cannot be automated. It requires evidence, reasoning and an uncomfortable honesty about what is not yet known.

The harder question is whether we are willing to redesign assessment around these capabilities, even when doing so produces work that looks less polished, less confident and less impressive. Rewarding the identification of weak assumptions and treating “do not proceed” as a valid outcome requires letting go of quite a lot, but that is what alignment with professional competence demands.

Ian Solway and Jolyon Nott are teaching fellows in design management at the Winchester School of Art, part of the University of Southampton.
