How to Use ChatGPT for Your CPD (Without Letting It Do the Thinking for You)

There is a quiet shift happening in medicine. Between patients, after shifts, in the margins of already busy lives, doctors are turning to AI. A question typed, an answer returned in seconds. No textbooks, no logins, no friction. It is, on the surface, exactly what Continuing Professional Development has always lacked: speed, accessibility, and relevance on demand.

But there is a tension here that is worth naming early. The same tool that can accelerate learning can just as easily replace it. And in a profession where thinking is the core skill, that trade-off matters.

The value of ChatGPT in CPD is not that it gives you answers. It is that it can make you think—if you use it well.

The problem of confidence without accuracy

One of the most discussed limitations of large language models is their tendency to produce answers that are fluent, plausible, and occasionally wrong. This is often described as “hallucination,” but the term risks sounding abstract. In practice, it means the model may confidently state an outdated guideline, misquote a diagnostic criterion, or fabricate a reference that looks entirely legitimate.

In medicine, that is not a minor flaw. It is the difference between a useful explanation and a dangerous misconception.

The important nuance is this: ChatGPT is often excellent at explaining concepts, patterns, and mechanisms. It is far less reliable as a source of definitive, up-to-date facts. If you ask it to clarify why early sepsis may present without fever in an immunosuppressed patient, you will likely get a coherent and clinically useful explanation. If you ask it for the latest dosing recommendations or guideline thresholds, you may get something that sounds right but is not.

Used properly, it becomes a bridge to understanding. Used uncritically, it becomes a source of false confidence.

The rule is simple, but non-negotiable. If it matters clinically, you verify it.

The deeper risk: outsourcing your thinking

The more subtle problem is not incorrect answers. It is the gradual erosion of effort.

It is very easy to ask ChatGPT to summarise an article, generate a reflection, or produce a set of learning outcomes. The output is often polished, structured, and immediately usable. But the cognitive work—the retrieval, the synthesis, the struggle to articulate meaning—is bypassed.

And that struggle is where learning lives.

There is a reason that effortful recall improves retention, that grappling with uncertainty builds diagnostic reasoning, and that writing a reflection clarifies thinking. When ChatGPT does these things for you, it removes precisely the processes that make CPD valuable.

The paradox is that the more efficiently you use it, the less you may actually learn.

Where ChatGPT genuinely adds value

When used deliberately, ChatGPT becomes less of an answer engine and more of a thinking partner. Its strength lies in how it can reshape the way you engage with knowledge.

One of its most useful roles is in explaining complexity. Medicine is full of concepts that are understood in fragments—pathophysiology that never quite clicked, statistical ideas that remain slightly opaque, guidelines that feel memorised rather than internalised. ChatGPT can unpack these quickly, adapting its explanation to your level. It can move from first principles to nuance in a way that is difficult to achieve with static resources. This is not a replacement for primary sources, but it is an effective way to bridge gaps before or after deeper reading.
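
For instance, a prompt along these lines (purely illustrative) invites that kind of layered explanation:

  • “Explain why lactate rises in sepsis, starting from first principles, then add the nuances a medical registrar should understand.”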

It is also remarkably good at creating clinical scenarios. This is where it begins to shift from passive to active learning. You can ask it to generate a patient with sepsis who presents atypically, or a diagnostic dilemma where two plausible pathways compete. You can evolve the case, introduce conflicting data, or ask for deterioration over time. In doing so, you move from reading about medicine to practising it cognitively. The learning becomes dynamic, iterative, and closer to the reality of clinical work.
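
A starting prompt might look something like this (illustrative; adapt the specifics to your own practice):

  • “Create a case of an immunosuppressed patient with early sepsis who presents atypically. Reveal findings only when I ask for them, and let the patient deteriorate if my management lags.”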

Perhaps more interestingly, it can be used to reflect on your own practice, provided you do the initial work yourself. If you feed in a case you managed—what you thought, what you did, where you felt uncertain—you can then ask it to explore cognitive biases, alternative approaches, or missed considerations. In this role, it functions less as a writer and more as a mirror. It reflects your thinking back to you, often highlighting patterns that are easy to miss in isolation.

It can also challenge you. Present a diagnosis and ask it to argue against it. Outline a management plan and ask what a cautious consultant might worry about. This kind of adversarial questioning is difficult to generate internally, particularly when you are already anchored to a conclusion. Used well, it encourages a form of diagnostic humility that is hard to teach.

Another underused approach is to reverse the usual dynamic entirely. Instead of asking for answers, ask for questions. Request viva-style prompts, escalating in difficulty, or ask it to probe your understanding of a topic you think you know well. This turns ChatGPT into a tool for retrieval practice, which remains one of the most effective ways to consolidate knowledge.
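
One illustrative phrasing:

  • “Act as a viva examiner on a topic I think I know well, say sepsis recognition. Ask me one question at a time, escalating in difficulty, and critique each answer before moving on.”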

Reflection, learning outcomes, and the temptation to outsource

There is a particular risk when it comes to reflective practice. It is entirely possible to ask ChatGPT to write a reflection on almost any clinical topic and receive something that appears thoughtful, structured, and complete. It will also be almost entirely devoid of personal insight.

Reflection is valuable because it forces you to articulate your own experience. It connects events to emotions, decisions to outcomes, and uncertainty to future behaviour. When that process is outsourced, the result may satisfy a requirement, but it does not change practice.

Case example: using ChatGPT well

A PGY4 medical registrar reviews a patient transferred overnight with presumed gastroenteritis. The patient is on methotrexate and recently received steroids, is afebrile, and has mild hypotension with a rising lactate. Initial management has been conservative.

After the shift, the registrar reflects that something felt “off” but was not acted on early. Rather than asking ChatGPT for “the diagnosis,” they input a brief summary of the case, including their own thinking at the time: a low suspicion for sepsis due to the absence of fever, and anchoring on a gastrointestinal source.

They then ask:

  • “What cognitive biases might be present in this scenario?”

  • “In an immunosuppressed patient, how can sepsis present atypically?”

  • “What early warning signs might have been underweighted?”

ChatGPT outlines anchoring bias and premature closure, and explains how immunosuppression can blunt febrile responses. It highlights hypotension and lactate as early red flags that may outweigh the absence of fever.

The registrar then writes their own reflection, focusing on how their diagnostic threshold for sepsis should shift in immunosuppressed patients. They use ChatGPT only to refine the structure of their reflection and identify any gaps in their reasoning.

A week later, they revisit the topic by asking ChatGPT to generate three similar cases with subtle early sepsis features, using these as self-testing scenarios.

The learning is not in the answers provided, but in how the tool was used to interrogate thinking, challenge assumptions, and reinforce a change in practice.

A better approach, illustrated by the case above, is slower but more meaningful. Write your own rough reflection first, even if it is incomplete or poorly structured. Then use ChatGPT to refine it. Ask it to clarify your thinking, identify gaps, or suggest areas you have not explored. In this model, the substance remains yours, and the tool simply improves its expression.
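
An example of the kind of request that keeps the substance yours (illustrative wording):

  • “Here is my rough reflection on a case I managed. Do not rewrite it. Point out gaps in my reasoning, questions I have not asked myself, and areas I have left unexplored.”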

The same principle applies to learning outcomes. They should emerge from what you have actually learned, not what sounds appropriate in retrospect.

Learning how to ask better questions

An overlooked skill in using ChatGPT is the ability to prompt effectively. Vague questions produce generic answers. Specific, context-rich questions produce something far more useful.

There is a marked difference between asking for an explanation of sepsis and asking why sepsis may present without fever in a patient on methotrexate and recent steroids. The latter forces the model to engage with nuance, and in doing so, it pushes your own understanding further.
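
Set side by side, the difference is stark (both phrasings illustrative):

  • “Explain sepsis.”

  • “Why might sepsis present without fever in a patient on methotrexate who recently received steroids, and which early signs should I weight more heavily instead?”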

This becomes an iterative process. You ask, refine, challenge, and deepen. In that sense, the quality of your CPD becomes directly linked to the quality of your questions.

Integrating it into real CPD practice

The most effective use of ChatGPT is not as a standalone activity, but as something embedded into your existing workflow. After a shift, it can help you unpack a case that stayed with you. Before a shift, it can generate a scenario that sharpens your thinking. Over time, it can help you identify patterns in your practice, recurring uncertainties, and areas for deliberate focus.

What matters is not the tool itself, but the way it shapes your engagement with your own work.

A tool that reflects your intent

ChatGPT is neither a shortcut nor a solution. It is an amplifier. Used well, it accelerates insight, deepens understanding, and makes learning more responsive to the realities of clinical practice. Used poorly, it produces surface knowledge and a dangerous sense of competence.

The difference lies in whether you use it to replace thinking or to provoke it.

The goal of CPD was never to accumulate hours or complete tasks. It was to become a better clinician. ChatGPT can help with that—but only if you remain firmly in control of the thinking.
