Hey Clinical Inquiry Fellows!
I wanted to take a moment to explain my absence, update you on the latest developments in Stats4PT, and share something exciting: a new tool to help you build and refine causal models directly from research papers. I can't say that these projects have filled all of my time since the page on causality, but they have filled at least some of it. I've started teaching again (a heavy semester for me until Spring break, with PCM5 and CI2), and I've been seeing more patients (my practice these days is limited to people with suspected dysautonomia, and I've been expanding the testing I provide in my lab). Excuses…
Delay in the Next Stats4PT Page
If you’ve been following along, you know that the next page for Stats4PT was supposed to focus on Bayesian Reasoning in Clinical Decision-Making. But as I worked through the content, it became clear that one page wasn’t enough. Bayesian reasoning isn’t just another statistical tool; it’s a framework that underpins clinical decision-making, research methodology, and causal inference, and it connects quite nicely with critical realism.
Instead of a single page, Bayesian reasoning is now a core theme in Stats4PT, split into three sections:
Bayesian Reasoning for Clinical Decision-Making – Why Bayesian thinking aligns with clinical reasoning, how it contrasts with frequentist thinking, and why it’s the foundation for real-world clinical inference.
Bayesian Applications in Research and Evidence Synthesis – How Bayesian methods enhance research, clinical trials, meta-analyses, and systematic reviews, particularly in causal-critical-realist reviews (CCRRs).
Bayes’ Theorem in Clinical Decision-Making – A practical deep dive into how Bayesian inference helps refine diagnosis, treatment selection, and prognosis.
This restructuring makes Bayesian reasoning a pillar of Stats4PT, rather than just a single topic. But it also means I need to reorganize some content before publishing the first page. Thanks for your patience while I get this right!
Introducing Models4PT GPT: A Tool for Building Causal Models from Research
As part of this expansion, I’ve been working on another initiative that will directly benefit you: Models4PT GPT—a tool designed to help researchers and clinicians extract causal models from research papers and structure them in DAGitty (and eventually in Models4PT’s own platform).
Here’s what it does:
Takes a research paper and helps extract the key variables (exposures, outcomes, confounders, mechanisms).
Generates DAGitty code, allowing you to visualize the causal model in an editable format.
Supports Clinical Inquiry Fellows in structuring research findings into causal diagrams that better reflect mechanisms, latent variables, and contextual factors.
Instead of just reading a study and thinking about its implications, you can map it out, analyze it, and refine it—turning research findings into structured knowledge that improves both research planning and clinical reasoning.
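To give you a feel for what the generated output looks like, here is a small sketch of DAGitty code for a hypothetical study. The variable names here are made up for illustration, not drawn from any particular paper: an exposure, an outcome, a measured confounder, and a latent mechanism.

```
dag {
  "Manual Therapy" [exposure]
  "Pain Reduction" [outcome]
  "Baseline Severity"
  "Descending Modulation" [latent]
  "Baseline Severity" -> "Manual Therapy"
  "Baseline Severity" -> "Pain Reduction"
  "Manual Therapy" -> "Descending Modulation"
  "Descending Modulation" -> "Pain Reduction"
}
```

Pasting code like this into DAGitty’s model editor renders an editable diagram, so you can add, remove, or reroute arrows as your reading of the study’s mechanisms evolves.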
Here is a walkthrough video for using Models4PT GPT.
What’s Next?
I’ll be finalizing Bayesian Reasoning for Clinical Decision-Making; keep an eye out for that page soon.
The Stats4PT homepage will be updated to reflect the new roadmap.
I’ll be collecting feedback on Models4PT GPT; if you start using it, let me know how it works for you!
This is all part of a bigger effort to make causal reasoning, statistical inference, and Bayesian thinking more accessible for clinicians and researchers.
Thanks for your patience, and I’m excited to share these next steps!
Sean,
I’m looking forward to hearing about Bayesian reasoning and had started writing a question to you on this topic. However, since writing it down I’ve come up with a solution myself. I’m curious whether my understanding is in concert with yours, and I figure it might be useful for the rest of the fellows (or, in our current cohort, fellas). Here it is:
I understand the importance of the base rate, but I’ve been bothered when applying Bayesian reasoning in cases where the reported prevalence of a disease or injury spans a large range. My observation is that this is a problem in conditions with poorer diagnostic agreement, often those labeled as syndromes, and there tend to be a lot of these in musculoskeletal injuries. A good example is SIJ pain, where the estimated prevalence is troublingly variable: it is said to be between 15% and 30% of the population (Cohen, 2005).
My solution was to use a Fagan nomogram, assign a made-up special test a +LR of 7, and plug it in using pre-test probabilities of 15% and 30%. I then cheated and gave this to ChatGPT to get the exact numbers…
“let's say I have a special test with a positive likelihood ratio (+LR) of 7 that I use on two diseases. Disease A has a prevalence of 15% of the population and disease B has a prevalence of 30% of the population. Could you calculate the post-test probabilities for each of these scenarios?”
ChatGPT did the math for me (and showed the work), providing a final answer of:
Disease A (Prevalence 15%) → Post-test probability: 55.3%
Disease B (Prevalence 30%) → Post-test probability: 75%
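The nomogram arithmetic above is just Bayes’ rule on the odds scale, so it’s easy to check without ChatGPT. A minimal sketch, using the same hypothetical test with +LR = 7:

```python
def post_test_probability(pre_test: float, lr: float) -> float:
    """Convert a pre-test probability to a post-test probability via a likelihood ratio."""
    pre_odds = pre_test / (1 - pre_test)  # probability -> odds
    post_odds = pre_odds * lr             # Bayes' rule on the odds scale
    return post_odds / (1 + post_odds)    # odds -> probability

# The two ends of the reported SIJ prevalence range, with +LR = 7:
for prevalence in (0.15, 0.30):
    print(f"pre-test {prevalence:.0%} -> post-test "
          f"{post_test_probability(prevalence, 7):.1%}")
# pre-test 15% -> post-test 55.3%
# pre-test 30% -> post-test 75.0%
```

Running the whole prevalence range through a function like this shows directly how much the uncertainty in the base rate propagates into the post-test probability.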
So how should I use this data? Take an average? Use 15%? Use 30%? Continue to question the existence of SIJ pain?
This led me to a more philosophical question: does the published base rate (the empirical domain) reflect the actual and real domains?
Thanks!
Tim
Cohen SP. Sacroiliac joint pain: a comprehensive review of epidemiology, diagnosis, and treatment. Anesth Analg. 2005;101(5):1440-1453.
Hi Sean,
I just used Models4PT GPT, following the exact instructions from the video. Super easy to use!
Thanks!