Understanding Causality in Clinical Research
What is Causality?
At its core, causality refers to the relationship between a cause and its effect—where one event or condition (the cause) brings about a change in another event or condition (the effect). In the simplest terms:
A cause precedes its effect (discussed later as temporality in Hill's framework)
A cause changes the likelihood of its effect occurring, often described probabilistically (discussed later as strength of association in Hill's framework)
Causality goes beyond observing patterns or correlations. It asks why and how something happens, distinguishing what works from why it works. This distinction is critical in both research and clinical reasoning.
I encountered this challenge firsthand at an APTA conference in 1995. A speaker declared that physical therapists should stop using traction as an intervention because the research showed it was "not effective." At first glance, the conclusion seemed compelling—if the outcome research showed no benefit, why continue the practice?
But as I reflected further, I realized much of this research failed to ask why and when traction might work. The studies did not explore the causal mechanisms that might explain how traction could be effective for specific patients under particular conditions. This experience underscored the need to focus on the details of causality—the mechanisms, variables, and contexts that influence an intervention’s effectiveness. Without this deeper understanding, we risk discarding potentially valuable interventions prematurely.
Bhaskar’s Domains: Expanding Our Understanding of Causality
When thinking about causality, it’s essential to recognize that the data we collect (the Empirical domain) is only one part of the picture. Bhaskar’s critical realist framework introduces three domains:
Empirical: What we observe or measure directly, such as outcomes like quality of life measured by a questionnaire.
Actual: The mechanisms and processes that produce observable outcomes, often unobserved or latent.
Real: The deeper structures that shape mechanisms, such as social determinants of health or inequities in access to care.
In outcomes-based research, the focus often remains on the Empirical domain, while variables in the Actual and Real domains—the unobserved mechanisms and structures—are ignored. For example, studies dismissing traction as ineffective failed to account for the mechanisms by which traction might benefit certain patients or the conditions under which it could succeed. These unobserved variables reside in the Actual or Real domain, and their omission can limit our understanding of an intervention’s true value.
This distinction highlights the importance of causal models that include both observed (Empirical) and unobserved (Actual and Real) variables. By considering all three domains, we move closer to understanding causality in open systems like clinical practice, where many interacting factors influence outcomes.
The Role of Causality in Research
Causality plays a foundational role in research by helping us move beyond simple observations to understand why and how interventions produce specific outcomes. Effective research incorporates mechanisms, recognizes the influence of unobserved factors, and leverages statistical inference to address variables across Bhaskar’s domains: Empirical, Actual, and Real.
Incorporating Mechanisms into Research
Research that focuses solely on the Empirical domain—the observable data—often overlooks the mechanisms (Actual) and deeper structures (Real) that influence outcomes.1 To fully understand causality, researchers must consider:
Mechanisms (Actual Domain): What processes drive the observed outcomes? For example, in studies on traction, mechanisms like compressive forces and their effect on back pain should be investigated.
Structures (Real Domain): What broader factors shape these mechanisms? For example, social determinants of health or inequities in healthcare access might affect whether patients benefit from an intervention.
By accounting for these domains, research can move beyond "what works" to explore "why it works" and under what conditions. These domains map onto Hill's criteria for evaluating causal claims, particularly plausibility and coherence.
Designing Studies to Address Mechanisms
Incorporating mechanisms and unobserved variables into research often requires thoughtful design that acknowledges the stratified nature of reality. Bhaskar’s critical realist principle that ontology determines epistemology reminds us that the methods we use to study phenomena (our epistemology) must align with the nature of those phenomena (their ontology). Mechanisms and structures within the Actual and Real domains may not always be feasible to study directly in clinical practice but can often be explored through alternative methods and research strata.
Leveraging Stratification in Research
While outcome studies in clinical practice focus on the Empirical domain, mechanisms research can be conducted at other strata, providing better explanations for what we observe in clinical settings. For example:
Animal Models or Lab-Based Research: Studies using controlled environments allow us to investigate specific mechanisms, such as the effect of compressive forces on spinal tissues in understanding back pain.
Simulation Studies: Computational or physical models can simulate mechanisms, such as fluid dynamics or mechanical stress, that influence outcomes.
Chemical or Physical Models: Experiments at the molecular or biomechanical level can reveal mechanisms underlying observed effects, such as the biochemical response to traction in tissue regeneration.
These approaches provide critical insights into the Actual domain (mechanisms driving outcomes) and the Real domain (structures shaping those mechanisms), enabling researchers to incorporate these insights into causal models and study designs.
Thoughtful Study Design in Clinical Research
To address mechanisms and unobserved variables in clinical research, researchers can take several steps:
Inclusion and Exclusion Criteria: Selecting participants based on specific characteristics helps control for variables in the Actual and Real domains. For example, a study on traction might include only patients with clear indications of compressive mechanisms underlying their back pain, improving the likelihood of detecting a causal effect.
Latent Variables: Statistical models can include latent variables to represent unobserved factors influencing outcomes. By inferring these variables indirectly, researchers can account for mechanisms and structures even when direct measurement is not feasible.
Mechanism-Specific Hypotheses: Developing hypotheses that explicitly test causal pathways or interactions improves the ability to understand how interventions work. For example, a hypothesis could explore how varying levels of tissue compression mediate the effects of traction.
Connecting Mechanisms to Causal Models
Incorporating mechanisms into research is not just about improving study design but also about enhancing causal models. Mechanisms research and carefully designed clinical studies together allow us to:
Build causal models that integrate empirical observations with inferred mechanisms and structures.
Test and refine causal models iteratively, incorporating insights from both mechanistic and outcome research.
By recognizing and addressing the stratified nature of reality, researchers can create more robust causal models that connect ontology to epistemology. These models not only improve research validity but also offer better tools for clinical reasoning and decision-making.
Statistical Inference and Causality
Statistical inference is a powerful tool for testing causal hypotheses and exploring variables in all three domains. For example:
Hypothesis Testing: Allows researchers to determine whether an observed effect is likely due to chance or a causal relationship.
Latent Variable Models: Enable the inclusion of unobserved factors (e.g., mechanisms or structures) in causal models.
Confidence Intervals: Provide insight into the precision of estimated effects, helping evaluate the strength of associations.
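The inferential tools above can be illustrated with a minimal sketch. The data below are simulated (group sizes, means, and scales are invented, not drawn from any study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical trial: pain scores (0-10) after intervention vs. control.
# All values are simulated for illustration only.
control = rng.normal(loc=6.0, scale=1.5, size=40)
treated = rng.normal(loc=5.0, scale=1.5, size=40)

# Hypothesis test: is the observed difference plausibly due to chance alone?
t_stat, p_value = stats.ttest_ind(treated, control)

# 95% confidence interval for the mean difference (simple normal approximation)
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"p-value: {p_value:.4f}")
print(f"mean difference: {diff:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```

The interval here uses a simple normal approximation; in practice, the study design and the underlying causal model determine which test and interval are appropriate.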
However, inference alone is not enough. Researchers must also interpret findings within the broader context of mechanisms and open systems, where variables interact dynamically across domains.
Causality in Population-Level Research
Causality also informs decision-making at the population level, guiding public health policies and resource allocation. For example:
If research demonstrates that a public health program reduces smoking rates and prevents lung cancer, policymakers can implement it broadly to maximize societal benefits.
Understanding causal mechanisms helps ensure that interventions target the appropriate populations and address the conditions under which they are most effective.
In population-level research, careful attention to mechanisms and unobserved variables ensures findings are generalizable while remaining grounded in causality.
Open Systems and Complexity
Clinical and public health research often operates in open systems, where multiple variables across the Empirical, Actual, and Real domains influence outcomes. These systems are dynamic and context-dependent, meaning:
An intervention may work in one context but not another due to unmeasured or latent factors.
Confounding variables must be carefully controlled to avoid drawing incorrect causal conclusions.
Confounding vs. Latent Variables
It is important to distinguish between confounding variables and latent variables in open systems, as they affect research and clinical reasoning in different ways:
Confounding Variables: Confounders bias results by creating spurious associations between variables. If a confounder is not accounted for, it can lead to incorrect conclusions about cause-effect relationships at both the population and individual levels.
Latent Variables: Latent variables, in contrast, do not directly bias results but represent unobserved factors that influence outcomes. Their presence often:
Increases apparent randomness or noise in the data, making it harder to detect strong cause-effect relationships.
Reduces the strength of association in statistical analysis, complicating causal inference.
Introduces uncertainty when applying findings to individual patients, as unobserved factors may vary across individuals and contexts.
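The contrast between confounders and latent variables can be made concrete with a small simulation (variable names and effect sizes are invented for illustration): a confounder manufactures an association where no causal link exists, while a latent variable leaves the effect intact but attenuates the observed association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# --- Confounding: Z causes both X and Y, inducing a spurious X-Y association.
Z = rng.normal(size=n)
X = Z + rng.normal(size=n)   # X has NO direct effect on Y
Y = Z + rng.normal(size=n)
spurious_corr = np.corrcoef(X, Y)[0, 1]   # clearly nonzero despite no causal link

# --- Latent variable: U influences Y2 but not X2; it adds noise, not bias.
U = rng.normal(size=n)
X2 = rng.normal(size=n)
Y2 = 0.5 * X2 + U + rng.normal(size=n)    # true effect of X2 on Y2 is 0.5
noisy_corr = np.corrcoef(X2, Y2)[0, 1]    # attenuated by the extra variability

print(f"spurious correlation from confounding: {spurious_corr:.2f}")
print(f"attenuated correlation with latent noise: {noisy_corr:.2f}")
```

In the first case, adjusting for Z removes the false association; in the second, no adjustment is possible, and the unmeasured U simply makes the true effect harder to detect, exactly the "increased apparent randomness" described above.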
The Role of Causal Models
Causal models help address these challenges by bridging the gap between what is observed (Empirical) and what operates in the background (Actual and Real domains). Research builds and refines these models to:
Integrate mechanisms and structures, making the complexities of open systems more manageable.
Connect unobserved variables to observed outcomes through inference or latent variable modeling.
Provide a framework for testing and refining hypotheses iteratively.
By acknowledging the stratification of reality, researchers create models that are not only more accurate but also more relevant for clinical reasoning, where variability and uncertainty are the norms.
Transition to Clinical Reasoning
The principles of causality extend beyond research into clinical reasoning, where individual variability and the complexities of open systems challenge the direct application of population-based findings. The causal model serves as the critical bridge between these domains, connecting the knowledge built through research to the practical reasoning used in patient care.
In clinical inquiry, research helps build and refine the causal model, pruning out misconceptions and enhancing our understanding of cause-effect relationships. Practice, in turn, applies the model to real-world scenarios, adapting it to the complexities of individual patient care.
However, for research to create models that are truly useful in practice, it must:
Incorporate unobserved variables from the Actual and Real domains, such as mechanisms or latent variables.
Explicitly consider how these variables influence observed outcomes to improve the model’s relevance for open-system scenarios.
Acknowledge that while all models are wrong, as George Box famously stated, some are useful—especially when they integrate Actual and Real domains thoughtfully.
This iterative relationship between research and practice ensures that causal models evolve to better reflect the complexities of clinical care. Next, we’ll explore how clinicians use these models to navigate individual variability and make evidence-informed decisions.
The Role of Causality in Clinical Reasoning
For clinicians, understanding causality is crucial. All clinical decision-making is founded on causal assumptions—a concept I have personally tested for almost two decades. This process has included explicit examination of causal reasoning in every clinical scenario I’ve encountered, whether in my own practice, in collaboration with colleagues, or while mentoring students. Despite hundreds, perhaps thousands, of challenges to refute this claim, I have yet to encounter a justifiable counterexample.
While research provides evidence of cause-effect relationships at the population level, clinical reasoning involves applying this evidence to individual patients. All clinical reasoning is rooted in cause-effect relationship assumptions. These assumptions form the foundation of our "practice knowledge," which is developed either through research or experience. As I’ve previously explored in my thoughts on Statistical Inference, clinical experiences are simply unstructured observations from which we draw unstructured inductive inferences, while research is structured observation that applies structured induction through statistical inference.
This iterative process of developing and applying clinical causal models requires navigating the inherent complexity of open systems, where individual variability and contextual factors challenge the direct application of population-level findings. As I’ve previously noted (Collins, 1995, 2018, and again right now), causality provides the framework for reasoning through these challenges, bridging the gap between evidence and individual patient care.
1. Connecting Evidence to Practice
Causal reasoning bridges the divide between research and clinical practice. It enables clinicians to interpret research findings critically and apply them in context-sensitive ways, adapting interventions to the unique circumstances of each patient.
For example, a clinical trial may demonstrate that a specific intervention reduces pain in patients with chronic low back pain. However, clinicians must ask critical questions when applying this evidence:
Does the patient's health profile (e.g., comorbidities, psychosocial factors, or previous treatment history) align with the research population?
Do the mechanisms underlying the intervention, as suggested by the research, align with the specific causes of the patient's pain?
In essence, clinicians use causal reasoning to tailor research findings to their patients, ensuring that interventions are not only effective in theory but also relevant and practical for the individual.
2. Navigating Complexity
Clinical practice occurs in open systems, where numerous interacting variables dynamically influence outcomes. These variables span across the Empirical, Actual, and Real domains. Understanding causality is critical for navigating these complexities, particularly when:
Patient-Specific Factors: Individual characteristics (e.g., genetic predispositions, comorbidities, or lifestyle choices) shape how interventions work.
Contextual Influences: Broader determinants of health, such as socioeconomic status, access to care, or environmental conditions, may amplify or diminish the effects of an intervention.
For example, an intervention proven effective in a controlled research setting may fail in practice if the patient lacks the resources to adhere to the treatment plan. Clinicians must reason through these contingencies to adapt evidence to the unique conditions of their patients.
Causality, in this context, is often probabilistic and contingent. It is not enough to ask whether an intervention works; clinicians must also consider why it works, for whom, and under what conditions. By reasoning through these interactions, clinicians can anticipate outcomes and adjust their approaches accordingly.
3. Synthesizing Experience and Evidence
Causality is the bridge between empirical research and practical experience. Clinicians continuously refine their causal models by synthesizing insights from both sources:
From Research: Population-level studies provide structured evidence for general cause-effect relationships.
From Practice: Patterns observed in clinical encounters reveal nuances and exceptions that may challenge or extend research findings.
For example:
A clinician may notice that a treatment validated in research is particularly effective for a subset of patients with unique characteristics not explicitly studied.
Conversely, they might encounter a patient who does not respond as expected, prompting them to consider unaccounted-for mechanisms or contextual factors.
This synthesis allows clinicians to refine their causal reasoning iteratively, improving their ability to apply evidence-informed decisions in complex, individualized scenarios.
Transition to Tools for Causality
The iterative relationship between research and clinical practice ensures that causal models evolve to better reflect the complexities of individual patient care. In the next section, we’ll explore tools and frameworks—such as causal models—that help clinicians and researchers work together to navigate these complexities and refine causal reasoning in both domains.
The Role and Need for Causal Models
Understanding causality is essential in both research and clinical practice. However, researchers and clinicians alike face significant challenges in uncovering and applying cause-effect relationships:
In Research: Confounding, bias, and the probabilistic nature of causes can obscure true relationships, making it difficult to design studies that yield valid and generalizable findings.
In Practice: Open systems with individual variability, contextual influences, and dynamic interactions make it challenging to apply population-level evidence to individual patients.
Why Causality Matters in Clinical Research
In clinical research, causality underpins the ability to determine why interventions work, for whom, and under what conditions. Two classifications of causes are particularly relevant:
Definite Causes: These represent "law-like" behaviors in which the probability of an effect given the cause is certain (P(E|C) = 1). Examples are rare in clinical contexts and are typically found in fields like physics or chemistry.
Probable Causes: These dominate clinical practice, where the probability of an effect given the cause falls between 0 and 1 (0 < P(E|C) < 1). Probable causes are contingent on numerous variables, including confounders, mediators, and interacting factors across the Empirical, Actual, and Real domains.
Hill's strength of association criterion helps evaluate whether a cause is probable by examining how strongly the cause is associated with the effect, but this is only one of several factors in determining causality.
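As a rough sketch, P(E|C) for a probable cause can be estimated directly from data. The exposure and outcome probabilities below are invented for illustration; a definite cause would instead yield P(E|C) = 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000

# Hypothetical probable cause: exposure C raises P(E) from 0.2 to 0.6 (invented).
C = rng.random(n) < 0.5
E = np.where(C, rng.random(n) < 0.6, rng.random(n) < 0.2)

p_e_given_c = E[C].mean()        # estimate of P(E|C): between 0 and 1
p_e_given_not_c = E[~C].mean()   # estimate of P(E|not C)
relative_risk = p_e_given_c / p_e_given_not_c

print(f"P(E|C) = {p_e_given_c:.2f}, P(E|not C) = {p_e_given_not_c:.2f}")
print(f"relative risk = {relative_risk:.1f}")
```

Because 0 < P(E|C) < 1, knowing the cause is present changes the odds of the effect without guaranteeing it, which is precisely why confounders, mediators, and interacting factors across the domains matter.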
Hill’s Framework for Causation
In 1965, Sir Austin Bradford Hill proposed a framework2 to evaluate whether an observed association between two variables reflects a causal relationship. Hill’s criteria remain widely used in research and practice, especially for understanding causality in complex systems where certainty is often elusive. These criteria emphasize that causation is not determined by a single factor but by a constellation of evidence. Here is an overview of the key elements:
Strength of Association:
Stronger associations are more likely to be causal. For example, the link between smoking and lung cancer is supported by robust statistical evidence of a strong correlation. However, weaker associations should not be dismissed outright, as they may still reflect causality when other factors are considered.
Consistency:
If similar studies in different populations, contexts, or times replicate the findings, the likelihood of a causal relationship increases. Consistency supports the idea that the association is not due to random chance or specific conditions.
Specificity:
A causal relationship is more plausible if a specific cause leads to a specific effect. For example, exposure to a certain pathogen causing a disease strengthens causation. However, this criterion is often less useful in clinical research, where multifactorial causes dominate.
Temporality:
For a cause to produce an effect, it must precede the effect. This criterion is essential and non-negotiable; it establishes the timeline of causality.
Biological Gradient (Dose-Response Relationship):
A gradient in the association—where increasing levels of exposure result in stronger effects—supports causality. For example, higher levels of radiation exposure are associated with greater risk of radiation-induced illness.
Plausibility:
The relationship should align with existing biological or mechanistic knowledge. If there is a plausible explanation for how the cause produces the effect, this strengthens the case for causality.
I’m considering whether plausibility is associated with mechanisms that Bhaskar might consider “actual”.
Coherence:
The proposed causal relationship should not contradict existing knowledge or theories. It should fit within the broader framework of what is known about the phenomenon.
I’m considering whether coherence is associated with structures that Bhaskar might consider “real”.
Experiment:
Evidence from controlled experiments, such as randomized controlled trials (RCTs), strengthens causal claims. However, this is often infeasible or unethical in many clinical and public health contexts.
Analogy:
When similar relationships are established in analogous situations, they can provide indirect support for causality. For example, the understanding that one drug in a class causes an effect may suggest that a similar drug in the same class will have a similar effect.
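The biological gradient criterion above lends itself to a simple trend check. The dose-response data below are simulated with an invented slope, purely to show the shape of the analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical dose-response data: four exposure levels, 50 subjects each.
dose = np.repeat([0, 1, 2, 3], 50)
response = 0.8 * dose + rng.normal(scale=1.0, size=dose.size)  # invented true slope 0.8

# A monotone trend across dose levels supports a biological gradient.
slope, intercept, r, p, se = stats.linregress(dose, response)
print(f"estimated slope: {slope:.2f}, p-value: {p:.3g}")
```

A positive, precisely estimated slope is consistent with a dose-response relationship; it supports, but does not by itself establish, causation, which is why Hill's other criteria are weighed alongside it.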
Applying Hill’s Framework to Clinical and Research Contexts
Hill’s criteria provide a flexible framework for evaluating causation but are not intended to serve as a checklist. In clinical and public health research:
Strength of Association: Often evaluated statistically through measures like effect size or relative risk.
Temporality: Ensured by study design (e.g., prospective cohorts or experimental trials).
Plausibility: Enhanced through mechanisms research or biological models.
In clinical practice, these criteria guide causal reasoning when interpreting evidence and applying it to individual patients. For example:
A clinician might use temporality and biological plausibility to consider whether a worsening symptom is causally linked to a new medication.
Understanding the biological gradient can help adjust intervention doses to achieve optimal outcomes.
By combining Hill’s criteria with tools like causal models, researchers and clinicians can systematically explore cause-effect relationships across the Empirical, Actual, and Real domains, making better-informed decisions in both research and practice.
The Role of Causal Models in Research and Clinical Reasoning
Causal models—such as Directed Acyclic Graphs (DAGs)—are powerful tools for visualizing and analyzing cause-effect relationships. They help address the challenges faced in both research and practice:
In Research:
Identifying Confounders: Causal models clarify relationships between variables, highlighting confounding factors that need to be controlled.
Testing Hypotheses: By mapping pathways and mechanisms, these models enable researchers to design studies that explore the actual mechanisms driving observed outcomes.
In Clinical Reasoning:
Navigating Open Systems: Causal models provide a framework for reasoning through the complexities of patient-specific and contextual factors.
Understanding Probabilities (P(E|C)): These models clarify the probability of an effect given a cause in dynamic and context-sensitive systems, guiding decision-making.
Causal models thus act as a bridge between research and practice. They evolve iteratively, integrating insights from both domains to refine our understanding of causality and improve clinical decision-making.
Tools for Exploring Causality
Several tools are available to aid in causal reasoning:
Directed Acyclic Graphs (DAGs): These graphical tools are invaluable for identifying confounders, visualizing pathways, and testing hypotheses in research.
DAGitty: https://www.dagitty.net
Models4PT: This forthcoming tool aims to expand causal reasoning by integrating variables from Bhaskar’s Empirical, Actual, and Real domains (among other expansions). Models4PT will enable clinicians and researchers to create, share, refine, and integrate causal models collaboratively within an open source repository, bridging the gap between evidence and individualized care. It is in development, and I’m hoping to have a beta version by the end of March 2025 for use by clinical inquiry fellows and PSU DPT students. Its development is only possible due to the GNU open source licensing of DAGitty. At a minimum, it will allow interested individuals to create, share, refine, and integrate models as part of a PT-focused repository (which has been a goal of mine since 2014; it’s about time I work on it!).
Models4PT: https://github.com/scollinspt/Models4PT
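As a rough sketch of one thing DAG tools like DAGitty automate, the short program below screens a toy traction DAG for common causes of a treatment and an outcome. The graph structure is invented for illustration, and this is a simplistic heuristic; a full backdoor analysis, as DAGitty performs, also handles colliders and mediators properly:

```python
# A toy DAG for a traction study, stored as parent sets (structure is invented):
#   Severity -> Traction, Severity -> Pain   (Severity is a common cause)
#   Traction -> Decompression -> Pain        (a hypothesized mechanism/mediator)
parents = {
    "Traction": {"Severity"},
    "Pain": {"Severity", "Decompression"},
    "Decompression": {"Traction"},
    "Severity": set(),
}

def ancestors(node):
    """All nodes with a directed path into `node`."""
    found, stack = set(), list(parents[node])
    while stack:
        p = stack.pop()
        if p not in found:
            found.add(p)
            stack.extend(parents[p])
    return found

# Simplistic confounder screen: common causes of treatment and outcome.
confounders = ancestors("Traction") & ancestors("Pain")
print(f"candidate adjustment set: {sorted(confounders)}")  # ['Severity']
```

Here the screen flags Severity for adjustment while correctly leaving Decompression (the mechanism of interest) out of the adjustment set, since conditioning on a mediator would block the very pathway the study is trying to detect.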
Critical Realist Lens on Causality
A critical realist approach expands causal reasoning by considering the deeper mechanisms and structures that underlie observed effects. While research often focuses on the Empirical domain (what we can observe), understanding causality requires incorporating the Actual domain (mechanisms) and the Real domain (structures). This expanded perspective helps clinicians and researchers:
Move beyond "what works" to uncover "why it works."
Address unobserved variables (e.g., latent variables, contextual factors) that influence outcomes.
The Role of Causal-Critical-Realist Reviews (CCRRs)
Critical realist reviews provide a structured way to evaluate evidence across the Empirical, Actual, and Real domains, emphasizing the interplay of mechanisms, structures, and observed data. Unlike traditional systematic reviews that often prioritize statistical findings, CCRRs seek to:
Identify mechanisms underlying observed effects.
Explore contextual factors that condition the expression of these mechanisms.
Highlight unobserved variables and structures that may influence outcomes but are difficult to measure directly.
By integrating these domains, CCRRs help bridge the gap between research and practice, providing a more comprehensive framework for causal reasoning in open systems. For example:
A CCRR on traction might explore not only its population-level effectiveness but also the mechanisms (e.g., decompression of spinal structures) and contextual factors (e.g., patient-specific conditions like nerve impingement) that determine its success in individual cases.
Models4PT: A Tool for CCRRs
As part of its development goals, Models4PT aims to simplify and enhance the process of creating and using causal-critical-realist reviews by providing tools to:
Integrate domains: Models4PT supports the inclusion of variables from the Empirical, Actual, and Real domains in causal models.
Map mechanisms and contexts: Users can visualize and analyze the dynamic interactions between causes, mechanisms, and effects.
Facilitate collaboration: Researchers and clinicians can collaboratively refine causal models to better reflect the complexities of clinical practice.
By aligning CCRRs with practical tools like Models4PT, researchers and clinicians can advance a critical realist approach to evidence synthesis and application. This approach not only improves causal reasoning but also ensures that the insights gained from research are more relevant and actionable in clinical contexts.
Why a Critical Realist Lens Matters
Incorporating a critical realist perspective into causal reasoning enables a deeper understanding of clinical phenomena by:
Linking observable data to underlying mechanisms and structures.
Recognizing the complexities of open systems, where multiple interacting factors influence outcomes.
Bridging the gap between research and practice, helping clinicians apply evidence in ways that account for individual variability and contextual nuances.
By adopting tools like CCRRs and Models4PT, we can advance both research and practice, moving closer to a more comprehensive understanding of causality in clinical care.
The Role of Causal Models in Study Design and Statistical Analysis
Causal reasoning and causal models are foundational not only for bridging research and clinical practice but also for determining how research is conducted and interpreted. The causal model that underpins a study is critical for:
Designing the Study Protocol:
Every research study is designed to explore, validate, or refine a causal model. The variables included in the study, the hypotheses being tested, and the methods used to control for confounders are all derived from the causal model.
For example, a study testing the effectiveness of traction for low back pain must include:
Variables that represent the mechanisms through which traction is hypothesized to work (e.g., decompression of spinal structures).
Controls for confounders that might obscure the causal relationship (e.g., other concurrent treatments).
Choosing the Right Statistical Methods:
The causal model dictates which statistical methods are appropriate. For instance:
Randomized Controlled Trials (RCTs): The causal model assumes that randomization eliminates confounding by balancing unmeasured variables between groups, allowing the focus to be on the relationship between the intervention and the outcome.
Regression Analysis: When randomization is not feasible, regression models help adjust for confounders identified in the causal model, ensuring that the effect of the intervention is isolated as much as possible.
Mediation Analysis: Used to explore causal pathways by identifying intermediate variables that link a cause to its effect.
Without a clear causal model, researchers risk misapplying statistical techniques or misinterpreting their results.
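The stakes of model-guided analysis can be seen in a small simulation (the scenario and effect sizes are invented): when a confounder, here a hypothetical "severity" variable, drives both treatment assignment and outcome, the naive comparison is badly biased, while a regression that includes the confounder named by the causal model recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Invented scenario: severity confounds treatment and outcome.
severity = rng.normal(size=n)
treatment = (severity + rng.normal(size=n)) > 0            # sicker patients get treated more
outcome = 1.0 * treatment - 2.0 * severity + rng.normal(size=n)  # true effect = 1.0

# Naive estimate (ignores the confounder): treatment appears harmful.
naive = outcome[treatment].mean() - outcome[~treatment].mean()

# Regression adjustment: include severity, as the causal model dictates.
X = np.column_stack([np.ones(n), treatment.astype(float), severity])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = coef[1]   # close to the true effect of 1.0

print(f"naive estimate: {naive:.2f}, adjusted estimate: {adjusted:.2f}")
```

The naive difference here is strongly negative even though the true effect is positive, a reminder that the statistical method is only as sound as the causal model that selected the variables.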
Interpreting Statistics in Context:
Statistical results are only meaningful when interpreted within the framework of a causal model. For example:
A statistically significant association does not imply causation unless the model accounts for potential confounders and mechanisms.
Confidence intervals and p-values should be viewed as tools for understanding uncertainty in population estimates from the sample (not uncertainty in one person's response) in the context of the causal relationships being tested, not as standalone indicators of significance.
Exploring Probabilities in Causal Models:
Probabilities such as P(E|C)—the probability of an effect given a cause—are central to both causal models and statistical inference. These probabilities:
Inform how strongly a given cause is associated with its effect (in the population).
Depend on the structure of the causal model, including how confounders and latent variables are incorporated.
For example, the probability that traction reduces back pain depends not only on the empirical evidence but also on unobserved factors (e.g., the Actual and Real domains), which may increase variability in the results.
Refining Causal Models Through Statistics:
Statistical analysis is not just a means to test hypotheses; it is a tool to refine causal models. Results from research studies contribute to updating and improving the model, ensuring it reflects the complexities of open systems.
Practical Implications for Researchers and Clinicians
Causal models are not just abstract constructs—they directly influence every aspect of research and its application in clinical practice:
For Researchers: The causal model determines how a study is designed, which variables are measured, and how the data are analyzed.
For Clinicians: Understanding the causal model behind the research helps interpret the findings and apply them effectively to patient care.
By aligning study protocols and statistical methods with the underlying causal model, researchers and clinicians ensure that their work not only generates meaningful insights but also bridges the gap between evidence and practice.
Examples
Research
As an example, I'm going to dig into the study below. It was the first study in my "Cardiopulmonary PT" Zotero library that I hadn't yet read. Providing it here as an example is not an endorsement; in fact, as I read it, I realized it holds some lessons we can use in upcoming posts. Here I focus on its basic causal claims from a modeling and statistics perspective. I do not yet dig into whether I believe those claims (using Hill's framework).
Chand, Vaish H. Effect of diaphragmatic breathing, respiratory muscle stretch gymnastics and conventional physiotherapy on chest expansion, pulmonary function and pain in patients with mechanical neck pain: A single group pretest-posttest quasi-experimental pilot study. Journal of Bodywork and Movement Therapies. 2023;36:148-152. doi:10.1016/j.jbmt.2023.07.004
Abstract
Background: There is evidence that mechanical neck pain results in respiratory dysfunction. Physiotherapy management for mechanical neck pain is well documented but the evidence regarding inclusion of breathing strategies to improve pulmonary functions in mechanical neck pain patients is scarce.
Objective: To investigate the combined effect of diaphragmatic breathing, respiratory muscles stretch gymnastics (RMSG) and conventional physiotherapy on chest expansion, pulmonary function and pain in patient with mechanical neck pain.
Method: Thirteen patients with mechanical neck pain (18–35 years) with neck pain history of ≥three months and NPRS (numeric pain rating scale) score ≥3 were recruited for this single-group pretest-posttest quasi-experimental pilot study. Informed consent was taken from all participants. After initial screening and assessment, diaphragmatic breathing, RMSG (5 patterns) and conventional physiotherapy (hot pack and TENS for 10 min) were given for one week. Chest expansion, spirometry (FEV1, FVC, FEV1/FVC, PEFR), NDI (neck disability index) and NPRS were assessed on baseline and after one week following the intervention.
Results: The normality of data was tested by using Shapiro-wilk test and the data was found to be normally distributed. Paired t-test was used to compare the baseline and post intervention values. Diaphragmatic breathing, RMSG and conventional physiotherapy had significant effect on chest expansion, FEV1, NPRS and NDI in patients with mechanical neck pain.
Conclusion: The rehabilitation strategies should emphasize breathing exercises to improve the lung function and pain scores in addition to conventional physiotherapy in rehabilitation of mechanical neck pain patients.
Here is a DAG based on this study. It is based on the empirical variables measured in the study upon which statistical tools were used.
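Since the figure itself can't be reproduced in text, here is a plain-text sketch of my reading of that empirical DAG (variable names are mine, not the authors'): the combined intervention points directly at each measured outcome, with no edges among the outcomes themselves. A small depth-first-search check confirms the graph has no cycles, which is what makes it a DAG.

```python
# My reading of the study's empirical causal model (variable names are mine,
# not the authors'): the combined intervention points at each measured outcome,
# with no edges among the outcomes themselves.
dag = {
    "CombinedIntervention": ["ChestExpansion", "FEV1", "NPRS", "NDI"],
    "ChestExpansion": [],
    "FEV1": [],
    "NPRS": [],
    "NDI": [],
}

def is_acyclic(graph):
    """Depth-first search for a back edge; no back edge means the graph is a DAG."""
    visiting, done = set(), set()
    def visit(node):
        if node in done:
            return True
        if node in visiting:          # back edge -> cycle
            return False
        visiting.add(node)
        for child in graph.get(node, []):
            if not visit(child):
                return False
        visiting.discard(node)
        done.add(node)
        return True
    return all(visit(n) for n in graph)

print(is_acyclic(dag))  # True: a valid DAG
```

Note how sparse this graph is: nothing connects the outcomes to each other, a point that becomes important in the limitations discussed below.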
Synopsis of Causal Model and Statistical Analysis from the Study
Statistical Analyses
Shapiro-Wilk Test:
Used to confirm the assumption of normality for the data.
Paired t-test:
Employed to compare pretest and post-test measurements for each outcome, testing the null hypothesis that there is no difference. All tests used an alpha of 0.05 under Null Hypothesis Statistical Testing (NHST; see this page if you need a refresher on what that means). NHST takes the observed difference as the estimate of the difference in the population and rejects the null if that population difference can be distinguished from 0 (the null value for a difference) with 95% confidence. I would have preferred that they include the 95% CI; but if the p-value is < 0.05, then the 95% CI does not include the null value and, under NHST with an alpha of 0.05, we can reject the null. Notice, this does not mean there is a cause-effect relationship. It's not that the null is "no cause" and rejecting the null means "cause." It is simply one part of the framework for making a decision about cause and effect (it appears in Hill's framework as strength of the association).
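To illustrate what the paired t-test is doing, and the 95% CI I wish the authors had reported, here is a stdlib-only sketch using made-up pre/post NPRS scores for 13 subjects (these are NOT the study's data; the normal approximation is used for brevity, as noted in the comments).

```python
import math
from statistics import NormalDist, mean, stdev

pre  = [6, 5, 7, 4, 6, 5, 7, 6, 5, 6, 4, 7, 5]   # hypothetical NPRS, 13 subjects
post = [4, 4, 5, 3, 4, 4, 5, 5, 3, 4, 3, 5, 4]

diffs = [b - a for a, b in zip(pre, post)]        # post - pre; negative = less pain
n = len(diffs)
mean_d = mean(diffs)
se = stdev(diffs) / math.sqrt(n)

t_stat = mean_d / se
# Normal approximation for brevity; a real analysis uses the t distribution
# with n - 1 = 12 degrees of freedom (critical value 2.18 rather than 1.96).
p_approx = 2 * (1 - NormalDist().cdf(abs(t_stat)))
ci = (mean_d - 1.96 * se, mean_d + 1.96 * se)

print(f"mean change = {mean_d:.2f}, t = {t_stat:.2f}, p ≈ {p_approx:.4f}")
print(f"95% CI ≈ ({ci[0]:.2f}, {ci[1]:.2f})")     # excludes 0, so reject the null
```

Notice what the CI adds over the bare p-value: it shows the plausible range of the population-level change, which is far more useful clinically than "p < 0.05" alone.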
Results and Relationship to the Causal Model
Chest Expansion
Results:
Statistically significant improvements were observed at:
Axillary level (p = 0.045)
4th intercostal space (p = 0.001)
Xiphoid level (p = 0.008)
Relationship to Causal Model:
These results align with the causal links between the interventions (diaphragmatic breathing, RMSG, and conventional physiotherapy) and chest expansion in the empirical causal model.
Mechanisms in the Actual domain, such as enhanced diaphragmatic efficiency, explain the observed improvements.
Pulmonary Function (FEV1)
Results:
Significant improvement in FEV1 (p = 0.038) but no significant changes in FVC or PEFR.
Relationship to Causal Model:
The model depicts direct connections between interventions and pulmonary function.
The selective improvement in FEV1 may highlight specific mechanisms, such as improved respiratory muscle strength, while other factors require further exploration.
Pain and Disability
Results:
Pain (NPRS): Improved significantly (p = 0.0001).
Disability (NDI): Improved significantly (p = 0.016).
Relationship to Causal Model:
Pain and disability reductions are linked to:
Direct effects of physiotherapy.
Indirect effects via improved respiratory mechanics and reduced muscle tension, as represented in the causal model.
Implications for the Causal Model
The causal model supports the study’s findings and provides a framework for interpreting the observed relationships:
Mechanisms in the Actual Domain:
Improved chest expansion and FEV1 suggest enhanced diaphragmatic and respiratory muscle efficiency, mechanisms inferred as latent variables.
Connections Between Outcomes:
Observed Variables: Interactions between chest expansion, pulmonary function, and pain reductions were not controlled or explored statistically. In practice, these relationships are likely interconnected, and analyzing these interactions could provide deeper insights into the causal pathways.
Unobserved Variables: The model could be expanded to include unobserved factors, such as mechanisms in the Actual domain or structural variables in the Real domain, to better reflect clinical complexities.
Practical Utility:
Including these interactions and unobserved variables in the model would improve its usefulness in practice by helping clinicians anticipate how outcomes influence one another in open systems.
Limitations and Suggestions for Refinement
Lack of a Control Group:
The absence of controls limits causal inference, suggesting a need for future studies with more robust designs. This includes the lack of a comparison group: all interventions were given to all subjects, so if one intervention worked and the others did not, we have no way of knowing (see below).
Short Intervention Duration:
The limited duration may explain the lack of significant improvements in some pulmonary parameters, highlighting the need for longitudinal studies.
Ignoring Interactions Between Outcomes:
Statistical analyses did not account for how changes in chest expansion, pulmonary function, and pain might interact. Understanding these relationships would enhance the causal model’s explanatory power and clinical usefulness.
Expanding the Causal Model:
Including latent variables related to mechanisms (e.g., the Actual domain) and structural variables (e.g., the Real domain) could provide deeper insights into the interdependencies between outcomes and increase clinical utility.
Overall thoughts on limitations
I can’t help myself. One major issue I have with how this study was published is the disparity between its conclusion and what was actually done. Here’s the conclusion again:
Conclusion: The rehabilitation strategies should emphasize breathing exercises to improve the lung function and pain scores in addition to conventional physiotherapy in rehabilitation of mechanical neck pain patients.
Keep in mind: there was no comparison between the pre-post results of the different interventions. The study (and the causal model shows this as well) did not provide any evidence whatsoever that rehabilitation should emphasize breathing exercises in addition to conventional physical therapy. All subjects received all interventions, and based on this study we have no real idea which one worked. The assumption that breathing exercises needed to be added to conventional physiotherapy is completely predicated on the authors' assumptions about mechanisms (very common). Even if I agree with those assumptions, they need to be stated and considered. It is feasible (see the expanded model below) that conventional PT changed pain, which changed breathing, all without any effect of diaphragmatic breathing or respiratory muscle stretch gymnastics.
Well, all this is to say: there is more to reality than the research is able to demonstrate. Let's proceed.
The Updated Model for Clinical Insights
Here the causal model has been expanded to better reflect the complexities of clinical practice and research, incorporating latent variables and additional relationships to address the interconnected nature of patient care.
Key Additions:
Latent Variables:
LatentMechanisms: Represents underlying physiological or biomechanical processes that mediate the effects of interventions on outcomes like chest expansion, pain, and pulmonary function. These might include factors such as neuromuscular efficiency, tissue elasticity, or respiratory mechanics, which are often unobserved directly in routine clinical practice or research studies. How many of these to actually include is contextual - but it's part of the clinical reasoning process, which always includes how much detail to include!
Stress: Accounts for the psychological and emotional state of patients, which is known to influence pain perception, respiratory function, and overall recovery.
ContextualFactors: Encompasses broader influences such as socioeconomic status, environmental conditions, or social determinants of health that shape how interventions affect individual patients.
Bidirectional and Correlated Relationships:
Variables such as Stress, ChestExpansion, and PulmonaryFunction are modeled with bidirectional or correlated relationships (e.g., ChestExpansion <-> PulmonaryFunction) to reflect the dynamic and interdependent nature of these outcomes.
Expanded Interactions Between Variables:
ConventionalPhysiotherapy, DiaphragmaticBreathing, and RespiratoryMuscleStretchGymnastics are now shown to influence LatentMechanisms, providing a pathway to explore how these interventions indirectly affect outcomes through unobserved processes.
Why This Model is More Useful for Clinical Practice
Reflecting Clinical Complexity:
Real-world clinical scenarios rarely involve isolated cause-effect relationships. This model acknowledges that outcomes like pain and pulmonary function are influenced by both direct interventions and indirect factors such as stress and latent mechanisms.
By integrating latent variables and contextual factors, the model provides a more holistic framework for clinical reasoning, helping clinicians anticipate how various factors might interact in their patients.
Supporting Individualized Care:
The inclusion of stress and contextual factors highlights the importance of tailoring interventions to individual patients’ needs. For instance, addressing stress alongside physical therapy interventions might amplify the effectiveness of treatment for some patients.
Facilitating Clinical Observations:
Clinicians can use this model as a guide to structure their observations systematically, considering not just empirical outcomes but also hypothesized mechanisms and contextual influences.
Applications for Research and Further Investigation
Planning Mechanisms-Focused Studies:
Researchers can design studies that test specific pathways in the model. For example:
Investigating the role of LatentMechanisms (e.g., respiratory muscle strength) as mediators of pulmonary function improvement.
Exploring how Stress moderates the relationship between interventions and pain outcomes.
Refining Data Collection:
The model encourages the inclusion of additional measurements, such as stress questionnaires, biomechanical assessments, or socioeconomic indicators, to capture latent variables and contextual factors more directly.
Guiding Hypothesis Generation:
Researchers can use the model to identify untested relationships. For instance:
Are changes in ChestExpansion causally linked to reductions in pain, or are they independent effects of the same intervention?
How do ContextualFactors like access to care influence the effectiveness of specific physiotherapy interventions?
Incorporating Latent Variables Statistically:
Advanced statistical techniques like Structural Equation Modeling (SEM) or path analysis can be used to infer and test the roles of latent variables and complex interactions in the model.
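As a hedged sketch of what such a specification could look like, here is a lavaan-style model string (the syntax shared by R's lavaan and Python's semopy) encoding the expanded model above. The variable names mirror those used in this post; actually fitting the model would require data on every observed variable.

```python
# A lavaan-style SEM specification sketching the expanded model above.
# Hypothetical: paths and names follow this post, not a fitted analysis.
model_spec = """
  # latent mechanism reflected in the measured respiratory outcomes
  LatentMechanisms =~ ChestExpansion + PulmonaryFunction

  # interventions act on outcomes through the latent mechanism
  LatentMechanisms ~ DiaphragmaticBreathing + RespiratoryMuscleStretchGymnastics + ConventionalPhysiotherapy

  # pain has direct and indirect (mechanism, stress) influences
  NPRS ~ ConventionalPhysiotherapy + LatentMechanisms + Stress

  # correlated outcomes instead of a forced causal direction
  ChestExpansion ~~ PulmonaryFunction
"""

# '=~' defines a latent variable, '~' a regression path, '~~' a covariance
print("=~" in model_spec and "~~" in model_spec)  # True: both parts present
```

Writing the model down this explicitly is itself useful: every path is a stated assumption that a reader (or a future study) can challenge or test.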
Bridging Research and Practice
This expanded model exemplifies how causal models serve as a bridge between research and clinical practice:
For Researchers: It provides a roadmap for designing studies that explore not only direct effects but also the mechanisms and contextual influences that underlie those effects.
For Clinicians: It helps structure reasoning and prioritize interventions based on a deeper understanding of how variables interact, ultimately supporting more nuanced, individualized care.
The Way Forward
By integrating latent variables and contextual factors, this expanded model captures the realities of open systems in clinical practice. It acknowledges that outcomes are rarely determined by isolated variables and instead emerge from a dynamic interplay of observed and unobserved factors. This approach not only enhances causal reasoning but also ensures that research findings are more applicable and actionable in everyday clinical scenarios.
In future posts, we’ll explore how tools like Models4PT can operationalize these models, making it easier to integrate empirical, actual, and real domains into both research design and clinical decision-making.
Summary and Next Steps
Causality is central to both research and clinical reasoning. It allows us to connect evidence to practice, navigate complexity, and synthesize experience with structured inference. Causal models, such as DAGs, serve as indispensable tools, bridging the gap between research and clinical care by incorporating mechanisms and variables from the Empirical, Actual, and Real domains.
By leveraging these tools and approaches, researchers and clinicians can better understand confounding, interactions, and pathways in both structured studies and real-world practice. This iterative process improves not only the precision of our causal models but also their applicability to individual patient care.
In the next page, we’ll dive into Bayesian Reasoning for Clinical Decision-Making. This topic will explore how Bayesian methods allow clinicians to:
Update probabilities and refine decisions based on prior knowledge and new evidence.
Contrast Bayesian reasoning with traditional frequentist approaches to inference.
Apply Bayesian inference to real-world clinical scenarios.
Stay tuned as we continue building the bridge between research and practice, advancing causal and statistical reasoning for clinical care.
I’m working on how to properly code the Actual and Real domains, since they can often be observed (thus making them empirical), and on whether (or how much) this conflicts with Bhaskar’s Critical Realism. Research has become so focused on outcomes, with such disregard for mechanisms, that many empirically observable mechanisms are ignored.
I’m changing the language I use to reflect the intention of Hill’s list as a framework, not as criteria, since over the years it has been hard to break people away from seeing Hill’s framework as a rigid rubric rather than as a set of discussion topics for consideration.