I’d like to talk about a recent set of experiences that allowed me to see one of Hill’s criteria for causation anew, and to identify a critical form of reasoning in practice. I’ve used Hill’s nine criteria as discussion points whenever causation is considered since learning them in 1997 - and you should know that causation is something I consider often. Causal reasoning is a sine qua non for clinical practice. Whether I am reflecting on my own reasoning while practicing, hearing patients explain their reasoning, or hearing or reading the reasoning of my colleagues - causation is the underlying question. Appropriately so, then, all useful clinical research is a search for the existence and probability of causal associations.1
One experience comes from teaching this Spring. Students were using Hill’s criteria to appraise a case report. The criteria were not designed for that - but it’s important for students to grasp the connection. My explanation for the activity is that if they are going to learn from their practice experiences they MUST consider the truth of the causal claims they walk away with from those practice experiences. If they assume that every causal claim they infer from experience is true (post hoc ergo propter hoc), without subjecting it to a certain level of scrutiny, then they may start to build practice knowledge that is biased (and therefore not true, not valid). There are numerous biases that specifically target our ability to learn from causal experiences. Saying “Bless you” when someone sneezes must work because so far, no one I’ve said “Bless you” to following a sneeze has died of sudden cardiac death. Sure, I don’t say it sometimes and those people also haven’t died, but those people are just lucky!
The other experience came from learning this Spring, in a set of continuing education courses I took on the assessment of movement and on the determination of, and intervention for, mobility vs. motor control dysfunction.
With these two experiences against the backdrop of practicing physical therapy, I started to think about the difference between statistical inference (induction, the strength of an association), which is learning about a population from the observation of a sample, and analogy, which is learning about one system from the observation of another, similar system. What struck me was the simplicity of putting statistical inference and analogy side by side and considering their similarity.
Analogy: The use of analogies or similarities between the observed association and any other associations.
Statistical inference: Inferring characteristics of a population from a sample.
The more I think about it - the more I come to believe that analogy is the most dominant form of logical epistemological reasoning used in clinical inquiry by clinicians dealing with the clinician’s dilemma.2 It’s causal analogical reasoning - but it’s analogical nonetheless. As Hill intends analogy (always listed last, and even for me, usually a challenge to consider as a criterion, even as a discussion point):
If it has been established and accepted that X causes Y;
and P is analogous to X;
and Q is analogous to Y;
and the relation between P and Q is analogous to the relation between X and Y;
Then the veracity of the causal association between X and Y provides evidence for the veracity of the causal association between P and Q.
Analogy becomes the criterion used to help build the dense network of causal associations on which practice is based. Causal associations are then chosen to be investigated more fully as their own claims for one reason or another. We could have a grand old time discussing those reasons. But not today.
Analogical reasoning, as it turns out, goes back quite a long time. I feel a bit silly even bringing it up as if it’s something new. Let me be clear: the connection of it to clinical inquiry and Hill’s criteria, and how prevalent it is in practice, is new to me. I’m not sure how it’s new to me - it seems it should be old to me. But there it is. If this is a place you’ve been at for a long time, then I’m happy that my mind has caught up!
Either I’ve missed this completely in discussions about evidence based practice, or it hasn’t had a prominent place in the discussion. I know it hasn’t had a prominent place in any discussions I’ve been part of - but those may be biased by my role in those discussions. Assuming it is largely an implicit, and not an explicit, part of evidence based (or informed) practice, I’d like to see it become explicit. To make it explicit means we have to consider it in some of the explicit considerations - for example, in a clinical practice guideline, or in the “evidence hierarchy.”
An easy example of analogical reasoning used to justify an evidential basis for practice is the application of motor learning research on blocked (or massed) practice vs. random (or interleaved) practice. The findings are general enough to be applied to the learning of any motor skill without the necessity of evidence for each and every motor skill. If random practice results in better motor learning for task A, it will be better for tasks B, C and D as well. What this means is that when the use of random practice to improve motor control for task D is recommended, I am recommending it based on analogical reasoning, not on “expert opinion,” which would be considered a low level of evidence. If expert opinion is based on analogical reasoning, then we must evaluate the veracity of that analogical reasoning - not just call it low level evidence because it’s expert opinion. It’s actually the use of analogical reasoning - a logical approach to epistemology that can easily be stashed away under the ad hominem of expert opinion.
Analogical reasoning is imperative to the transfer of understanding from one experience to another - as is done when learning from practice or generating practice based evidence. Those experiences are wrapped up in stories - narratives - in our heads that we build upon throughout our careers. Yes, they blur and mingle. To learn from experience we must anchor these stories at key points and take away, after careful consideration, the features that can be related to other (similar - dare I say analogous) situations. Whether applied or not, that next experience adds to the story. All the while we are learning by induction without the aid of statistical inference (at least not explicitly), but with the clear (though not clear to me until now) aid of analogical reasoning.
Two points on this footnote. First, my judgement of what counts as useful clinical research is based on my strong causal reasoning bias and is, I admit, rather circular. Second, the probability of a causal association can be thought of in two ways. The first is the probability that a causal association exists at all - this is what is considered when we simply decide whether to “accept” or “reject” a null (or alternate) hypothesis. The second is the probability of an effect given a cause, or the probability of a cause given that an effect has been observed. This is related to, but in my opinion quite different from, the first. These probabilities are related because the higher the probability of the effect given the cause (combined with a higher probability of no effect given no cause), the stronger the association between cause and effect and the easier it is to reject a null hypothesis of no effect (no causal association).
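To make that last relationship concrete, here is a minimal sketch using a hypothetical 2x2 table - the counts are invented purely for illustration, not taken from any study. It computes P(effect | cause) and P(no effect | no cause) and expresses the strength of the association as a simple risk difference; the larger both conditional probabilities are, the farther the risk difference sits from zero, and the less plausible a null hypothesis of no causal association becomes.

```python
# Hypothetical 2x2 table of exposure ("cause") vs. outcome ("effect").
# Counts are invented for illustration only.
#
#                 effect    no effect
#   cause            80          20
#   no cause         30          70

cause_effect, cause_no_effect = 80, 20
no_cause_effect, no_cause_no_effect = 30, 70

# Conditional probabilities from the table.
p_effect_given_cause = cause_effect / (cause_effect + cause_no_effect)                    # 0.80
p_no_effect_given_no_cause = no_cause_no_effect / (no_cause_effect + no_cause_no_effect)  # 0.70

# One simple measure of the strength of the association: the risk difference,
# P(effect | cause) - P(effect | no cause). Higher values of both conditional
# probabilities above push this farther from zero.
risk_difference = p_effect_given_cause - (1 - p_no_effect_given_no_cause)                 # 0.80 - 0.30 = 0.50

print(f"P(effect | cause)       = {p_effect_given_cause:.2f}")
print(f"P(no effect | no cause) = {p_no_effect_given_no_cause:.2f}")
print(f"risk difference         = {risk_difference:.2f}")
```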
Please excuse my use of terms that, in my head, I’ve used many times in the past two decades in courses, papers, and online posts. The clinician’s dilemma is twofold. First, needing to make decisions under uncertainty despite the uncertainty. Second, having most (often all) of the experience informing such a decision come from someone other than the client they are making decisions about. Even when evidence based practice is used to its fullest, both of these clinician’s dilemmas exist.