Focus on our teacher: Alice Vincent
Every month, the CEERRF puts the spotlight on one of its teachers: their expertise, or one of their works, is shared here and brought to your attention.
Alice Vincent is a physiotherapist. She obtained Inter-University Diplomas in the interpretation of therapeutic trials and from the teaching center for statistics in public health, medicine and biology. She then went on to study public health, earning a Master’s degree in methodology and statistics in biomedical research from the Paris-Saclay Graduate School of Public Health.
Today, she works in private practice, teaches in initial training at several IFMKs in Île-de-France, and is a member of the scientific college of the Committee for the Protection of Persons XI of Île-de-France.
Statistics put to the test of physiotherapy
For physiotherapy, the hour of research has come. Everyone in the field agrees on this. The scientific literature seems to be authoritative, and the data sometimes carry the label of “evidence”: tangible, real, almost indisputable. But what do we prove? And do we prove anything at all?
It is clear that, most of the time, we are wrong. We make mistakes every day, even when we think we are doing well and want to do better. So the scientific literature has the difficult task of providing answers, of shedding light on our gray areas, of guiding our practices as health professionals. But has it ever agreed to bear this responsibility, and does it even have the means?
When I graduated as a physiotherapist, I decided to continue my training with several University Diplomas and a Master 2, all of which had in common methodology and statistics applied to health data. First by going back to basics: to the laws of probability that define the world of statistics, and to the methods of epidemiology and clinical research, learning to calculate without the help of a machine. Then by making the picture more complex: integrating into the analyses models from data science for large health datasets, discovering other models better suited to the variables and questions studied, and learning to write the lines of code that let the computer do the calculating on its own… I arrived at these training courses with many questions, misunderstandings and gray areas. I came away with an even greater number of answers, but above all with a more modest appreciation of this methodological and statistical universe, which I wish to share with you here.
In the interpretation of scientific data, we often forget humility. When we interpret a p-value and conclude that we “prove the effectiveness of (…)”, we forget that we have nevertheless accepted a certain risk of wrongly concluding that a difference exists (the alpha risk).
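The alpha risk can be made concrete with a short simulation, sketched here in Python (the sample sizes, seed and choice of test are illustrative assumptions, not from the original article): when two groups are drawn from the very same population, a test at alpha = 0.05 will still declare a “significant” difference about 5% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # the risk of a false positive we agree to accept
n_trials = 2000       # illustrative number of simulated trials
false_positives = 0

for _ in range(n_trials):
    # Both groups come from the SAME distribution, so any "significant"
    # difference is, by construction, a wrong conclusion.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_trials:.3f}")  # near alpha
```

The rate hovers around the chosen alpha: the “proof” of an effect always carries that built-in error rate.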
When we read a correlation coefficient, we agree that below 0.2 it is very weak, below 0.4 it is weak, and so on. But this meaning that we want to give it has no particular scientific basis; it is a consensual interpretation.
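Those conventional cut-offs can be written down explicitly. The sketch below (illustrative thresholds and simulated data, assuming numpy is available) simply encodes the consensus reading of a coefficient, which carries no more scientific weight in code than it does on paper.

```python
import numpy as np

def conventional_label(r: float) -> str:
    """Common, purely conventional reading of |r|; not a scientific law."""
    r = abs(r)
    if r < 0.2:
        return "very weak"
    if r < 0.4:
        return "weak"
    if r < 0.6:
        return "moderate"
    if r < 0.8:
        return "strong"
    return "very strong"

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.3 * x + rng.normal(size=100)   # hypothetical noisy relationship
r = np.corrcoef(x, y)[0, 1]          # Pearson correlation coefficient
print(f"r = {r:.2f} -> {conventional_label(r)}")
```

The labels are a reading convention layered on top of the number, nothing more.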
Most of the time, when we interpret a scientific study, we face problems: not enough patients, non-representative samples, results that are not significant enough, poorly documented data collection, patients lost to follow-up… These problems, which are no small matter, disrupt our desire to obtain a clear, clean and flawless answer. And yes, we may think that if what we are reading is scientific, it is probably very rigorous, and so the results obtained must be too.
In fact, statistics are far from rigorous. And in an attempt to make them so, we tend to reassure ourselves that what we are doing is under control, whether as readers or as authors. “I am entitled to test my data with a Student’s t-test because I checked that they follow a Normal distribution.” When performing a normality test, such as the Shapiro-Wilk test, the hypotheses are as follows. H0: the data distribution is normal; H1: the data distribution is not normal. If we conclude that the data are normally distributed in order to run a Student’s t-test, then we are accepting H0. But under H0 we incur a beta risk that we do not know, which is simply… illogical. We cannot draw a conclusion without knowing the risk we run of talking nonsense. This reasoning does not hold water! So, in trying to reassure ourselves, we end up doing anything and using methods that were never designed for this, as long as it looks rigorous and solid!
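The flawed reasoning can be spelled out in code (the sample, seed and thresholds below are illustrative, assuming scipy is available): a non-significant Shapiro-Wilk result only *fails to reject* H0, yet it is routinely read as a license to run the t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(loc=5.0, scale=2.0, size=25)  # hypothetical measurements

# Shapiro-Wilk: H0 = "the data are normal", H1 = "the data are not normal".
w, p_norm = stats.shapiro(sample)

if p_norm >= 0.05:
    # Failing to reject H0 is NOT proof of normality: the beta risk of
    # wrongly "accepting" H0 is unknown here. This is exactly the flawed
    # reassurance described in the text, reproduced step by step.
    t, p_ttest = stats.ttest_1samp(sample, popmean=4.0)
    print(f"Shapiro p={p_norm:.3f} -> proceeding to t-test, p={p_ttest:.3f}")
else:
    print(f"Shapiro p={p_norm:.3f} -> normality rejected")
```

The pipeline runs without complaint either way; the illogical step lives in the interpretation, not in the software.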
Using data from the scientific literature to better manage our patients: all practitioners agree on the principle. Yes, but how? Most of the time we run into a wave of unknown terms and complicated models where all these calculations confuse our minds, and in the end we no longer understand anything. It is difficult, therefore, to comply with an exercise one cannot perform.
Even the most basic descriptive statistics sometimes give us trouble. Who remembers the definition of the standard deviation? Not many people. What we can remember, on the other hand, is that roughly two thirds of the values observed in our sample lie within one standard deviation of the mean, at least for approximately normal data. It may be imprecise, but it is easier to remember.
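That rule of thumb is easy to check with a quick simulation (the figures below are illustrative, assuming numpy is available): for approximately normal data, about 68% of observations, roughly two thirds, fall within one standard deviation of the mean.

```python
import numpy as np

rng = np.random.default_rng(3)
sample = rng.normal(loc=170.0, scale=8.0, size=10_000)  # hypothetical heights

mean = sample.mean()
sd = sample.std(ddof=1)  # sample standard deviation

# Proportion of observations within one SD of the mean
within_one_sd = np.mean(np.abs(sample - mean) <= sd)
print(f"Share within one SD of the mean: {within_one_sd:.2f}")  # about 0.68
```

“Roughly two thirds” is imprecise, as the text says, but it survives in memory where the formal definition does not.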
It is mistakenly thought that the more sophisticated a statistical model is, the more rigorous it is, when in fact… it is just more complicated. Besides, in a way, these complex models throw smoke in the reader’s eyes, since most of the time no one understands what has been measured and calculated. We end up taking the author’s word for it, since in any case we did not really understand how the calculations were carried out.
What matters, perhaps, is not so much the statistical model we use as the meaning of the variables we put into it. The computer, for its part, will calculate whatever it is asked to, even when the request has no concrete meaning. Knowing the field of study well enough, and knowing why certain factors rather than others are studied in a given context, is what makes a model relevant.
Why are clinical trials so popular? Maybe because their design is simple. And if it is simple, then it is also understandable. When we bring clarity, we also convince a greater number of readers, without cheating.
To advocate simplicity is not to accept its shortcomings in terms of knowledge; it is to aspire to make access to information the same for everyone.
Literature at work in our practices
Anyone who dares to say today that the literature does not hold all the answers attracts the wrath of its defenders. Yet are we so sure that we are always able to measure well, in order to describe well, and finally to explain well?
The practitioner’s intuition: “I cannot explain why, but I feel that this patient needs more time to accept the therapeutic option I am proposing. I had better give her time and offer it to her again later.” Can such data be measured correctly?
Practical knowledge in a specific context: “This mother, involved in her child’s care, asks me to interrupt treatment for a month, time to breathe and rest. I know she is involved enough to continue self-rehabilitation with her child during that time, so I accept.” How could we theorize this observation?
We can clearly see that the context, even simplified to these few variables, is in reality much more complex than that. These are all parameters that are difficult to measure, and therefore difficult to include in the equation.
In the same way, more and more studies focus on the quality of life of patients, and we should be delighted. But have you wondered how this data is measured? And what is quality of life, anyway? Can it be described in the same way from one individual to the next? With how much tact should certain questions be put to patients? How do you probe certain areas so as to obtain the honest answers rather than the socially correct ones? What matters more: what we measure, or how we measure it?
We have forgotten the meaning of the word science. According to Charles Nodier, “Science consists in forgetting what we know, and wisdom in not caring about it”. Science is a process that adds the pieces of the puzzle one by one, often makes mistakes, and always agrees to correct itself.
There is no sacrosanct study that will allow us to become better practitioners, there are only pieces of the puzzle that, one by one, allow us to better understand the issues that we encounter every day, and to improve at some point.
In his race against time, the practitioner who wants to do well, and who has been told that he would do better to devote his reading to studies with a high level of evidence, is mistaken without knowing it. A high level of evidence according to what criteria? And the rest, do we throw it in the trash? Of course not! The objective here is not to blame anyone, but to question our reasoning. In the end, for the practitioner who has been told again and again that science is the world of tomorrow’s physiotherapy, basing his practice on studies with a high level of evidence may already allow him… to sleep soundly.