David Healy: Do randomized clinical trials add or subtract from clinical knowledge?

 

Nassir Ghaemi’s comment

Blame p-values and an uneducated profession, not randomization

 

        David Healy in general takes an approach shared by many who are critical of the status quo of psychiatry from a perspective that is deeply skeptical of scientific truth. Although lip service is paid to the concept of truth, in fact truth is relativized into a social construction, a mere instrument of those in power, namely the academic power structure of psychiatry and the pharmaceutical industry. The truism that knowledge is power is reversed: power becomes knowledge.

        David has taken up the core of scientific clinical medicine in his critique of randomized clinical trials (RCTs). Although the term evidence-based medicine (EBM) is often used, a term introduced by David Sackett in the 1970s to familiarize medical practitioners with the relatively new field of clinical epidemiology, the tradition being criticized is in fact clinical epidemiology: the application of statistics to human research in medicine and to medical practice.

        This is a very large topic and I will not go into it in detail; nor will I try to convince David or others who disagree with the premises and details of my views. I have expressed them elsewhere, both in article (Ghaemi 2009a) and book-length (Ghaemi 2009b) form, on the topic of statistics for clinical psychiatry. Here I will only make a general comment so that my perspective is stated, knowing that I am stating only my conclusions, not my rationale, which is available elsewhere.

        The big picture is that David is criticizing the whole concept of RCTs, and the basic idea of randomization, as being false. His explanations, however, generally have to do with how RCTs are analyzed, not with the actual concept of randomization. He often describes the problem of misuse and overuse of p-values, which the prominent clinical epidemiologist Kenneth Rothman has explained well (Rothman 2014); an example is ignoring side effects through this misuse. That is not a problem of randomization; it is a problem of misuse of p-values in data analysis. Another example is how, in meta-analysis, some data from studies are used to distract from other data; again, that is an issue of data analysis, not randomization. Meta-analyses combine different studies and thus are observational analyses, even if the individual studies are randomized. A critique of meta-analysis does not apply to a single RCT, as David implies it does (p 8).

        The abuse of treatment guidelines is justly criticized. The misuse of group data that hide small side effects is justly rejected. Examples given on suicidality are apt. This is all about abuse of statistical analysis, not a core weakness of the concept of randomization. It is akin to rejecting socialism because of Stalin’s misuse of it: Stalin was terrible, so let’s end the National Health Service. P-values are misused, so let’s drop randomization.

        Indeed, David’s main point seems to be about hiding side effects. But this hiding happens through misleading data analysis, not through any inherent invalidity of the concept of randomization. Instead of misusing p-values for side effects, with both false positive problems (from repeated testing) and false negative problems (effect sizes too small for the sample size), clinical epidemiologists like Rothman have stated clearly for decades now that the appropriate analytic approach is effect estimates with confidence intervals. These can be interpreted much more appropriately. An example is a review we once did of extrapyramidal symptoms (EPS) in the CATIE study of schizophrenia (Shirzadi and Ghaemi 2006); the CATIE researchers used p-values throughout their papers and, because the side effect analyses were underpowered, were mostly unable to say anything about them. Our descriptive analysis was much more informative.
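        To make that contrast concrete, here is a minimal sketch in Python with invented numbers, not CATIE data; the sample sizes, event counts, and the simple Wald interval are my assumptions for illustration only:

```python
# A minimal sketch (hypothetical numbers) of the contrast Rothman draws: an
# underpowered p-value says "nothing to see", while an effect estimate with a
# confidence interval shows the harm that may plausibly be there.
import math
from scipy.stats import fisher_exact

# Hypothetical small trial arms: events = patients with the side effect
n_drug, events_drug = 150, 9      # 6.0% on drug
n_placebo, events_pbo = 150, 3    # 2.0% on placebo

# Conventional significance test: underpowered, so p > 0.05
table = [[events_drug, n_drug - events_drug],
         [events_pbo, n_placebo - events_pbo]]
_, p = fisher_exact(table)

# Effect estimate: risk difference with a 95% Wald confidence interval
p1, p0 = events_drug / n_drug, events_pbo / n_placebo
rd = p1 - p0
se = math.sqrt(p1 * (1 - p1) / n_drug + p0 * (1 - p0) / n_placebo)
lo, hi = rd - 1.96 * se, rd + 1.96 * se

print(f"p-value = {p:.3f}  (read as 'no difference')")
print(f"risk difference = {rd:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# The interval includes zero but extends to a clinically meaningful excess risk,
# which is exactly the information the bare p-value discards.
```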

        The idea behind randomization, in my view the most revolutionary idea in modern medicine, was that it would remove confounding bias. This was Ronald Fisher’s great discovery. Assuming a study is large enough and well conducted in its methods, AND that it is analyzed with valid statistics, then few if any confounding factors will affect the results. Those results can then be interpreted causally. Where randomization is not feasible, as with the problem of cigarette smoking and lung cancer, as A. Bradford Hill explained, other methods of data analysis, such as regression modeling, can improve the likelihood of a valid causal inference. The whole field of clinical epidemiology, led by Hill, grew out of this problem of how to interpret observational studies. Taking data at face value is misleading often enough that we cannot rely on it, as David appears to wish to do. The complex field of biostatistics exists to help interpret observational studies. Randomization, by contrast, is a simple solution. The fact that RCTs are abused by the pharmaceutical industry, or misused by the academic power structure to defend its shibboleths, is not a refutation of Fisher’s concept.
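        Here is a minimal simulation sketch of Fisher’s point, again with invented numbers; the choice of baseline severity as the confounder and the specific coefficients are assumptions for illustration only:

```python
# When treatment is assigned by coin flip, a confounder (here, baseline severity)
# ends up balanced across arms and the naive group comparison recovers the true
# effect; when assignment follows the confounder, as in ordinary practice, the
# same comparison is badly biased.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
severity = rng.normal(0, 1, n)    # the confounder, unmeasured in practice
true_effect = -1.0                # treatment genuinely helps

def outcome(treated):
    # Worse outcome with higher severity; treatment adds its true effect
    return 2.0 * severity + true_effect * treated + rng.normal(0, 1, n)

# Observational assignment: sicker patients are more likely to get the drug
obs_treated = (rng.uniform(size=n) < 1 / (1 + np.exp(-2 * severity))).astype(float)
y_obs = outcome(obs_treated)

# Randomized assignment: the coin flip ignores severity entirely
rct_treated = rng.integers(0, 2, n).astype(float)
y_rct = outcome(rct_treated)

def naive_diff(y, t):
    # Simple treated-minus-untreated comparison of mean outcomes
    return y[t == 1].mean() - y[t == 0].mean()

print("true effect:            ", true_effect)
print("observational estimate: ", round(naive_diff(y_obs, obs_treated), 2))  # biased
print("randomized estimate:    ", round(naive_diff(y_rct, rct_treated), 2))  # near -1
```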

        Clinical judgment alone gave us bleeding for two millennia. That does not mean industrial-scale treatment algorithms are the solution.

        A final comment: I am not aiming this critique at David, but rather at the leaders of the field. One of the major problems in our field is that the majority of clinical researchers in psychiatry are not formally trained in statistics and clinical epidemiology. Imagine if most professors of mathematics were self-educated and had never taken formal courses in algebra or geometry. Statistics is not something you can pick up on your own along the way of a busy career as an academic and clinician. It requires time in the classroom, years of it, not months. I know. I too had been practicing as a clinical researcher for a decade before I got the opportunity for formal public health training in research methods. After two years of study, I realized how much of my prior research was weak and faulty. That is why I wrote a book-length treatise to educate my colleagues, to put into a short work what I had learned in those two years. This debate brings out the importance for everyone of getting better training and education in statistics and research methods; such training should be required of those who become researchers in particular. In the 20 years since my formal training, many of the changes in my views in psychiatry have come from realizing the falsehood of claims about research methods that are used to prop up false clinical judgments, such as the long-term efficacy of antidepressants and antipsychotics, or the debates about the validity of DSM.

        In sum, the most prominent “experts” in clinical research methods in academic psychiatry, who publish and speak about clinical trial design, have never been trained in those topics. My view is that much of the controversy in psychiatry today around statistical issues has to do with this lack of knowledge among the experts. They do not know what they do not know, and yet the field and the pharmaceutical industry rely on their opinions. A new generation is tending to get a few years of post-residency training in schools of public health. That training is, in my opinion, a requirement, a necessity, for someone to be a qualified clinical researcher in psychiatry. Meanwhile, the older generation is taking up space with its limited knowledge, and it needs to be replaced.

 

References:

Ghaemi SN. The case for, and against, evidence-based psychiatry. Acta Psychiatrica Scandinavica 2009a;119(4):249-51.

Ghaemi SN. A Clinician’s Guide to Statistics and Epidemiology in Mental Health. Cambridge University Press; 2009b.

Rothman KJ. Six persistent research misconceptions. Journal of General Internal Medicine 2014;29(7):1060-4.

Shirzadi AA, Ghaemi SN. Side effects of atypical antipsychotics: extrapyramidal symptoms and the metabolic syndrome. Harvard Review of Psychiatry 2006;14(3):152-64.

 

January 20, 2022