Donald F. Klein’s final reply to Jose de Leon’s final comment
Thomas A. Ban: RDoC in Historical Perspective
Redefining Mental Illness by Tanya M. Luhrmann. Samuel Gershon’s question
Collated by Olaf Fjetland
This note updates Dr. de Leon’s conclusion that RDoC is marketing rather than science. His view is that “one can only wish that the new director of the NIMH, Joshua A. Gordon MD PhD, has the good sense to drop the RDoC program, which many psychiatric researchers around the world think is a serious threat to scientific creativity.” This hope is contradicted by Dr. Gordon’s “Director’s Messages” (posted June 5 and June 22, 2017, on the National Institute of Mental Health’s website) entitled: “The Future of RDoC” and “RDoC: Outcomes to Causes and Back.” Dropping RDoC is not Dr. Gordon’s goal. Instead, he holds up a bright marketable future: “Imagine a world where your psychiatrist runs a panel of tests — behavioral and brain function tests — in addition to her clinical assessment. She gives you a diagnosis and realistic prognosis, and helps you choose between treatments with the knowledge of your individualized chance of responding to each of them. This is the ambition of the Research Domain Criteria (RDoC) project.”
This reframes and distorts prior RDoC principles. Categorical distinctions, such as that between sickness and health, were considered arbitrary and antiquated; dimensional measures that cut across categorical distinctions were thought to be scientifically superior. This leads to difficulties when binary choices are required. For instance, categorical choices frequently must be made in the context of therapeutics, e.g., which treatment is indicated? The idea of treatment outcome comparisons relevant to a particular individual was central to the theme of personalized medicine; it was not a prior RDoC goal.
Dr. Gordon's example of RDoC at work is quoted below:
“Imagine we have a dataset based on a whole bunch of people with a diagnosis of major depressive disorder. Some of these individuals get better with fluoxetine, and some with cognitive behavioral therapy (CBT).”
(This is naturalistic rather than experimental data; therefore, treatment outcomes are open to multiple confounds. Also, starting with a diagnosed group is antithetical to the RDoC strategy, in which dimensions are derived from factor analyses applied to mixed samples of patients and normal subjects.)
Dr. Gordon suggests: “Furthermore, suppose we can demonstrate aberrant function specific to one or more RDoC constructs that can be measured through brain or behavioral observations.”
(Many papers have presented brain variations correlated with behavioral dimensions or diagnoses, but none had sufficient sensitivity, specificity, effect size or replicability to qualify as a diagnostic criterion. The promised therapeutic advances, whose discovery was to be enormously facilitated once genomic functions were understood, have not yielded any new treatments. Even monogenic diseases, whose deranged genomic functions have been studied for decades, have produced no therapeutic advances.)
Dr. Gordon presents a measurement possibility and his analytic strategy:
“Brain scanning might, for example, reveal hyper- or hypo-activity in different areas of the brain, or disrupted connectivity between different brain areas; or behavioral measures might detect impaired working memory. We can quantitatively measure the degree of influence that dysfunction in each of these constructs has on the two different subtypes of depression... Are the subtypes better described as each arising from dysfunction in one of these constructs, or more than one? Or are the constructs and diagnostic subtypes more likely independent observations? With enough data, you can test which of these models best explains what you see… Now you find that most of the fluoxetine-sensitive patients also score low on tests of motivation, but only a few of the CBT-sensitive patients do. One can then ask whether applying other tests relevant to the same behavioral domain (in this case, positive valence, which has to do with motivation and reward) would improve your ability to predict which treatment the patient will respond to — again, with a quantitative evaluation of how much the addition of these measures improves that prediction.”
Dr. Gordon believes that enough data enables getting experimental blood out of a naturalistic stone. Much work has pursued this issue: Bayesian approaches, machine learning, directed acyclic graphs, etc., attempt to draw causal conclusions from such problematic distributions. Dr. Gordon says, in contradiction to prior RDoC formulations, that its approach combines the values of both dimensional and categorical analyses. Further, RDoC's value will be rigorously assessed. Previously, RDoC was described as a radically new approach that would take a decade to show its distinctive value; if this premise is accepted, rigorous comparative examinations will not come soon. The RDoC structure is claimed to make far better use of novel observations, although the supportive mechanism remains obscure. Simply encouraging the follow-up of all novel observations will bury investigators in non-replications and false positives.
Dr. Gordon continues with big-picture sales talk that ignores the scientific principles required for valid comparisons and predictions. Such dull concerns interfere with marketing.
Dr. Gordon states, “Finally, imagine you had this complete dataset—with hundreds of thousands of patients, each with diagnoses, clinical response, longitudinal course, and behavioral tests across the RDoC domains. One can make quantitative predictions not only of outcome—how a patient will do with a given medication, what that patient’s prognosis is, etc.—but also about the nature of the underlying structure of psychological and biological processes: Are the neural substrates (the brain processes on which treatments act) for fluoxetine-sensitive and CBT-sensitive patients separable? How many different kinds of processes are there, and which brain regions might relate to each? One can build models of the underlying processes, and quantify how much they explain about the symptoms and behavioral measures, which can help refine those quantitative predictions of prognosis and outcome. Combine the features of this approach—integration of RDoC behavioral constructs and DSM observational diagnoses, quantitative comparisons of model accuracy, iterative improvements in the combined model, and integration with longitudinal data (including developmental history, illness progression, and treatment response)—and you have the makings of an evidence-based diagnostic and clinical tool that has built-in automatic updating capability. Of course, there are a lot of things that have to go right for this to work. Getting the data will be challenging, especially data inclusive of hard-to-reach people like children and individuals with serious mental illnesses. Early editions of a comprehensive diagnostic tool will likely be rudimentary. Nonetheless, with enough data, creativity, and computation, we can build the tool of the future, sooner than you might think. Let’s get busy.”
This confidence-breeding sketch is quoted extensively so the reader will grasp both the grandiosity of the program and its neglect of elementary warnings about naturalistic data, even when that data is big. Dr. de Leon's diagnosis of marketability is amply confirmed.
Gordon JA. Director's Message: The Future of RDoC. June 5, 2017. www.nimh.nih.gov/about/director/messages/2017/the-future-of-rdoc.shtml.
Gordon JA. Director's Message: RDoC: Outcomes to Causes and Back. June 22, 2017. www.nimh.nih.gov/about/director/messages/2017/rdoc-outcomes-to-causes-and-back.shtml.
January 4, 2018