The Hard Problem of Psychiatry
- Ethan Smith

Once upon a time, I was set on becoming a psychiatrist. I had always been deeply interested in psychology. Throughout life, I've spent probably a near unhealthy amount of time thinking about thinking and dissecting my own and others' behaviors. To some degree, I was my own lab rat, watching carefully how I reacted to different situations. But it wasn't until the latter half of high school that I acquired a sense of how this could be channeled into something useful. Namely, that was when I felt a calling towards psychiatric medicine.
The calling came from the realization that psychiatric medications are strange, and our understanding of them is disappointing. Their success rates are fickle, some have the potential to exacerbate conditions, and ultimately, we often don't know the mechanism behind their action beyond first-order effects on neurotransmitter levels. There is little "We know" and a lot of "It has been observed". Looking at the current state of affairs, I couldn't help but think we should be doing better. It's a shame how difficult it is for the brain to evaluate itself. I wasn't sure how I could help, but I wanted to somehow have a hand in better understanding the mind and the drugs we create for it.
But I did not end up pursuing medicine. I became impatient with the amount of required schooling and had hesitations about what my role would be. I originally imagined myself practicing clinical psychiatry, though given that I wanted to make an impact at the foundations, I think what I was really interested in was research. In the latter half of college, I got sidetracked into AI, which I think stimulated the same curiosities. The difference was that it was something I could get into immediately, in my own home, and perform experiments on the order of minutes or hours instead of human trials on the order of years, all big factors for my impatient self. In the end, I think this was the right choice for me.
I am still fascinated by psychiatry. I think it can be a very noble profession, and I want to see us solve the mind's mysteries while treating mental conditions.

In this post, I want to talk about some of the obstacles psychiatry faces, many of which are intrinsic to the complexity of the mind and the care we need to take when working with it. I like to analogize every mind to a screw with a unique groove. Sometimes the flathead screwdriver can work on screws it wasn't directly intended for. Similarly, I think psychiatric medication is a set of a few oddly shaped screwdrivers, which, in the best case, can help a loose screw, though maybe not as tightly as we'd like to get it. Sometimes it's not a fit at all. And in the worst case, the mismatch between screw and screwdriver just ends up stripping the screw. To be fair, a standardized scientific approach to psychiatric medicine is relatively new, dating back only to around the 1950s. I truly believe we are only a bit beyond the Stone Age equivalent for psychiatry. There is a hell of a lot of remaining work to be done and never a better time to contribute to it.
The Hard Problem of Psychiatry
There are a number of things that make psychiatry a hard problem and a science whose maturity currently leaves much to be desired. It faces obstacles that keep it from matching the pace of other fields of medical research. Primarily, I'll be speaking to the portion of psychiatry that focuses on the biological underpinnings of the mind and treatments that leverage those understandings, more so than talk-based therapies or high-level psychological phenomena. This is because, I'd argue, talk therapies have mostly reached their potential. It's unlikely we will see a breakthrough with a new form of psychotherapy. Meanwhile, technical, biological, and neurological understandings and interventions, as we'll see, have a long way to go. However, I'll also comment here and there on general difficulties unique to the psychiatric treatment experience.
The Brain is a Labyrinth
The first is the subject matter itself. The brain is unfathomably complex. We have developed prosthetics mimicking nearly every part of the body, but replicating the brain remains out of reach. It is perhaps nature's greatest feat. On top of its complexity, it's also not trivially observable. Relevant activity happens on microscopic scales or requires specialized tools to measure. The surface has hardly been nicked. We are still finding out more about it every day. Structures that were thought to play one role may be discovered to do something else the next. Given how much is still veiled in mystery, our understanding of the brain's mechanics is not yet in a place where we can make extensive and firm predictions about human behavior. The brain is a chaotic system in that the firing of one neuron can have significant downstream effects across many different parts of the brain. It may be a convoluted path to trace how the firing of a brain cell today could have a hand in stage fright at a school presentation weeks later. I would almost equate it to the difficulty of predicting the weather. While the physics of weather is understood down to concrete equations, the number of factors that would need to be traced with absolute precision to accurately estimate the future weather, even one hour from now, is practically intractable, hence the story of the Butterfly Effect. The brain is a similar problem, but instead of just aerodynamic equations, there are many difficult-to-measure moving parts whose roles are yet to be fully understood.

It is also worth disambiguating how psychiatry and neuroscience consider this problem. Psychiatry takes a top-down approach that ultimately deals with treating behavioral abstractions we have defined, such as happiness, OCD, or manic depression. Neuroscience tends to be a more literal, bottom-up analysis of the nervous system and may report very raw data such as ion concentrations in a brain structure or the firing rate of a neuron. For this reason, neuroscience is more readily made scientifically robust, and it works well for addressing conditions whose existence is boolean, like seizures: something that is inarguably either present or absent and reveals itself in distinct spikes. However, the extreme granularities that neuroscience deals with may be difficult to trace back to mood disorders, for instance, which manifest as a pattern rather than a spike and whose causal factors may be a complex interplay of many aspects of the brain. Meanwhile, it is easy for psychiatry to get lost as well, with its top-down approach of beginning with behaviors that live on complex, multi-dimensional continuums and placing them in discrete categories, often by subjective evaluation. For the most part, we have yet to fully connect the top-down and bottom-up explorations. Attempts to begin from behavior and find explanations are incomplete, as are attempts to start from neurological observations and draw conclusions about how they influence behavior.
This all poses difficulty in dealing with exacts, so instead we often defer to more ambiguous evaluation criteria, such as self-reports and behavioral observations, which can lend themselves to unsatisfying inter-rater reliability and a vulnerability to p-hacking. Even measurements that can be evaluated objectively, like IQ tests, still draw into question what is actually being measured. We can quickly fall into a mess of creating metrics and abstractions defined by other abstractions without much of a gold standard of truth to trace back to. The conditions themselves cannot be objectively determined. They are often diagnosed by "meets K out of N criteria (typically of equal importance) with a certain severity", and we can't even really say for sure that these are the right criteria to look for or how to faithfully evaluate them. Psychiatrists, while allowed some freedom of intuition, are all taught to ensure they report the same diagnosis given the same patient, following what was set out by the DSM (Diagnostic and Statistical Manual of Mental Disorders). Though the patient themself may vary across interviews, shrouding the phenomenon of interest, which exists as a human-crafted simulacrum, in an additional layer of noise. The psychiatrist is tasked with the burden of serving both as the caretaker and as a diagnostic tool, like an x-ray. Really, in the face of this conundrum, the best we can do for reproducibility is project the complexity of a condition onto a few agreed-upon labels, even if this squashes a patient's story into an oversimplified view.
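To make that concrete, here's a minimal sketch of what a "meets K out of N criteria" rule looks like when written out. The criteria names, severity scale, and thresholds are all hypothetical, not pulled from the DSM; the point is just how a continuum of experience gets squashed into a single boolean.

```python
# A minimal sketch of K-of-N criteria counting. The criteria, severity scale,
# and thresholds below are hypothetical, not the DSM's.

from typing import Dict

# Hypothetical checklist: criterion name -> clinician-rated severity on a 0-3 scale.
example_interview: Dict[str, int] = {
    "depressed_mood_most_of_day": 3,
    "loss_of_interest": 2,
    "sleep_disturbance": 1,
    "fatigue": 2,
    "concentration_problems": 0,
}

def meets_k_of_n(ratings: Dict[str, int], k: int, severity_threshold: int = 2) -> bool:
    """Return True if at least k criteria meet the severity threshold.

    Every criterion counts the same, and everything below the threshold is
    discarded: a continuum of experience collapses into a single boolean.
    """
    met = sum(1 for severity in ratings.values() if severity >= severity_threshold)
    return met >= k

print(meets_k_of_n(example_interview, k=3))  # True: 3 of 5 criteria at severity >= 2
```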
One thing I like to cite is how the prevalence rate of autism has skyrocketed. There are many explanations here. The common, optimistic view you'll hear is that our awareness and recognition of the disorder have improved. This doesn't sit right with me. I think it can partially explain the observations, but it's hard not to think about how much autism has become an icon of pop culture and, ironically, a social phenomenon. Many diagnose themselves with autism based on their self-perceived awkwardness or tedious habits. It's become a word many say to each other when describing having a really strong interest. On the other hand, I think psychiatrists are more commonly diagnosing autism simply because it is more prevalent and front of mind. You've heard the word, you've seen your fellow psychiatrists diagnose it. Surely this is some kind of bias as well. The autism spectrum diagnosis, in my opinion, has become something of an "other" pile characterizing a very wide space of possible disorders (and possibly totally healthy people) described by social awkwardness. I want to be careful not to mince words here. Autism is quite real. But I question how much utility we get out of a diagnosis that refers to so many different manifestations all as autism when they may have entirely different causal factors and optimal treatments. It's hard to even know if the more severe, very prominent cases can be explained by the same factors.

This throws a bit of a wrench in achieving a robust science. Consider for a second that you are testing a new antidepressant drug to study its effects. What all do you need to do to know whether it works or not? How can you measure that? At what point in time can you say if it worked or not? Could it be that a given pill might just take more than 6 months to work? How do you deal with the extreme variance and bias placebo can incur? Have you seen people on Reddit talking about how a dietary amino acid induced a psychotic break for them? Is that truly indicative of a rare edge case or a spurious correlation with a pre-existing disorder? It is hard to imagine the science of psychiatry ever reaching the clarity of physics.

Now, this isn't to say we can't do science here. We do the best we can, and we painstakingly follow the scientific method as much as possible. Though, as we'll see, some of the biggest breakthroughs have been the product of "mess around and find out" as opposed to carefully planned roadmaps. Science in psychology has given us a wealth of correlations to work with, but almost never the complete story of causation. What's important, though, is that we have better predictive value than random chance, and that has allowed us to begin treating people, but there's a lot of uncertainty to swim through.
It's part of the reason we've been able to employ therapies at a medically recognized level, like Cognitive Behavioral Therapy, among others. We don't need a neuron-by-neuron explanation for why it works, just evidence that, at a high level, people report improvement. Therapies also don't carry nearly the risk of introducing foreign chemicals into the body, and if nothing else, they hopefully give the comfort of bonding with another human and being taught tools to handle life, even if these aspects are harder to create robust scientific explanations for. On the other hand, it should be noted that even this is called into question by the Dodo Bird Verdict, which suggests all therapies yield similar results, partially attributed to the difficulties of measuring success.
Collecting Data
The rate-limiting factor for scientific progress is hardly ever the fault of human competence. If we want to understand the long-term effects of pollution on climate, well, then we need to wait for the long term to happen. The same goes for observing how an ecosystem responds to an invasive species or how a bacterial population reacts to a novel antibiotic. There is some wiggle room here. Better data analysis may yield more generalizable models, allowing us to predict the long term without needing to get there. In other words, it's the same reason we can know where an asteroid will be in thousands of years without needing to wait for that moment. But the fact of the matter is that research is often serial: we must procure one result before devising another experiment or proceeding with an innovation.
There are several common rate-limiting factors across sciences.
Experimentation time - How long do we need to run a trial before we have the data we need?
Human trials may take particularly long.
A space trip to observe Jupiter to explore its atmosphere will take a lot of travel time.
Experiment resources - What are the resources needed to conduct the study?
Availability of subjects to be studied
Affordability of tools or entities of research. Think particle colliders, rockets, and giant compute clusters.
Engineering experiments - Funds and resources may be readily available, but we still need to build the infrastructure for experiments. How long will this take?
If we wanted to take measurements on Mars, we need to build all of what it takes to send a rover there.
Experiment ideation and usage of data - Imagine an experiment is performed and it yields data that could potentially make for a huge stride in science. Did we notice it and make the best usage out of it or did it go under our noses? How do we decide what to try next? Are we searching optimally?
This part partially reflects human competence and intuition
Addressing counterfactuals and confounding variables - When we have many variables at play, we need to marginalize out confounding variables to isolate those of interest.
In human studies, do we know if an effect was yielded from the intervention itself, or might it be socioeconomic factors or other health factors? Effectively ruling out all of the "What if's"
Psychiatry and neuroscience suffer a good bit from all of these.
Poking around nerves to see which triggers a facial movement is a pretty quick experiment, but most drug trials looking at changes in behavior are in the long-term regime
Presently this appears to be an inevitable cost. Though if we could somehow turn extensive granular data into comprehensive explanations or laws, as in the physics analogy, we might be able to get results of similar caliber from smaller-scale experiments. In other words, a fictional holy grail could be discovering a certain pattern of brain activity observed in a short time frame that predicts recovery in 5 years with 99.9% accuracy, or something like that.
This is also where things like computers that can perform high-fidelity simulations of the world become very coveted. We'd quite like to run entire human trials on computers, potentially faster than real life and without the risk of harming real humans.
Another response to experiments that are serially slow, meaning they have many long stages you just have to get through, is to parallelize: run many experiments in parallel to increase data production and hopefully strike gold.
Getting grants, permissions, and finding the people for studies can take a while. This also limits experimentation to scientists who have this specialized access.
This could be made more efficient, but you know how it goes.
Engineering is not typically as much of an issue, but some studies may require special machinery.
As we'll discuss, a lot of strides in psychiatric studies have come from revisiting data that may have been previously overlooked. We have to wonder what else might be under our nose right now.
Developing better heuristics for search is a huge one. Unfortunately, data is noisy and ambiguous to evaluate, which is part of why promising results go unnoticed. I earnestly believe there could be a lot of value in having AI go back over all the past research and try to piece together insights or new research directions.
The number of confounding variables in psychiatric studies often makes identifying absolute causation intractable, so we instead run the largest trials we can, use placebo controls, and rely on statistical analyses.
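To illustrate that last point, here's a toy simulation (invented numbers, nothing to do with a real trial) of why randomization plus statistics is the fallback: in observational data, a confounder like baseline severity decides who gets treated and poisons the naive comparison, while random assignment recovers something close to the true effect.

```python
# Toy simulation of confounding vs. randomization. All numbers are invented.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

severity = rng.normal(0.0, 1.0, n)           # hidden confounder (baseline severity)
true_effect = 0.5                             # the treatment truly improves outcomes by 0.5

# Observational setting: sicker people are more likely to receive treatment.
treated_obs = rng.random(n) < 1 / (1 + np.exp(-2 * severity))
outcome_obs = true_effect * treated_obs - 1.0 * severity + rng.normal(0, 1, n)
naive_estimate = outcome_obs[treated_obs].mean() - outcome_obs[~treated_obs].mean()

# Randomized setting: treatment assignment is independent of severity.
treated_rct = rng.random(n) < 0.5
outcome_rct = true_effect * treated_rct - 1.0 * severity + rng.normal(0, 1, n)
rct_estimate = outcome_rct[treated_rct].mean() - outcome_rct[~treated_rct].mean()

print(f"true effect:            {true_effect:+.2f}")
print(f"naive observational:    {naive_estimate:+.2f}")   # badly biased by severity
print(f"randomized comparison:  {rct_estimate:+.2f}")     # close to the true effect
```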
All of these together are part of the reason why it takes 10-20+ years for the next generation of a psychiatric medication. And it may often be a relatively marginal improvement, if there is one at all. Sometimes things are just different. There are plenty of places where older medications have been preferred over their newer counterparts.
The lifetime of modern psychiatry has witnessed only a few generations of psychiatric drugs, with success nearly as tepid as at the start. This is in stark contrast to Moore's law, which posits that computational power approximately doubles every two years, but it also gives us a lofty goal to match.
Ethics
"First, do no harm" is not a line that actually appears in the Hippocratic Oath, but it is a principle that has governed medicine and psychiatric treatment, especially after the horrors of mental asylums before we understood how to treat those with extreme mental disorders.
Though this creates a tradeoff in the doctor's mission to minimize suffering. One must balance the risk of worsening a condition against the chance that an understudied treatment could substantially help someone.

As mentioned earlier, the research process on drug discovery is extensive and time consuming, and there often isn't enough data to study very long-term effects. Progress is slow, and even then, drug companies face a high risk of lawsuits if a potential side effect goes unnoticed in drug trials, prompting slow and careful consideration before releasing a drug into the wild. At the same time, there are people who need help immediately, and the risk might be worth it as there isn't much to lose. For some, the unfortunate reality is that betting on an experimental drug is the lesser of two evils.
Drugs often begin with tests in vitro or on rats, and this makes for a rapid iteration space for uncovering interesting properties. Though it is not exactly a gold standard for inferring how a drug will affect people, given the degrees of separation between the rat brain and the human brain.
I don't think the ethical component confers as much of a slowdown as the due process of research requires anyway, though it does add an extra layer of needed scrutiny and classic government roadblocks.
History of Developments
It should be noted how recently medicine has taken on proper science, let alone psychiatric medicine. It was not even a full century ago that doctors were recommending cigarettes and that mental illness was considered the product of a moral failure. It was only in 1952 that the first edition of the DSM was released, a monumental moment marking a commitment to adhering to a standard of practice and giving mental treatment the due process of the scientific method. Many disorders common today were not recognized in the first edition. For instance, ADHD did not exist; the closest thing that mentioned hyperactive behaviors was "minimal brain damage".
Looking across the years at how the DSM has changed, one has to wonder: What else are we missing? What are we failing to recognize now? The number of unique diagnoses has tripled from DSM-I to DSM-5. This is at least in part due to increased awareness and understanding of conditions. Also, given my previous complaints that diagnoses can put people in a box or oversimplify their story, this should in principle offer more granular precision. But can we be so sure our current definitions are improvements over previous iterations? A risk with more diagnoses is that there are potentially more ways to be seen as unhealthy, and the threshold for being assigned a diagnosis is lower, again, for better or worse. The increase in diagnoses over the years could mean that mental illness is on the rise or that we are doing a better job of recognizing it, but because our definitions and culture are fickle, it's hard to say what's actually happening, and there is always the risk that we are seeing disorder in normal behavior. It's a bit unsettling how much of a moving target our diagnostic criteria can be and how a refactoring can entirely change our perspective on ailments. Someone who may have been considered healthy once may be considered disordered now, for better or worse. All in all, this is to say the book is to be taken with a grain of salt, and just as we look back on the older versions as a primitive take on the mind, we may do the same in 20 years.

Around this same time, the first antidepressant effects were discovered by accident: the tuberculosis drug iproniazid was noticed to lift patients' moods, and imipramine, originally trialed as an antipsychotic, followed a few years later as the first tricyclic antidepressant. Coincidentally, 1952 was also the year of the first recognized antipsychotic, chlorpromazine, which was originally intended as a surgical anesthetic. Amphetamines (the active ingredients of Adderall) were unveiled a bit earlier: first synthesized in 1887, they saw their first medical use as a decongestant in 1933 and later found use for obesity and wakefulness, with one doctor independently giving them to children for headaches and noticing effects on disruptive behavior. Around the same era, literal meth was given to army soldiers in Germany as a performance enhancer. It was a bit of a wild west in which we gave some people crippling addictions, but it also produced important accidental discoveries about the chemistry behind depression that are still relevant today. These were all cases where a medication was originally used for another purpose, but then a keen eye noticed changes in patients' behavior and decided it was worth exploring further.
The first generation of psychiatric medicine might be described as "dirty" drugs, meaning that, unlike SSRIs today which have high specificity for serotonin reuptake inhibition, they hit many different parts of the brain. Imipramine, for instance, is believed to have its primary effects through increasing serotonin and norepinephrine while also acting on a slew of other targets. It took from the start of psychiatry to the current day to reach a generation of drugs that filtered out action on targets we "don't want" and more exclusively hit the targets we do want to hit. Emphasis on "don't want". An irony here is that the use of our older-generation drugs is very alive and well, and sometimes they come in handy in treatment-resistant depression or when patients do not respond well to our newer SSRIs and SNRIs. Now, specificity has been desirable in reducing side effects, though it's also quite possible the ensemble of targets was beneficial in some ways. Given the continued use of older drugs, newer generations may not be exclusively improvements over their predecessors.
The point of frustration here is that we are approaching 8 decades of drug development with admissible but overall disappointing success rates, no clear explanation as to why older drugs may outperform or why the effects experienced by different people are so variable (the same drug may raise anxiety in one person and reduce it in another), and still, overall, the same-ish mechanisms of treatment, spurred on by clinging to the monoamine hypothesis, a very one-dimensional causal explanation of mental ailments. To my knowledge, it is not even established whether a reduction in serotonin is causal or a byproduct of depression, merely that it's correlated, nor whether increasing it alleviates depression directly or through secondary, indirect effects. Additionally, treatments for schizophrenia and manic disorders are still hardly satisfactory. While antipsychotics can treat the "positive" symptoms (additive symptoms like hallucinations or mania), treating the negative (subtractive) symptoms is still hard to get right, resulting in potentially lifelong unsolved issues, high suicide rates, and fairly often a preference to stay unmedicated. All of this, especially when treatment refusal or avoidance is relatively common, makes it unclear to me whether we can even know if we're progressing in the right direction. I hope this doesn't come off as an accusation. You can't force breakthrough research by simply wanting it to happen, and I understand well the complexities of the subject matter itself, the financial risk, and the ethics around it. Though the amount that is still under-explored and unexplainable is, nonetheless, frustrating.

On the topic of accidental breakthroughs illuminating new paths of research: seizure medication found its way into psychiatry as a first-choice treatment for bipolar disorder. Ketamine, a dissociative sold as a street drug and used for tranquilizing horses, is a powerful, fast-acting antidepressant. Memantine, a drug intended for treating dementia, now appears in OCD treatments. NAC, an amino acid derivative employed in treating extreme cases of acetaminophen overdose, has interesting early results in OCD, addiction, and schizophrenia. The commonality between these four is that they actually have little in common, either with each other or with the mechanisms that underpin typical antidepressants. This is wildly interesting. It potentially unlocks a few new avenues to explore, but it also asks us to take a step back and update our mental models. And these are just a few of many possible candidates for new drugs. I can't speak to the future of these, but it is refreshing to see that there are other ways of addressing conditions outside of the narrow slice of augmenting the monoamines (increasing serotonin, dopamine, and/or norepinephrine).
Beyond drugs, there are a number of breakthroughs that have changed how we thought about the brain. It used to be thought that neurogenesis, after a certain age, was pretty much complete. We now know that generating new brain cells throughout life is entirely possible.
The Measurement Bottleneck
Another thing that poses an obstacle to Good Science is what I'd call the measurement bottleneck. Psychiatrists are tasked with inferring the happenings in one's brain (or at least a solution for them) through observed behavior. In terms of informational content, mapping the full complexity of the mind into words and observable actions is like shoving the moon through a straw. Much won't make it across. Even self-assessment, despite the additional benefit of having access to inner thoughts, is limited in reach. If there's one thing Freud had right, it's that a lot happens in the subconscious, out of reach from ourselves. Not to mention we are biased creatures, allergic to dissonance and caught up in the stories we tell ourselves. Rightfully so: the brain developed to experience and act in the external world, not to introspect on itself. Thus, we may not even have the language for faithful, deep self-interpretation.

So on one hand, we have a loss of fidelity of our mental story because of the low bandwidth, noisy channels we communicate over, but the more dangerous part is in how unreliably mind and body correlate, and how this can differ radically between people.
On average, it is reasonable to guess that a smile implies showing happiness and a frown implies showing sadness. But these are behaviors taught by specific cultures, not human universals. After all, a show of teeth in the animal kingdom often indicates one is looking for a fight. With this in mind, the proper detective work of a psychiatrist on their clients may work for some but miss for others. The psychiatrist projecting explanations onto the client based on their own experiences is inevitable; it's just that humans of the same culture often operate in the realm of similar shared experiences of reality.
From observations of a patient's reports and behaviors, the psychiatrist must perform inference: work backwards and infer the causal variables at play in a one-to-many problem. One behavior, at the precision we can distinguish, can be indicative of many possible mental states. It's why a smile can be interpreted as general friendliness or a display of romantic interest, among hundreds of other explanations. Because the full range of explanations is far too large to assess in reasonable time (even with our nature to excessively ruminate), we may only consider a few based on our priors, a bias towards explanations that fit within our personal experiences. In other words, we construct a mental picture of others' experiences from the same building blocks that constitute our own, because we know no other language. This bias is what can lead to inaccurate readings. Not to mention the massive space of confounding variables. There are instances of ludicrous mixups, like misdiagnosing a behavioral response to abdominal pain as borderline personality disorder. A key problem here is that the abstract model of abnormal experience was constructed without first-hand experience of that abnormalcy, and thus possibly lacks the necessary framework to view it under.
To sum it up, we have several factors that damage the fidelity of communicating one's mental condition to a caretaker:
The complexity of the mind is downsized to what can be expressed with words and behaviors.
Behaviors and descriptions are ambiguous by the one-to-many problem.
This is specific to time and place.
It also varies by person; every person may express differently.
The interpretation stage is at risk as well; every psychiatrist may interpret differently.
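One hand-wavy way to formalize the one-to-many problem is to treat the psychiatrist's read as Bayesian inference over hidden mental states. Every number below is invented purely for illustration; the takeaway is how little a single ambiguous observation moves the needle relative to the prior.

```python
# Toy Bayesian reading of an ambiguous behavior. All numbers are invented.

priors = {                       # the observer's prior beliefs over latent states
    "content":             0.60,
    "masking_depression":   0.25,
    "hypomanic":            0.15,
}

# Likelihood of observing "frequent smiling in session" under each latent state.
likelihood_of_smiling = {
    "content":             0.70,
    "masking_depression":   0.60,   # smiling can coexist with concealed low mood
    "hypomanic":            0.80,
}

# Bayes' rule: posterior is proportional to prior * likelihood.
unnormalized = {s: priors[s] * likelihood_of_smiling[s] for s in priors}
evidence = sum(unnormalized.values())
posterior = {s: p / evidence for s, p in unnormalized.items()}

for state, p in posterior.items():
    print(f"{state:20s} {p:.2f}")
# The observation barely separates the hypotheses: most of the "diagnosis" here
# is carried by the prior, which is exactly where observer bias creeps in.
```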

So how do we know if we got anything right?
Utility. As much as I have crapped on the art of psychiatry, something I have to give it credit for is that progress has been guided in the direction of maximizing usefulness. So maybe we can't say the exact condition someone has or what chemical imbalance is causing it. What matters is that clients reach a place where they feel better than before. At the end of the day, the goal is to reduce self-perceived suffering and restore people to normal functioning. The ability to participate in life's activities successfully can be measured somewhat objectively, and despite imprecisions in reading ourselves, if one feels fine and is content, then that's really most of it, right? If assigning a label to observed symptoms can help predict what treatments may be maximally useful on average, then so be it. This has been a saving grace in providing the field direction. Once more, our accuracy often leaves much to be desired, but even so, we can say, "this makes people who present similarly to you report feeling better N% of the time on average", and we can use that as an anchor for decision making.
Overall, I think this makes for a pretty solid metric and provides light where we may otherwise be feeling our way through the dark. But it's not perfect. "Normal" is a loaded term, self-perception as a signal can be fallible, and ultimately any objective that is a proxy for the thing we actually aim to resolve can be "hacked".

Firstly, normal is subjective and it changes over time. Critics of the overprescription of ADHD medication point to the fact that only recently have humans developed a lifestyle that involves sitting through many hours of classes. Not to mention that nearly every form of monetized technology profits by pulling your attention away. Pushing for normalcy can very quickly become unjustly authoritative. We want to be normal enough that we can lead fruitful lives unbound by cognitive or mood difficulties, but this is as far as the expectation should really go. Some people are social, some not so much; some are very affectionate, some are more cold. All of these fall under the blanket of normal, and we should only really deem abnormality if it poses a significant obstruction. Otherwise we risk creating more distress by making people feel they are not up to human expectations, for really no good reason! This is a risk in society and media in general, but perhaps the nail in the coffin is for medical officials to signal abnormality on a safe, healthy behavior that is maybe just less conventional. If not regulated, we risk a standard of practice that gets so deep into the problem-solving mindset that it starts to hallucinate problems where there may not be any. Seek and ye shall find.
As for the metric of self-reports, it may fall short in cases of psychosis or mania. Someone who is manic may feel fantastic but have self-destructive behaviors that could benefit from treatment, and there is even greater nuance in psychotic cases.
More generally speaking, self-reporting is trivially hackable. Someone who wants to avoid treatment can feign wellness, which can put the patient-caretaker relationship at risk where the psychiatrist may need to guess when this is happening but also must be careful not to wrongfully accuse.
As an extreme case we can imagine a hypothetical drug that makes you respond to every question with "yes". In trials, where patients are asked if they feel better, the drug might score outstandingly while entirely missing what it was supposed to resolve.
This might come off as a fairytale, but the scenario it suggests is not entirely unrealistic. Someone might be left too placated or zombified by a drug to object, or, as I have seen a few times, people have described antidepressants as making them feel invulnerable, in the worst case resulting in antidepressant-induced mania. Or some drugs may make you feel great in the short term with serious addiction and dependence down the road. In the honeymoon phase of a developing addiction, one might report positive feelings about their self-treatment. To be fair, actual prescribed treatments might not be a ton better! Benzodiazepines can be quite addictive, as are stimulants and opioids. It should be noted that meth is one of the most highly rated treatments on drugs.com despite what we know of it. Though I digress; there it is administered in controlled doses and to a rare subset of people.
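Here's the "yes drug" thought experiment turned into a toy simulation, with all numbers invented: a drug that merely inflates self-reported improvement beats a genuinely helpful one on the self-report endpoint while doing nothing to the underlying condition.

```python
# Toy illustration of a hackable proxy metric. All numbers are invented.

import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Latent, unobserved symptom change (positive = better). The hypothetical "yes
# drug" does nothing to it; the comparison treatment genuinely helps a little.
latent_change_yes_drug = rng.normal(0.0, 1.0, n)
latent_change_real_drug = rng.normal(0.4, 1.0, n)

# Self-report endpoint: "do you feel better?" The yes drug adds a big response bias.
report_yes = (latent_change_yes_drug + 2.0 + rng.normal(0, 1, n)) > 0
report_real = (latent_change_real_drug + rng.normal(0, 1, n)) > 0

print(f"self-reported improvement, yes drug:  {report_yes.mean():.0%}")   # ~92%
print(f"self-reported improvement, real drug: {report_real.mean():.0%}")  # ~61%
print(f"actual mean symptom change, yes drug:  {latent_change_yes_drug.mean():+.2f}")
print(f"actual mean symptom change, real drug: {latent_change_real_drug.mean():+.2f}")
```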
One thing to draw into question when evaluating what we got right is: why do so many drugs have the capability of worsening things? Why do SSRIs carry a black box warning about increased risk of suicide when starting them? Why do so many drugs in our toolbox have serious risk of addiction and dependence? Why is it relatively commonplace for drug experiences to be so poor that going without treatment might be preferred? And I don't just mean side effects like nausea or impotence; difficulties like zombification or cognitive slowing really might be the primary effects of the drug at work! Is it a fact of life that every drug must come with tradeoffs like these, or have we barely scratched the surface of what's possible?
Inter-rater reliability. The default should be multiple opinions.
Spicy take. I think the default for psychiatry should be having a set of expert opinions, never just one. Ever.
The potential for disagreement among evaluators is just way too high, and so is the risk of an incorrect diagnosis that sends you down an irrelevant rabbit hole. The odds of this happening are high enough that it feels unjust not to have multiple opinions as part of the default practice.
I also don't think there is enough accountability here, though I digress; I don't know how to enforce it. In other medical fields, diagnoses are closer to plainly present versus absent, and a missed diagnosis can risk not only your reputation but even your license and wallet if it gets to the point of a lawsuit. This kind of accountability is important. It's a considerable burden to take on as a medical practitioner, and we have to be understanding of what is done with good effort and faith, but having your career put on the line ensures work does not get sloppy. To the best of my knowledge, there isn't an equivalent for psychiatry that I could imagine holding up well in court, simply given the ambiguity of the conditions.

Because there is no ground truth in psychiatry, nothing that matches, say, an unnoticed cancer or a missed broken bone worsening with time, the next best thing we can do is combat variance of opinions with averaging and debate. I don't care how good you think your intuition is. The conditions are human constructs, weakly defined, and the reliability of evidence used in diagnosing them can be hit or miss. The fair thing to do is to tackle uncertainty with the law of large numbers. Even then, this only addresses variance, not bias. If someone presents in an atypical manner, then even an army of psychiatrists may still arrive at an ineffective diagnosis.
Even if this raises the cost of treatment, I think clients are owed this. After all, you may end up spending more time and money going down the wrong treatment path.
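As a quick sketch of that variance-versus-bias distinction, with arbitrary numbers: averaging more independent raters shrinks their disagreement, but a blind spot shared by every rater survives no matter how many opinions you collect.

```python
# Toy demonstration that averaging opinions reduces variance, not shared bias.
# All numbers are arbitrary.

import numpy as np

rng = np.random.default_rng(2)

true_severity = 5.0     # the "ground truth" we never observe directly
rater_noise = 2.0       # independent disagreement between raters
shared_bias = -1.5      # a systematic blind spot all raters share (atypical presentation)

for n_raters in (1, 3, 10, 100):
    # Each of 20,000 hypothetical cases gets the average rating of n_raters clinicians.
    ratings = true_severity + shared_bias + rng.normal(0, rater_noise, (20_000, n_raters))
    consensus = ratings.mean(axis=1)
    print(f"{n_raters:3d} raters -> mean {consensus.mean():.2f}, spread (std) {consensus.std():.2f}")

# The spread collapses toward zero as raters are added, but the mean stays stuck
# near 3.5 instead of 5.0: more opinions fix variance, not shared bias.
```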
Solving a Problem or Masking it?
A common debate with medication is whether it actually addresses the problems, or if it just masks the symptoms. I have mixed feelings on this.
Imagine there is a lifelong, distinct deviance in one's brain chemistry, by nature, from what is considered normal. Invariant to the ebb and flow of life, we observe that this chemical just runs lower or higher than typical. Now imagine a medication could correct this imbalance back to normal levels. If it could be made that simple, I would say it's actually addressing the problem. Ideally we could erase all biological confounding variables in this manner and then address whatever residual remains. Unfortunately, it is yet to be made this simple. These clear, decoupled variables exist only in our thought experiments so far. Though, this is somewhat the promise behind how stimulants are intended to affect those with ADHD, and there is a small subset of people whose response to marijuana is almost strictly improvement or a return to normalcy. For those folks, I say all the more power to them, but be careful of dependence.
The other side of this is that you risk artificially forcing mood into an elevated or less affected state, blinding patients to their problems. Granted, for some this is a worthwhile compromise. In other cases, it might let issues in life continue to fester as they are only drowned out or obscured, never giving one the chance to build adaptive behaviors or address them head on.
I think it could go both ways, but anecdotally I have heard the latter case reported often enough to think of it as common. Namely, the same cases where people reported feeling either numbed, invulnerable, or that a mood state is otherwise thrust upon them.
This is speculation above my pay grade, but I imagine the mechanism by which neurotransmitter activity is modulated influences how it affects one's experience. In other words, is a drug helping increase response sensitivity, such that more neurotransmitters are released when met with a stimulus? Or is activity increased and held at a constant level, invariant to external factors? In other, dramatized words: phantom-like, inducing the state you would have after skydiving or socializing, but without any corresponding event. The first feels desirable in cases of anhedonia, with the risk of outbursts of unwanted emotions. The second feels potentially desirable for stability (evidently not in bipolar cases), though it risks making life flatter.
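To dramatize the difference, here's a purely illustrative toy model (no claim to actual neuroscience): one hypothetical drug scales the response to the day's events (a gain change), while another pins internal state at an elevated constant. The variance of the resulting signal is a crude stand-in for how much inner state still tracks the outside world.

```python
# Purely illustrative toy model of gain modulation vs. a pinned constant level.

import numpy as np

events = np.array([0.0, 0.2, 1.0, 0.0, -0.5, 2.0])   # hypothetical emotional salience of the day's events

baseline = 1.0

untreated      = baseline + events              # inner state tracks events
gain_increase  = baseline + 1.8 * events        # amplified sensitivity to the same events
constant_level = np.full_like(events, 2.2)      # elevated but flat, invariant to what happens

for name, signal in [("untreated", untreated),
                     ("gain increase", gain_increase),
                     ("constant level", constant_level)]:
    # Variance here is a crude proxy for how much inner state still reflects outer events.
    print(f"{name:15s} mean {signal.mean():.2f}  variance {signal.var():.2f}")

# The constant-level "drug" has zero variance: pleasant, perhaps, but decoupled
# from the events that feelings are supposed to be signaling.
```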

Neurotransmitter fluctuations and feelings are indicators of our reality. As the world around us changes, we are influenced into changing ourselves as well, just as the liquid inside a thermometer rises or falls with the temperature. For brain activity to be invariant to the world around us is to be unaffected by the outside world; mental state never departs from a flat homeostasis. The promise of a dosage of neurotransmitters after doing something that feels good has often been thought of as the "carrot on the stick" moving us to action. In the event that this becomes medically governed and we lack felt signals, how would we know what to do? Hunger exists as a reminder to eat. Pain tells you that you probably shouldn't put your hand on the stove. Many of these signals either directly or indirectly coax you towards survival. Now, through modern intelligence we can apply logic to navigate our way through life, but it's a secondary source. It's not the same as receiving the signal directly.
In my view, an ideal treatment would place a patient in the normal range of neurological metrics in a way that resembles how the healthy mind pulls it off, which I'm not sure is something we have figured out yet.
Diagnoses Wield Power: Doctor-Patient Relations
In other fields, a diagnosis is typically a statement of fact, though even then there is some room to take offense or seek a second opinion in more ambiguous cases. In psychiatry, for the reasons harped on above, it can come off as an opinion of the doctor. A diagnosis in psychiatry may feel less like, "An irregularity has been discovered in your thalamus" and more like, "You are anxious and depressive." No matter how much we aim to destigmatize mental health, it's hard to imagine a diagnosis being entirely exempt from the possibility of being received as a jab at one's character. Especially when one's self-confidence may already be under fire, a poorly conveyed diagnosis can add fuel to the fire. At the same time, you don't exactly want the patient to feel that their condition is entirely out of their control and responsibility. Somehow a middle ground needs to be reached where the condition is recognized as part of the patient, something they can play a role in overcoming, without worsening their self-image.
Nonetheless, this is one aspect that can make for a fragile relationship between doctor and patient. The patient is part of their own treatment team. A team works well when founded on trust and aligned in goals. To build rapport, the psychiatrist must get the patient to open up and feel that they are on the same side, something that can be especially difficult in cases of trauma, paranoia, or cases where one hides behaviors.
There are a number of places where this can go wrong, like patients hiding addictive behaviors or intentions to harm, knowing that admitting them may result in a loss of freedoms and obstruction of goals. In my opinion, a good psychiatrist goes far beyond being able to design effective treatment plans; they also know how to walk on eggshells properly, comfort patients in need, and ultimately find a way to reach mutual agreement on a sustainable treatment plan. I imagine this to be an under-appreciated but necessary aspect of treatment, especially in extreme cases where one may have the frightening experience of receiving treatment nonconsensually, and those who can pull it off successfully are more likely the exception than the standard.
Psychedelics
No conversation on the state of psychiatry would be complete without at least a little blurb on psychedelics. It is fascinating how many street drugs now also sit in the toolbox of psychiatrists, and I have no idea what to think about what this says about the field. Out of all the common street drugs, cocaine/crack is one of the few with essentially no psychiatric usage. Opioids, meth, marijuana, ketamine, psychedelics, and more all have at least some medical usage.
I don't necessarily see psychedelics becoming a first-choice treatment option; I'm cautiously optimistic. Though there are enough studies showing utility in scenarios where performant drugs have yet to be discovered. I think they are particularly interesting in that they are not always sought after for directly resolving a chemical imbalance or inducing a mental state every day; instead, through sporadic single doses, they aim to increase plasticity to break free from a difficult rut.

The experience, and what one does in that time, seems more sought after than biological effects independent of the experience (I'm careful with my wording here, as one way or another everything induces neurological changes). In other words, it almost makes one temporarily more malleable to therapy or suggestion, an effective tool for disorders that can cause one to be very closed-off, and also why LSD was once explored for interrogations by the CIA. The idea of a drug that induces a rapid shift in perspective and well-being, as opposed to a slow burn, is a huge change in the paradigm. LSD had studies as far back as the 1960s showing long-term remission in alcohol use, which is an unusual result given there are few medicinal tools for treating addiction. Today there are options for addiction treatments centered around a number of similar hallucinogens, like ibogaine or DMT. Strangely enough, many psychedelics are reported to be "anti-addictive", naturally self-regulating as opposed to increasing a desire for more. MDMA is an atypical one in that it has some risk of addiction, though it has shown breakthrough levels of remission from PTSD, a historically very difficult disorder.
I don't have a wildly strong opinion here, though it does appear a promising area of research and a refreshing pivot from approaches that have been dominant for decades.
Conclusion
The progress of psychiatry, to me, has been underwhelming. I primarily attribute this to the difficulty of the subject matter, but given where breakthroughs have come from, it also seems like we could afford to get more experimental. Unfortunately, the financial incentive system makes this a risky option for a venture that is ultimately seeking a return. However, recent times feel somewhat pivotal in the rate of change. Alternative and unexpected treatments are being recognized, and there is something of an illuminated path forward. Startups pursuing mechanical treatments like TMS and other electronics-based brain-activity modulation are also profoundly interesting, both as tools of treatment and of interpretability. To reiterate, there has never been a better time to contribute to psychiatric research.