Questions About Qualia Research Institute (Organizational)
QRI is a nonprofit research group based in San Francisco, California, that studies consciousness. We are a registered 501(c)(3) organization.
Qualia Computing and Opentheory are the personal blogs of QRI co-founders Andrés Gómez Emilsson and Michael Johnson, respectively. While QRI was in its early stages, all original QRI research was published on these two platforms. From August 2020 onward, however, publication is shifting to a unified pipeline centered on QRI’s website.
Although QRI does collaborate regularly with university researchers and laboratories, we are an independent research organization. Put simply, QRI is independent because we didn't believe we could build the organization we wanted and needed within the very real constraints of academia. These constraints include institutional pressure to work on conventional projects, to optimize for publication metrics, and to clear various byzantine bureaucratic hurdles. They also include professional and social pressure to maintain continuity with old research paradigms, to do research within an academic silo, and to pretend to be personally ignorant of altered states of consciousness. It's not that good research cannot happen under these conditions, but we believe good consciousness research happens despite the conditions in academia, not because of them, and the best use of resources is to build something better outside of them.
Questions About Our Research Approach
The first major difference is that QRI breaks down “solving consciousness” into discrete subtasks; we're clear about what we're trying to do, which ontologies are relevant for this task, and what a proper solution will look like. This may sound like a small thing, but an enormous amount of energy is wasted in philosophy by not being clear about these things. This lets us “actually get to work.”
Second, our focus on valence is rare in the field of consciousness studies. A core bottleneck in understanding consciousness is determining what its ‘natural kinds’ are: terms which carve reality at the joints. We believe emotional valence (the pleasantness/unpleasantness of an experience) is one such natural kind, and this gives us a huge amount of information about phenomenology. It also offers a clean bridge for interfacing with (and improving upon) the best neuroscience.
Third, QRI takes exotic states of consciousness extremely seriously whereas most research groups do not. An analogy we make here is that ignoring exotic states of consciousness is similar to people before the scientific enlightenment thinking that they can understand the nature of energy, matter, and the physical world just by studying it at room temperature while completely ignoring extreme states such as what's happening in the sun, black holes, plasma, or superfluid helium. QRI considers exotic states of consciousness as extremely important datapoints for reverse-engineering the underlying formalism for consciousness.
Lastly, we focus on precise, empirically testable predictions, which is rare in philosophy of mind. Any good theory of consciousness should also contribute to advancements in neuroscience. Likewise, any good theory of neuroscience should contribute to novel, bold, falsifiable predictions and blueprints for useful things, such as new forms of therapy. A full-stack approach to consciousness that does both of these things is an important marker that “something interesting is going on here,” and it is simply very useful for testing and improving theory.
QRI has three core areas of research: philosophy, neuroscience, and neurotechnology.
- Philosophy: Our philosophy research is grounded in the eight problems of consciousness. This divide-and-conquer approach lets us explore each subproblem independently, while being confident that when all piecemeal solutions are added back together, they will constitute a full solution to consciousness.
- Neuroscience: We've done original synthesis work on combining several cutting-edge theories of neuroscience (the free energy principle, the entropic brain, and connectome-specific harmonic waves) into a unified theory of Bayesian emotional updating; we've also built the world's first first-principles method for quantifying emotional valence from fMRI. More generally, we focus on collecting high valence neuroimaging datasets and developing algorithms to analyze, quantify, and visualize them (see the sketch after this list). We also do extensive psychophysics research, focusing on both the fine-grained cognitive-emotional effects of altered states, and how different types of sounds, pictures, body vibrations, and forms of stimulation correspond with low and high valence states of consciousness.
- Neurotechnology: We engage in both experimentation-driven exploration, tracking the phenomenological effects of various interventions, as well as theory-driven development. In particular, we're prototyping a line of neurofeedback tools to help treat mental health disorders.
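To give a flavor of what this kind of neuroimaging analysis looks like in practice, here is a minimal sketch in Python. It is purely illustrative: the random "harmonic basis", the synthetic activity snapshot, and the entropy-based consonance proxy are our own placeholder assumptions, not QRI's published pipeline (which, roughly speaking, scores consonance, dissonance, and noise across connectome-specific harmonic waves, as described in Quantifying Bliss).

```python
import numpy as np

# Toy sketch: project a brain-activity snapshot onto a harmonic basis and
# compute a crude "consonance" proxy from the resulting power spectrum.
# The basis, data, and scoring rule here are illustrative placeholders,
# not QRI's actual CSHW / valence pipeline.

rng = np.random.default_rng(0)

n_regions = 100      # hypothetical parcellation size
n_harmonics = 20     # number of basis functions retained

# Placeholder harmonic basis (in practice: eigenvectors of a connectome Laplacian).
basis, _ = np.linalg.qr(rng.normal(size=(n_regions, n_harmonics)))

# Placeholder activity snapshot (in practice: a preprocessed, parcellated fMRI volume).
activity = rng.normal(size=n_regions)

# Decompose the activity into harmonic coefficients and their normalized power.
coeffs = basis.T @ activity
power = coeffs ** 2
power /= power.sum()

# Crude "consonance" proxy: how concentrated the power is in a few harmonics
# (1 - normalized spectral entropy). Higher = more ordered spectrum.
entropy = -np.sum(power * np.log(power + 1e-12)) / np.log(n_harmonics)
consonance_proxy = 1.0 - entropy

print(f"consonance proxy: {consonance_proxy:.3f}")
```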
The most direct way to see what QRI has accomplished is to read through our core research. Some of our noteworthy accomplishments include:
- creating a clear conceptual framework for researching consciousness,
- conducting original research on cluster headache frequency and treatment options,
- constructing a novel neuroscience paradigm for meditation & jhana states,
- offering an in-depth defense of formalism and a critique of functionalism,
- outlining solutions for scaling cluster headache treatment options,
- presenting evidence that the pain-pleasure scale is logarithmic,
- providing a fully mathematical account of the geometry of DMT experiences,
- enumerating in detail the research threads enabled by a harmonic decomposition of brain activity,
- exploring anti-tolerance drugs,
- proposing body-cooling technology to reduce MDMA neurotoxicity,
- inventing a technique for psychedelic cryptography,
- finding a theoretical solution to the problem of other minds,
- creating a novel method to calculate emotional valence from neuroimaging data, and
- developing a unified theory of brain dynamics and emotional updating.
Over the next five years, we intend to further our neurotechnology to the point that we can treat PTSD (posttraumatic stress disorder), especially treatment-resistant PTSD. We intend to empirically verify or falsify the symmetry theory of valence. If it is falsified, we will search for a new theory that ties together all of the empirical evidence we have discovered. We aim to create an Effective Altruist cause area regarding the reduction of intense suffering as well as the study of very high valence states of consciousness.
Over the next 20 years, we intend to become a world-class research center where we can put the discipline of “paradise engineering” (as described by philosopher David Pearce) on firm academic grounds.
Questions About Our Mission
Understanding consciousness would improve the world in a tremendous number of ways. One obvious outcome would be the ability to better predict what types of beings are conscious—from locked-in patients to animals to pre-linguistic humans—and what their experiences might be like.
We also think it's useful to break down the benefits of understanding consciousness in three ways: reducing the amount of extreme suffering in the world, increasing the baseline well-being of conscious beings, and achieving new heights for what conscious states are possible to experience.
Without a good theory of valence, many neurological disorders will remain completely intractable. Disorders such as fibromyalgia, complex regional pain syndrome (CRPS), migraines, and cluster headaches are all currently medical puzzles, yet they have devastating effects on people's lives. We think that a mathematical theory of valence will explain why these conditions feel so bad and what the shortest path to getting rid of them looks like. Besides valence-related disorders, nearly all mental health disorders, from clinical depression and PTSD to schizophrenia and anxiety disorders, will become better understood as we discover the structure of conscious experience.
We also believe that many (though not all) of the zero-sum games people play are the products of inner states of dissatisfaction and suffering. Broadly speaking, people who have a surplus of cognitive and emotional energy tend to play more positive-sum games: they are more interested in cooperation and more motivated to pursue it. We think that studying states such as those induced by MDMA, which combine high valence with a prosocial mindset, can radically alter the game-theoretic landscape of the world for the better.
In QRI's perfect future:
- There is no involuntary suffering and all sentient beings are animated by gradients of bliss,
- Research on qualia and consciousness is done at a very large scale for the purpose of mapping out the state-space of consciousness and understanding its computational and intrinsic properties (we think that we've barely scratched the surface of knowledge about consciousness),
- We have figured out the game-theoretical subtleties needed to make that world dynamic yet stable: radically positive, without making it fully homogeneous or stuck in a local maximum.
Questions About Getting Involved
We are almost finished with an Intro to QRI book, but until that is ready, you can start by reading the QRI Glossary to become acquainted with the vocabulary we use. From there, you can explore our research lineages. Lastly, you can also check out the Top 10 Qualia Computing Articles.
You can start by signing up for our newsletter! This is by far our most important communication channel. We also have a Facebook page, Twitter account, and LinkedIn page.
The best ways to help QRI are to:
- Donate to help support our work.
- Read and engage with our research. We love critical responses to our ideas and encourage you to reach out if you have an interesting thought!
- Spread the word to friends, potential donors, and people that you think would make great collaborators with QRI.
- Check out our volunteer page to find more detailed ways that you can contribute to our mission, from independent research projects to QRI content creation.
Questions About Consciousness
The most important assumption that QRI is committed to is Qualia Formalism, the hypothesis that the internal structure of our subjective experience can be represented precisely by mathematics. We are also Valence Realists: we believe valence (how good or bad an experience feels) is a real and well-defined property of conscious states. Beyond these positions, we are fairly agnostic; everything else is an educated guess that we hold for pragmatic purposes.
QRI thinks that functionalism takes many high-quality insights about how systems work and combines them in such a way that both creates confusion and denies the possibility of progress. In its raw, unvarnished form, functionalism is simply skepticism about the possibility of Qualia Formalism. It amounts to the claim that “there is nothing here to be formalized; consciousness is like élan vital, confusion to be explained away.” It's not actually a theory of consciousness; it's an anti-theory. This is problematic in at least two ways:
- By assuming consciousness has formal structure, we're able to make novel predictions that functionalism cannot (see e.g. QRI's Symmetry Theory of Valence, and Quantifying Bliss). A few hundred years ago, there were many people who doubted that electromagnetism had a unified, elegant, formal structure, and this was a reasonable position at the time. However, in the age of the iPhone, skepticism that electricity is a “real thing” that can be formalized is no longer reasonable. Likewise, everything interesting and useful QRI builds using the foundation of Qualia Formalism stretches functionalism's credibility thinner and thinner.
- Insofar as functionalism is skeptical about the formal existence of consciousness, it's skeptical about the formal existence of suffering and all sentience-based morality. In other words, functionalism is a deeply amoral theory, which if taken seriously dissolves all sentience-based ethical claims. This is due to there being an infinite number of functional interpretations of a system: there's no ground-truth fact of the matter about what algorithm a physical system is performing, about what information-processing it's doing. And if there's no ground-truth about which computations or functions are present, but consciousness arises from these computations or functions, then there's no ground-truth about consciousness, or things associated with consciousness, like suffering. This is a strange and subtle point, but it's very important. This point alone is not sufficient to reject functionalism: if the universe is amoral, we shouldn't hold a false theory of consciousness in order to try to force reality into some ethical framework. But in debates about consciousness, functionalists should be up-front that functionalism and radical moral anti-realism are a package deal, and that inherent in functionalism is the counter-intuitive claim that just as we can reinterpret which functions a physical system is instantiating, we can reinterpret what qualia it's experiencing and whether it's suffering.
For an extended argument, see "Against Functionalism".
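To make the reinterpretation point above concrete, here is a deliberately toy example (the states and mappings are ours, for illustration only, and are not drawn from QRI's papers): a single physical state sequence admits two different computational readings purely by relabeling its states.

```python
# Toy illustration of functional underdetermination: one physical trajectory,
# two incompatible "computational" readings, produced purely by relabeling states.

# A hypothetical physical system that cycles through four states.
physical_trajectory = ["s0", "s1", "s2", "s3", "s0", "s1", "s2", "s3"]

# Interpretation A: read the states as a 2-bit binary counter.
as_counter = {"s0": 0b00, "s1": 0b01, "s2": 0b10, "s3": 0b11}

# Interpretation B: read the very same states as steps of an AND-gate trace
# (inputs (0,0), (0,1), (1,0), (1,1) paired with their outputs).
as_and_gate = {"s0": ((0, 0), 0), "s1": ((0, 1), 0), "s2": ((1, 0), 0), "s3": ((1, 1), 1)}

print([as_counter[s] for s in physical_trajectory])   # "it's counting"
print([as_and_gate[s] for s in physical_trajectory])  # "it's computing AND"
# Nothing in the physical trajectory itself privileges one reading over the other.
```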
At QRI, we hold a position that is close to dual-aspect monism or neutral monism: the view that the universe is composed of one kind of thing that is neutral, and that the mental and the physical are two aspects of this same substance. One of the motivating factors for holding this view is that if there is deep structure in the physical, then there should be a corresponding deep structure to phenomenal experience. And we can tie this together with physicalism in the sense that the laws of physics ultimately describe fields of qualia. While there are some minor disagreements between dual-aspect monism and panpsychism, we believe that our position mostly fits well with a panpsychist view—that phenomenal properties are a fundamental feature of the world and aren't spontaneously created only when a certain computation is being performed.
However, even with this view, there still are very important questions, such as: what makes a unified conscious experience? Where does one experience end and another begin? Without considering these problems in the light of Qualia Formalism, it is easy to tie animism into panpsychism and believe that inanimate objects like rocks, sculptures, and pieces of wood have spirits or complex subjective experiences. At QRI, we disagree with this and think that these types of objects might have extremely small pockets of unified conscious experience, but will mostly be masses of micro-qualia that are not phenomenally bound into some larger experience.
QRI is very grateful for IIT because it is the first mainstream theory of consciousness that satisfies a Qualia Formalist account of experience. IIT says (and introduced the idea!) that for every conscious experience, there is a corresponding mathematical object such that the mathematical features of that object are isomorphic to the properties of the experience. QRI believes that without this idea, we cannot solve consciousness in a meaningful way, and we consider the work of Giulio Tononi to be one of our core research lineages. That said, we are not in complete agreement with the specific mathematical and ontological choices of IIT, and we think it may be trying to ‘have its cake and eat it too’ with regard to functionalism vs physicalism. For more, see Sections III-V of Principia Qualia.
We make no claim that some future version of IIT, particularly something more directly compatible with physics, couldn't cleanly address our objections, and we see a lot of plausible directions and promise in this space.
On our research lineages page, we list the work of Karl Friston as one of QRI's core research lineages. We consider the free energy principle (FEP), as well as related research such as predictive coding, active inference, the Bayesian brain, and cybernetic regulation, as an incredibly elegant and predictive story of how brains work. Friston's idea also forms a key part of the foundation for QRI's theory of brain self-organization and emotional updating, neural annealing.
However, we don't think that the free energy principle is itself a theory of consciousness, as it suffers from many of the shortcomings of functionalism: we can tell the story about how the brain minimizes free energy, but we don't have a way of pointing at the brain and saying there is the free energy! The FEP is an amazing logical model, but it's not directly connected to any physical mechanism. It is a story that “this sort of abstract thing is going on in the brain” without a clear method of mapping this abstract story to reality.
Friston has supported this functionalist interpretation of his work, noting that he sees consciousness as a process of inference, not a thing. That said, we are very interested in his work on calculating the information geometry of Markov blankets, as this could provide a tacit foundation for a formalist account of qualia under the FEP. Regardless of this, though, we believe Friston's work will play a significant role in a future science of mind.
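For readers unfamiliar with this framework, the basic move behind predictive-coding accounts of the free energy principle can be shown with a minimal Gaussian belief update. This is a generic textbook-style sketch under Gaussian assumptions, not Friston's full variational treatment and not QRI-specific code.

```python
# Minimal Gaussian belief update, the building block behind predictive-coding
# accounts of the free energy principle. A generic textbook sketch only.

def update_belief(prior_mean, prior_var, observation, obs_var):
    """Precision-weighted update of a Gaussian belief given one observation."""
    prediction_error = observation - prior_mean
    # Kalman-style gain: how much the error moves the belief depends on the
    # precision (inverse variance) of the observation relative to the prior.
    gain = prior_var / (prior_var + obs_var)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_var = (1.0 - gain) * prior_var
    return posterior_mean, posterior_var

# A confident prior meeting a noisy observation -> small update.
print(update_belief(prior_mean=0.0, prior_var=0.1, observation=1.0, obs_var=1.0))
# The same observation with a vague prior -> large update.
print(update_belief(prior_mean=0.0, prior_var=10.0, observation=1.0, obs_var=1.0))
```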
The global workspace theory (GWT) is a cluster of empirical observations that seem to be very important for understanding what systems in the brain contribute to a reportable experience at a given point in time. The global workspace theory is a very important clue for answering questions of what philosophers call Access Consciousness, or the aspects of our experience on which we can report.
However, QRI does not consider the global workspace theory to be a full theory of consciousness. Parts of the brain that are not immediately contributing to the global workspace may be composed of micro qualia, or tiny clusters of experience. They're obviously impossible to report on, but they are still relevant to the study of consciousness. In other words, just because a part of your brain wasn't included in the instantaneous global workspace, doesn't mean that it can't suffer or it can't experience happiness. We value global workspace research because questions of Access Consciousness are still very critical for a full theory of consciousness.
QRI is generally opposed to theories of consciousness that equate consciousness with higher-order reflective thought and cognition. Some of the most intense conscious experiences are pre-reflective or unreflective, such as blind panic, religious ecstasy, experiences of 5-MeO-DMT, and cluster headaches. In these examples, there is not much reflectivity or cognition going on, yet they are intensely conscious. Therefore, we largely reject any attempt to define consciousness with a higher-order theory.
The relationship between evolution and consciousness is very intricate and subtle. An eliminativist approach arrives at the simple idea that information processing of a certain type is evolutionarily advantageous, and perhaps we can call this consciousness. However, with a Qualia Formalist approach, it seems instead that the very properties of the mathematical object isomorphic to consciousness can play key roles (either causal or information processing) that make it advantageous for organisms to recruit consciousness.
If you don't realize that consciousness maps onto a mathematical object with properties, you may think that you understand why consciousness was recruited by natural selection, but your understanding of the topic would be incomplete. In other words, to have a full understanding of why evolution recruited consciousness, you need to understand what advantages the mathematical object has. One very important feature of consciousness is its capacity for binding. For example, the unitary nature of experience—the fact that we can experience a lot of qualia simultaneously—may be a key feature of consciousness that accelerates the process of finding solutions to constraint satisfaction problems. Evolution would then have a reason to recruit states of consciousness for computation. So rather than thinking of consciousness as identical with the computation that is going on in the brain, we can think of it as a resource with unique computational benefits that are powerful and dynamic enough to make organisms that use it more adaptable to their environments.
QRI thinks there is a very high probability that every animal with a nervous system is conscious. We are agnostic about unified consciousness in insects, but we consider it very likely. We believe research on animal consciousness has relevance when it comes to treating animals ethically. Additionally, we think that the ethical importance of consciousness has more to do with the pleasure-pain axis (valence) than with cognitive ability. In that sense, the suffering of non-human animals may be just as morally relevant as, if not more relevant than, that of humans. The cortex seems to play a largely inhibitory role for emotions, such that the larger the cortex is, the better we're able to manage and suppress our emotions. Consequently, animals whose cortices are less developed than ours may experience pleasure and pain in a more intense and uncontrollable way, like a pre-linguistic toddler.
We think it's very unlikely that plants have complex, unified conscious experiences or feel emotions. Plants have thick cellulose walls that separate individual cells, making it very unlikely that plants can solve the binding problem and therefore create unified moments of experience. Additionally, the amount of information flow and speed of information flow in a plant is much less than the amount of information being exchanged in an animal nervous system. Both of these facts point to low integrated information.
This is a very multifaceted question. As a whole, we postulate that in the vast majority of cases, when somebody may be nominally pursuing pain or suffering, they're actually trying to reduce internal dissonance in pursuit of consonance or they're failing to predict how pain will actually feel. For example, when a person hears very harsh music, or enjoys extremely spicy food, this can be explained in terms of either masking other unpleasant sensations or raising the energy parameter of experience, the latter of which can lead to neural annealing: a very pleasant experience that manifests as consonance in the moment.
Before we try to ‘fix’ something, it's important to understand what it's trying to do for us. Sometimes suffering leads to growth; sometimes creating valuable things involves suffering. Sometimes, ‘being sad’ feels strangely good. Insofar as suffering is doing good things for us, or for the world, QRI advocates a light touch (see Chesterton's fence). However, we also suggest two things:
- Melancholic or mixed states of sadness are usually pursued for reasons that cash out as some sort of pleasure. Bittersweet experiences are far preferable to intense agony or deep depression. If you enjoy sadness, it's probably because there's an aspect of your experience that is enjoyable. If it were possible to remove the sad part of your experience while maintaining the enjoyable part, you might be surprised to find that you prefer this modified experience to the original one.
- There are kinds of sadness and suffering that are just bad, that degrade us as humans, and would be better to never feel. QRI doesn't believe in forcibly taking away voluntary suffering, or pushing bliss on people. But we would like to live in a world where people can choose to avoid such negative states, and on the margin, we believe it would be better for humanity for more people to be joyful, filled with a deep sense of well-being.
When you listen to very consonant music or consonant tones, you will quickly adapt to these sounds and get bored of them. This has nothing to do with consonance itself being unpleasant and everything to do with learning in the brain. Whenever you experience the same stimulus repeatedly, the brain triggers a boredom mechanism that adds dissonance of its own to make you enjoy the stimulus less, or simply inhibits it so that you don't experience it at all. Semantic satiation is a classic example of this, where repeating the same word over and over makes it lose its meaning. For this reason, to trigger many high valence states of consciousness consecutively, you need contrast. In particular, music works with gradients of consonance and dissonance, and in most cases, moving towards consonance is what feels good rather than the absolute value of consonance. Music tends to feel best when a high absolute value of consonance is combined with a very strong sense of moving towards an even higher value of consonance. Playing some dissonance earlier in a song enhances the enjoyment of the more consonant parts, such as the chorus, which is typically reported as the most euphoric part of a song and tends to be extremely consonant.
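As a heavily simplified numeric illustration of these consonance gradients, the sketch below scores the sensory roughness of pairs of complex tones using a Plomp–Levelt/Sethares-style curve. The parameter values and the six-harmonic tone model are illustrative choices on our part, not a validated psychoacoustic model of musical pleasure, and roughness is only one ingredient of consonance.

```python
import math

# Simplified Sethares / Plomp-Levelt roughness model for two complex tones.
# Parameter values and the 6-harmonic tone model are illustrative only.

def pure_tone_roughness(f1, a1, f2, a2):
    f_low, f_high = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.0207 * f_low + 18.96)      # critical-bandwidth scaling
    x = s * (f_high - f_low)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

def interval_roughness(root, ratio, n_harmonics=6):
    # Each complex tone = its first n harmonics with amplitudes 0.88**k.
    partials = [(root * k, 0.88 ** k) for k in range(1, n_harmonics + 1)]
    partials += [(root * ratio * k, 0.88 ** k) for k in range(1, n_harmonics + 1)]
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            total += pure_tone_roughness(*partials[i], *partials[j])
    return total

for name, ratio in [("minor second", 16 / 15), ("tritone", 45 / 32),
                    ("perfect fifth", 3 / 2), ("octave", 2.0)]:
    print(f"{name:14s} roughness ~ {interval_roughness(440.0, ratio):.3f}")
# Roughness falls as the interval moves toward the fifth and octave -- a crude
# numeric stand-in for the "gradient toward consonance" described above.
```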
QRI thinks that consciousness research is critical for addressing AI safety. Without a precise way of quantifying an action's impact on conscious experiences, we won't be able to guarantee that an AI system has been programmed to act benevolently. Also, certain types of physical systems that perform computational tasks may be experiencing negative valence without any outside observer being aware of it. We need a theory of what produces unpleasant experiences to avoid inadvertently creating superintelligences that suffer intensely in the process of solving important problems or accidentally inflict large-scale suffering.
Additionally, we think that a very large percentage of what will make powerful AI dangerous is that the humans programming these machines and using these machines may be reasoning from states of loneliness, resentment, envy, or anger. By discovering ways to help humans transition away from these states, we can reduce the risks of AI by creating humans that are more ethical and aligned with consciousness more broadly. In short: an antidote for nihilism could lead to a substantial reduction in existential risk.
One way to think about QRI and AI safety is that the world is building AI, but doesn't really have a clear, positive vision of what to do with AI. Lacking this, the default objective becomes “take over the world.” We think a good theory of consciousness could and will offer new visions of what kind of futures are worth building—new Schelling points that humanity (and AI researchers) could self-organize around.
QRI is agnostic about this question. We have reasons to believe that digital computers in their current form cannot solve the phenomenal binding problem. Most of the activity in digital computers can be explained in a stepwise fashion in terms of localized processing of bits of information. Because of this, we believe that current digital computers could be creating fragments of qualia, but are unlikely to be creating strongly globally bound experiences. So, we consider the consciousness of digital computers unlikely, although given our current uncertainty over the Binding Problem (or alternatively framed, the Boundary Problem), this assumption is lightly held. In the previous question, when we write that “certain types of physical systems that perform computational tasks may be experiencing negative valence,” we assume that these hypothetical computers have some type of unified conscious experience as a result of having solved the phenomenal binding problem. For more on this topic, see: “What's Out There?”
Miscellaneous
We are collaborating with researchers from Johns Hopkins University on studies involving the analysis of neuroimaging data of high-valence states of consciousness. Additionally, we are currently preparing two publications for peer-reviewed journals on topics from our core research areas. Michael Johnson will be presenting at this year's MCS seminar series, along with Karl Friston, Anil Seth, Selen Atasoy, Nao Tsuchiya, and others; Michael Johnson, Andrés Gómez Emilsson, and Quintin Frerichs have also given invited talks at various east-coast colleges (Harvard, MIT, Princeton, and Dartmouth).
Some well-known researchers and intellectuals who are familiar with and think positively about our work include Robin Carhart-Harris, Scott Alexander, David Pearce, Steven Lehar, Daniel Ingram, and more. Scott Alexander acknowledged that QRI put together the paradigms that contributed to Friston's integrative model of how psychedelics work before that research was published. Our track record so far has been to foreshadow key discoveries several years before they are proposed and accepted in mainstream academia. Given our current research findings, we expect this trend to continue in the years to come.
We think that, to a large extent, people and animals work under the illusion that they are pursuing intentional objects, states of the external environment, or relationships that they may have with the external environment. However, when you examine these situations closely, you realize that what we actually pursue are states of high valence triggered by external circumstances. There may be evolutionary and cultural selection pressures that push us toward self-deception about how we actually function. We consider it harmful that these selection pressures make us less self-aware, because this often focuses our energy on unpleasant, destructive, or fruitless strategies. QRI hopes to support people in fostering more self-awareness, which can come through experiments with one's own consciousness, like meditation, as well as through a deeper theoretical understanding of what it is that we actually want.
We consider David Pearce to be one of our core lineages. We particularly value his contribution to valence realism: the insistence that states of consciousness come with an overall valence, and that this is very morally relevant. We also consider David Pearce to be very influential in philosophy of mind; Pearce, for instance, coined the phrase ‘tyranny of the intentional object’, which became the title of a core QRI piece. We have been inspired by Pearce’s descriptions of what any scientific theory of consciousness should be able to explain, as well as his particular emphasis on the binding problem. David’s vision of a world animated by ‘gradients of bliss’ has also been very generative as a normative thought experiment which integrates human and non-human well-being. We do not necessarily agree with all of David Pearce’s work, but we respect him as an insightful and vivid thinker who has been brave enough to actually take a swing at describing utopia and who we believe is far ahead of his time.
There’s general agreement within QRI that intense suffering is an extreme moral priority, and we’ve done substantial work on finding simple ways of getting rid of extreme suffering (with our research inspiring at least one unaffiliated startup to date). However, we find it premature to strongly endorse any pre-packaged ethical theory, especially because none of them are based on any formalism, but rather on an ungrounded concept of ‘utility’. The value of information here seems enormous, and we hope that we can get to a point where the ‘correct’ ethical theory may simply ‘pop out of the equations’ of reality. It’s also important to highlight that common versions and academic formulations of utilitarianism seem to be blind to many subtleties concerning valence. For example, they do not distinguish between mixed states of consciousness (where extreme pleasure is combined with extreme suffering, such that you judge the experience to be neither entirely suffering nor entirely happiness) and states of complete neutrality, such as extreme white noise. Because most formulations of utilitarianism do not distinguish between these, we are generally suspicious of the idea that philosophers of ethics have considered all of the relevant attributes of consciousness needed to make accurate judgments about morality.
We greatly admire the linguistic and logical rigor used by top philosophy of mind departments. Crackpots complain about academia but don't acknowledge that well-written academic philosophy meets extremely high standards of precision and clarity. However, we wish that the rigor employed by top philosophers were used to explore topics that would yield high value, instead of being channeled into obscure, narrow topics that are disconnected from what truly matters from an ethical, moral, and philosophical point of view. For example, there is little appreciation of the value of bringing mathematical formalisms into discussions about the mind, or what that might look like in practice. Likewise, there is close to no interest in preventing extreme suffering or understanding its nature. Additionally, there is usually a disregard for extreme states of positive valence, and strange or exotic experiences in general. It may be the case that there are worthwhile things happening in departments and classes creating and studying this literature, but we find them characterized by processes which are unlikely to produce progress on creating a science of mind.
At QRI, we do not make specific recommendations to individuals, but rather point to areas of research that we consider to be extremely important, tractable, and neglected, such as anti-tolerance drugs, neural annealing techniques, frequency specific microcurrent for kidney stone pain, and N,N-DMT and other tryptamines for cluster headaches and migraines.
QRI thinks ending extreme suffering is important, tractable, and neglected. It's important because of the logarithmic scales of pleasure and pain—the fact that extreme suffering is far worse by orders of magnitude than what people intuitively believe. It's tractable because there are many types of extreme suffering that have existing solutions that are fairly trivial or at least have a viable path for being solved with moderately funded research programs. And it's neglected mostly because people are unaware of the existence of these states, though not necessarily because of their rarity. For example, 10% of the population experiences kidney stones at some point in their life, but for reasons having to do with trauma, PTSD, and the state-dependence of memory, even people who have suffered from kidney stones do not typically end up dedicating their time or resources toward eradicating them.
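To make the "logarithmic scales" point concrete, here is a toy calculation; the multiplicative base and the ratings below are purely illustrative assumptions, not QRI's survey data or an empirical estimate.

```python
# Toy illustration of a logarithmic pain scale. The base (here, 3x per point)
# is an arbitrary illustrative choice, not an empirical estimate.

BASE = 3.0

def implied_intensity(rating):
    """Map a 0-10 verbal pain rating to an implied intensity on a ratio scale."""
    return BASE ** rating

for lo, hi in [(2, 3), (6, 7), (9, 10)]:
    gap = implied_intensity(hi) - implied_intensity(lo)
    print(f"going from {lo} to {hi} adds ~{gap:,.0f} intensity units")
# Under this toy model the 9 -> 10 step adds thousands of times more intensity
# than the 2 -> 3 step, which is why the extreme tail dominates the total.
```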
It's also likely that if we can meaningfully improve the absolute worst experiences, much of the knowledge we'll gain in that process will translate into other contexts. In particular, we should expect to figure out how to make moderately depressed people happier, fix more mild forms of pain, improve the human hedonic baseline, and safely reach extremely great peak states. Mood research is not a zero-sum game. It's a web of synergies.