Spiritual and Psychological Warfare Blog


Welcome to the psychology blog!

I was a leading figure in spiritual and psychological warfare for 17 years. This website is run by your donations.

Timing Never Lies

The art of reading minds

One of the most effective strategies in poker is known as "bluffing," a form of deception in which a player deliberately places large bets to make it appear that they hold a stronger hand than their opponent, despite actually holding a weaker one. The efficacy of bluffing is well documented; the Nobel Prize-winning economist Robert J. Shiller has asserted that "bluffing is the most effective strategy in all categories of cognitive games."

The application of bluffing extends well beyond poker. In daily life, people bluff all the time. A salesperson might offer an unplanned deal on a new product to keep a client relationship alive, or a man might buy his girlfriend an expensive handbag out of his own pocket to appear more capable than he is. Smaller examples abound: insisting, "I'm almost there, I'll be there in five minutes," while knowing full well it will take another 30 minutes, or inventing a friend's father's funeral to excuse yet another late return home from dinner.

While deceiving others is often considered immoral, bluffing is merely a strategic maneuver, devoid of any inherent moral judgment. From a moralist's perspective lying is always wrong, yet in certain circumstances telling the unvarnished truth can make things worse. A parent may choose to reassure a terminally ill child with statements such as, "It's not a big deal, don't worry too much," or a mother may tell her firstborn, "It won't hurt, don't worry too much." In the real world, not all lies are malevolent, and lies vary greatly in their severity.

However, some lies are readily discernible, while others are so artfully crafted that their falsity remains undetectable even under careful scrutiny. In intimate relationships, such as those between parents and children, between siblings, or between people who have shared a long history and know each other's patterns, lies are usually easy to spot. Untrained individuals, however, are easily misled by professional liars who earn their living through deceit and use it as a means of survival. Easily identifiable lies are usually trivial and cause no great harm, but deceit by a scammer can lead to substantial financial losses and considerable mental distress. The question, then, is how to tell whether a complete stranger on the other side of a deal is being dishonest. To illustrate, let me present an example from my own experience.

During the summer of 2008, I received a phone call from a business group chairman's secretarial office. The secretary asked whether I was Mr. Tae-hyuk Lee, and I confirmed that I was. They then informed me that the chairman of the group wished to meet me and asked me to set aside some time. Although I was not familiar with the group's name, I was intrigued by the chairman's invitation. Having recently returned to Korea, I was curious whether such approaches were normal there.

A few days later, I met Mr. Kim, the chairman of A Group, who made me an investment proposal at our very first meeting.

"I am familiar with Mr. Lee's reputation. Your name has already become a brand. If I invest about a billion won, are you willing to do business with me?"

"What?"

I was taken aback by the size of the proposed investment, and I agreed on the spot.

"Then let us commence the business planning process immediately, with the first meeting scheduled for the following day."

"I will undertake the task, for which I am grateful."

However, after that initial meeting, Chairman Kim's decision on the promised 1 billion won investment kept being delayed. Three weeks passed with no discernible progress in the negotiations. The only thing that advanced was the time and money I spent entertaining him with meals, drinks, and plans for outings. At first I speculated that Koreans might simply be unusually cautious about business investments. I kept wondering what lay behind Chairman Kim's behavior: was his hesitation keeping me from pursuing other opportunities, and was his delay in committing a sign of a possible ruse? His demeanor and facial expressions also aroused my suspicion. But since I was the one receiving the investment, and since Chairman Kim continued to treat me well, it was hard to end the relationship without evidence or conviction.

After four weeks of meetings, we had another night of drinks, and Chairman Kim left first. I saw him off and returned to the bar. It was late, and only the owner and I remained. I wanted to learn what I could about Chairman Kim from her, but she was still sober and likely to be tight-lipped about a client's personal affairs, so I had to approach the subject carefully. I sat down across from her.

"Excuse me, madam!"

"Yes?"

"Mr. Kim is a regular here, isn't he? He has good manners and is involved in significant business dealings."

"He seems like a very nice man."

One, two, three! Three seconds after I posed the question, she answered.

"Yes, he is!"

But I immediately suspected that her answer was not genuine. The affirmation itself was ordinary enough, but the time she took to respond told me her words were not sincere. It seemed she was pretending not to know him well.

In a poker game, the most effective way to detect an opponent's bluff is to read the timing of their actions rather than immediate visual cues such as bet size or facial expressions, because there is always some lag between deciding to deceive and carrying the deception out. If a player bets about half a second faster than their usual pace, it is likely they are bluffing. The more skilled the player, the smaller the gap between their timing when bluffing and when not (sometimes there is almost no gap at all), and the harder it becomes to judge their honesty.
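To make the idea concrete, here is a minimal sketch of that timing heuristic in Python, assuming we could log how long a player takes before each action; the half-second threshold and the sample delays are illustrative assumptions, not measurements from any real game.

from statistics import mean

def looks_like_bluff(past_delays_sec, current_delay_sec, threshold_sec=0.5):
    # Compare this action's delay with the player's established tempo.
    baseline = mean(past_delays_sec)
    return abs(current_delay_sec - baseline) >= threshold_sec

# A player who usually takes about 4 seconds suddenly bets after 2.8 seconds.
usual_pace = [4.1, 3.9, 4.3, 4.0, 3.8]
print(looks_like_bluff(usual_pace, 2.8))  # True: the change of pace is worth a closer look

The point of the sketch is only that the signal is relative: what matters is not the absolute delay but the deviation from that particular player's normal rhythm.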

This temporal gap between truth and falsehood applies to ordinary conversation as well. The owner's delayed "Yes, he is!" gave her away because she was not a practiced liar. Having confirmed her dishonesty, I moved to the next step.

"How much does Mr. Kim owe here?"

I had no idea whether Chairman Kim actually had a debt. I asked about one because, unless he was a far better businessman than he appeared, he had probably run up a tab at the bar. Her response was largely what I expected.

"What? Oh, you knew all that? Wait a minute. Let me settle the bill!"

The unsuspecting owner was more susceptible to my ploy than I had anticipated. After that, she told me a great deal.

"I've been here a few times before, but the other customers usually pay for the drinks, and the drinks you and Mr. Tae-hyuk have together are all out of pocket. I don't know him, but he doesn't seem like the kind of person who does business properly."

That was the end of my professional relationship with Chairman Kim. We met a few more times on formal occasions, but once he realized that I had seen through him, he avoided me. Many people assume that overt information is more reliable than timing when it comes to detecting deception, but visible information is precisely where lies are most likely to be hidden; no one intent on deceiving plants the deception in plain sight. Timing, however, is hard to fake. If someone answers your question more slowly than usual, or hesitates, it is worth pausing to consider whether they are telling the truth. The interval between a question and the answer that follows often reveals, with remarkable clarity and speed, whether the words are genuine.

Rethinking Thinking

While everyone thinks, not everyone thinks equally well. For intellectual feasts we rely on master chefs who have learned to combine, blend, and savor a wide array of mental ingredients. Their skill is not different in kind from our own; it is simply more developed. We tend to assume that master chefs are born with this ability, yet even the most promising of them undergo years of training. It stands to reason, then, that these skills are within anyone's grasp. Acquiring them, however, requires rethinking what it means to think well: not merely thinking, but asking how best to think.

The journey into mental cookery begins in the kitchen of the mind, where concepts are steeped, simmered, braised, beaten, baked, and shaped. Much as professional cooks astonish us with a dash of this and a spoonful of that, the culinary landscapes of the imagination are full of unexpected practices. Brilliant ideas emerge from the most unorthodox sources and are melded from the most unconventional ingredients, and those ingredients often bear little resemblance to the final dish. In some cases the creator cannot give a rational explanation for their confidence in the outcome, relying instead on an intuitive feeling that a particular combination of ideas will work, and such intuitions do not always align with logical reasoning. A notable example comes from Barbara McClintock, who would later receive a Nobel Prize for her work in genetics.

In 1930, while running a genetics experiment in the cornfields around Cornell University, McClintock hit a puzzle. The experiment was expected to yield sterile pollen in half of the corn plants, but fewer than a third actually produced it. The discrepancy troubled her deeply, and she left the cornfield for her laboratory, where she could sit alone and think.

After about thirty minutes, McClintock emerged from the laboratory and hurried back to the cornfield. She climbed to the top of the field, a vantage point from which she could survey the whole plot, while most of her colleagues remained at the bottom. There she proclaimed, "I have it! I have the answer! I have discerned the nature of this 30 percent sterility." Her colleagues, naturally, asked her to prove it, and McClintock found that she could not; she was unable to articulate or explain her insight. Decades later she reflected that when you suddenly see the problem, something happens such that you have the answer before you can put it into words; it all happens subconsciously. It had happened to her many times, she said, and she knew when to take it seriously; she felt absolutely certain, without needing to discuss it or tell anyone. This sensation of knowing without being able to say how one knows is a common one. The French philosopher and mathematician Blaise Pascal is famous for the aphorism, "The heart has its reasons that reason cannot know." The eminent nineteenth-century mathematician Carl Friedrich Gauss acknowledged that intuition often led him to ideas he could not immediately prove: "I have had my results for a long time; but I do not yet know how I am to arrive at them." Claude Bernard, the founder of modern physiology, held that all purposeful scientific thinking begins with feeling, writing that "feeling alone guides the mind." Pablo Picasso confessed to a friend that he does not know in advance what he is going to paint on a canvas, any more than he decides beforehand what colors he will use; each time he undertakes a picture he has the sensation of leaping into space, never knowing whether he will land on his feet, and only later does he begin to estimate more exactly the effect of his work. The composer Igor Stravinsky similarly found that imaginative activity began with an inexplicable appetite, a "gut feeling" for an unknown entity already present but not yet comprehensible.
The Latin American novelist Isabel Allende has described a similarly vague sense propelling her work: "Somehow inside me — I can say this after having written five books — I know that I know where I am going. I know the end of the book even though I don't know it. It's so difficult to explain." This kind of intuitive understanding, vague and hard to articulate as it is, raises significant questions. As McClintock noted, it had all happened fast: the answer came, and she ran. Only afterward, working step by step and methodically, did she reach a conclusion that matched her flash of insight, and she did so without anything having been written down. Why, then, was she so certain of the solution, and why could she convey it with such enthusiasm before she had worked it out? McClintock's question, like the experiences of Picasso and Gauss, of composers and physiologists, goes to the heart of creative thinking. Where do sudden illuminations and insights come from? How can we know things we cannot yet articulate, express, or document? How do gut feelings and intuitions operate in imaginative thinking? How are feelings translated into words and emotions into numbers, and what role does intuition play in that translation? And finally, can we understand this creative imagination, and if so, can we cultivate, train, and educate it? Philosophers and psychologists have pondered these and related questions for centuries, and neurobiologists have sought answers in the structures of the brain and the connections between nerve synapses; definitive answers remain elusive. A significant yet often underused source of insight is the introspective accounts of eminent thinkers, creators, and inventors. These reports do not answer all our questions about thinking, but they do open novel and noteworthy avenues for exploration. Most notably, they expose the limitations of conventional notions of thinking, pointing to non-logical forms of thought that cannot be verbalized.

A notable illustration can be found in the words of the physicist Albert Einstein. Harvard psychologist Howard Gardner's book Creating Minds portrays Einstein as the epitome of the "logico-mathematical mind," and popular belief follows suit. Yet Einstein did not describe himself as solving his physics problems with mathematical formulas, numbers, complex theories, and logic. His mathematical abilities were in fact comparatively modest; his peers knew of this relative weakness, and he often had to collaborate with mathematicians to advance his work. In a letter to a correspondent, Einstein joked about his own difficulties: "Do not worry about your difficulties in mathematics. I can assure you that mine are still greater."

Einstein's cognitive strengths lay elsewhere, as he revealed to the mathematician Jacques Hadamard: "The words of the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be 'voluntarily' reproduced and combined.... The above mentioned elements are, in my case, of visual and some of muscular type." In one thought experiment that he could not at first put into words, he imagined himself as a photon moving at the speed of light, picturing what he would see and how he would feel; then he became a second photon and tried to imagine what he could experience of the first. As Einstein explained to the psychologist Max Wertheimer, he understood only vaguely where his visual and muscular thinking was taking him; his "feeling of direction," he said, was "very hard to express."

McClintock, for her part, described developing a "feeling for the organism" akin to Einstein's identification with a beam of light. Working intimately with her corn plants, she came to know their chromosomes so thoroughly that she no longer felt like an outside observer but like part of the system, as though she were down among the chromosomes and they were her friends. The most surprising part of the experience, she noted, was that she lost awareness of herself: she could no longer draw a line between her own self and the plants she studied.

This kind of emotional involvement also played a critical role in the prelogical scientific thinking of Claude Bernard, who wrote, "Just as in other human activities, feeling releases an act by putting forth the idea which gives a motive to action." Similarly, the mathematical physicist Wolfgang Pauli argued that within the "unconscious region of the human soul," emotional responses stand in for ideas that have not yet been articulated: clear concepts are replaced by images of powerful emotional content, which are not thought but rather perceived pictorially, as it were, before the mind's eye.

Some scientists contend that such thinking in feelings and mental images can itself be manipulated with a kind of precision. Einstein, for one, suggested a "certain connection" between "the psychical entities which seem to serve as elements in thought" and "relevant logical concepts." The mathematician Stanislaw Ulam went further, saying that he experienced abstract mathematical notions in visual terms: the idea of "an infinity of spheres or an infinity of sets" became an image of nearly tangible objects diminishing and vanishing toward a distant point. Such thinking, Ulam said, does not proceed "in terms of words, syllogisms, or signs" but by means of a "visual algorithm" that functions as a "metalogic" with its own rules. For William Lipscomb, a Nobel laureate in chemistry and a talented musician, this form of thinking was a synthetic and aesthetic experience. In his research into the chemistry of boron, Lipscomb found himself using not only inductive and deductive reasoning but intuition: a process of intellectual and emotional focus that he experienced as an aesthetic response, followed by a profusion of predictions arising in his mind as though he were a bystander watching the process unfold. Only afterward was he able to formulate a systematic theory of structure, bonding, and reactions for these unusual molecules. Was it science? Later tests showed that it was, he concluded, but the processes he used and the responses he felt were closer to those of an artist. Gut feelings, emotions, and imaginative images have their place in scientific work, yet, like the meaning of a dance or a musical composition, their significance is felt rather than defined.

"Intuition or mathematics?" poses Arthur C. Clarke, an inventor and science fiction author. "Do we employ models to facilitate the discovery of truth, or do we first ascertain the truth and subsequently devise the mathematics to elucidate it?"There is no doubt about the answer: gut feelings and intuitions, an "essential feature in productive thought," as Einstein described them, manifest well before their meaning can be articulated in words or numbers.In his own work, mathematics and formal logic took a secondary role: As he articulated, "Conventional words or other signs [presumably mathematical ones] have to be sought for laboriously only in a secondary stage, when the associative play already referred to is sufficiently established and can be reproduced at will."In his discussion with Wertheimer, he further elaborated, "No really productive man thinks in such a paper fashion." In his discussion with Infeld, Einstein reflected on the contrast between the triple sets of axioms in his physics book and the actual process of thinking. He noted that this was a later formulation of the subject matter, a question of how to best express the ideas. However, these ideas did not emerge from any manipulation of axioms. Einstein emphasized that scientists do not think in formulae, suggesting that the way the two triple sets of axioms are contrasted in his book is not an accurate representation of the actual thinking process. As stated by Einstein and Infeld in their book, scientists may not think in mathematical terms, but the need to express intuitive insight in a form comprehensible to others compels them to "work with so-called scientific methods to put it into their frame after you know." Other scientists have confirmed the two-part process of intuitive, imaginative understanding followed, necessarily, by logical expression. Metallurgist Cyril Stanley Smith of the Massachusetts Institute of Technology (MIT) has stated that the stage of discovery was entirely sensual and that mathematics was only necessary to communicate with others. Likewise, Werner Heisenberg, who formulated the uncertainty principle, wrote that "mathematics... played only a subordinate, secondary role" in the revolution in physics he helped to create. These statements confirm the two-part process of intuitive, imaginative understanding followed, necessarily, by logical expression. As Cyril Stanley Smith, a metallurgist at the Massachusetts Institute of Technology (MIT), has stated, "The stage of discovery was entirely sensual, and mathematics was only necessary to communicate with others." Similarly, Werner Heisenberg, the formulator of the uncertainty principle, has noted that "mathematics played only a subordinate, secondary role" in the revolution in physics he helped create. Richard Feynman, a Nobel Prize-winning physicist who also had an intuitive understanding of things, has observed, "In certain problems that I have worked on, it was necessary to continue developing the picture as the method before the mathematics could really be done." These observations challenge the prevailing notion that scientists think more logically than others. The ability to think creatively is inextricably linked to the capacity to feel. The drive to understand must be tempered with sensory and emotional sensations, melding with intellect to yield imaginative insight. 
Indeed, the intimate connections between thinking, emotions, and feelings are the subject of a remarkable book called Descartes' Error (1994), which revisits the famous philosopher's separation of mind (and thinking) from body (and being or feeling) more than three hundred years ago. The author, neurologist Antonio Damasio, finds that neurological patients whose emotional affect is grossly altered by strokes, accidents, or tumors lose the ability to make rational plans. Because they cannot become emotionally involved in their decisions, they fail to make good ones. Damasio's position is that body and mind, emotion and intellect, are inseparable. On this point there is broad agreement: scientists do eventually refine their ideas into logical form, guided by rational thinking, but intuition and emotion stand at the origin of creative thinking and expression in every discipline.

This assertion may come as a surprise to many. Cognitive scientists such as Herb Simon and Noam Chomsky have defined thinking as exclusively the logical procedures of induction and deduction or the rules of linguistics. Howard Gardner, a proponent of diverse forms of thinking in Creating Minds and Frames of Mind, argues that the thinking of creative individuals is best categorized by the mode of expression they use. For Gardner and his colleagues, scientists such as Einstein, McClintock, and Feynman are logico-mathematical thinkers; poets and writers are highly verbal thinkers; dancers are kinesthetic thinkers; artists are primarily visual thinkers; psychologists are intrapersonal thinkers; and politicians are interpersonal thinkers. Such characterizations have about as much validity as defining bread by its yeast. Most bread is indeed baked with yeast, but some kinds, such as soda bread and flatbread, are made without it, and yeast turns up in plenty of other foods, from beer to Grape-Nuts cereal. A single ingredient does not determine the outcome of a recipe, whether in the kitchen or in the mind. Characterizing individuals by a single element of their thinking is as misleading as asserting that Albert Einstein's primary intellectual domain was logic and mathematics.

Artists, for instance, draw on far more than visual stimuli: emotions, kinesthetic feelings, and philosophical contemplations all serve as wellsprings of artistic ideas. The painter Susan Rothenberg describes her process as "really visceral...I'm very aware of my body in space—shoulders, frontal positions," a bodily language that resists easy explanation; much of her work concerns the orientation of the body, both in the act of painting and in the perception of space relative to her own physical position. The sculptor Anne Truitt likewise experiences her art as a bodily sensation. Recalling her apprenticeship, she insists, "It was not my eyes or my mind that learned. It was my body. I fell in love with the process of art, and I've never fallen out of it." She remembers the hour or so of aching, trembling arms that followed carving stone, the blouse size that went up by one as her shoulders broadened with muscle, and the shift of her body's center of gravity from the navel to a place of strength and balance that allowed her to lift stones with ease and handle clay with a delicate touch. In her account, body and mind work in harmony to produce the work's visual and emotional effects, and the physical discomfort is simply part of the commitment the work demands.

Other artists have described the same connection between the whole self and the work. The painter Bridget Riley calls her paintings an "intimate dialogue between [her] total being and the visual agents that constitute the medium." She has always sought to actualize visual and emotional energies simultaneously through the medium: her paintings are concerned with generating visual sensations, but not exclusively with emotion, and one of her aims is that these two responses be experienced as one and the same. Picasso, Gardner's prototype of the "visual thinker," would certainly have agreed. He held that all sensations and all forms of knowledge are interconnected: "All the arts are the same: you can write a picture in words just as you can paint sensations in a poem." Consider, he said, the color blue: what is it, really? There are countless sensations we collectively call "blue." One can speak of the blue of a packet of Gauloises, and then of the Gauloise blue of a person's eyes, or, conversely, of a steak being blue when one means red. Those who look at pictures without feeling these (or other) associations miss the point, he insisted, for it is the mixture of feelings and sensations that gives rise to the painting in the first place.

Because artistic ideas are often non-visually initiated, artists undergo the same process of translation that Einstein, McClintock, and others have described. Josef Albers put it most succinctly when he wrote that art is "the discrepancy between physical fact and psychic effect...[a] visual formulation of our reaction to life." The sculptor Louise Bourgeois says, "I contemplate...for a long time. Then I try to express what I have to say, how I am going to translate what I have to say to it. I try to translate my problem into stone." Max Bill describes the object of art in similarly sweeping terms: abstract ideas that previously existed only in the mind are made visible in a concrete form, with paintings and drawings serving as "the instruments of this realization [by means of] color, space, light, movement." Georgia O'Keeffe wrote, "I long ago came to the conclusion that even if I could put down accurately the thing I saw and enjoyed, it would not give the observer the kind of feeling it gave me. I had to create an equivalent for what I felt about what I was looking at—not copy it." Artistic depictions, then, are not copies of the original feelings, ideas, and sensations; they are translations of them, no less than a scientist's theory or formula is a translation of his or her research. All public languages are modes of translation, and even people who express themselves in words rarely think or generate their ideas in words alone. The poet E. E. Cummings challenged the assumption that poets are essentially wordsmiths manipulating the rules of grammar, syntax, and semantics: "the artist is not a man who describes, but a man who FEELS." Gary Snyder, also a poet, expands on the theme, explaining that to write he must "revisualize it all.... I'll replay the whole experience again in my mind." He forgets what is on the page and gets back in contact with the preverbal level behind it; through reexperiencing, recalling, visualizing, and revisualizing, the whole experience is relived and clarified.
As Stephen Spender explains of his own creative process, "Above all, the poet is someone who never forgets certain sensory impressions, which he has experienced and which he can relive again and again as though with all their original freshness." He says that although he cannot remember details such as telephone numbers, addresses, or where he has put things, he retains a remarkable memory of the feel of certain experiences, locked into his memory by particular associations; once triggered, those associations can carry him back wholly, especially to childhood, breaking through his sense of present time and place. Constructing imaginary worlds of the kind Cummings and Spender describe requires not only a command of language but the capacity to summon such impressions at will, and many writers corroborate this. Robert Frost characterized his poetry as a process of "carrying out some intention more felt than thought," and he was fond of saying, "No tears in the writer, no tears in the reader. No surprise in the writer, no surprise for the reader." The American novelist and short-story writer Dorothy Canfield Fisher also required direct experience to inform her writing: "I have intense visualizations of scenes," she said. Although she did not draw scenes directly from her own intimate life, those vivid visualizations were essential to her, and she could write nothing of substance about places, people, or phases of life she did not know intimately, down to the last detail. Isabel Allende, too, plans her books "in a very organic way." She states, "Books don't happen in my mind; they happen somewhere in my belly... I don't know what I'm going to write about because it has not yet made the trip from the belly to the mind. It is somewhere hidden in a very somber and secret place where I don't have any access yet. It is something that I've been feeling but which has no shape, no name, no tone, no voice." At the outset, the impulse, the vision, and the feeling remain unspoken; eventually, they must be expressed in words. Once a poet or writer has dwelt on these evocative or troubling images and feelings, they face the same challenge as scientists and artists: how to translate internal feelings into an external language that others can experience. Fisher spoke of her "presumption" in attempting "to translate into words...sacred living human feeling."
T. S. Eliot, whom Gardner holds up as the paradigm of a "verbal thinker," echoed the same sentiment: with a poem, one can say, "I got my feeling into words for myself. I now have the equivalent in words for that much of what I have felt." Gary Snyder describes his own process for making a poem this way: "The first step is the rhythmic measure, the second step is a set of preverbal visual images which move to the rhythmic measure, and the third step is embodying it in words." Similarly, William Goyen, a novelist, poet, and composer, characterizes his writing as "the business of taking it from the flesh state into the spiritual, the letter, the Word." The science fiction writer Ursula K. Le Guin has noted the irony in this translation for writers of fiction: "The artist deals with what cannot be said in words. The artist whose medium is fiction does this in words." Words can be used thus paradoxically, she explains, because they have, along with a semiotic usage, a symbolic or metaphoric usage. In other words, words are both literal and figurative signs of interior feelings, but not their essence. They are, as Heisenberg said of mathematics, expressions of understanding, not its embodiment. So Stephen Spender defines the "terrifying challenge of poetry" as the attempt to express in words that which may not be verbally expressed but may be verbally suggested. Composing a poem one has envisioned means asking not merely "Can I think out the logic of images?" but "Can I relive the imaged experience through the medium of words?" The question is the same whether the asker is Einstein, McClintock, or Spender. And if this logic of images, of muscular movement, of feeling, is anything, it is not the mathematical logic or the formal linguistic logic we study in school. Formal logic is used to prove the validity of preexisting propositions. This other "logic," which Ulam called a "metalogic," proves nothing; it generates new ideas and conceptions, with no guarantee of their validity or usefulness. Such thinking, as yet unexamined and unaccounted for by contemporary theories of mind, is nonverbal, nonmathematical, and nonsymbolic insofar as it does not belong to a formal language of communication. Nevertheless, our objective here is to describe and comprehend this metalogic of feelings, images, and emotions. If Ulam is right, the result could be as revolutionary and fundamental as the rules of symbolic logic codified by Aristotle thousands of years ago. Such a metalogic might, in fact, explain the creative origins and character of the articulated ideas to which Aristotle's logic can be applied. At present, the closest concept we have to such a metalogic is the vague one of intuition. As Einstein noted, "Only intuition, resting on sympathetic understanding, can lead to [insight]."
Henri Poincaré, arguably the preeminent mathematician of the late nineteenth century, made a similar point in Science and Method: "It is by logic that we prove, but by intuition that we discover." Logic can assure us that on a particular route we will meet no obstacle, but it cannot tell us which route leads to our goal. For that we must see the goal from afar, and the faculty that lets us see it is intuition; without it, the mathematician is like a writer who has mastered grammar but has no ideas. As the physicist Max Planck put it succinctly, "Scientists need an artistically creative imagination." Science and art, in other words, share common ground in the processes by which they work.

Indeed, scientists and artists draw from a shared reservoir of intuitive ideas arising from a similar process of imaginative introspection. This is not to equate the two simplistically: a poem is not a mathematical formula, and a novel is not an experiment in genetics. Nor are the disciplines monolithic; physics and biology are distinct kinds of scientific thought, just as sculpture, collage, and photography are distinct kinds of artistic expression. But to sort people by what they produce is to overlook what their creative processes have in common. Across disciplines, scientists, artists, mathematicians, composers, writers, and sculptors use a shared repertoire of cognitive tools, including emotional feelings, visual images, bodily sensations, reproducible patterns, and analogies; and all imaginative thinkers learn to translate the ideas generated by these subjective tools into public languages, where they can give rise to new ideas in other minds. Many scientists and artists have remarked on this universality of the creative process. At the Sixteenth Nobel Conference in 1980, scientists, musicians, and philosophers concurred, as Freeman Dyson noted, that "the analogies between science and art are very good as long as you are talking about the creation and the performance. The creation is certainly very analogous. The aesthetic pleasure of the craftsmanship of performance is also very strong in science." The physicist Murray Gell-Mann has made the same point about where ideas come from, recalling a multidisciplinary seminar held about a decade earlier at the Aspen Physics Center in Colorado. The painters, poets, writers, and physicists there found they shared an understanding of the mechanisms behind artistic and scientific work: both, Gell-Mann observed, are driven by the urge to solve problems. As one musician at the meeting put it, the similarities between the thinking of scientists and artists are "absolute": a scientist may frame the work as problem solving and an artist as shared inspiration, yet the "answer" emerges from the same creative act. The Nobel Prize-winning immunologist and author Charles Nicolle made the point directly: "The revelation of a novel fact, the leap forward, the conquest over yesterday's ignorance, is an act not of reason but of imagination, of intuition. It is an act closely related to that of the artist and the poet; a dream that becomes reality; a dream which seems to create." The French physician Armand Trousseau agreed: all science touches on art, and all art has its scientific side; the worst scientist is the one with no artistic aptitude, and the worst artist the one with no scientific understanding.
In a similar vein, the constructivist sculptor Naum Gabo once remarked that every eminent scientist has known a moment when the artist within rescued the scientist. "We are poets," said Pythagoras, and in the sense that a mathematician is a creator he was right. Stravinsky concurred: "The way composers think, the way I think, is... not very different from mathematical thinking." Gell-Mann, Gabo, and Stravinsky converge on the same point, one Arthur Koestler made central to his seminal work The Act of Creation: "Newton's apple and Cezanne's apple are discoveries more closely related than they seem." Both spring from a shared process of reperceiving and reimagining the world, drawing on fundamental perceptual feelings and sensations. Yet even where the universality of the creative process is acknowledged, its preverbal and premathematical elements have gone largely unrecognized. It is not only philosophers and psychologists who have overlooked the cross-disciplinary nature of intuitive tools for thinking; educators have as well, and the oversight persists at every level, from kindergarten to graduate school. The prevailing curriculum, organized around products rather than processes, is a prime example. From the outset, students are placed in separate classes in literature, mathematics, science, history, music, and art. Despite the current emphasis on "integrating the curriculum," interdisciplinary courses remain scarce, and transdisciplinary curricula spanning the breadth of human knowledge are virtually nonexistent. Worse, at the level of the creative process, where it truly matters, the intuitive tools for thinking that tie one discipline to another are ignored altogether. Mathematicians are expected to think only "in mathematics," writers only "in words," musicians only "in notes," and so on. Our schools and universities insist on cooking with only half the necessary ingredients, leaving teachers with only a fraction of what they need to teach and students with only a fraction of what they need to learn. The consequences of this limited education reach far. In our own experience, even at the graduate level, no one ever suggested that problem-solving might take place outside verbal or mathematical contexts: that a problem in mathematics or physics might be worked out as a series of images and feelings stewing in the mind, or that a book or a poem might be conceived as a series of images and emotions brewing in the belly. Nor was it ever suggested that conceiving an idea or solving a problem might be distinct from translating it into a disciplinary language. This book proposes that learning one subject or achieving one insight can serve as a catalyst for insights in other disciplines.
If the creative thinkers cited in this chapter accurately describe their own methods, as we contend they do, then an educational framework based exclusively on distinct disciplines and public languages misses much of the creative process. Educators labor to refine students' mathematical and syntactical logic while neglecting the metalogics of emotion and intuition; the prevailing pedagogy rests on words and numbers, on the assumption that thought itself is verbal and numerical. That assumption is deeply misguided. As William Lipscomb has said of contemporary scientific education, "If one genuinely aspired to provide minimal assistance to both aesthetics and originality in science, it would be challenging to devise a more effective plan than our educational system. There is a paucity of discourse concerning that which we do not comprehend in science, and even less discourse concerning how to prepare for creative ideas." Much the same can be said of training in the arts, humanities, and technologies. We master the languages of translation yet neglect the mother tongue; feasts are laid out, but never eaten; chefs are honored, but their methods go unimitated. It is therefore essential to recognize and describe the intuitive "dialects" of creative thinking. As vital as words and numbers are for communicating insight, the insight itself is born of emotions and images of many kinds conjured within the imagination, and cultivating that emotional intelligence within educational curricula is paramount: students must learn to attend to their intuitive sensibilities and to use them well. Nor is this merely idealistic; various professions, medicine among them, increasingly acknowledge intuition as an integral component of disciplinary thinking. Geri Berg, an art historian and social worker formerly at Johns Hopkins University, asserts that "emotional awareness, like observation and critical inquiry skills, is an essential component of providing quality healthcare." Dr. John Burnside, chief of internal medicine at the Hershey Medical Center in Pennsylvania, puts it even more forcefully: a major failure of our education, he argues, is the lack of serious recognition and attention given to intuitive judgment and common sense, the "gut feeling" that, because it is non-quantitative, tends to be dismissed as mere instinct, passion, or the primal and overshadowed by the numerical methods of medicine. Yet this intuitive faculty can be defined, Burnside contends, and it should be a central component of medical education.

Whether the object is to understand oneself, to understand others, to explore some facet of nature, or even to provide the best medical care, we must learn to use the feelings, emotions, and intuitions that underpin the creative imagination. That, ultimately, is the purpose of both gourmet thinking and education.

The Science of Success

Most people carry genes that make them about as hardy as dandelions, able to take root and thrive almost anywhere. A few of us, however, are more like orchids: fragile and variable, yet capable of blooming spectacularly when given greenhouse care. So says a novel theory in genetics, which holds that the very genes that so often cause trouble in human behavior, underlying antisocial and self-destructive tendencies, are also fundamental to human adaptability and evolutionary success. Without a conducive environment and adequate parenting, children carrying these variants may end up depressed, abusing substances, or even incarcerated; with a nurturing environment and proper parenting, the same children can thrive and contribute greatly to society.

In 2004, Marian Bakermans-Kranenburg, a professor of child and family studies at Leiden University, initiated a study in which she recorded her observations of families whose children, aged 1 to 3 years, exhibited a high degree of oppositional, aggressive, uncooperative, and aggravating behavior. This behavior, as described by psychologists, is referred to as "externalizing." The behaviors observed included whining, screaming, whacking, throwing tantrums, and willfully refusing reasonable requests. While such behaviors are commonplace among toddlers, research has demonstrated that children who exhibit particularly high levels of these behaviors are more likely to experience stress, confusion, academic underperformance, and social challenges in school. These children also tend to display antisocial and unusually aggressive behaviors in adulthood.

At the inception of their study, Bakermans-Kranenburg and her colleagues screened 2,408 children using a parental questionnaire, subsequently focusing on the 25 percent who were rated highest by their parents in externalizing behaviors. These ratings were later confirmed through laboratory observations. The primary objective of Bakermans-Kranenburg's study was to modify the children's behavior. To this end, she and her team developed an intervention strategy, which entailed visiting each of 120 families six times over a period of eight months. During these visits, the researchers recorded the interactions between the mother and child in their daily activities, with a particular focus on behaviors requiring obedience or cooperation. The recorded footage was then edited into segments that could be used to provide constructive feedback to the mothers. In contrast, a control group of children with high externalizing behaviors received no intervention.


To the researchers' delight, the intervention worked. The mothers who viewed the videos demonstrated an increased ability to identify and respond to cues they had previously overlooked or misinterpreted. For instance, many mothers reported a shift in their approach to reading picture books, acknowledging their initial reluctance to engage with fidgety and challenging children. However, according to Bakermans-Kranenburg, when these mothers viewed the playback of their interactions, they expressed surprise at the level of enjoyment their children derived from these interactions, as well as their own enjoyment. The majority of the mothers began reading to their children on a regular basis, creating an atmosphere that Bakermans-Kranenburg describes as "a peaceful time that they had dismissed as impossible." Concurrently, the children's problematic behaviors declined. A year after the intervention's conclusion, the toddlers who received the intervention exhibited a reduction in externalizing behaviors of more than 16 percent, while the control group, which did not receive the intervention, demonstrated an improvement of approximately 10 percent (as expected, owing to the age-related improvement in self-control). Additionally, the mothers' responses toward their children became more positive and constructive. It is noteworthy that few programs are capable of effecting such significant changes in parent-child dynamics. However, the Leiden team's primary objective extended beyond evaluating the intervention's efficacy. They were also investigating a novel hypothesis concerning the influence of genes on behavior, a hypothesis that has the potential to significantly alter our understanding of mental illness, behavioral dysfunction, and human evolution.

A significant aspect of the team's research focused on a novel interpretation of a prominent concept in contemporary psychiatric and personality research. This concept posits that specific variations in critical behavioral genes (predominantly those influencing brain development or the processing of its chemical messengers) render individuals more susceptible to certain mood, psychiatric, or personality disorders. This hypothesis, frequently referred to as the "stress diathesis" or "genetic vulnerability" model, has been significantly bolstered by numerous studies over the past 15 years. It has attained a dominant position in the fields of psychiatry and behavioral science. During this period, researchers have identified numerous gene variants that can increase an individual's vulnerability to depression, anxiety, attention-deficit hyperactivity disorder, heightened risk-taking, and antisocial, sociopathic, or violent behaviors. These gene variants, however, only appear to confer risk if a person experiences a traumatic or stressful childhood or faces particularly challenging life circumstances later in life.

This vulnerability hypothesis, as it is referred to in the scientific community, has already led to a significant shift in our understanding of numerous psychological and behavioral problems. It posits that these conditions are shaped not solely by nature or by environment, but rather by intricate "gene-environment interactions." The presence of certain gene variants does not necessarily lead to the development of these disorders. However, in cases where an individual carries "deleterious" versions of specific genes and also experiences adverse life circumstances, there is an increased likelihood of developing these conditions.

Recently, an alternative hypothesis has emerged from this original one, challenging its fundamental tenets. This novel model posits that the prevailing interpretation of "risk" genes as mere liabilities is a fallacy. The new thinking suggests that these genes, while capable of inducing dysfunction in unfavorable contexts, can concomitantly enhance function in favorable ones. The genetic sensitivities to negative experiences identified by the vulnerability hypothesis, it is posited, represent merely the downside of a more extensive phenomenon: a heightened genetic sensitivity to all experiences. The mounting evidence in support of this view is compelling. While much of this evidence has existed for years, the prevailing focus on dysfunction in behavioral genetics has led researchers to overlook this perspective. Jay Belsky, a child-development psychologist at Birkbeck, University of London, offers an explanation for this oversight. He notes that most work in behavioral genetics has been conducted by researchers who focus on vulnerability, leaving them unable to see the upside because they do not seek it out. The oversight, he suggests, is like dropping a dollar bill beneath a table: you look under the table, spot the dollar, and grab it, while completely missing the five-dollar bill lying just beyond your feet.

Though this hypothesis is novel to modern biological psychiatry, it can be found in folk wisdom, as pointed out by Bruce Ellis, a developmental psychologist at the University of Arizona, and W. Thomas Boyce, a developmental pediatrician at the University of British Columbia, in an article published last year in the journal Current Directions in Psychological Science. The Swedes, Ellis and Boyce noted in their essay "Biological Sensitivity to Context," have long spoken of "dandelion" children, who, equivalent to our "normal" or "healthy" children with "resilient" genes, do pretty well almost anywhere, whether raised in the equivalent of a sidewalk crack or a well-tended garden. Ellis and Boyce propose that there are also "orchid" children, who will wither if disregarded or mistreated but can flourish spectacularly under the appropriate conditions. This concept, termed the "orchid hypothesis," appears at first glance to be a straightforward refinement of the vulnerability hypothesis. However, it introduces the novel idea that environmental influences and experiences can positively shape an individual's development, not merely harm it. This hypothesis represents a paradigm shift in our understanding of genetics and human behavior.
The concept transforms risk into possibility, vulnerability into adaptability, and exposure into responsiveness. This notion, while seemingly straightforward, carries profound and far-reaching consequences. Gene variants previously regarded as unfortunate (the "bad" genes) are now understood as evolutionary investments with the potential for substantial returns. These risky variants, in essence, diversify the genetic portfolio, enhancing the chances of survival. Selection favors parents who invest in both dandelions and orchids, fostering a balanced and adaptable approach to life.

This viewpoint suggests that the presence of both dandelions and orchids within a family or species significantly enhances its probability of survival, both in the long term and in specific environments. The behavioral diversity inherent in these two distinct temperaments is essential for a species to thrive and spread in a changing world. The numerous dandelions in a population provide a foundation for stability. Conversely, the less numerous orchids may encounter challenges in some environments, yet they can flourish in those that are conducive to their survival. Even when early-life conditions are adverse, some of the heightened responses to adversity that can be problematic in everyday life, such as increased novelty-seeking, restlessness of attention, elevated risk-taking, or aggression, can prove advantageous in certain challenging situations, including wars, tribal conflicts, and social strife of various kinds, as well as migrations to new environments. The symbiotic relationship between the steadfast dandelions and the mercurial orchids exemplifies a form of adaptive flexibility that neither type could manifest alone. Collectively, they pave a route to hitherto unattainable individual and collective accomplishments.

The orchid hypothesis also resolves a foundational evolutionary puzzle that eludes the vulnerability hypothesis. If variants of certain genes engender primarily dysfunction and adversity, it is perplexing that they survived natural selection; genes exhibiting such maladaptive characteristics should have been eradicated. Yet approximately 25 percent of the global population carries the most well-documented variant of a gene associated with depression, while more than 20 percent carry the variant studied by Bakermans-Kranenburg, which is linked to antisocial and violent behaviors, as well as ADHD, anxiety, and depression. This persistence falls outside the scope of the vulnerability hypothesis; the orchid hypothesis, in contrast, explains it comprehensively.

This novel perspective on human vulnerability and resilience offers a transformative and unexpected insight. For over a decade, proponents of the vulnerability hypothesis have contended that specific gene variants underlie significant human afflictions, including despondency, alienation, and various forms of cruelty. The orchid hypothesis acknowledges this assertion. However, it also posits that these same detrimental genes contribute significantly to the remarkable achievements of our species.
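To make the portfolio logic concrete, here is a minimal toy simulation, not drawn from the research described here, in which dandelion offspring have steady fitness everywhere while orchid offspring do very well in good years and poorly in bad ones. Every payoff and probability below is an invented assumption, chosen only to illustrate the bet-hedging argument.

```python
import math
import random

# Toy payoffs (invented for illustration): dandelions are steady everywhere,
# orchids boom in good environments and bust in bad ones.
FITNESS = {
    "dandelion": {"good": 1.0, "bad": 1.0},
    "orchid":    {"good": 1.6, "bad": 0.4},
}

def long_run_growth(orchid_fraction, p_good=0.6, generations=20000, seed=1):
    """Geometric-mean growth of a lineage whose offspring are a fixed mix of
    orchids and dandelions, in an environment that is 'good' with probability
    p_good each generation (all parameters are illustrative assumptions)."""
    rng = random.Random(seed)
    total_log = 0.0
    for _ in range(generations):
        env = "good" if rng.random() < p_good else "bad"
        mean_fitness = (orchid_fraction * FITNESS["orchid"][env]
                        + (1.0 - orchid_fraction) * FITNESS["dandelion"][env])
        total_log += math.log(mean_fitness)
    return math.exp(total_log / generations)

for frac in (0.0, 0.25, 0.5, 1.0):
    print(f"orchid fraction {frac:.2f} -> long-run growth {long_run_growth(frac):.3f}")
```

With these made-up numbers, a lineage that mixes a minority of orchids in with its dandelions outgrows both the all-dandelion and the all-orchid strategies over the long run, which is the "diversified portfolio" intuition in miniature; the real evolutionary argument, of course, involves far more than a two-state environment.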

The orchid hypothesis, alternatively referred to as the plasticity hypothesis, the sensitivity hypothesis, or the differential-susceptibility hypothesis, is still in its nascent stages and has not yet been extensively examined. Many researchers in the behavioral sciences are not yet familiar with it, and a few, particularly those with general reservations about attributing specific behaviors to specific genes, have expressed concerns. As more evidence supporting the hypothesis emerges, however, the most common reaction among researchers and clinicians is one of excitement. A growing number of psychologists, psychiatrists, child-development experts, geneticists, ethologists, and others are beginning to believe that, as Karlen Lyons-Ruth, a developmental psychologist at Harvard Medical School, puts it, "It's time to take this seriously."

With the data gathered in the video intervention, the Leiden team began to test the orchid hypothesis. The central question guiding their inquiry was whether the children whose genes made them most susceptible to an adverse environment would also respond most strongly to an improved one. To this end, Bakermans-Kranenburg and her colleague Marinus van Ijzendoorn embarked on a comprehensive genetic study of the children involved in the experiment. Their research focused on a specific "risk allele" linked to ADHD and externalizing behaviors. An allele is any one of the variants of a gene that takes more than one form; such genes are known as polymorphisms. A risk allele, therefore, is simply a gene variant that increases the likelihood of developing a problem.

In their study, Bakermans-Kranenburg and van Ijzendoorn investigated whether children who carried a risk allele for ADHD and externalizing behaviors, a variant of the dopamine-processing gene DRD4, would respond to positive environments as strongly as they did to negative ones. A third of the subjects in the study carried this risk allele, while the remaining two-thirds carried a version considered a "protective allele," which rendered them less susceptible to adverse environmental influences. The control group, which did not receive the intervention, exhibited a comparable distribution. Both the vulnerability hypothesis and the orchid hypothesis predict that, in the control group, children with the risk allele should show poorer outcomes than those with the protective allele. This prediction was confirmed, albeit to a minor extent. Over a period of 18 months, the genetically "protected" children exhibited a reduction in externalizing behaviors of 11 percent, while the "at-risk" children showed a 7-percent decrease. These gains were modest and consistent with the age-related improvement in self-control. The intervention group provided the true test of the hypotheses. The vulnerability model predicted that the children with the risk allele would demonstrate less improvement than their counterparts with the protective allele: the modest enhancement in their environment provided by the video intervention was not expected to offset their inherent vulnerability.

However, the results showed that the toddlers with the risk allele exhibited a far greater reduction in externalizing behaviors, a decrease of nearly 27 percent, while the toddlers with the protective allele demonstrated a modest improvement of only 12 percent (only slightly surpassing the 11 percent improvement observed in the control group). In other words, the upside the risk-allele children showed in the enriched environment far exceeded the downside they showed in the control condition. The Leiden team concluded that risk alleles can indeed engender not just risk but also possibility. The question arises as to whether liability can truly be so readily transformed into gain. W. Thomas Boyce, a pediatrician with over three decades of experience in child-development research, asserts that the orchid hypothesis "significantly reshapes our understanding of human vulnerability." He elaborates, "We observe that when children with such vulnerabilities are placed in an optimal environment, they not only demonstrate improvements, but they exceed the achievements of their protective-allele peers." The question that naturally follows is whether there exist any enduring human vulnerabilities that do not possess this redeeming quality.
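The force of this result is easiest to see as a simple difference-in-differences calculation. The sketch below uses only the percentages reported above (7 and 11 percent in the control arm, roughly 27 and 12 percent in the intervention arm) and is merely an illustration of why the pattern reads as a gene-by-environment interaction rather than as plain vulnerability.

```python
# Reductions in externalizing behavior (percentage points), as reported above.
reduction = {
    ("risk DRD4", "control"): 7,
    ("risk DRD4", "intervention"): 27,
    ("protective DRD4", "control"): 11,
    ("protective DRD4", "intervention"): 12,
}

def intervention_benefit(allele):
    """Extra improvement attributable to the intervention for one allele group."""
    return reduction[(allele, "intervention")] - reduction[(allele, "control")]

risk_gain = intervention_benefit("risk DRD4")              # 27 - 7  = 20 points
protective_gain = intervention_benefit("protective DRD4")  # 12 - 11 = 1 point

print(f"Benefit for risk-allele children:       {risk_gain} points")
print(f"Benefit for protective-allele children: {protective_gain} points")
print(f"Gene-by-environment interaction:        {risk_gain - protective_gain} points")
```

A vulnerability-only account predicts the first number should be smaller than the second; the fact that it is much larger is what the orchid reading rests on. A formal analysis would of course test this interaction statistically rather than simply subtracting group means.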

As I delved into this subject, contemplating its implications for my own temperament and genetic composition, I found myself reflecting on the potential relevance of genetic testing, particularly in relation to the serotonin-transporter gene, also known as the SERT gene or 5-HTTLPR. This gene plays a pivotal role in regulating serotonin, a chemical messenger central to mood regulation and other physiological processes. The two shorter forms of the gene, known as short/short and short/long (S/S and S/L, respectively), have been associated with an increased risk of depression, particularly in individuals who experience significant stress or trauma. In contrast, the long/long form of the gene has been observed to offer a degree of protection against depression.

Initially, I had reservations about undergoing a SERT gene assay, as I found the prospect of learning my risk of depression disconcerting, particularly in light of my family and personal history. The results could have provided reassurance by revealing that I carried the long/long allele; alternatively, they could have revealed one of the shorter variants, which are associated with an increased risk of depression. This was a subject about which I was ambivalent. However, as I investigated the orchid hypothesis and began to think in terms of plasticity rather than risk, I concluded that I wanted to know the truth. Consequently, I contacted a researcher I know in New York who studies depression and the serotonin-transporter gene. The following day, a package was left on my front porch containing a specimen cup. I deposited a saliva sample in the cup, sealed it, placed the vial into its designated shipping tube, and returned the tube to its place on my porch. Approximately one hour later, the FedEx delivery service retrieved the package.

Of all the evidence supporting the orchid-gene hypothesis, perhaps the most compelling comes from the work of Stephen Suomi, a rhesus-monkey researcher who heads a sprawling complex of labs and monkey habitats in the Maryland countryside, the National Institutes of Health's Laboratory of Comparative Ethology. For 41 years, initially at the University of Wisconsin and subsequently, beginning in 1983, at the Maryland laboratory established specifically for him by the NIH, Suomi has been conducting research on the origins of temperament and behavior in rhesus monkeys. These monkeys share approximately 95 percent of our DNA, a proportion exceeded only by apes. Rhesus monkeys and humans differ in obvious and fundamental ways. However, their striking similarities in crucial social and genetic respects have elucidated fundamental aspects of our own behavior, thereby contributing to the emergence of the orchid hypothesis.

Suomi's formative years as a student and protégé of Harry Harlow, and his eventual succession of Harlow, laid the foundation for his expertise in the field. Harlow, a prominent figure in 20th-century behavioral science, is known for his significant contributions and the controversies that accompanied his research. When Harlow began his work in the 1930s, the study of childhood development was dominated by a mechanistic behaviorism. The movement's leading figure in the United States, John Watson, regarded mother love as "a dangerous instrument." He urged parents to leave crying babies alone, to never hold them to give pleasure or comfort, and to kiss them only occasionally, on the forehead. Mothers were considered less important for their affection and more for their role in shaping behavior.

Harlow's seminal research, which involved a series of ingenious but occasionally unsettling experiments on monkeys, diverged from the prevailing behaviorism of the era. His most renowned experiment demonstrated that infant rhesus monkeys, raised in isolation or with same-age peers, preferred a foodless yet plush terrycloth surrogate "mother" over a wire-mesh version that provided meals. His work showed that infant monkeys possess a powerful drive to form bonds, and that depriving them of physical, emotional, and social attachment could result in a state of near-paralysis, characterized by profound social and emotional dysfunction. This research provided critical evidence for the emerging theory of infant attachment, a theory that emphasizes the importance of rich, warm parent-child bonds and positive early experiences, and that continues to dominate child-development theory and parenting literature to the present day.

Since assuming leadership of Harlow's Wisconsin laboratory at the age of 28, Suomi has expanded and refined the inquiry Harlow initiated. Contemporary tools now enable the examination not only of the monkeys' temperaments but also of the physiological and genetic underpinnings of their behavior. The naturalistic environment of his laboratory allows him to study not only mother-child interactions but also the family and social environments that shape and respond to the monkeys' behavior. Life in a rhesus-monkey colony is characterized by significant complexity, requiring the navigation of an intricate, hierarchical social system. Those who demonstrate proficiency in this navigation are often successful, while those who lack it face limited opportunities.

Rhesus monkeys typically reach maturity at approximately four or five years of age and have a lifespan of about 20 years in the wild. Their developmental trajectory mirrors that of humans at roughly a 1-to-4 ratio, so a 1-year-old monkey is analogous to a 4-year-old human, a 4-year-old monkey to a 16-year-old human, and so forth. Gestation lasts roughly five and a half months, and females generally begin giving birth, about once a year, at approximately four years of age. While copulation is possible throughout the year, the female's fertile season is limited to a few months. Because of this synchrony, a troop typically produces offspring that are approximately the same age.

For the initial month, the infant remains attached to or in close proximity to its mother. At approximately two weeks, the infant begins to explore, initially within a few feet of its mother. Over the subsequent six to seven months, these excursions become increasingly frequent, extensive, and prolonged, although the young monkeys rarely venture beyond their mother's immediate vicinity or hearing range. If the young monkey becomes alarmed, it swiftly returns to its mother's side; frequently, the mother anticipates potential threats and draws the infant closer to her. At approximately eight months of age, the equivalent of a rhesus preschooler, the mother's mating season commences. In preparation for the impending birth, the mother facilitates increased interaction between the youngster and its cousins, older siblings in the maternal lineage, and occasional visitors from other families or troops. The youngster's family group, friends, and allies provide ongoing protection as needed.

A maturing female remains with this group throughout her life. Males, by contrast, typically depart after reaching approximately 4 or 5 years of age, which corresponds developmentally to a 16-to-20-year-old human. Initially, a male joins an all-male group that typically lives in relative isolation. After a period ranging from a few months to a year, he will usually leave that group and endeavor to secure a position within a new family or troop through charm, assertiveness, or guile. If successful, he will assume the role of an adult male who serves as mate, companion, and protector for multiple females. However, only approximately half of the males attain this status. Their transition period leaves them vulnerable to attacks from other young males, rival gangs, and new troop members if they miscalculate; predation is also a risk during the periods when they lack the protection of a gang or troop, and many die during this transition.

Early in his research, Suomi identified two types of monkeys that had trouble managing these relationships. The first type, which he terms a "depressed" or "neurotic" monkey, constitutes approximately 20 percent of each generation. These monkeys wean late, manifesting a reluctance to separate from their mothers, and as adults they display a tendency toward tentative, withdrawn, and anxious behavior, forming fewer bonds and alliances than other monkeys. The second type, predominantly male, is what he designates a "bully," characterized by uncommonly aggressive behavior that is indiscriminate in nature.
These individuals constitute approximately 5 to 10 percent of each generation. As Suomi notes, "Rhesus monkeys exhibit a tendency towards aggression, even during early developmental stages," and their play is characterized by a high degree of physical contact. Serious injuries, however, are typically rare, except where the bullies are concerned. These monkeys pick fights with dominant monkeys and intrude on the interactions between mothers and their offspring. They appear to lack the capacity to calibrate their aggression, and the confrontations they initiate invariably escalate. These tendencies are compounded by their substandard performance in tests of monkey self-control. For instance, in a "cocktail hour" test frequently used by researchers, monkeys are granted unrestricted access to an alcohol-based beverage of neutral taste for one hour. Most monkeys consume three or four drinks and then stop. The bullies, as Suomi reports, "drink until they drop."

The neurotics and the bullies face divergent outcomes. The neurotics, although they mature late, generally fare well. The females, however, become jumpy mothers, and the environment in which they raise their offspring exerts a significant influence on how their children turn out: in secure environments the offspring are more or less normal; in insecure environments they become jumpy too. The neurotic males typically remain within their mothers' family circles for an extended period, often up to eight years. This extended stay is possible because they cause no trouble, and it allows them to accumulate social skills and diplomatic deference. As a result, when they do leave, these males assimilate into new groups more successfully than those who depart younger. They do not mate as frequently as more confident, assertive males, and they rarely ascend to high status within their new troops, which can leave them vulnerable during conflicts. Nonetheless, they are less likely to perish in their attempts to gain access; they generally survive and procreate.

The bullies fare significantly worse. Even as babies and youths, they rarely establish social connections. By the age of two or three, their proclivity for aggression often results in their expulsion from the troop by the females, who may resort to collective force if necessary. Subsequently, these males are shunned by other troops as well as by the females of their own. Most of them perish before reaching adulthood as a consequence of this social isolation, and their reproductive success is minimal.

Suomi observed that the disposition of the monkeys was contingent on the characteristics of their mothers. The bullies were typically the offspring of harsh, censorious mothers who restricted their children's social interactions, while the monkeys with anxious temperaments were the offspring of anxious, withdrawn, distracted mothers. The pattern was clear-cut.
However, the extent to which these distinct personality types reflect genetic factors versus environmental ones remained open. To investigate, Suomi manipulated the environment: he took nervous infants born to nervous mothers, infants who had scored high for nervousness in standardized newborn testing, and placed them with particularly nurturing "supermoms." These infants developed behavior almost indistinguishable from that of normal infants. In a complementary experiment, Dario Maestripieri of the University of Chicago took infants who had scored as calm and secure and had been born to nurturing mothers, and placed them in the care of abusive mothers. The outcome, too, pointed to a significant environmental influence on development, suggesting that genes and environment work in concert.

When the tools of gene study became available in the late 1990s, researchers adopted them quickly, and Suomi was among them. Working in conjunction with Klaus-Peter Lesch, a psychiatrist at the University of Würzburg, he embarked in 1997 on a project that would yield significant results. The preceding year, Lesch had published findings revealing that the human serotonin-transporter gene exists in three variants (the previously mentioned short/short, short/long, and long/long alleles) and that the two shorter versions amplify the risk of depression, anxiety, and other problems. At Suomi's request, Lesch genotyped his monkeys and found the same three variants, although the short/short form was rare.

Consequently, Lesch, Suomi, and J. Dee Higley, a colleague at the NIH, began a collaboration on what would come to be recognized as a seminal "gene-by-environment" study. The study involved collecting cerebrospinal fluid from 132 juvenile rhesus monkeys and analyzing it for 5-HIAA, a serotonin metabolite that serves as a reliable indicator of serotonin processing within the nervous system. Earlier studies by Lesch had shown that people carrying the shorter serotonin-transporter variants exhibited lower 5-HIAA levels, indicative of less efficient serotonin processing, and the present study sought to determine whether the same association held in monkeys. Confirming it would bolster the genetic account emerging from Lesch's earlier research, and finding a consistent genetic and behavioral dynamic in rhesus monkeys would further establish their value as a model system for investigating human behavior.

The monkeys' 5-HIAA levels were grouped by serotonin genotype (short/long or long/long; short/short was too rare to analyze) and then sorted by whether the monkeys had been raised by their mothers or as orphans with same-aged peers. When their colleague Allison Bennett charted the results on a bar graph of 5-HIAA levels, all of the mother-reared monkeys, irrespective of genotype, exhibited serotonin processing within the normal range. The metabolite levels of the peer-raised monkeys, by contrast, diverged sharply by genotype: the short/long monkeys in that group processed serotonin highly inefficiently (a risk factor for depression and anxiety), whereas the long/long monkeys processed it robustly. Upon viewing the results, Suomi recognized solid evidence of a behaviorally relevant gene-by-environment interaction in his monkeys. As he later recounted to me, his first reaction on seeing the graph was that the findings merited celebration.

Suomi and Lesch published the results in 2002 in Molecular Psychiatry, a relatively young journal devoted to behavioral genetics. The paper coincided with a notable increase in gene-by-environment studies of mood and behavioral disorders. In that same year, two psychologists at King's College London, Avshalom Caspi and Terrie Moffitt, published the first of two extensive longitudinal studies (drawing on the life histories of hundreds of New Zealanders) that would prove particularly influential. The initial study, published in Science, demonstrated that the short allele of a significant neurotransmitter-processing gene (the MAOA gene) significantly elevated the likelihood of antisocial behavior in adults who had experienced childhood abuse. The subsequent study, also published in Science in 2003, revealed that individuals carrying short/short or short/long serotonin-transporter alleles, when subjected to stress, faced an elevated risk of depression.
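The analysis behind the bar graph described above is, at its core, a grouping of metabolite measurements by genotype and rearing condition. The sketch below mirrors that grouping; the 5-HIAA values in it are invented placeholders, since the paper's raw numbers are not reproduced here, and only the qualitative pattern (mother-reared monkeys alike, peer-reared monkeys split by genotype) follows the description above.

```python
from collections import defaultdict
from statistics import mean

# (genotype, rearing, 5-HIAA level). Values are invented placeholders that
# follow the qualitative pattern described in the text, not the study's data.
records = [
    ("short/long", "mother-reared", 101), ("short/long", "mother-reared", 97),
    ("long/long",  "mother-reared", 103), ("long/long",  "mother-reared", 99),
    ("short/long", "peer-reared",    71), ("short/long", "peer-reared",    68),
    ("long/long",  "peer-reared",   102), ("long/long",  "peer-reared",   106),
]

groups = defaultdict(list)
for genotype, rearing, level in records:
    groups[(genotype, rearing)].append(level)

for (genotype, rearing), levels in sorted(groups.items()):
    print(f"{genotype:10s} | {rearing:13s} | mean 5-HIAA = {mean(levels):.1f}")
```

Run on the real measurements, the same grouping yields the pattern Suomi saw: rearing condition matters little for long/long monkeys, while for short/long monkeys it makes the difference between normal and deficient serotonin processing.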

These and numerous related studies have been instrumental in substantiating the vulnerability hypothesis in recent years. However, many of them also contained data endorsing the orchid hypothesis, albeit unnoticed or unremarked at the time. (Jay Belsky, the child-development psychologist, has recently documented more than two dozen such studies.) For instance, both of Caspi and Moffitt's seminal papers in Science include raw data and graphs showing that, for individuals who did not encounter severe or recurrent stress, the risk alleles in question amplified resistance to aggression or depression. Likewise, the data in the 2002 Molecular Psychiatry paper by Suomi and Lesch, which examined the impact of serotonin-transporter gene variants on serotonin processing in monkeys, offer a compelling perspective: mother-reared monkeys with the risk variant processed serotonin about 10 percent more efficiently than mother-reared monkeys with the protective variant.

Interpreted through the orchid hypothesis, these findings carry intriguing implications for human behavior and mental health. When the focus is directed exclusively to outcomes in adverse environments, vulnerability is what one sees. When the focus is directed to outcomes in favorable environments, however, the risk alleles generally yield superior results compared with the protective ones. For instance, securely raised 7-year-old boys who carry the DRD4 risk allele for ADHD exhibit fewer symptoms than their securely raised peers who carry the protective allele. Similarly, non-abused teenagers who carry the same risk allele show a lower incidence of conduct disorder, and non-abused teenagers who carry the risky serotonin-transporter allele exhibit fewer symptoms of depression than those with the protective allele. Numerous other studies show the same pattern, although, as Jay Belsky has observed, these investigations were primarily designed and analyzed to identify negative vulnerabilities. As researchers shift their focus from risk amplification to gene sensitivity, and begin to attend to positive environments and traits, the evidence supporting the orchid hypothesis is expected to accumulate.

Suomi's research in the years following his 2002 study provides substantial support for this expectation. He observed that monkeys carrying the supposedly risky serotonin-transporter allele, when they also had nurturing mothers and secure social positions, demonstrated superior performance in several key domains: attracting playmates in their youth, building and maintaining alliances later on, and sensing and responding to conflicts and other hazardous situations. Monkeys with the supposedly protective allele, though similarly endowed, proved less adept in these areas. These risk-allele monkeys also rose to higher positions within their dominance hierarchies, underscoring their greater success.

Suomi's research led to another noteworthy discovery. He and his colleagues assayed the serotonin-transporter genes of seven species of macaque, the primate genus to which the rhesus monkey belongs. None of these species exhibited the serotonin-transporter polymorphism that had been identified as a key factor in the rhesus monkeys' flexibility. Concurrent studies of other pivotal behavioral genes in primates yielded analogous outcomes: according to Suomi, SERT gene assays in other primates, including chimps, baboons, and gorillas, turned up no comparable variation. The field is young, and the full data set is not yet in. To date, however, among all primates, only rhesus monkeys and human beings appear to carry multiple polymorphisms in genes with profound implications for behavior. "It's just us and the rhesus," as one researcher put it.

This finding prompted consideration of another trait shared with rhesus monkeys. Most primates can thrive in only a limited range of environments. Two species, however, often referred to as "weed" species, exhibit a remarkable adaptability to diverse, changing, or disturbed environments: human beings and rhesus monkeys. This adaptability may be a key factor in our success, and it may be attributable to the high degree of variability in our behavioral genes.

One morning in May of this year, Elizabeth Mallott, a researcher at Suomi's laboratory, arrived at the primary rhesus enclosure to begin her day and discovered approximately half a dozen monkeys in her designated parking area. They were huddled close together and exhibited signs of distress and anxiety. Upon closer inspection, Mallott observed that some of them bore bite wounds and scratches. Monkeys that manage to leap the enclosure's double electrified fences typically do so with ease and then swiftly return inside; in this instance, however, the monkeys remained outside, as did several others that Mallott encountered between the two fences.

Following the containment of the escapees in an adjacent structure, Mallott, now accompanied by Matthew Novak, another researcher intimately familiar with the colony, proceeded through the double gates. The colony, comprising approximately 100 monkeys, had existed for roughly 30 years, and changes in its hierarchical structure typically manifested gradually and subtly. On this morning, however, Novak and Mallott could see that a significant event had transpired. As Novak would later recount, "Animals were occupying areas they were not typically seen in, and animals that typically avoided each other were cohabiting. Social norms appeared to be in a state of flux."

An analysis of the situation revealed that Family 3, which had been ranked second to Family 1 for several years, had staged a coup. The shift had been brewing: Family 3 had grown larger than Family 1 in recent years, but Family 1, under the leadership of a skilled matriarch named Cocobean, had maintained its position through authority, diplomacy, and strategic maneuvering. Approximately one week before the coup, one of Cocobean's daughters, Pearl, was relocated from the enclosure to the veterinary facility because of renal complications, and Family 1's most dominant male was aging and showing signs of arthritis. Pearl had a particularly close relationship with Cocobean and, as the only daughter without children of her own, was well positioned to defend her. Her absence, together with the male's infirmity, left Family 1 vulnerable.

Novak suggests that this sequence of events may have been building for a couple of weeks. As he and his colleagues reconstructed it, the incident began the night before, when Fiona, a 3-year-old member of Family 1 with a tendency toward aggressive behavior, initiated a confrontation with an individual from Family 3. The confrontation escalated, and Family 3 saw its opportunity to attack Family 1. The wounded and the displaced could be identified by their injuries and by the changes in their social standing. One female in Family 1, Quark, was killed, and another, Josie, was so severely injured that she had to be euthanized. All of Cocobean's other daughters suffered similarly. One male, the largest of the group, sustained a bite on his arm so severe that he could not use it, and Fiona was badly injured. The assault was methodical: the attackers targeted the group's leadership and worked their way down the hierarchy.

After Novak finished his account, he and I walked the enclosure. Despite the intense heat of a July day, the monkeys were resting and recuperating. Family 3 had taken up position in a corncrib near the pond, one of several corncribs provided for shelter, where they groomed, napped, and fixed us with unwavering stares. A more agitated group congregated in another crib down the hill. When we came within about 30 feet, the largest monkey in the group climbed the cage bars, vocalized loudly, rattled the bars, and bared its pointed teeth.

Afterward, I went to Suomi's office to ask what he made of the unfolding events. Suomi has devoted considerable thought to the incident, and it is evident why. His research had been meticulously woven together, interweaving seminal concepts: the significance of early experience; the dynamic interplay among environment, parenting, and genetic inheritance; the profound impact of family and social bonds; and the varied consequences of different traits in different circumstances. In the context of the orchid hypothesis, he began to discern how these threads might be woven into a new tapestry, one that challenges existing paradigms.

"Approximately 15 years ago," he stated, "Carol Berman, a monkey researcher at SUNY-Buffalo, devoted significant time to observing a sizable rhesus-monkey colony inhabiting an island in Puerto Rico. Her objective was to ascertain the consequences of group size fluctuations over time.The colonies would commence with approximately 30 to 40 individuals, a group that had diverged from another, and subsequently undergo expansion. At a certain point, often somewhere near a hundred, the group would reach its limit, and it, too, would split into smaller troops."

Such size limits, which vary among social species, are sometimes called "Dunbar numbers," after Robin Dunbar, a British evolutionary psychologist who argues that a species' group limit reflects how many social relationships its individuals can manage cognitively. Berman's observations suggest that the Dunbar number of a species reflects not just its cognitive abilities but also its temperamental and behavioral range.

Berman observed that in small rhesus troops, mothers can allow their young to play freely because strangers rarely approach. As a troop grows and the number of family groups rises, however, strangers or semi-strangers come near more often. The adult females become more vigilant, defensive, and aggressive, and the juveniles and adult males follow suit, adopting more aggressive and defensive behaviors of their own. Consequently, confrontations and rivalries grow more frequent, eventually leading to the dissolution of the troop. As Suomi puts it, "This phenomenon represents a complex feedback system, where the dynamics at the individual level, specifically the interaction between mother and infant, ultimately influence the collective's nature and survival."

Studies by Suomi and colleagues demonstrate that such disparities in early experience can profoundly influence the expression of genes, that is, the activation and deactivation of genes. These studies further suggest that early experience may shape subsequent patterns of gene expression and behavior, including an animal's flexibility and reactivity, by modulating the sensitivity of key alleles. According to this view, a challenging upbringing can lead to either watchful caution or vigilant aggression in monkeys, a kind of parental preparation for hard times, and the effect may be particularly pronounced in monkeys with especially plastic behavioral alleles.

This theory may help explain the events leading up to the so-called Palace Revolt. Fiona's injudicious aggression proved disastrous for her and for Family 1. Family 3, however, a group that had deferred diplomatically to Family 1 for years, dramatically improved its fortunes by mounting an uncharacteristically aggressive and sustained counterattack. Suomi hypothesizes that in the more competitive and crowded environment of the large colony, gene-environment interactions had made certain Family 3 monkeys, particularly those with more reactive "orchid" alleles, less overtly aggressive in everyday life but primed with greater latent aggression. During the period when they could not challenge the established hierarchy, the period preceding Pearl's departure, open aggression would have drawn them into unwinnable, possibly fatal conflicts. In Pearl's absence, the dynamics shifted, and the Family 3 monkeys capitalized on a rare and decisive opportunity by unleashing that aggressive potential.

The coup also demonstrated a more straightforward principle: a genetic trait that is tremendously maladaptive in one situation can prove highly adaptive in another. Human behavior exemplifies the same principle. The survival and evolution of any society depends on the presence of individuals who exhibit characteristics, such as aggression, restlessness, stubbornness, submissiveness, sociality, hyperactivity, flexibility, solitude, anxiety, introspection, vigilance, and even moroseness, irritability, or outright violence, that may exceed the typical norms. This provides a crucial insight into the evolutionary dynamics of risk alleles, underscoring their role in our survival and adaptation: these alleles have not merely slipped past the selection process; they have been actively selected for.

Recent analyses indicate that numerous orchid-gene alleles, including those discussed in this narrative, have emerged in humans within the past 50,000 years. Each is believed to have originated through chance mutation in a few individuals and then spread rapidly. Rhesus monkeys and humans diverged from a common ancestor approximately 25 to 30 million years ago, which suggests that these polymorphisms evolved and spread independently in the two species. The subsequent spread of these alleles across diverse human populations indicates significant selective value, underscoring their functional importance in human evolution.

As the evolutionary anthropologists Gregory Cochran and Henry Harpending have noted in The 10,000 Year Explosion (2009), the past 50,000 years, the period in which orchid genes appear to have emerged and expanded, corresponds to the period during which Homo sapiens began to exhibit the pronounced characteristics that set the species apart, and during which sparse populations in Africa expanded to populate the entire globe. While Cochran and Harpending do not explicitly incorporate the orchid-gene hypothesis into their argument, they contend that human domination of the planet is due to certain key mutations that accelerated human evolution, a process that the orchid-dandelion hypothesis certainly helps explain.

The details of this process vary with context. In environments with a high density of aggressive individuals, for instance, conflict escalates and aggression carries a steep cost, so selection works against it; when aggression becomes rare enough to be less hazardous, its prevalence rises again. Shifts in environmental or cultural context similarly influence the distribution of alleles. The orchid variant of the DRD4 gene, for instance, has been linked to an increased risk of ADHD, a condition characterized by actions that "annoy elementary-school teachers," as Cochran and Harpending put it. Yet attentional restlessness can be advantageous in environments that reward sensitivity to new stimuli, and the current proliferation of multitasking may favor individuals with attentional agility. One might lament the increasing prevalence of ADHD-related phenomena in contemporary society, but the spread of DRD4's risk allele has been a hallmark of human evolution for at least the past 50,000 years.

Even granting that orchid genes confer the adaptability crucial to human success, their precise dynamics in any one life remain striking to contemplate. After submitting my saliva sample for genotyping via FedEx, I tried to put the matter out of my mind, and to my surprise I largely managed to do so. The email containing the results, expected on Monday, arrived three days early, on a Friday evening while I was watching Monsters, Inc. with my children and idly checking messages on my iPhone. At first, I failed to fully comprehend the content of the message.

The message began with the salutation "David" and informed me that the assay had been run on the DNA from my saliva sample. The assay had run smoothly, and the genotype was S/S. It was a fortunate circumstance, I thought, that neither of us regarded these matters as determinative or as having a fixed valence. I was invited to get in touch if I wished to discuss the result or any genetic concerns.

When I finished reading, the house seemed quieter, though it was not. As I gazed out the window at our pear tree, its blossoms fallen but its fruit only nubbins, I felt a chill spread through my torso. I had not anticipated how much this information would matter. But as I sat absorbing it, the chill came to seem less the coldness of fear than a shiver of abrupt and inverted self-knowledge: of suddenly knowing with certainty something I had long suspected, and finding that it meant something other than I thought it would. The orchid hypothesis holds that this allele, the rarest and riskiest of the serotonin-transporter gene's three variants, renders me not just more vulnerable but more malleable. That shift in perspective dissolved the sense of carrying a handicap that would render my efforts futile in the face of adversity. Instead, I felt a heightened sense of agency. Anything I did to improve my environment and experience would, in essence, be amplified. My short/short allele no longer looks like a trapdoor, but rather like a springboard, albeit a slippery and somewhat fragile one.

I have no intention of having my other key behavioral genes assayed, nor of having my children's genes analyzed. Such analyses might reveal that I exert a form of influence over them in every interaction, but I am already aware of that. Still, I take satisfaction in the notion that my actions, such as taking my son fishing for salmon, listening to his younger brother's intricate elaborations of his dreams, or singing "Sweet Betsy of Pike" with my 5-year-old daughter as we drive home from the lake, may subtly shape their future development. I do not know the specific mechanisms through which these influences operate, and I do not feel that I need to. What matters is knowing that, together, we have the capacity to set them in motion.

We Are All Confident Idiots

The issue with a lack of knowledge is that it can often be mistaken for expertise, as a leading researcher on the psychology of human wrongness demonstrates. In March of last year, during the South by Southwest (SXSW) music festival in Austin, Texas, the late-night talk show Jimmy Kimmel Live! sent a camera crew into the streets to catch hipsters bluffing. Kimmel explained the premise to his studio audience: people who attend music festivals pride themselves on knowing the up-and-coming acts, even when they have never actually heard of them. He then directed his crew to ask festival-goers about bands that do not exist.

One of Kimmel's interviewers approached a subject wearing thick-framed glasses and a whimsical T-shirt and asked whether he had heard the buzz about one of these nonexistent acts. The subject, seemingly under the influence of alcohol, responded with a resounding affirmation and predicted that the act was destined for great success.

This exchange occurred within Kimmel's recurring "Lie Witness News" segment, which involves posing questions to passersby that are built on false premises. In one episode, Kimmel's crew asked people on Hollywood Boulevard for their opinions on the 2014 film Godzilla and its portrayal of the 1954 giant lizard attack on Tokyo. In another, they asked about Bill Clinton's role in ending the Korean War and whether his appearance as a judge on America's Got Talent would harm his legacy. In response to the latter question, one woman disagreed: "No. It will make him even more popular."

It is evident that the subjects of these interviews are susceptible to the tactics Kimmel employs. Some appear willing to say anything on camera, regardless of its relevance, in an attempt to appear knowledgeable, an approach that backfires by exposing their lack of expertise. Others seem eager to please, reluctant to disappoint the interviewer with the honest answer: "I don't know." For some interviewees, however, the trap may run deeper. The most confident-sounding respondents often appear to believe they possess some degree of knowledge, as if there is some fact, some memory, or some intuition that justifies their response as reasonable.

At one point during South by Southwest, Kimmel's crew approached a poised young woman with brown hair. "What have you heard about Tonya and the Hardings?" the interviewer inquired. "Have you heard they're kind of hard-hitting?" The woman, missing the verbal cue, proceeded to give an elaborate response about the fictitious band. "Yeah, a lot of men have been talking about them, saying they're really impressed," she replied. "They're usually not fans of female groups, but they're really making a statement." Drawing on mental agility and an apparent command of the subject, she went on to offer an authoritative review of Tonya and the Hardings, incorporating select factual details such as their authenticity as a female group, a distinction she claimed sets them apart from notable acts like Marilyn Manson and Alice Cooper.

It is noteworthy that Kimmel's producers presumably select the interviews that elicit the most humor. However, such extemporaneous discourse on unfamiliar subjects is not exclusive to late-night television. In a research laboratory at Cornell University, the psychologists Stav Atir, Emily Rosenzweig, and I have been conducting research that resembles Jimmy Kimmel's bit, albeit in a more restrained and controlled manner. In our studies, we ask survey respondents whether they are familiar with specific technical concepts drawn from domains such as physics, biology, politics, and geography. A notable proportion of respondents profess familiarity with genuine terms like centripetal force and photon.
However, respondents also profess familiarity with fictitious concepts such as the plates of parallax, ultra-lipid, and cholarine. In one study, approximately 90 percent of respondents claimed knowledge of at least one of the nine fictitious concepts presented in the survey. Remarkably, respondents who reported greater expertise in a given domain also claimed greater familiarity with the meaningless concepts associated with it.
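To make the design of such a study concrete, here is a minimal sketch, in Python, of how an overclaiming survey along these lines might be scored. The real and fictitious terms are the ones mentioned above; the respondents and their answers are invented purely for illustration, and none of this is the researchers' actual material or analysis.

REAL_TERMS = {"centripetal force", "photon"}
FAKE_TERMS = {"plates of parallax", "ultra-lipid", "cholarine"}

# Each hypothetical respondent reports a self-rated expertise (1-7) and the set
# of terms they claim to be familiar with.
respondents = [
    {"expertise": 2, "claims": {"photon"}},
    {"expertise": 5, "claims": {"photon", "centripetal force", "ultra-lipid"}},
    {"expertise": 7, "claims": {"photon", "centripetal force", "cholarine", "plates of parallax"}},
]

def overclaiming_rate(claims):
    # Fraction of the fictitious terms the respondent claims to know.
    return len(claims & FAKE_TERMS) / len(FAKE_TERMS)

for r in respondents:
    print(f"self-rated expertise = {r['expertise']}, overclaiming rate = {overclaiming_rate(r['claims']):.2f}")

Scored this way, the pattern described above shows up as higher self-rated expertise traveling together with a higher overclaiming rate, that is, a greater willingness to claim familiarity with terms that do not exist.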

It is striking to watch people who profess expertise in political matters claim knowledge both of Susan Rice, the national security adviser to President Barack Obama, and of Michael Merrington, a pleasant-sounding but meaningless series of syllables. And yet the phenomenon is not entirely unexpected. For over two decades, my research has focused on metacognition, the cognitive processes involved in evaluating and regulating our own knowledge, reasoning, and learning. That research consistently reveals a sobering reality: despite our self-perceived expertise, our understanding is often limited. The problem is not merely ignorance; our cognitive processes themselves obscure the line between what we know and what we don't. Ideally, we would see that line clearly, but achieving this has proven to be a formidable challenge. While our knowledge is often discernible to us, the vast expanse of our ignorance remains largely invisible, and we routinely fail to appreciate how pervasive and extensive it is.

In 1999, my then-graduate student Justin Kruger and I published a paper in the Journal of Personality and Social Psychology documenting how, in many areas of life, incompetent individuals often fail to recognize (or, more accurately, cannot recognize) their own incompetence, a phenomenon that has come to be known as the Dunning-Kruger effect. This lack of self-insight is a logical consequence of the human condition: for poor performers to acknowledge their inadequacies would require them to possess the very expertise they lack. To know how well or poorly one handles grammatical rules, for instance, one must have a solid understanding of those rules, which is precisely what the incompetent do not have. Poor performers, and we all fall short in certain domains, fail to discern the flaws in their own thinking or the deficiencies in their responses.

It is noteworthy that, in many instances, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent often exhibit an unwarranted confidence, fueled by something that feels to them like knowledge. This is not merely a theoretical construct; it has been substantiated by a comprehensive array of studies conducted by myself and numerous colleagues. These studies demonstrate that people with limited proficiency in a given cognitive, technical, or social domain frequently overestimate their aptitude and performance, an effect observed in domains such as grammar, emotional intelligence, logical reasoning, firearm care and safety, debating, and financial knowledge. College students who hand in exams that will earn them grades of D or F often believe their work deserves significantly higher marks. Low-performing chess players, bridge players, and medical students, as well as elderly individuals applying to renew a driver's license, likewise overestimate their competence by a substantial margin.

On occasion, this tendency plays out on a broader historical stage. A notable example is the 2008 financial crisis, triggered by the collapse of a massive housing bubble that was fueled by the actions of financiers and by the lack of financial literacy among consumers. Recent research indicates that a significant proportion of Americans exhibit a form of financial ignorance accompanied by inflated self-confidence. In 2012, the National Financial Capability Study, conducted by the Financial Industry Regulatory Authority (with the U.S. Treasury), asked approximately 25,000 respondents to rate their own financial knowledge and then gave them a test of financial literacy. The roughly 800 respondents who said they had filed for bankruptcy within the previous two years performed poorly on the test, ranking on average in the 37th percentile, yet they expressed higher confidence in their financial knowledge than the other participants did. The discrepancy between self-perception and reality was statistically significant, with 23 percent of recently bankrupt respondents giving themselves the highest possible self-rating, compared with only 13 percent of the rest of the sample.

Part of this discrepancy may reflect a tendency among bankrupt respondents, like Jimmy Kimmel's interviewees, to avoid admitting what they do not know. Notably, when confronted with a question, respondents who had experienced bankruptcy were 67 percent more likely than their peers to endorse false information. In other words, despite their self-perceived financial literacy, they tended to believe in the accuracy of knowledge they did not actually have.

Because it is so easy to judge the ignorance of others, it may be tempting to assume that none of this applies to oneself. But the problem of unrecognized ignorance is a universal challenge. Over the years, I have become convinced of one key, overarching fact about the ignorant mind: it should not be regarded as uninformed, but rather as misinformed.

An "ignorant mind" is not, in fact, a pristine vessel devoid of content; rather, it is a receptacle filled with the detritus of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that, regrettably, bear the appearance and feel of useful and accurate knowledge.This detritus is an unfortunate by-product of one of our greatest strengths as a species: we are unbridled pattern recognizers and profligate theorizers. While these cognitive abilities are often sufficient to navigate daily life or procreate, they can also, in certain contexts, result in situations that are potentially embarrassing, unfortunate, or even dangerous. This is particularly salient in technologically advanced, complex democratic societies that, on occasion, ascribe immense destructive power to misguided popular beliefs (e.g., the financial crisis; the Iraq war). As humorist Josh Billings insightfully observed, "It's not what you don't know that gets you into trouble. It's what you know for sure that just isn't so." (It is noteworthy that many individuals erroneously believe that this quote was first uttered by Mark Twain or Will Rogers, when in fact, it was not.)

Given our inherent cognitive biases and the way we learn from our environment, we are all prone to believing misinformation. However, by gaining a deeper understanding of how our complex cognitive processes work, we can better navigate towards a more objective understanding of the truth, both as individuals and as a society.

BORN WRONG

Some of our earliest intuitions about the world are ingrained from childhood. By the age of two, infants demonstrate a grasp of spatial reasoning, recognizing that two objects cannot occupy the same physical space simultaneously. They also develop an understanding of the continuity of existence, perceiving that objects persist even when out of sight, and of gravity, recognizing that unsupported objects fall. Furthermore, they exhibit an understanding of autonomy, recognizing that people can move around on their own but that inanimate objects, such as a computer, cannot. However, not all of our earliest intuitions are so well founded.

Very young children also carry misbeliefs that they will harbor, to some degree, for the rest of their lives. Their thinking, for example, is marked by a strong tendency to falsely ascribe intentions, functions, and purposes to organisms. In a child's mind, the most important biological aspect of a living thing is the role it plays in the realm of all life. Asked why tigers exist, children will emphasize that they were "made for being in a zoo." Asked why trees produce oxygen, children will assert that they do so to allow animals to breathe.

Conventional biology and natural science education seeks to curb this inclination toward purpose-driven reasoning, but it persists as an innate tendency. Adults with limited formal education exhibit a comparable bias, and even seasoned scientists, when pressed for time, can fall back into purpose-driven errors. A study by Deborah Kelemen, a psychologist at Boston University, and her colleagues provides a compelling illustration. Eighty scientists from fields including geoscience, chemistry, and physics were asked to evaluate 100 statements about the "why" of natural phenomena as true or false. Among the explanations presented were false purpose-driven ones, such as "Moss forms around rocks in order to stop soil erosion" and "The Earth has an ozone layer in order to protect it from UV light." Participants either worked through the task at their own pace or were given a maximum of 3.2 seconds to respond to each item. Under time pressure, the scientists' endorsement of the false purpose-driven explanations rose from 15 percent to 29 percent, suggesting that haste lets the purpose-driven default reassert itself.

This phenomenon poses significant challenges for the teaching of evolutionary theory, a foundational concept in modern science. Even individuals with no formal training in the field often hold misconceptions about evolution, attributing to it a level of purpose and organization that simply does not exist. Asked to explain why cheetahs are so fast, many people say the species developed this trait because it needed to capture more prey, and then passed the trait down to its offspring. In this view, evolution is a strategic game played at the species level.

This perspective overlooks the critical role of individual variation and of competition among members of a species under environmental pressure. Individual cheetahs that can run faster catch more prey, live longer, and reproduce more successfully, while slower cheetahs struggle and die off, so the species gradually becomes faster over time. Evolution is the result of random differences and natural selection, not of deliberate choice or intention. Yet the notion of evolution as a deliberate process is deeply entrenched in popular belief, and while teaching people about evolution can move them from being uninformed to being well informed, in some cases it instead produces confident misinformation.

In 2014, Tony Yates and Edmund Marek published a study tracking the effect of high school biology classes on 536 Oklahoma high school students' understanding of evolutionary theory. The students were quizzed on their knowledge of evolution before taking introductory biology classes and then again just afterward. The results showed a notable increase in confidence and an improvement in understanding, as reflected in higher endorsement of accurate statements. But the study also revealed a concomitant rise in misconceptions. For instance, the percentage of students strongly agreeing with the true statement "Evolution cannot cause an organism's traits to change during its lifetime" rose from 17 percent to 20 percent after instruction, but the percentage strongly disagreeing also rose, from 16 percent to 19 percent. In response to the true statement "Variation among individuals is important for evolution to occur," instruction increased strong agreement from 11 percent to 22 percent, but strong disagreement also rose, from 9 percent to 12 percent. Notably, the only response that uniformly decreased after instruction was "I don't know."

Students' difficulties extend beyond evolution. Conventional educational practice has repeatedly proven ineffective at eradicating a number of deeply entrenched misconceptions, among them the belief that vision is made possible only because the eye emits some form of energy or substance into the environment, mistaken ideas about the trajectories of falling and thrown objects, and the notion that light and heat act under the same laws as material substances. Worse, education often appears to instill confidence in these misconceptions.

MISAPPLIED RULES

Consider a curved tube lying horizontally on a table. A ball is shot into one end and exits the other, and the question is which path it takes once it leaves the tube.

In a 2013 study of intuitive physics, Elanor Williams, Justin Kruger, and I presented subjects with several variations of this setup and asked them to identify the trajectory the ball would follow after passing through each tube, choosing among paths labeled A, B, and C. Some participants achieved perfect scores and were highly confident in their responses. Others did far worse, yet their confidence remained notably high. The striking result emerged when we examined those who performed worst on the quiz: contrary to expectations, their confidence did not decline; it was comparable to that of the top performers. In fact, this study yielded the most dramatic manifestation of the Dunning-Kruger effect we have observed to date: when we compared the confidence of those who answered every question correctly with that of those who answered none correctly, it was difficult to tell the two groups apart.

The underlying reason is that both groups believed they understood a rigorous, consistent rule governing the ball's trajectory. One group knew the correct Newtonian principle: the ball continues in the direction it was traveling the instant it left the tube, Path B. Once freed from the constraints of the tube, it proceeds in a straight line. Those who answered zero percent of the items correctly typically chose Path A. Their rule, in essence, was that the tube imparts a curving impetus to the ball, which the ball continues to follow after it exits. This answer is demonstrably wrong, yet it is endorsed by a considerable proportion of people. Nor is it unique to this experiment: in the year 1500, most educated people interested in physics would have agreed that the ball follows Path A, a view endorsed by figures as eminent as Leonardo da Vinci and the French philosopher Jean Buridan. The answer is not without its own logic. A theory of curved impetus can explain, for instance, why wheels keep rotating after the force that set them spinning is removed, or why the planets orbit the sun in regular, predictable paths. Transferring that framework to the case of the tube is a straightforward extension.

This study points to a broader source of misbeliefs in human cognition, beyond the intuitions we are born with: we carry knowledge from settings where it is appropriate into settings where it is not. As Pauline Kim, a professor at Washington University Law School, observes, people often make legal inferences from their understanding of informal social norms, a practice that frequently leads them to misjudge their legal rights, particularly in employment law, where they tend to overestimate their entitlements. In 1997, Kim presented a series of morally abhorrent workplace scenarios to 300 residents of Buffalo, New York. In one, an employee was fired for reporting that a co-worker had been stealing from the company. These scenarios were nonetheless legal under the state's "at-will" employment regime. A significant proportion of the participants, from 80 to 90 percent, incorrectly identified each scenario as illegal, underscoring how poorly they grasped the extent of managerial prerogative in terminating personnel. The implications are significant: legal scholarship has historically defended "at-will" employment on the premise that employees tacitly consent to these rules rather than demanding more favorable terms, yet Kim's findings reveal that employees often do not understand what they are consenting to.

A similar phenomenon arises in medicine, where doctors often struggle to correct patient misconceptions that stand in the way of treating the underlying condition. Elderly patients, for instance, often decline to do the physical activity recommended to alleviate their pain, despite its well-documented efficacy, because they associate physical discomfort and soreness with injury and further degeneration. Research by the behavioral economist Sendhil Mullainathan has found that in India, many mothers withhold water from infants suffering from diarrhea, perceiving their children as leaky containers rather than as creatures in urgent need of hydration.

MOTIVATED REASONING

A significant number of our most entrenched misperceptions do not stem from primitive, childlike intuitions or careless category errors. Rather, they are deeply rooted in the values and philosophies that define our individual identities. These foundational beliefs, which encompass narratives about the self and ideas about the social order, are treated as inviolable, because challenging them would challenge our sense of self. Consequently, other opinions must bend to these beliefs, and any information that does not align with them is either distorted or forgotten.

A pervasive sacrosanct belief, for instance, might be expressed as follows: "I am a capable, good, and caring person." Any information that contradicts this premise is likely to meet significant mental resistance. Political and ideological beliefs often become sacrosanct in the same way. According to the anthropological theory of cultural cognition, people tend to sort themselves ideologically into cultural worldviews that diverge along a few key axes: individualism versus communitarianism, and hierarchism versus egalitarianism. The theory holds that we process information in ways that not only reflect these organizing principles but also reinforce them. These ideological anchor points can have a profound and wide-ranging impact on what people believe, and even on what they "know" to be true.

The observation that people bend facts, logic, and knowledge to fit their worldview is not a new one; politicians routinely accuse their opponents of "motivated reasoning." But the extent of the distortion can be striking. In ongoing research conducted with the political scientist Peter Enns, our laboratory has found that people's political inclinations can shape how they receive logical or factual information so strongly that they end up contradicting their own other beliefs. In a survey of approximately 500 Americans conducted in late 2010, over 25 percent of liberals (but only 6 percent of conservatives) endorsed both the statement "President Obama's policies have already created a strong revival in the economy" and the statement "Statutes and regulations enacted by the previous Republican presidential administration have made a strong economic recovery impossible." Both statements flatter liberal sensibilities, but how can Obama deserve credit for a recovery that Republican policies have supposedly made impossible? Conservatives were no better: 27 percent of them (compared with a mere 10 percent of liberals) agreed both that "President Obama's rhetorical skills are elegant but insufficient to influence major international issues" and that "President Obama has not done enough to utilize his rhetorical skills to effect regime change in Iraq." If Obama's rhetorical skill is insufficient, why fault him for failing to use it to sway the Iraqi government?

Sacrosanct ideological commitments can also prompt us to form hasty, fervent opinions on subjects about which we know very little, subjects that on their face bear little relation to ideology. A salient example is the burgeoning field of nanotechnology. Nanotechnology, broadly defined, encompasses the fabrication of products at the atomic or molecular scale, with applications in medicine, energy production, biomaterials, and electronics. Like any nascent technology, it carries the promise of significant benefits, such as antibacterial food containers, and the risk of substantial drawbacks, such as nano-surveillance technology.

In 2006, Dan Kahan, a professor at Yale Law School, and his colleagues conducted a study of public perceptions of nanotechnology. Consistent with earlier surveys, they found that the public knew little about the field. That ignorance, however, did not stop people from offering opinions about nanotechnology's risks and benefits. When Kahan surveyed uninformed respondents, their views were all over the map. But when another group of respondents was given a concise, balanced overview of the technology's promises and perils, the influence of deeply held beliefs became evident. With just two paragraphs of accurate information in hand, respondents' views on nanotechnology shifted to align with their prevailing worldviews. Individualists and hierarchists took a more favorable view of the technology, while egalitarians and collectivists came to see it as posing a greater threat to society than benefit.

The divergence traces back to the respondents' underlying beliefs. Hierarchists, who hold a favorable view of authority, are inclined to trust the industry and scientific leaders who extol nanotechnology's as-yet unproven potential. Egalitarians worry that the technology will benefit a select few and deepen existing socioeconomic disparities, and collectivists fear that nanotechnology firms will neglect the environmental and public health consequences of their work. Kahan's conclusion: when people encounter even a small amount of information about a complex issue, they tend to polarize, reinforcing existing biases rather than converging on a shared, neutral understanding of the facts.

One might assume that forming opinions about an esoteric technology would be a daunting task. Evaluating whether nanotechnology will prove a boon to humanity or a disaster requires a working knowledge of materials science, engineering, industry structure, regulatory issues, organic chemistry, surface science, semiconductor physics, microfabrication, and molecular biology. In everyday decision-making, however, people lean on cognitive frameworks, whether ideological reflexes, misapplied theories, or intuitive responses, to address technical, political, and social issues far outside their expertise, just as they did with Tonya and the Hardings.

It is important to be able to spot the fallacies in such reasoning, because policies and decisions founded on a dearth of real knowledge tend to be short-sighted. This prompts the question: How can policymakers, educators, and the general public cut through the labyrinth of counterfeit knowledge, their own and that of their peers, that hinders our capacity for informed decision-making?

The prevailing conception of ignorance as a mere absence of knowledge leads naturally to the belief that education is its antidote. But even when education is administered competently, it can engender illusory confidence. A particularly alarming illustration is the tendency of driver's education courses, especially those designed to teach emergency maneuvers, to actually increase accident rates. Training people to handle conditions such as snow and ice instills a sense of expertise that outlasts the skills themselves, which decay rapidly once the training ends. The result is drivers with a false sense of confidence and a diminished ability to cope with winter conditions.

In such cases, the Swedish researcher Nils Petter Gregersen has proposed, the wiser course may be to deliberately avoid teaching such skills. Rather than training drivers to navigate icy conditions, educational programs should simply convey the dangers of winter driving and deter inexperienced drivers from venturing out in it.

In most cases, however, shielding people from their own misconceptions by steering them away from risk altogether is simply not feasible. Persuading people to relinquish their misguided beliefs is a far more arduous, and more consequential, undertaking. Fortunately, a science is emerging, led by scholars such as Stephan Lewandowsky at the University of Bristol and Ullrich Ecker of the University of Western Australia, that may help.

In the classroom, effective techniques for dislodging misconceptions often draw on variations of the Socratic method. To confront a prevalent misconception, an instructor can open a lesson by surfacing it, along with the implausible conclusions it leads to. For example, an instructor might begin a unit on evolution by raising the purpose-driven evolutionary fallacy and prompting the class to question it. (How do species just magically know what advantages they should develop to confer on their offspring? How do they manage to decide to work as a group?) This approach makes the correct theory more memorable when it is finally revealed, and it promotes broader gains in analytical skill.

Then there is the problem of pervasive misinformation in settings that are difficult to regulate, such as the Internet and the news media. In these environments, it is best to avoid repeating the misinformation itself. Asserting that Barack Obama is not a Muslim, for instance, fails to change many people's minds, because they tend to remember the statement while forgetting the crucial qualifier "not." A more comprehensive approach to dispelling a misbelief involves not only removing the misinformation but also blunting its emotional pull and supplying an alternative belief to take its place. When repeating the misinformation is unavoidable, researchers have found it effective to give clear and repeated warnings, emphasizing again and again that the belief in question is false.

The misconceptions hardest to dispel are those that reflect deeply held, sacrosanct beliefs, and such notions are indeed often resistant to change. Questioning a deeply held belief invites a wholesale examination of one's self-concept, which prompts people to dig in and defend their cherished convictions. This resistance can be softened, however, by giving people opportunities to shore up their identity in other domains. Researchers have found that asking people to describe aspects of themselves that make them proud, or to affirm values they hold dear, makes an incoming threat to their beliefs seem less menacing.

For instance, in a study by Geoffrey Cohen, David Sherman, and colleagues, self-described American patriots were more receptive to the claims of a report critical of U.S. foreign policy if they had first written an essay about an important aspect of their identity, such as their creativity, sense of humor, or family, and what that aspect meant to them. In a similar study, pro-choice college students deliberating over the parameters of federal abortion policy proved more willing to yield to restrictions on abortion after writing self-affirming essays.

Researchers have also found that deeply held beliefs can be enlisted to persuade people to reconsider facts with less bias. Conservatives, for example, are less likely than liberals to endorse policies that protect the environment, but they place a high priority on "purity" in thought, action, and reality. Research by Matthew Feinberg and Robb Willer of Stanford University suggests that framing environmental protection as a way to preserve the purity of the Earth makes conservatives more supportive of such policies. In a similar vein, liberals can be persuaded to endorse increased military spending if the policy is first linked to progressive values such as fairness and equity, for instance by highlighting the military's ability to offer recruits a way out of poverty or its equitable application of promotion standards.

The crux of the issue, however, lies in recognizing our own ignorance and misbeliefs. Imagine a small group charged with making a decision on a matter of significance. Behavioral scientists often recommend appointing a "devil's advocate," someone whose role is to challenge and critique the group's reasoning. This can prolong discussions, irritate the group, and make the deliberation uncomfortable, but the decisions that emerge tend to be more precise and better grounded than they would otherwise have been.

For individuals, a similar approach means playing devil's advocate with oneself: considering how one's conclusions might be flawed, questioning one's assumptions, and asking how outcomes might diverge from what one expects. The psychologist Charles Lord calls this "considering the opposite," and it improves the quality of our judgments: envision a future in which your initial judgment has turned out to be wrong, and then work out the most probable chain of events that led to the misjudgment. Seeking the counsel of others also helps to identify and dispel erroneous beliefs; people committed to intellectual growth can often correct significant misconceptions simply by talking them through with peers or experts.

CIVICS FOR ENLIGHTENED DUMMIES

In an edition of "Lie Witness News" last January, Jimmy Kimmel's cameras were out on the streets of Los Angeles the day before President Barack Obama was scheduled to give his annual State of the Union address. Interviewees were asked about John Boehner's nap during the speech and about the moment at the end when Obama faked a heart attack. Reviews of the fictitious speech ranged from "awesome" to "powerful" to "all right." As usual, the producers had no trouble finding people willing to offer opinions on events about which they could have had no knowledge.

American comedians such as Kimmel and Jay Leno have a long history of lampooning their countrymen's ignorance, and American scolds have a long history of lamenting it. On a semi-regular basis, and for at least the past century, earnest groups of citizens have conducted studies of civic literacy, asking about the nation's history and governance, and presented the results as evidence of a profound crisis of cultural decline and decay. In 1943, a survey of 7,000 college freshmen found that only 6 percent could identify the original 13 colonies, and it turned up basic confusion even about figures such as Abraham Lincoln; The New York Times despaired over the nation's "appallingly ignorant" youth. In 2002, after a nationwide assessment of fourth-, eighth-, and 12th-grade students yielded similar results, The Weekly Standard pronounced America's students "dumb as rocks." Because it is so easy to pass judgment on the intellectual shortcomings of others, it may be all too easy to believe that none of this applies to oneself. But the issue of unrecognized ignorance affects us all.

In 2008, the Intercollegiate Studies Institute surveyed 2,508 Americans and found that 20 percent of respondents believed the electoral college "trains those aspiring for higher political office" or "was established to supervise the first televised presidential debates." The finding prompted renewed concern about the decline of civic literacy. Yet, as the Stanford historian Sam Wineburg has noted, those who bemoan America's deteriorating knowledge of its own history are frequently oblivious to how many people before them voiced the same complaint. Seen in historical perspective, there has been no precipitous fall from some benchmark of American excellence, only a consistent level of fumbling with factual information.

The inclination to worry about these subpar performances is understandable, particularly when the subject is civics. After a 2001 assessment, then-Secretary of Education Rod Paige lamented that the questions posed to students "involve the most fundamental concepts of our democracy, our growth as a nation, and our role in the world." Implicit in such laments is an embarrassed question: What would the Founding Fathers make of these seemingly uninformed descendants? But I would argue that we already have a good idea how the Founders would view the matter. As children of the Enlightenment, they placed great value on recognizing the limits of one's knowledge, often prizing that understanding over the rote memorization of facts. Criticizing the political journalism of his era, Thomas Jefferson observed that people who read no newspapers were often better informed than those who read them habitually, arguing that a person who knows nothing is closer to the truth than one whose mind is filled with falsehoods and errors. Benjamin Franklin made the same point: "a learned blockhead is a greater blockhead than an ignorant one." And a statement frequently attributed to Franklin reinforces it: "the doorstep to the temple of wisdom is a knowledge of our own ignorance."

The inherent capabilities and life experiences of the human brain can indeed accumulate substantial knowledge, but they do not, by themselves, reveal the vast expanse of our ignorance. Wisdom may therefore consist not so much of facts and formulas as of the ability to recognize when one has reached a limit. Stumbling through all one's cognitive clutter just to arrive at a true "I don't know" may not be a failure so much as an enviable success, a crucial signpost showing that we are traveling in the right direction toward the truth.

Time Travelling Human Brain

The Human Brain as a Temporal Entity

The tendency to contemplate the future has long been a hallmark of the human experience. Could artificial intelligence emerge as the ultimate temporal apparatus?

In 1991, while pursuing graduate studies at Washington University in St. Louis, Randy Buckner made one of the more important discoveries of modern brain science, a finding so counterintuitive that it took years for its full implications to be recognized.

Buckner's laboratory, under the direction of the neuroscientists Marcus Raichle and Steven Petersen, was investigating whether PET scanning technology could reveal the neural underpinnings of language and memory. The transformative potential of PET imaging lay in its ability to noninvasively measure blood flow to distinct regions of the brain, giving researchers a previously unattainable level of detail about neural activity. In Buckner's study, participants were asked to recall words from a memorized list, and the researchers sought to identify the brain regions engaged in memory by tracking which areas consumed the most energy during the task. But there was a caveat. Different regions of the brain vary significantly in how much energy they consume regardless of the task being performed, so a baseline comparison, a control, was needed to discern which regions a particular task actually activates.
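The logic of that control can be illustrated with a toy calculation. The sketch below, in Python, is not the actual PET analysis pipeline; the region names and numbers are invented simply to show how a task-minus-rest comparison flags regions, and what it would mean if some of the differences came out negative.

import numpy as np

regions = ["visual cortex", "motor cortex", "medial prefrontal", "posterior cingulate"]

rest_activity = np.array([1.0, 1.1, 1.8, 1.9])  # measured activity while "doing nothing"
task_activity = np.array([1.6, 1.4, 1.2, 1.3])  # measured activity during the memory task

contrast = task_activity - rest_activity  # positive values: more active during the task

for name, value in zip(regions, contrast):
    label = "engaged by the task" if value > 0 else "MORE active at rest"
    print(f"{name:>20}: {value:+.2f}  ({label})")

Positive differences are what the method was designed to find. Negative differences, regions that burn more energy during the supposed baseline than during the task, are the kind of result that, as described next, kept turning up in the scanner.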

Initially, this appeared to be a straightforward matter: subjects would lie in the PET scanner and be asked to do nothing, a condition that came to be called the "resting state," and would then perform the targeted task. By comparing images of the resting brain and the active brain, the researchers could identify the regions that consumed more energy during the task. But Buckner's experiments produced a series of unexpected findings. "We began to install scanners capable of measuring brain activity," Buckner recalls, "and, to our surprise, Mother Nature herself seemed to object." When subjects were instructed to rest, the PET scans revealed a notable surge of activity in specific regions; in those areas, the resting state was more active than the task state.

This unusual surge of activity during rest showed up in numerous other studies of the era that used a comparable control structure. To this first cohort of scientists using PET scans, the active rest state was perceived "as a confound, as troublesome," according to Buckner. A confound, in scientific research, is an errant variable that undermines a rigorous control study, introducing noise that obscures the signal the researchers are after. Buckner and his colleagues acknowledged the peculiar phenomenon in a 1993 paper, albeit somewhat peripherally. Yet this early recognition of the anomalous activity in the "resting state" would eventually help precipitate a transformative shift in our understanding of human intelligence.

Not long after Buckner's paper was published, a brain scientist at the University of Iowa named Nancy Andreasen decided to invert the task/control structure that had come to dominate early neuroimaging studies. Rather than treating the "troublesome" resting state as a nuisance, Andreasen and her team made it the focal point of their study. Andreasen's background outside neuroscience may have helped her see value lurking in the rest state where her peers saw only trouble. She holds a Ph.D. in Renaissance literature, published a scholarly appraisal of John Donne's "conservative revolutionary" poetics, and, after moving into brain imaging in her 30s, began exploring the enigma of creativity. In later writings she reflected on the "control task" of rest, noting how little it did to illuminate the complexities of the human mind. Most investigators, she observed, had made the convenient assumption that the brain would be blank or neutral during "rest," a notion that stemmed from introspection. Her own brain, she argued, was often at its most active when she stretched out on a bed or sofa and closed her eyes.

Andreasen's study, eventually published in The American Journal of Psychiatry in 1995, included a subtle dig at the existing community for demoting this state to a baseline control: she designated it the REST state, an acronym for Random Episodic Silent Thought. The surge of activity that the PET scans revealed was not a confound, Andreasen contended; it was a clue. In our resting states, we do not rest.
Left to its own devices, the human brain resorts to one of its most emblematic tricks, perhaps one that helped make us human in the first place. It time-travels.

To illustrate the phenomenon, imagine that your workday is winding down and you are taking the dog for a walk before heading home. As you approach the house, your mind drifts to a significant meeting scheduled for next week. You picture the meeting going well, feel a wash of anticipatory satisfaction, and allow yourself to hope that it might serve as grounds for asking your employer for a raise, not immediately, but perhaps in the months that follow. That raise, in turn, might let you and your spouse consider buying a house in a more desirable neighborhood with a better school district.

Then your thoughts are interrupted by a concern that has been preoccupying you lately: a colleague, remarkably intelligent but emotionally erratic, recently had an outburst in a meeting because he felt disrespected by a co-worker. He seems to lack decorum and the capacity to regulate his emotions. As you walk, you recall the physical sense of unease in the room during that outburst over a trivial offense, and you imagine a meeting six months from now with a comparable eruption, only this time in front of your boss. A wave of stress washes over you. You begin to wonder whether the colleague is right for the position at all, which calls up the memory of an employee you had to let go five years ago. The awkward intensity of that conversation comes back, and you find yourself imagining how a similar conversation with the current colleague would unfold. As your mind walks through the scenario, a sense of physical dread creeps in.

In a matter of minutes, your mind has shuttled between past and future: forward to next week's meeting, further forward to a raise and a new house, back to a recent outburst, further back to a firing five years ago, and forward again to an imagined confrontation six months ahead. You have constructed a series of cause-and-effect chains, moving seamlessly between real events and imagined ones, and as you traverse these timelines, your brain and body's emotional system generates distinct responses to each, actual and hypothetical alike. It is a master class in temporal gymnastics. During these stretches of unstructured thinking, our minds flit between past and future like a film editor scrubbing through a movie's frames. Subjectively, the succession of thoughts demands no great effort; the scenarios simply flow out of the mind.

Because these imagined futures come to us so easily, we have long underestimated the significance of the skill. The PET scanner allowed us to appreciate, for the first time, just how complex this kind of cognitive time travel actually is. In her 1995 paper, Nancy Andreasen made two key observations whose significance would only grow over the following decades. In interviews with her subjects, she found that they described their mental activity during the REST state as a kind of effortless shifting back and forth in time: they reported thinking freely about a variety of things, especially events of the past few days or the activities of the coming days. She also noted that this process engaged the association cortices, regions that are especially pronounced in Homo sapiens compared with other primates and that typically become fully operational late in the brain's development, during adolescence and early adulthood. When the brain/mind operates in a free and unencumbered manner, she observed, it recruits the most human and complex parts of the brain.

In the years following Andreasen's trailblazing work, through the late 1990s and early 2000s, a succession of studies and papers mapped out the network of brain activity she had first identified. In 2001, Marcus Raichle, Buckner's mentor at Washington University, coined a new term for the phenomenon: the "default-mode network," or simply "the default network." The name stuck, and Google Scholar now catalogs thousands of academic studies investigating the default network. Martin Seligman, a psychologist at the University of Pennsylvania, has called it, in his view, the most significant discovery of cognitive neuroscience. The seemingly trivial activity of mind-wandering is now believed to play a central role in the brain's "deep learning": the mind's sifting through past experiences, imagining future prospects, and assessing them with emotional judgments, that flash of shame, pride, or anxiety that each scenario elicits.

A growing number of scholars, from fields as diverse as neuroscience, philosophy, and computer science, contend that this capacity for cognitive time travel, illuminated by the discovery of the default network, may be the defining attribute of human intelligence. As Seligman and Tierney put it in a Times Op-Ed, "What best distinguishes our species is an ability that scientists are only beginning to acknowledge: We contemplate the future." They go on: "A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise."

Whether nonhuman animals possess any genuine conception of the future remains an open question. Some organisms engage in behaviors with long-term consequences, such as a squirrel storing nuts for the winter, but these behaviors are instinctive. The most recent studies of animal cognition suggest that certain primates and birds may deliberately prepare for events in the near term, but making decisions based on prospects even months or years away appears to be beyond even our closest primate relatives. The Homo prospectus theory holds that these limited time-traveling abilities explain a significant part of the technological gulf that separates humans from other species on Earth. Conceiving of a future in which a tool might be useful is a crucial precondition for inventing one. The catalyst for the human mind's inventiveness may not have been the usual suspects, our opposable thumbs or our aptitude for language, but rather the ability to imagine a future unconstrained by the present.

The capacity for prospection has been reflected in, and amplified by, many of the social and scientific revolutions that have shaped human history. The advent of agriculture required the ability to anticipate seasonal changes and to visualize the long-term benefits of domesticating crops. Banking and credit systems hinge on the capacity to weigh future gains against present-tense values. The development of vaccines relied on people willing to inoculate themselves with potential pathogens in exchange for lifelong protection against disease. Human beings are endowed with a unique faculty for imagining the future, and that faculty has been refined and cultivated since the dawn of civilization. Today, much of that refinement is expected to come from machine-learning algorithms, which already outperform us in specific forecasting applications. As artificial intelligence (A.I.) moves toward augmenting human capabilities, a compelling question emerges: How will the future be different if we become substantially better at predicting it?

As James Gleick observes in his 2017 book Time Travel: A History, time travel has deep roots in ancient mythologies, which are full of deities and fantastical creatures, yet it was not a concept within reach of ancient minds. Ancient civilizations imagined immortality and rebirth, but the idea of technologically manipulating time was not available to them. Time travel, in that sense, is a modern fantasy: the notion of using technology to traverse time as easily as space was first conceived by H. G. Wells at the close of the nineteenth century, in his seminal work of science fiction, The Time Machine. The idea of machines as soothsayers, however, goes back to antiquity.

In 1900, sponge divers stranded after a storm in the Mediterranean discovered an underwater statue on the shoals of the Greek island Antikythera. It turned out to mark the wreck of a ship more than 2,000 years old. During the subsequent salvage operation, divers recovered the remnants of a puzzling clocklike contraption with precision-cut gears, annotated with cryptic symbols that corrosion had rendered nearly illegible. For years the device sat unnoticed in a museum drawer, until the British historian Derek de Solla Price rediscovered it in the early 1950s and began the arduous work of reconstruction, a task scholars have continued into the 21st century. We now know that the device could model the behavior of the sun, the moon, and five of the planets, and that it could predict solar and lunar eclipses with reasonable accuracy, even eclipses that would not occur for decades.

The Antikythera mechanism, as it is now known, has been called an "ancient computer," but the comparison is somewhat misleading. Its technology is closer to that of a clock than of a programmable computer. Fundamentally, it was a prediction machine. A clock tells us what time it is now; the mechanism was designed to tell what would happen later. The makers' meticulous effort to forecast eclipses is especially noteworthy. Although some ancient societies believed that eclipses damaged crops, the ability to predict them would have had little practical benefit. The sense of wonder, and the perceived power, that such predictions conferred would have been significant, however. Imagine standing before an audience and declaring that tomorrow the sun will turn into a fiery black orb for more than a minute, and then imagine the astonishment when the prophecy comes true.
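The principle behind such eclipse prediction can be sketched in a few lines of code. Eclipses recur on the saros cycle of roughly 6,585.3 days, and projecting forward from one observed eclipse by whole cycles gives approximate dates for its successors. This is only an illustration of the idea, not a reconstruction of the device's gearing, and the seed date below is simply a recent, well-known solar eclipse chosen for the example.

from datetime import datetime, timedelta

SAROS_DAYS = 6585.32  # one saros cycle: about 18 years and 11 days

def project_eclipses(seed_eclipse, count=3):
    # Step forward from a known eclipse by whole saros cycles.
    return [seed_eclipse + timedelta(days=SAROS_DAYS * k) for k in range(1, count + 1)]

seed = datetime(2017, 8, 21)  # the total solar eclipse visible across North America
for d in project_eclipses(seed):
    print(d.strftime("%Y-%m-%d"))

Because the cycle includes roughly a third of a day, each recurrence lands about 120 degrees of longitude to the west of the last, so a simple cycle says far more about when an eclipse will occur than about where it will be visible.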

The appetite for prediction machines has only grown since the era of the ancient Greeks. The original clockwork mechanisms addressed deterministic futures, such as the movements of celestial bodies; contemporary time-traveling instruments predict probabilities and likelihoods, letting us envision possible futures for far more intricate systems. In the late 1600s, advances in public health records and in mathematical statistics enabled the British astronomer Edmund Halley and the Dutch scientist Christiaan Huygens to make the first rigorous estimates of average life expectancy. Insurance companies proliferated on the back of this new ability to price future risk, at first concentrating on the commercial hazards of nascent shipping ventures and, over time, extending coverage to fire, flood, disease, and a widening range of other future hazards. In the 20th century, the advent of randomized, controlled trials made it possible to predict the future effects of medical interventions and to distinguish genuine cures from fraudulent remedies. In the digital era, spreadsheet software evolved from an accounting tool for recording past business activity into a forecasting instrument, letting users click through alternative financial scenarios much as the mind wanders through possible futures.

Cognitive time travel has been shaped not only by science and technology but also by the invention of storytelling itself, which can be regarded as an augmentation of the default network's innate temporal navigation. Stories let us imagine worlds beyond our own and release us from the constraints of linear time. Analepsis and prolepsis, the literary devices of the flashback and the flash-forward, are among the oldest in the canon, found in ancient narratives such as the Odyssey and the Arabian Nights. Temporal manipulation has proliferated in science fiction since H. G. Wells's The Time Machine, but temporal shifts have also become a hallmark of contemporary mainstream storytelling, with popular media embracing complex temporal structures that would once have bewildered general audiences. The television series "Lost" deftly wove together past, present, and future timelines in an intricate, often enigmatic structure. The 2016 blockbuster "Arrival" employed a comparable scheme, skipping ahead to future events more than 50 times while suggesting that those events were unfolding in the past. The current hit series "This Is Us" has reinvented the family soap opera by structuring each episode as a series of time-jumps, sometimes spanning more than 50 years; the final five minutes of the Season 3 opener, which aired earlier this fall, jump back and forth seven times among 1974, 2018, and some unspecified future that appears to be about 2028.

These narrative developments suggest an intriguing possibility: that popular entertainment is training our minds to become more adept at cognitive time travel. If one were to borrow H. G. Wells's time machine, jump back to 1955, and ask typical viewers of "Gunsmoke" and "I Love Lucy" to watch "Arrival" or "Lost," they would find the temporal disorientation deeply disconcerting. In the past, even a simple flashback required additional explanatory measures to denote a temporal leap, as evidenced by the use of a rippling screen. Only experimental narratives were capable of challenging audiences with more intricate temporal schemes. Contemporary popular narratives, however, traverse their fictional timelines with the rapidity of the default network itself.

The intricate temporal structures depicted in popular narratives may be training our minds to grasp more sophisticated temporal concepts. Nevertheless, could novel technological advancements enhance our capabilities in a more direct manner? The concept of "smart drugs," which aim to enhance memory and other cognitive functions, has been a subject of considerable interest and discussion. However, if the Homo prospectus argument is valid, we should also be looking for breakthroughs that will enhance our predictive abilities. In a sense, these advances are already within our reach, albeit in the form of software rather than pharmaceuticals. Have you ever found yourself mentally considering various possibilities for an upcoming event, such as the chance of rain a week from now? That capacity owes much to the predictive power of weather and climate supercomputers, which run sophisticated models over vast sets of historical observations and simulated future atmospheric states, thereby providing increasingly accurate and localized forecasts. These forecasts are the first in human history to offer week-ahead predictions that are meaningfully better than random guesses, giving individuals a more informed understanding of the potential weather conditions a week from now. To illustrate this point, consider a hypothetical relocation to a desired neighborhood that is now within reach of financial accessibility. However, if the neighborhood is situated in a floodplain and there is a possibility of experiencing a major flood event in the coming decade due to a changing climate, the feasibility of this relocation is called into question. This contemplation is facilitated by the outputs of sophisticated climate models, which employ supercomputers to simulate the planet's distant past and future.

Accurate weather forecasting stands as a notable achievement of software-based time travel, as algorithms enable us to discern future trends in ways that were previously unattainable. This phenomenon is explored in a recent book by a team of economists from the University of Toronto, who describe such systems as "prediction machines." Machine-learning systems employ algorithms that are trained to generate highly accurate predictions of future events by examining vast repositories of data from past events. One algorithm might be trained to predict future mortgage defaults by analyzing thousands of home purchases and the financial profiles of the buyers, testing its hypotheses by tracking which of those buyers ultimately defaulted.
While not infallible, the predictions generated by such training are often comparable to those made in weather forecasting, offering a range of probabilities. A hypothetical scenario, such as purchasing a residence in a neighborhood with desirable educational institutions, could be further informed by incorporating software predictions. The algorithm might, for instance, caution that there is a 20 percent likelihood of a negative outcome for the purchase, such as the home losing value because of a market crash or a hurricane. Alternatively, an algorithm trained on a distinct data set might propose alternative neighborhoods where home values are also likely to increase.

These algorithms can assist in addressing a significant limitation of the default network: human beings are notoriously deficient in probabilistic thinking. The pioneering cognitive psychologist Amos Tversky humorously observed that, in the context of probability, humans possess three default settings: "gonna happen," "not gonna happen," and "maybe." Humans excel at conceptualizing hypothetical scenarios and assessing their potential emotional implications, yet discerning subtle differences in probability, such as between a 20 percent and a 40 percent likelihood, poses a significant challenge. Algorithms can assist in addressing this cognitive blind spot by providing a quantitative framework for evaluating probabilities. Additionally, machine-learning systems have the potential to greatly facilitate decision-making processes involving numerous options. Humans exhibit remarkable proficiency in concurrently constructing a small number of imagined futures, such as the two scenarios of accepting or rejecting a new employment opportunity. However, our cognitive abilities are constrained when the branching outcomes to be tracked grow numerous. In contrast, the prediction capabilities of artificial intelligence (AI) do not face this limitation, rendering them remarkably adept at assisting with a substantial subset of significant life decisions, particularly those with ample training data and a multitude of alternative futures for analysis.
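To make the notion of a prediction machine concrete, here is a minimal, hypothetical sketch: a model is fitted to the financial profiles of past buyers together with a record of who eventually defaulted, and it then returns a graded probability for a new buyer rather than a verdict. The feature names and numbers are invented purely for illustration, the sketch assumes the scikit-learn and NumPy libraries, and it does not describe any system mentioned in the book.

```python
# A toy "prediction machine": fit a model to past outcomes, output a probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a past buyer (hypothetical features):
# [loan_to_value_ratio, debt_to_income_ratio, years_at_current_job]
past_buyers = np.array([
    [0.95, 0.45, 1],
    [0.60, 0.20, 8],
    [0.90, 0.50, 2],
    [0.50, 0.15, 12],
    [0.85, 0.40, 3],
    [0.40, 0.10, 15],
])
defaulted = np.array([1, 0, 1, 0, 1, 0])  # 1 = later defaulted, 0 = did not

model = LogisticRegression().fit(past_buyers, defaulted)

# For a prospective buyer, the model returns a graded probability --
# the kind of "20 percent vs. 40 percent" distinction humans handle poorly.
prospective = np.array([[0.80, 0.35, 4]])
print(f"Estimated default risk: {model.predict_proba(prospective)[0, 1]:.0%}")
```

The point is only the shape of the output: a graded probability rather than the binary "gonna happen" or "not gonna happen" that human intuition tends to supply.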

The selection of a college, a decision that was not commonplace 200 years ago and is now made by a significant portion of the global population, falls squarely within the domain of machine learning. In the United States, there are over 5,000 colleges and universities, and a substantial proportion of them are not well suited to any specific candidate. However, irrespective of an individual's academic achievements or economic background, there are likely numerous institutions that could offer a stimulating educational experience. One can explore a select few of these institutions, heed the counsel of advisors, and consult the insights of experts, whether through online resources or comprehensive handbooks. An algorithm, however, would be examining a substantially more extensive array of options, encompassing data from millions of applications, college transcripts, dropout rates, and the entirety of information derivable from the social media presence of college students—a vast repository of data that encompasses virtually all aspects of college life. The algorithm would also be scanning a parallel data set that college advisors typically do not emphasize: successful career paths that do not require a college degree. From this training set, the system could generate numerous predictions of potentially suitable colleges, tailored to the applicant's self-reported criteria, such as long-term happiness, financial security, social impact, fame, and health.

It is important to note that this data would be vulnerable to misuse, including its sale to advertisers or theft by cybercriminals, and its arrival would likely prompt a multitude of op-eds expressing concern. By the best measures currently available, however, such a system would probably work. The prospect has been met with polarized responses, with some expressing strong support and others strong opposition; regardless of these divergent views, its implementation is imminent. In late 2017, the Crime Lab at the University of Chicago unveiled a new collaborative endeavor with the Chicago Police Department. The objective of this partnership is to develop a machine-learning-based "officer support system." The primary function of this system is to predict which officers are likely to experience an "adverse incident" while on duty. The algorithm sifts through a vast repository of data generated by every police officer in Chicago, including arrest reports, gun confiscations, public complaints, and supervisor reprimands, among others. The algorithm uses this archived data, coupled with actual cases of adverse incidents, such as the shooting of an unarmed citizen or other excessive uses of force, as a training set. This enables the algorithm to detect patterns of information that can predict future problems.

This predictive technology, reminiscent of the fictional Minority Report dystopia in which machines convict individuals of crimes before they have been committed, has raised concerns. However, the project lead, Jens Ludwig, has emphasized that the immediate consequence of this predictive system in Chicago will not be the imposition of criminal charges but rather the provision of additional support or counseling to officers, thereby averting potential crises. As Ludwig acknowledges, there is understandable trepidation regarding the prospect of artificial intelligence (AI) determining outcomes. However, he clarifies that this is not the intended function of the technology. Instead, he envisions it as a "decision-making aid," an algorithm designed to assist sergeants in prioritizing their duties.

Despite the rigor with which the Chicago Police Department (CPD) plans to implement this technology, it is crucial to address the broader implications. It appears unavoidable that some individuals will lose their jobs as a consequence of the predictive capabilities of machine-learning algorithms, a prospect that many find intuitively unsettling. Nevertheless, we are already making significant decisions about personnel, such as whom to hire, whom to fire, whom to heed, and whom to disregard, based on human judgments that are, at best, unreliable and, at worst, prejudiced. While the prospect of deriving such decisions from data-analytical algorithms may elicit feelings of unease, the prevailing practices of decision-making, which are often informed by intuitive biases, may be of greater concern. Regardless of one's personal response to this development, it is evident that the integration of machine-learning algorithms into decision-making processes is imminent. In the ensuing decade, a significant proportion of the population will rely on the forecasts of machine learning to guide critical life decisions, such as career transitions, financial planning, and hiring decisions.

These enhancements have the potential to represent a significant evolutionary step in the development of Homo prospectus, enabling us to gain enhanced insight into the future, with a more nuanced understanding of probability, compared to our current capabilities. However, even under this optimistic scenario, the influence of these novel algorithms will be substantial. This has prompted Ludwig and numerous other members of the artificial intelligence community to advocate for the development of open-source algorithms, akin to the open protocols of the original internet and World Wide Web. The application of predictive algorithms to significant personal or civic decisions will be challenging enough without the process being compromised or subtly redirected by the dictates of advertisers. There is a further threat, one much discussed in the context of social media: the crowding out of mind-wandering, or the tendency to daydream. When viewed through the lens of Homo prospectus, the pervasive use of smartphones poses a distinct threat in this regard. The constant availability of a network-connected supercomputer in one's pocket reduces the amount of time available for mind-wandering, filling the idle intervals between cognitively demanding tasks during which the mind would otherwise drift.
This shift means that a constant stream of information, such as Instagram updates, Nasdaq updates, and podcasts, now fills the downtime that once allowed the mind to rest and wander. Concurrently, the societal inclination towards "mindfulness" promotes the value of being present in the moment and of allowing thoughts to dissipate, rather than allowing them to wander. A cursory search on YouTube reveals a plethora of meditation videos that instruct individuals on how to suspend their natural cognitive processes. The Homo prospectus theory posits that, in order to maintain optimal cognitive function, it is essential to allocate time in our schedules—and potentially within educational institutions—to permit the natural drift of our thoughts.

According to Marcus Raichle of Washington University, there may be an opportunity to rectify any impairment to our prospective abilities. Initial studies have indicated that the neurons implicated in the default network exhibit genetic profiles frequently linked to long-term brain plasticity, a hallmark of neural complexity. Raichle has stated that "the brain's default-mode network appears to preserve the capacity for plasticity into adulthood." If the findings of these studies are confirmed, it would indicate that our capacity for mind-wandering is not fixed during our youth; rather, it can be developed over time.

Furthermore, the impact of artificial intelligence (AI) on our own time-traveling abilities, as we increasingly depend on prediction machines, is an intriguing question. The potential consequences of this technological advancement may span a spectrum of outcomes, ranging from the alarming to the liberating, or perhaps a combination of the two. It appears unavoidable that the advent of artificial intelligence will precipitate a transformation in our prospective abilities, albeit one whose nature remains as yet uncertain. However, it would be a welcome outcome if the technological advancements that have illuminated the default network also helped return us to that more fundamental cognitive mode, affording our minds greater opportunity to wander, to disengage from the constraints of the present moment, and to become unmoored from the temporal constraints of the immediate.

The Empty Brain

The human brain does not process information, retrieve knowledge, or store memories; in short, it is not a computer. The question of what is in a brain is not easily answered. Despite extensive research, brain scientists and cognitive psychologists have not found a copy of Beethoven's 5th Symphony in the brain, nor have they found copies of words, pictures, grammatical rules, or other environmental stimuli. This is not to say that the human brain is empty. However, it does not contain most of the elements that people commonly believe it does, including such basic components as memories. The prevailing misconceptions about the brain have deep historical roots, but the advent of computers in the 1940s further compounded these misunderstandings. For more than half a century, psychologists, linguists, neuroscientists, and other experts on human behavior have been asserting that the human brain functions as a computer.

However, the notion that the human brain functions as a computer is, at best, a fallacious one. To illustrate the untenability of this assertion, one need only consider the brains of infants. Thanks to the evolutionary process, human neonates, like the newborns of all other mammalian species, are born with the capacity to interact with their environment effectively. A baby's vision is blurry, yet it exhibits a marked preference for faces and can rapidly identify its mother's. It shows a preference for the sound of voices over non-speech sounds and can differentiate between basic speech sounds. The ability to establish and sustain social connections is a hallmark of our species.

A healthy newborn also possesses an array of reflexes, automatic responses to specific stimuli that are critical for its survival. These include turning the head in response to a touch, sucking on whatever enters the mouth, holding the breath when submerged in water, and grasping with a robust grip. Perhaps most crucially, newborns possess advanced learning mechanisms that enable rapid adaptation, allowing them to interact effectively with their environment, despite its differences from that experienced by their distant ancestors. These senses, reflexes, and learning mechanisms are the initial capacities with which newborns are equipped, and their significance cannot be overstated. The absence of any of these capabilities at birth would likely result in significant challenges in surviving and thriving.

However, we are not born with information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers. These design elements enable digital computers to exhibit intelligent behavior. Not only are we not born with these capabilities, but we also do not develop them over time.

The notion of storing words or the rules that govern their manipulation is foreign to us. The ability to create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device is not part of our repertoire. The retrieval of information, images, or words from memory registers is not a capability we possess. Computers, in contrast, can perform all of these functions; organisms cannot.

Computers, in a literal sense, process information: numbers, letters, words, formulas, images. The information must first be encoded into a format computers can use, which means patterns of ones and zeroes ("bits") organized into larger units called "bytes." Within the memory of a computer, each byte is comprised of 8 bits, with a specific pattern of these bits representing a particular letter: 01100100 for "d," 01101111 for "o," and 01100111 for "g." When positioned adjacent to each other, these three bytes collectively form the word "dog." The representation of a single image, such as a photograph of my cat Henry on my desktop, is achieved through a specific pattern of a million of these bytes ("one megabyte"), surrounded by special characters that instruct the computer to interpret the data as an image rather than a word.
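For readers who wish to see these patterns directly, the following short Python snippet (purely illustrative; any programming language would do) prints the 8-bit ASCII pattern behind each letter of the word "dog."

```python
# Print the 8-bit ASCII pattern for each letter of "dog".
word = "dog"
for ch in word:
    byte_value = ch.encode("ascii")[0]      # the integer value of the byte
    print(ch, format(byte_value, "08b"))    # the byte as a pattern of bits

# Output:
# d 01100100
# o 01101111
# g 01100111
```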

Computers, in essence, move these patterns from one physical storage area to another, etched into electronic components. These patterns may also be copied or transformed during various processes, such as the correction of errors in a manuscript or the enhancement of a photograph. The rules governing the movement, duplication, and manipulation of these data arrays are themselves stored within the computer's memory. Considered collectively, such rules constitute a "program" or an "algorithm." A collection of algorithms that collaborate to facilitate specific tasks, such as the purchase of stocks or the pursuit of romantic partners online, is designated an "application," a term now commonly shortened to "app."

The limitations of this brief exposition on computing must be acknowledged; the essential point, however, is that computers genuinely operate on symbolic representations of the world. They genuinely store and retrieve information. They genuinely process data. They possess tangible physical memories. They are invariably guided in their operations by algorithms.
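As a toy illustration of such stored rules operating on stored patterns, the following hypothetical Python fragment copies a byte pattern from one variable to another and applies a simple transformation rule to the copy. This literal copying and manipulation is precisely the kind of operation that, as argued below, organisms do not perform.

```python
# A toy "algorithm": copy a stored byte pattern, then apply a transformation rule.
original = bytearray(b"dog")      # a pattern of three bytes in one storage area

copy = bytearray(original)        # literally duplicate the pattern elsewhere
# Rule: convert lowercase ASCII letters to uppercase by clearing bit 5 (subtracting 32).
transformed = bytes(b - 32 if 97 <= b <= 122 else b for b in copy)

print(original.decode(), "->", transformed.decode())   # dog -> DOG
```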

In contrast, humans do not perform these operations, and never have. Given this reality, it is perplexing that so many scientists discuss our mental life as if we were computers.

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence. In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit was regarded as the "cause" of our intelligence, at least grammatically speaking. The advent of hydraulic engineering in the 3rd century BCE gave rise to a hydraulic model of human intelligence, positing that the flow of different bodily fluids, the "humours," accounted for our physical and mental faculties. This hydraulic metaphor persisted for over 1,600 years, significantly influencing medical practice throughout that period.

By the 1500s, the advent of automata, powered by springs and gears, had prompted prominent thinkers such as René Descartes to propound the notion that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes posited that cognitive functions stem from minute mechanical motions within the brain. By the 1700s, advancements in electricity and chemistry gave rise to novel theories of human intelligence, which, once again, were predominantly metaphorical in nature. In the mid-19th century, the German physicist Hermann von Helmholtz, inspired by recent advances in communications, drew parallels between the brain and a telegraph. Each metaphor reflected the most advanced thinking of the era that spawned it.

Subsequent to the advent of computer technology in the 1940s, the brain was said to function analogously to a computer, with the brain itself serving as physical hardware and our thoughts as software. The seminal publication Language and Communication (1951) by the psychologist George Miller marked the genesis of what is now widely referred to as "cognitive science." Miller advanced the notion that the mental domain could be rigorously studied through the lenses of information theory, computation, and linguistics. This line of thought reached its zenith in The Computer and the Brain (1958), in which the mathematician John von Neumann declared that the function of the human nervous system is "prima facie digital." While acknowledging the paucity of knowledge regarding the role of the brain in human reasoning and memory, he drew parallels between the components of the computing machines of that era and the components of the human brain.

Subsequent advancements in both computer technology and brain research led to a concerted, multidisciplinary endeavor to understand human intelligence, firmly rooted in the notion that humans are, like computers, information processors. This endeavor has consumed a substantial investment of resources, both financial and human, and has given rise to a vast corpus of literature encompassing both technical and popular articles and books. A notable example of this perspective is Ray Kurzweil's book How to Create a Mind: The Secret of Human Thought Revealed (2013), which explores the "algorithms" of the brain, its data-processing mechanisms, and its structural resemblance to integrated circuits.

The information processing (IP) metaphor of human intelligence has become pervasive in both popular thought and scientific discourse. It is virtually impossible to engage in any form of discourse on intelligent human behavior that does not make use of this metaphor, just as it was once impossible to do so without reference to a spirit or deity. The validity of the IP metaphor in today's world is generally accepted without question.

However, it is important to recognize that the IP metaphor is merely a conceptual framework, akin to a narrative that serves to organize our understanding of complex phenomena. As with previous metaphors, this one will inevitably be superseded by more sophisticated conceptual models or by the accumulation of empirical knowledge.

During a visit to a prominent research institute just over a year ago, I posed a challenge to the researchers, requesting that they explain intelligent human behavior without resorting to the IP metaphor. They were unable to do so, and when I raised the issue in subsequent email communications, they remained unable to offer an alternative explanation despite the passage of several months. They recognized the challenge and did not dismiss it as trivial. This suggests that the IP metaphor is "sticky," encumbering our thinking with powerful language and concepts that hinder our ability to think beyond them.

The flawed logic of the IP metaphor can be succinctly expressed as a syllogism with two reasonable premises and a flawed conclusion. Premise #1: all computers are capable of behaving intelligently. Premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.

Disregarding the formal language, the assertion that humans must be information processors merely because computers are information processors is evidently fallacious. When the IP metaphor is ultimately relinquished, it will likely be regarded by historians as erroneous, just as we currently perceive the hydraulic and mechanical metaphors to be erroneous.

Given the fallacious nature of the IP metaphor, it is perplexing that it has gained such traction. Is there a more robust and sustainable way of understanding human intelligence that could replace it? And what have been the costs of relying on the metaphor for so long? It has, after all, guided the writing and thinking of researchers across multiple fields for decades, which raises significant questions about its impact and the price of its pervasive use.

In a classroom exercise I have conducted numerous times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill—"as detailed as possible," I say—on the blackboard in front of the class. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. Upon completion, the original drawing is uncovered, and the class engages in a collective analysis of the discrepancies. For those uninitiated in this method or encountering difficulty visualizing the results, I have requested the assistance of Jinny Hyun, a student intern at the institute where I conduct my research. Jinny has graciously provided her interpretation of the exercise, which is presented here:

The first drawing is a rudimentary black-and-white depiction of a one-dollar bill, with the inscription "In God we trust" written beneath a central portrait. The subsequent drawing, made in the presence of a dollar bill, is noteworthy for its enhanced intricacy and textural detail.

Jinny expressed astonishment at the outcome, a sentiment likely shared by many observers, but the result is typical of this exercise. As can be seen, the drawing produced in the absence of the dollar bill is significantly less accurate than the drawing made from an exemplar, despite Jinny's extensive exposure to dollar bills. The underlying question is whether a "representation" of the dollar bill is stored in a "memory register" in the brain, and whether it can be "retrieved" and utilized to produce a drawing.

This is evidently not the case; despite extensive research in the field of neuroscience, no evidence has been found to support the existence of such a representation. A substantial body of research in brain science has revealed that multiple and often extensive areas of the brain are involved in even the most basic memory tasks. The occurrence of heightened neural activity in response to strong emotions has also been demonstrated in numerous studies. For instance, in a 2016 study conducted by Brian Levine, a neuropsychologist at the University of Toronto, and his colleagues, the survivors of a plane crash exhibited increased neural activity in key regions of the brain, including the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex.

The hypothesis posited by several scientists that specific memories are stored in individual neurons is implausible; if this were the case, it would merely exacerbate the problem of memory: how and where is the memory stored in the cell?

The process of drawing a dollar bill in its absence is indicative of a cognitive change that has occurred in the brain. If Jinny had never seen a dollar bill before, her first drawing would likely not have resembled the second drawing at all. Because she had seen dollar bills before, however, she had been altered to a degree that enabled her to visualize a dollar bill, thereby re-experiencing, at least to a certain extent, the sensation of seeing one. The discrepancy between the two drawings underscores the notion that visualizing an object (that is, perceiving it in its absence) is significantly less precise than observing it in its presence. This elucidates why we are far better at recognizing than at recalling: recalling involves a deliberate attempt to relive an experience, whereas recognition merely requires awareness of a previously encountered stimulus.

It is possible to object to this demonstration on the grounds that, while Jinny had previously encountered dollar bills, she had not made a deliberate effort to commit their details to memory. It could be contended that, had she done so, she would have been capable of drawing the second image without the presence of the bill. Even then, however, no image of the dollar bill has been stored in Jinny's brain. She has merely become better prepared to draw it accurately, in a manner akin to how a pianist becomes more skilled at playing a concerto without somehow inhaling a copy of the sheet music.

This fundamental exercise can serve as a foundational building block for the formulation of a theory of intelligent human behavior that is devoid of the metaphorical baggage associated with the IP metaphor. As individuals navigate the complexities of the world, they are shaped by a myriad of experiences, of which three types merit particular attention: (1) the observation of external events (e.g., the behavior of others, auditory stimuli, instructions, written words, visual stimuli); (2) exposure to the pairing of unimportant stimuli (e.g., sirens) with important stimuli (e.g., the appearance of police cars); and (3) the receipt of positive or negative reinforcement for specific behaviors.

The enhancement of one's effectiveness in life is contingent upon the ability to modify behaviors in a manner consistent with these experiences. This entails the ability to recite a poem or sing a song, adhere to instructions, respond to unimportant stimuli in a manner analogous to that of important stimuli, refrain from behaviors that were previously punished, and engage in behaviors that were previously rewarded.

Contrary to sensationalist media portrayals, the brain's changes in response to learning musical or poetic skills are not fully understood. However, it is important to note that these skills are not merely stored in the brain. Instead, the brain undergoes a systematic change that enables the execution of these tasks under specific conditions. When prompted to perform, neither the song nor the poem is retrieved from any specific region of the brain. This phenomenon is analogous to the way in which one's finger movements are not "retrieved" when tapping on a surface, such as a desk. The act of singing or reciting occurs without the need for retrieval.

In a recent conversation, I asked Eric Kandel, a neuroscientist at Columbia University who was awarded a Nobel Prize for his discoveries concerning the chemical changes that occur in the neuronal synapses of the Aplysia (a marine snail) in response to learning, how long he thought it would take to comprehend the intricacies of human memory. His swift response was, "A hundred years." I did not deem it necessary to ask whether he believed the IP metaphor was hindering neuroscience. However, some neuroscientists are beginning to consider the unthinkable: that the metaphor is not indispensable.

A few cognitive scientists, notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009), now completely reject the view that the human brain functions like a computer. The prevailing perspective in cognitive science posits that humans, akin to computers, interpret their environment by performing computations on internal representations of it. Chemero and others, in contrast, have proposed an alternative account of intelligent behavior as a direct interaction between organisms and their environment.

A compelling illustration of the stark contrast between the IP perspective and what has come to be termed the "anti-representational" view of human functioning can be observed in the divergent explanations for how a baseball player successfully catches a fly ball. This fascinating topic has been elegantly expounded upon by Michael McBeath, currently at Arizona State University, and his colleagues in a 1995 paper published in Science. The IP perspective necessitates the formulation of an estimate of the initial conditions of the ball's flight, including the force of impact and the trajectory angle. This estimate is then used to create and analyze an internal model of the ball's probable trajectory. Finally, the model is employed to guide and adjust motor movements in real time to intercept the ball.

This approach is analogous to the functionality of a computer. McBeath and his colleagues, however, proposed a simpler account. They asserted that to catch the ball, the player simply needs to keep moving in a way that maintains a constant visual relationship between the ball, home plate, and the surrounding scenery (technically, in a "linear optical trajectory"). This might sound intricate, but it is, in essence, remarkably straightforward, devoid of computations, representations, and algorithms.

Andrew Wilson and Sabrina Golonka, two psychology professors at Leeds Beckett University in the UK, exemplify this perspective, incorporating the baseball example, among numerous others, into a sensible and straightforward approach that transcends the IP framework. They have been publishing their perspectives in blog form for years, describing their project as a "more coherent, naturalized approach to the scientific study of human behavior... at odds with the dominant cognitive neuroscience approach." This endeavor, however, has not yet grown into a movement. The prevailing cognitive sciences continue to adhere to the IP metaphor, and certain prominent thinkers have made substantial predictions about humanity's future that hinge on the metaphor's validity.
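To make the contrast above concrete, here is a small numerical sketch, under simple drag-free projectile assumptions and with invented numbers; it is a simplification of the idea rather than McBeath and colleagues' actual model. For such a ball, the tangent of its elevation angle rises at a steady rate when watched from the spot where it will land, and at a changing rate from anywhere else, so a fielder who merely keeps that rise steady ends up in the right place without computing any trajectory.

```python
# Drag-free fly ball: tan(elevation angle) grows linearly in time when watched
# from the landing spot, and nonlinearly from anywhere else. This is the signal
# a fielder can exploit without simulating the trajectory. Numbers are made up.
g, vx, vy = 9.8, 15.0, 20.0            # gravity (m/s^2) and launch velocity (m/s)
flight_time = 2 * vy / g               # time until the ball returns to eye level
landing_x = vx * flight_time           # where it will land

def tan_elevation(observer_x, t):
    """Tangent of the angle from the observer up to the ball at time t."""
    ball_x = vx * t
    ball_y = vy * t - 0.5 * g * t * t
    return ball_y / (observer_x - ball_x)

for frac in (0.2, 0.4, 0.6, 0.8):
    t = frac * flight_time
    at_landing_spot = tan_elevation(landing_x, t)        # rises in equal steps
    ten_m_too_deep = tan_elevation(landing_x + 10.0, t)  # rises unevenly
    print(f"t={t:4.1f}s  from landing spot: {at_landing_spot:5.2f}"
          f"   from 10 m too deep: {ten_m_too_deep:5.2f}")
```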

One such prediction, advanced by the futurist Kurzweil, the physicist Stephen Hawking, and the neuroscientist Randal Koene, asserts that human consciousness, akin to computer software, will soon be capable of being downloaded into a computer, thereby enabling a dramatic enhancement of cognitive abilities and, perhaps, even immortality. This concept forms the foundation of the dystopian film Transcendence (2014), starring Johnny Depp as a Kurzweil-like scientist whose mind is downloaded into the internet, with disastrous consequences for humanity.

However, the IP metaphor does not accurately reflect reality. Consequently, the prospect of a human mind malfunctioning in cyberspace is not a concern; nor, regrettably for some, is immortality through downloading ever likely to be attained. This is not solely due to the absence of consciousness software in the brain; rather, it is because of a more profound issue, which can be termed the uniqueness problem, and which is both inspiring and disheartening in equal measure.

Because the brain contains neither memory banks nor representations of stimuli, and because all that is required for us to function is that the brain change in an orderly way in response to our experiences, there is no reason to believe that any two individuals are altered in a uniform fashion by a given experience. To illustrate this point, consider the experience of attending a concert: the changes that occur in one's brain when listening to Beethoven's 5th will likely differ significantly from the changes experienced by another listener. These individual differences are the result of unique experiences accumulated over a lifetime, which have shaped the specific neural structure present in each individual.

This phenomenon, as demonstrated by Sir Frederic Bartlett in his 1932 publication Remembering, leads to the conclusion that no two individuals will recount a story in an identical manner, and that over time, their retellings will diverge increasingly. Rather than producing a "copy" of the story, each individual undergoes a transformation, albeit to a different extent, upon hearing the story. This transformation is significant enough to allow the individual to re-experience the story when asked to do so, albeit not with the same level of detail or clarity as the initial experience (see the first drawing of the dollar bill above).

This phenomenon underscores the uniqueness of each individual, not only in terms of genetic makeup but also in the manner in which their brains change over time. At the same time, it makes the neuroscientist's task daunting, for the process defies facile explanation. The orderly change that occurs during a given experience may involve a thousand neurons, a million neurons, or even the entire brain, with each brain exhibiting a distinct pattern of change.

Furthermore, even if we possessed the capability to capture a comprehensive snapshot of all 86 billion neurons and then to replicate their state within a computer, that vast pattern would hold no significance outside the confines of the brain that originally generated it. This is perhaps the most pronounced distortion in our thinking that stems from the IP metaphor. Whereas computers store exact copies of data that can persist for extended periods even in the absence of power, the brain's functionality is contingent on its own vital activity: there is no clear delineation between on and off states, and the brain either persists in its function or ceases to exist. Moreover, as the neurobiologist Steven Rose noted in The Future of the Brain (2005), a comprehensive depiction of the brain's current state might also be devoid of meaning unless the complete life history of the brain's owner is taken into account, including the social context in which they were raised.

The intricacies involved are substantial. To comprehend even the fundamental mechanisms by which the brain sustains human intelligence, it may be necessary to consider not only the present state of all 86 billion neurons and their 100 trillion interconnections, not only the varying strengths with which they are connected, and not only the states of more than 1,000 proteins that exist at each connection point, but also how the moment-to-moment activity of the brain contributes to the integrity of the system. The uniqueness of each brain, owed in part to the distinct life history of each individual, further complicates the endeavor. Indeed, as the neuroscientist Kenneth Miller recently noted in an op-ed in The New York Times, merely elucidating basic neuronal connectivity may require centuries, an estimate very much in the spirit of Kandel's prediction.

Concurrently, substantial financial resources are being allocated to brain research, often based on flawed concepts and unfulfillable promises. A striking illustration of this misdirection in neuroscience was documented recently in a report in Scientific American concerning the €1.3 billion Human Brain Project initiated by the European Union in 2013. EU officials had been persuaded by the project's founder, Henry Markram, who presented a compelling argument that he could simulate the entire human brain on a supercomputer by the year 2023, and that such a simulation would revolutionize the treatment of Alzheimer's disease and other neurological disorders; they funded the project accordingly. Less than two years after its inception, however, the project had run aground, and Markram was asked to step down.

This incident highlights a crucial point: we are organisms, not computers, and neuroscience should not treat the brain as a computing machine. We should direct our efforts toward understanding ourselves, unencumbered by extraneous intellectual baggage. The information processing metaphor has dominated thinking for half a century and has yielded few, if any, enduring insights. The time has come to discard it and pursue more fruitful avenues of inquiry.

Who Decides What Words Mean?

The determination of lexical meaning is a matter of considerable interest. The English language, characterized by its adherence to established rules and its constant evolution, may be regarded as a self-regulating system, despite the absence of a governing authority.

Long before the advent of social media, discourse on language issues was marked by polarization, and a similar pattern has persisted: individuals who care about this subject tend to adopt one of two positions. On one side is the prescriptivist, who takes a smug stance on the mistakes they find abhorrent; on the other is the descriptivist, who displays their knowledge of language change and challenges the prescriptivists' facts. It seems one must choose a side, and that the two are mutually exclusive. However, this need not be the case. In my professional capacity, I serve as both an editor and a language columnist, roles that require a certain degree of adherence to prescriptivist and descriptivist principles respectively. As an editor, I am entrusted with the responsibility of rectifying grammatical and mechanical errors in submitted copy, a task that involves adhering to The Economist's established house style. Conversely, when I compose my column, I immerse myself in the intricacies of authentic language, striving not merely to critique individual mistakes but to impart new insights and knowledge to both myself and my audience. This raises the question of whether these two facets can be integrated into a cohesive philosophy. The answer, it seems, is affirmative.

The evolution of language is a dynamic process, marked by constant change, and some of these changes are genuinely disorderly and disruptive. A notable example, beloved of prescriptivists, is decimate, which derives from the ancient Roman practice of punishing a rebellious legion by killing every tenth soldier. In contemporary contexts, the need for a word denoting the destruction of precisely one-tenth of a quantity arises rather infrequently, and the insistence that a word must signify what its component roots once signified is known as the "etymological fallacy." It is, however, advantageous to have a word signifying the destruction of a substantial proportion of something, and decimate has come to serve that purpose; indeed, its scope has evolved until, for many speakers, it approaches the sense of "to wipe out utterly."

Descriptivists, which is to say virtually all academic linguists, observe that semantic expansion is an inherent aspect of linguistic evolution. This phenomenon, often termed "semantic creep," is simply how language works. A comprehensive review of the extensive historical Oxford English Dictionary (OED) reveals numerous instances of such evolution, particularly among nontechnical terms: the OED meticulously documents the historical development of word meanings, showing that words do not remain static but are constantly adapting. One linguist compares insisting otherwise to taking a snapshot of the surface of the ocean and declaring that this is how ocean surfaces must forever look.

Prescriptivists counter that, while the phenomenon is undeniable, it remains vexing. Decimate has no satisfactory synonym in its traditional sense of "to destroy a portion of," whereas in its newer sense it has plenty of company: destroy, annihilate, devastate, and so on. If decimate were ultimately to settle into this latter sense, a distinctive word would be lost. Moreover, those who use decimate in the traditional way and those who use it in the new way risk confusing one another.

Another example is the term literally, about which I remain a traditionalist. I take satisfaction in being able to use it to mean exactly what it says. To illustrate the point, I recall a recent incident in which my son fell off a horse during a holiday: I was able to reassure my mother by reporting that he "literally got right back in the saddle," a statement that brought me great joy. However, when people say things like "We literally walked a million miles," I experience a sense of bemusement. While it is true that prominent writers such as James Joyce and Vladimir Nabokov employed literally in this figurative way, its role as a mere intensifier is not particularly valuable or aesthetically pleasing; its true value lies in its traditional use, for which there is no ready substitute. In my opinion, then, some language change really can have detrimental effects. This is not a catastrophic scenario, but it is a concern that merits attention.

It is important to note, however, that no language has ever fallen apart from insufficient care; it is simply not something that happens. Prescriptivists cannot point to a single language that has become unusable or inexpressive as a result of people's failure to uphold traditional vocabulary and grammar. In fact, every language existing today is highly expressive. It would be a miracle, except that it is utterly commonplace, a fact shared not only by all languages but by all the humans who use them.

The question, then, is how this can be. Why does change of the decimate variety not add up to chaos? If one such "error" is considered detrimental, and these kinds of occurrences are prevalent, how does the system maintain cohesion? The answer lies in the nature of language as a system. Sounds, words, and grammar do not exist in isolation; each of these three levels of language constitutes a system in itself. Remarkably, these systems change as systems: when one change threatens to disrupt the system, another change compensates, resulting in a new system that, though different from the old, remains an efficient, expressive, and useful whole.

The analysis begins with sounds. Every language has a characteristic inventory of contrasting sounds, called phonemes. The vowels in "beet" and "bit," for example, are different phonemes in English; Italian has only one phoneme in that region of vowel space, which is why Italian speakers tend to make homophones of "sheet" and "shit."

An additional peculiarity concerns the vowels of English spelling. Most languages, including Italian, German, and Spanish, use the letter A for the /a/ sound heard in words such as latte, lager, and tapas, and English speakers take this for granted: when encountering an unfamiliar foreign word such as frango (Portuguese for "chicken"), one naturally pronounces it with an /a/ sound rather than an /ay/ sound. The question thus arises as to how English acquired the distinctive pronunciation of the letter A in words such as plate, name, and face. A similar phenomenon can be observed in the other "long" vowels of English, which often deviate from what the spelling would lead one to expect. The letter I, for instance, usually carries an "ee" sound in other languages, as English speakers can see in borrowed names like Nice and Nizhni Novgorod, yet it takes an "eye" sound in words like write and ride. The vowel sound spelled OO in words like boot and food is another curiosity.

The Great Vowel Shift, a significant phonetic change in English, is the likely culprit. From the Middle English period through the early modern era, the entire set of English long vowels underwent a radical disruption. "Meet" used to be pronounced much like modern "mate," and "boot" sounded like "boat." (It should be noted that both vowels were monophthongs, not diphthongs; the modern long A is pronounced like ay-ee said quickly, but the vowel in medieval "meet" was a pure single vowel.) During the Great Vowel Shift, ee and oo began to move towards their contemporary sounds, although the reason for this change remains uncertain; it is plausible that some individuals observed the shift and complained about it. The movement created a real problem, because ee was now very similar to the vowel in "time," then pronounced roughly tee-muh, and oo was very similar to the vowel in "house," then pronounced roughly hoose.

Speakers did not passively accept the confusion, but rather exhibited what economists call "spontaneous order." As they came under pressure, the vowels in "time" and "house" changed in turn, becoming the diphthongs they are today, while the vowels in "mate" (then pronounced roughly mah-tuh) and "meat" moved up towards the spaces being vacated. Throughout the system, vowels shifted, and in a 15th-century tavern full of men carrying knives, the potential for confusion between "meet," "meat," and "mate" was a genuine concern. In the end a few vowels did merge, which is why "meet" and "meat" have become homophones. But the system ultimately stabilized, with each vowel occupying a distinct place: a Great Vowel Shift rather than a Great Vowel Pile-Up.

Such shifts are not uncommon and have been termed "chain shifts," whereby one change prompts another, which in turn gives rise to yet another, and so on, until a new equilibrium is achieved. A notable chain shift is currently ongoing: the Northern Cities Shift, identified and documented by William Labov, a pioneering figure in the field of sociolinguistics, in the cities surrounding the Great Lakes of North America. A similar phenomenon, known as the California Shift, has also been observed. These shifts, while seemingly chaotic and random in their individual steps, are part of a larger systemic process that maintains linguistic stability.

Turning to individual words: the number of vowels in any language is finite, but the number of words is vast. Consequently, changes in the meanings of words are not as visibly orderly as the chain shifts seen in the Great Vowel Shift and its kin. Nevertheless, despite the harm an individual word's change in meaning might do, cultures tend to have all the words they need for all the things they want to talk about.

While researching Samuel Johnson's dictionary for my new book, Talk on the Wild Side (2018), I made a startling find. In his 1747 plan for the dictionary, addressed to the Earl of Chesterfield, Johnson noted that buxom, once denoting mere obedience, had evolved into a euphemism for wantonness. This transformation, as Johnson explained, can be traced back to an archaic form of the marriage ceremony, prior to the Reformation, in which the bride vowed compliance and submissiveness with the phrase "I will be bonair and buxom in bed and at board."

Contemporary notions of buxomness, however, do not align with these older connotations. (To see the incongruity, it is enough to note what a Google Images search for buxom now reveals about how far its connotation has travelled.) A thorough examination of the Oxford English Dictionary (OED) shows that buxom derives from the medieval term buhsam, cognate with the modern German biegsam, meaning "bendable." The term's evolution can be traced from this physical sense to a metaphorical one, coming to signify "pliable" or "obedient"; hence Johnson's definition, "obedient." Subsequent stages in the semantic progression of buxom include its association with amiability and liveliness, reflecting its adaptability to different contexts. (William Shakespeare, for example, uses the phrase "buxom valour" in Henry V to describe a soldier.) The word then shifted to mean "healthy, vigorous," a sense edging towards the contemporary one, and the progression from good health to physical plumpness, and then specifically to "big-breasted," is a natural extension.

The transition from "obedient" to "busty" might appear remarkable when viewed in isolation, but upon closer inspection it reveals a logical progression. It is noteworthy that nice previously signified "foolish," while silly once denoted "holy." The etymology of assassin lies in the plural of the Arabic word for "hashish(-eater)," while magazine derives from the Arabic word for a storehouse. Prestigious once carried a pejorative connotation, referring to conjuring tricks and deception. Such changes are a common occurrence; this is simply how language evolves. The term hangry is a recent addition to the English lexicon, and its spread has been notable.

These shifts may look like leaps and jumps, but such "leaps" are only apparent when lexicographers, analysing word histories, divide a word's evolution into distinct senses for their dictionaries. In reality, the evolution of word meanings is gradual: a small number of speakers adopt a word in a novel way, their usage influences others, and this incremental change can add up to significant and comprehensive shifts over time. It is also noteworthy that such changes do not result in chaos. Each change in the meaning of buxom could have left a lexical void for the meaning it abandoned, yet in each instance another word has taken its place: pliable, obedient, amiable, lively, gay, healthy, plump, and so on. The lexicon, it seems, abhors a gap. I was initially sceptical that English needed hangry, but given how often the state it names arises in my own experience, the word was evidently coined to fill a real need.

The lexicon also undergoes several predictable kinds of change in meaning. Some individuals insist, for instance, that the word nauseous can only mean "causing nausea." However, the shift from cause to experiencer is a common semantic development, as evidenced by the fact that many verbs can be used in both causative and agentless constructions (e.g., "I broke the dishwasher" and "the dishwasher broke"). In any case, true confusion is rare, and for nauseous's old meaning we still have nauseating. Words also weaken with frequent use. The song "Everything Is Awesome" from The Lego Movie (2014) captured a prevalent sentiment about how freely the word is now used in American English: once a potent expression of awe, it can now be applied to something as mundane as the burrito being served, and it can lose its semantic weight altogether, as illustrated by Steven Pinker's example, "If you could pass the guacamole, that would be awesome."

Languages, however, are always adapting and evolving, and English speakers have a vast array of superlatives at their disposal, including incredible, fantastic, stupendous, and brilliant, which have drifted from their original meanings of "unbelievable," "like a fantasy," "inducing stupor," and "shiny, reflective," respectively. When these terms become overused, people coin new ones, such as sick, amazeballs, and kick-ass. The vocabulary of any language is constantly in motion, with new terms emerging and others falling out of use; the systemic, long-term changes that shape the overall linguistic landscape emerge from countless random, short-term fluctuations.

At the level of grammar, change can be particularly unsettling, as it seems to signal a deeper kind of harm than a mispronunciation or a new use for an old word. Take the long-term decline of whom, which marks something in a question or relative clause as an object (direct or indirect), as in "That's the man whom I saw." Most people today would either say "That's the man who I saw" or simply "That's the man I saw."

The distinction between the subject and the object of a clause is a crucial test case for grammatical change, and even radical shifts in a language tend to preserve it. Readers of Beowulf readily notice how different the poem's words are from their modern counterparts. What may be less apparent to those unacquainted with Old English is how far the grammar has diverged. Old English, much like Russian or Latin, had case endings on nouns, adjectives, and determiners (words such as the and a); in essence, these endings worked like the distinctions in who/whom/whose, and there was a fourth case besides. Contemporary English, by contrast, marks case in only a handful of words: just six (I, he, she, we, they, and who) change form as objects. From a historical perspective, the loss of case endings stripped away much of the signalling power those endings once provided, which raises the question of how English speakers discern the subject of a sentence without them. The answer lies in word order, as English is a subject-verb-object language. In "I love her," case is still visible, since I is a subject form and her an object form; but "Steve loves Sally" carries no case marking at all, and its meaning remains perfectly clear. Subject-verb-object order is not mandatory, and native speakers expect it to be violated in specific circumstances, as in "Her I love the most"; but as the default, it now performs the function that case endings once fulfilled.

The question of why case endings disappeared has no definitive answer; one hypothesis is that it resulted from two waves of conquest, as adult Vikings and then Normans arrived in Britain and learned Anglo-Saxon imperfectly. Adults struggle with fiddly inflections in a foreign language today, and they would have struggled then. It is plausible that many adult learners simply disregarded the endings and relied on word order, passing on to their children a slightly simplified version of the language. Those children would have used the endings less often than their predecessors, until ultimately the endings disappeared entirely.

In essence, the grammar adapted as a system. Any language must maintain a distinction between subjects and objects; it cannot be left to guesswork. During the Anglo-Saxon period, word order was relatively flexible, but as case endings eroded, word order became more rigid. Information that might have been lost with the disappearing endings was preserved by the solidifying word order.

This gives us a framework in which both prescriptivists and descriptivists can have their say. Sound changes can understandably strike those who learned an older pronunciation as simply wrong: to my ear, 'nucular' sounds uneducated and 'expresso' just sounds incorrect. In the long run, however, sound systems compensate for any confusion through a delicate dance of changes that keeps the language's necessary distinctions intact. Word meanings change in two ways: by type (a change in what a word means) and by force (a change in how powerful it is). To a six-year-old, everything is epic, which strikes the ear much as awesome must have struck his parents' ears; even a lunch can be epic. When epic is exhausted in its turn, his children will press something else into service or coin something new. Even the deepest-seeming change to the grammar never destroys the language system. Some distinctions may become obsolete: classical Arabic has singular, dual, and plural numbers, while modern dialects tend to use only singular and plural, much like English. Latin had a rich morphology, but its descendants, such as French and Spanish, lack it, and this does not prevent their speakers from functioning effectively. Sometimes languages become more complex: the Romance languages fused freestanding Latin words onto their verbs until they became part of the verb endings, and that, too, worked out perfectly well.

Human beings crave order and predictability in language, yet languages are too complex, and used by too many people, to submit to it. Just as market economies have demonstrated their superiority over command economies, languages, by virtue of their vast complexity and extensive usage, resist centralized management. Individual decisions may occasionally be poor ones, but they are corrected over time; in the long run, change is inevitable.

The Science of Why Swearing Reduces Pain

Research has demonstrated that the use of curse words enables individuals to cope with adversity and actually mitigates the perception of pain. Conventional wisdom has long held that swearing is an ineffective response to pain. According to the prevailing view among psychologists, the use of profanity serves to exacerbate the sensation of pain, a phenomenon attributed to a cognitive distortion known as catastrophizing. This distortion involves the rapid progression from a balanced perspective to a perception of impending disaster. The utilization of exclamations such as "This is terrible!" and "I just can't!" is a common manifestation of catastrophizing, which often leads to a sense of helplessness.

However, this notion troubled Richard Stephens, a psychologist and author of Black Sheep: The Hidden Benefits of Being Bad, who wondered why swearing, a supposedly maladaptive response to pain, is such a common one. Having, like all of us, hit his thumb with a hammer often enough, he was keen to establish whether swearing truly exacerbates pain.

To this end, he enlisted 67 undergraduate students at Keele University in Staffordshire, England, who were willing to submerge their hands in ice-cold water for as long as they could bear, repeating the task twice: once while uttering a swear word and once while uttering a neutral word. The study was approved by the Keele University School of Psychology Research Ethics Committee, a fact that may be worth considering when selecting an institution for one's future academic pursuits. The rationale for the experiment was straightforward: if swearing really is maladaptive, the volunteers should give up sooner, tolerating less time in the water, while uttering a swear word than while uttering a neutral word. To ensure a fair test, each student was permitted only one swear word and one neutral word, and the order of the swearing and neutral immersions was randomized. Stephens asked the volunteers which words they would use if they dropped a hammer on their thumb, and for five words to describe a table; he then took the first swear word from the first list and its counterpart from the second. When I went through the experiment myself, my words were "arrgh, no, fuck, bugger, shit" and "flat, wooden, sturdy, shiny, useful," which meant saying "fuck" in one trial and "sturdy" in the other.
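
To make the design concrete, here is a minimal sketch, in Python, of the within-subject protocol described above. The function names (run_cold_pressor, measure_immersion) are hypothetical placeholders of my own, not anything from Stephens's materials; timing in the real study came from a stopwatch.

```python
import random

def run_cold_pressor(participant, swear_word, neutral_word, measure_immersion):
    """Run both conditions in random order; return seconds tolerated per condition."""
    conditions = [("swear", swear_word), ("neutral", neutral_word)]
    random.shuffle(conditions)  # counterbalance condition order across participants
    results = {}
    for label, word in conditions:
        # measure_immersion is assumed to time how long (in seconds) the
        # participant keeps a hand in the ice water while repeating `word`.
        results[label] = measure_immersion(participant, word)
    return results

# If swearing were maladaptive, results["swear"] should tend to come out
# *lower* than results["neutral"]; Stephens found the opposite.
```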

The results could best be summarized by the phrase "Maladaptive, my ass!" The volunteers showed a notable increase in heart rate during the swearing trials, kept their hands in the water longer, and reported less pain, suggesting a heightened physiological response to the use of expletives. This finding, which has real implications for the study of pain perception and of language's influence on subjective experience, can easily be replicated in the comfort of one's own home or in a social setting, provided one has a bowl of ice water and a stopwatch. The question that naturally follows is why this experiment was not conducted in the immediate aftermath of the invention of the ice cube.

As Stephens notes, "Pain used to be thought of as a purely biological phenomenon, but actually pain is very much psychological. The same level of injury will hurt more or less in different circumstances." We know, for example, that if male volunteers are asked to rate how painful a stimulus is, most of them will say it hurts less if the person collecting the data is a woman. Pain isn't a simple relationship between the intensity of a stimulus and the severity of your response; circumstances, personality, mood, and prior experience of pain all influence the perception of physical discomfort.

The Impact of Swearing on the Brain

Stephens did not simply presume that swearing had induced a particular emotional state in all of his volunteers. Instead, he took a quantitative approach, measuring each volunteer's degree of arousal using heart rate and galvanic skin response (a measure of the sweatiness of the palms, recorded by small electrodes attached to the fingertips, which registers stress, fear, anxiety, and excitement). In the initial ice-water experiment, Stephens showed that swearing genuinely altered the volunteers' arousal: swearing while their hands were in the ice water significantly raised their heart rates and set off the fight-or-flight response. This finding led to the hypothesis that swearing might alleviate pain by inducing emotional arousal. To test it, Stephens worked with one of his undergraduates, Claire Allsop, to design a dedicated experiment; the study was so well designed that Allsop was later awarded a prestigious prize by the British Psychological Society. The central question was whether inducing heightened aggression could enhance pain tolerance. If pain tolerance depended only on "innate" aggression, it should not be possible to raise it in typically mild-mannered individuals; yet the swearing study had shown that people tolerate significantly more pain when swearing than otherwise. The question, then, was whether swearing actually stimulates aggression, increases arousal, and thereby helps us manage pain.

She followed in her mentor's footsteps and persuaded 40 of her fellow undergraduates to repeat the ice-water test. Stephens explains, "We were exploring possibilities within the laboratory, and one straightforward option was to engage them in a first-person shooter game." Each of her volunteers played either a first-person shooter, a genre of video game that involves running around attempting to eliminate other players before they eliminate you, or a golf game. To gauge the precise effect the game had on the volunteers, Allsop administered a hostility questionnaire, on which they rated their own hostility from 1 to 5 against adjectives such as "explosive," "irritable," "calm," or "kindly." She then used a subtler test of aggression: prompts such as "explo_e" or "_ight" were presented in a solitary, hangman-like exercise, and those who completed them as "explode" and "fight" were classified as more aggressive than those who thought of "explore" or "light." The students scored consistently higher on the aggression measures after playing the shoot-'em-up than after playing the golf game: they rated themselves as more hostile on the questionnaire and produced more violent imagery in the solo hangman challenge. But did the game also change how much pain they could withstand?
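
As a toy illustration of that word-completion measure, the sketch below simply counts aggressive completions; the word set is invented for the example and is not Allsop's actual material.

```python
# Toy scorer: completions that form aggressive words score 1, neutral
# completions (e.g. "explore", "light") score 0. Invented for illustration.
AGGRESSIVE = {"explode", "fight"}

def aggression_score(completions):
    """Count how many of a volunteer's completions are aggressive words."""
    return sum(1 for word in completions if word.lower() in AGGRESSIVE)

print(aggression_score(["explode", "light"]))   # 1
print(aggression_score(["explore", "fight"]))   # 1
```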

It did, and the pattern of results mirrored the swearing task: participants tolerated the ice water longer and rated it as less painful after the shooter, and a rise in heart rate was recorded. After the golf game, the male students could keep their hands submerged for an average of 117 seconds and the female students for an average of 106 seconds; after the shooter, the times rose to 195 seconds for the men and 174 seconds for the women, roughly three minutes. The experiment has also been replicated in our laboratory, comparing swearing with positive affirmations such as "Emma, you can do it," and a similar trend emerged, with the male subjects outlasting the female subjects. Although I have misplaced my notes, I believe I endured at most ninety seconds with the affirmation, considerably less than the three minutes or so I lasted while swearing. This prompts the question of whether individuals with an innate propensity for aggression have a heightened tolerance for pain. To investigate, Kristin Neil and her colleagues at the University of Georgia recruited 74 male undergraduate students for a series of "reaction-time contests," ostensibly to assess how quickly they could press a button. The real rationale, however, was quite different.
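
For a concrete sense of the size of that effect, the arithmetic below works out the percentage increases implied by the quoted averages.

```python
# Percentage increases implied by the average immersion times quoted above.
times = {
    "male":   {"golf": 117, "shooter": 195},
    "female": {"golf": 106, "shooter": 174},
}
for group, t in times.items():
    increase = (t["shooter"] - t["golf"]) / t["golf"] * 100
    print(f'{group}: {t["golf"]} s -> {t["shooter"]} s (+{increase:.0f}%)')
# male: 117 s -> 195 s (+67%)
# female: 106 s -> 174 s (+64%)
```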

In Neil's lab, volunteers were given "reaction buttons" to press. They were instructed to imagine themselves as gunslingers in a western, tasked with pressing the button as quickly as possible in response to a cue in order to win the game. Neil introduced an intriguing variation: adjacent to the reaction button was a punishment button. If a volunteer suspected their opponent of cheating, or simply grew frustrated at their losses and wanted to even the odds, they could press the punishment button, which administered an electric shock for as long as it was held down, at an intensity the volunteer could adjust. To give the volunteers a sense of the magnitude of the shocks they would be delivering, Neil administered a series of shocks to each of them before the game began, gradually increasing the intensity until the volunteer asked her to stop.

However, the nature of the "opponent" in the game was not as straightforward as it may have seemed. In reality, the "opponent" was a simple computer script designed to allow the volunteer to win a predetermined percentage of "gunfights." The function of the "punishment button" was merely to record the intensity level and the frequency and duration of the volunteer's button presses.It is important to note that the experiment had begun long before the game's initiation.The initial shocks served a hidden purpose: to collect data on each volunteer's pain tolerance.

The central question was whether an individual's pain threshold correlates with how early, how often, how intensely, and for how long they punish. The results were unequivocal: volunteers with higher pain tolerance delivered shocks earlier, more frequently, at higher voltages, and for longer durations than those with lower pain tolerance. Why this should be so remains unclear. One hypothesis is that less pain-tolerant individuals feel greater empathy for their "victim"; alternatively, there may be something about the brains of highly aggressive individuals that allows them to withstand discomfort for longer. While Neil's experiment does not settle the question, comparing her results with those obtained by Claire Allsop and Richard Stephens offers a framework for formulating hypotheses.

An individual's level of aggression is understood to be the product of their inherent aggressive tendencies (trait aggression) and their response to the prevailing circumstances (state aggression). Neil's findings suggest that people high in trait aggression may be better able to endure pain, though it is also possible that her more aggressive volunteers were simply having particularly bad days, since the experiment does not cleanly separate state from trait aggression. The Allsop and Stephens studies offer a compelling perspective on the malleability of emotion, highlighting the possibility of deliberately manipulating one's own emotional state as a coping mechanism. Their implications also extend beyond personal experience, raising questions about the therapeutic value of swear words and violent video games. Does the effectiveness of swearing and shoot-'em-ups in alleviating pain depend on individual differences? The findings so far are encouraging: the swearing and shoot-'em-up remedies worked across the range of subjects Stephens studied. Psychologists divide people into those who tend to vent their anger (so-called "anger-out" people) and those who tend to internalize it ("anger-in" people). Stephens initially hypothesized that swearing would work better for people who were comfortable with it or who swore frequently in daily life, so he asked participants to rate how likely they were to swear when angry. The results were unexpected: there were no significant differences between the two groups, suggesting that swearing is effective regardless of one's comfort with profanity. Subsequent analysis did suggest, however, that the particular swear word used might matter, which prompts the question of whether "minced oaths," the socially acceptable substitutes for swearing, work as well. The evidence suggests that stronger swear words are more effective in alleviating pain.

"My students tried to see if there was a dose response for swearing," says Stephens. In a follow-up project, two students examined, over two consecutive years, how the strength of the word used affects its impact on pain. In the first year, one student compared saying "fuck," "bum," or a neutral word. In the following year, another student modified the experiment by replacing "bum" with "shit," as "bum" was deemed insufficiently intense. In both experiments, "fuck" yielded the greatest analgesic effect, while "bum" and "shit" provided a lesser degree of relief, though still more than the neutral word. The study was conducted as a classroom exercise and has not been published, but it points to a promising avenue for further research and makes for an engaging presentation topic. One participant remarked, "I find it enjoyable to incorporate this slide into my presentations because it provides an opportunity to say the word 'bum,' which is quite pleasant."

These results also suggest a counter-experiment: could the severity of swear words be measured by their analgesic effect? Rather than asking people to subjectively rate a swear word as mild, moderate, or severe, one could rank the words with a heart-rate monitor and a bowl of ice water.

What Psychopaths Teach Us

A Synopsis of Psychopathy's Implications for Achieving Success

A substantial body of research suggests that individuals with psychopathic tendencies can offer a valuable lesson in the pursuit of success. A number of hallmark characteristics of psychopathy, including an inflated sense of self-worth, persuasiveness, superficial charm, ruthlessness, and a lack of remorse, are shared by prominent figures such as politicians and world leaders: individuals, in other words, who are not evading law enforcement but occupying positions of authority. This profile allows those who exhibit these traits to act with impunity, undeterred by the social, moral, or legal implications of their actions.

If one is born under the right star, so to speak, and endowed with the capacity to exert influence over the human psyche as the moon exerts it over the sea, one might orchestrate the mass murder of 100,000 Kurds and walk to the place of execution with such inscrutable nonchalance that even one's most vehement detractors find themselves, albeit tacitly, expressing a form of deference. A notable example can be found in the words of Saddam Hussein, who, on the scaffold, told his executioner, "Do not be afraid, doctor. This is for men." If one is violent and cunning, like the real-life "Hannibal Lecter" Robert Maudsley, one might take a fellow inmate hostage, smash his skull in, and sample his brains with a spoon as nonchalantly as if one were eating a soft-boiled egg. (It is noteworthy that Maudsley has been confined to a solitary cell for the past three decades, in a specialized containment unit in the basement of Wakefield Prison in England.)

Conversely, if one were a highly skilled neurosurgeon, exhibiting unparalleled composure and focus under duress, one might venture into territory such as that occupied by the man referred to here as Dr. Geraghty, at the far reaches of 21st-century medicine, where risk blows in with the force of 100-mile-per-hour winds and the oxygen of deliberation grows scarce. During our conversation, he explained, "I harbor no compassion for those whom I operate on. That is an indulgence I simply cannot afford. In the operating room, I experience a rebirth: I become a machine devoid of emotion, in complete synchronization with my instruments of surgery. During these moments of extreme intensity, when I am in a state of euphoria and manipulating life and death, emotional responses are counterproductive. Emotion is a hindrance to effective work, and over time I have eliminated it from my professional life."

Geraghty is one of the United Kingdom's leading neurosurgeons, and while his words evoke a sense of foreboding, they also offer a rational viewpoint. In the depths of some of the brain's most complex regions lurks the familiar image of the psychopath: a lone and merciless predator, a solitary species of transient, deadly allure. The moment such a description is offered, images of serial killers, rapists, and reclusive bombers begin to haunt our minds. Yet an alternative picture emerges. The arsonist who sets your house on fire may also, in an alternate reality, be the hero who courageously enters the burning building to rescue your loved ones, and the youth brandishing a knife in a darkened theater might, in the fullness of time, wield a very different kind of blade in a very different kind of theater. Such assertions may defy credulity, but they are substantiated by empirical evidence. Psychopaths are fearless, confident, charming, ruthless, and unwaveringly focused. Contrary to popular belief, however, they are not inherently violent. And psychopathy is not a simple binary, present or absent; it is a continuous spectrum with varying degrees of expression, which can be pictured like the fare zones on a subway map, each individual occupying a specific location along the line. Only a small minority, the A-listers residing in the "inner city" of this metaphor, exhibit the full range of psychopathic traits.

The psychopathic traits can be likened to the dials on a studio mixing deck, each governing a different component of the overall mix. If all the dials are cranked all the way up, the result is a soundtrack of no use to anyone. But if the mix is graded, with some dials set higher than others, such as fearlessness, focus, a lack of empathy, and mental toughness, one may have a surgeon who is significantly better than the average surgeon.
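
For readers who like the metaphor made literal, here is a toy sketch of the "mixing deck" idea: the same dials, different settings. The trait names follow the text; the numeric levels are invented purely for illustration and carry no clinical meaning.

```python
# Two hypothetical "mixes" of the same dials (0-10). What matters, on this
# view, is not whether the dials exist but how high each one is turned.
full_volume = {"fearlessness": 10, "focus": 10, "lack_of_empathy": 10,
               "mental_toughness": 10, "impulsivity": 10}

surgeon_mix = {"fearlessness": 8, "focus": 9, "lack_of_empathy": 7,
               "mental_toughness": 8, "impulsivity": 1}

def describe(mix):
    """Render the pattern of levels, rather than a single overall score."""
    return ", ".join(f"{trait} at {level}" for trait, level in mix.items())

print(describe(surgeon_mix))
```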

It should be noted that surgery is just one example of a situation in which psychopathic "talent" may prove advantageous. There are others. In 2009, I embarked on my own research initiative to ascertain whether psychopaths truly excel at decoding vulnerability, as purported by certain studies. I postulated that, contrary to the notion of being a societal burden, this ability might in fact confer certain advantages. Moreover, I speculated that there existed methodologies for studying this phenomenon.

My epiphany came when I ran into a friend at the airport. We often feel a twinge of paranoia going through airport security, I remarked, even when we are entirely innocent; but what if we really did have something to hide, something security officers were adept at detecting? To investigate, I designed an experiment. Thirty undergraduate students took part, divided into two groups: those scoring high and those scoring low on the Self-Report Psychopathy Scale. There were also five associates, none of whom exhibited psychopathic tendencies. The students' task was straightforward: they observed the associates entering through one door and exiting through another, crossing a small raised stage in the process. The catch was that they also had to identify the "guilty" associate, the one concealing a scarlet handkerchief.

To raise the stakes and provide a tangible incentive, the associate carrying the handkerchief was given £100. If the observers correctly identified the guilty party, that is, if the vote count singled out the right individual, the associate had to hand the money back; if, on the other hand, the observers erred and fingered the wrong person, the associate kept the £100. Which students would make the best "customs officers"? Would the psychopaths' predatory instincts make them well suited to the role, or would their supposed knack for identifying vulnerability prove no advantage in this context?

More than 70 percent of those who scored high on the Self-Report Psychopathy Scale correctly identified the associate smuggling the handkerchief, compared with only 30 percent of the low scorers. The ability to pick out weakness may be part of a serial killer's toolkit, but it could also be an asset at airport security.
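
To get a rough sense of how far apart those detection rates are, the sketch below runs a simple two-proportion z-test. The group sizes are my assumption (the 30 observers split evenly into 15 high scorers and 15 low scorers); the text does not report the actual split.

```python
from math import sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Standard two-proportion z statistic for comparing two success rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# roughly 70% of 15 high scorers vs roughly 30% of 15 low scorers (assumed sizes)
print(round(two_proportion_z(11, 15, 4, 15), 2))  # about 2.6
```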

Trolleyology

Joshua Greene, a psychologist at Harvard University, has observed how psychopaths unscramble moral dilemmas, as I described in my 2011 book, Split-Second Persuasion. Greene has stumbled upon an intriguing discovery: empathy is not uniform but schizophrenic in nature, coming in two distinct varieties, hot and cold.

To illustrate, consider the following conundrum (Case 1), first proposed by the late philosopher Philippa Foot. A railway trolley is hurtling down the track toward five people who are trapped on the line and unable to escape. Fortunately, a switch exists that can divert the trolley away from the five, but at a significant cost: the trolley will kill another individual on the other line. Should the switch be thrown? Most people have little difficulty deciding. While the prospect is not a pleasant one, the utilitarian option, one death instead of five, is regarded as the "least worst" choice.

However, a second scenario (Case 2), proposed by the philosopher Judith Jarvis Thomson, introduces a variation worth considering. As before, a runaway trolley is hurtling down the track toward five people. This time, the only way to avert disaster is to push a very large stranger down onto the tracks; he will die from the fall, but his considerable bulk will block the trolley and save the five. The question is: should you push him? Here, one might say, we face a genuine dilemma. Although the arithmetic of lives is identical in both cases, five saved at the cost of one, playing the game this way makes us decidedly more cautious and anxious. Why? According to Greene, the answer lies in the different regions of the brain in which we process each dilemma.

In the first case, Greene proposes, we are confronted with an impersonal moral dilemma, which engages the prefrontal cortex and the posterior parietal cortex (in particular the anterior cingulate cortex, the temporal pole, and the superior temporal sulcus), regions implicated in our objective experience of cold empathy, in reasoning and rational thought. In the second case, we face a personal moral dilemma, which engages the brain's emotional center, the amygdala, the circuitry of hot empathy.

Psychopaths, like most ordinary individuals, navigate Case 1 without difficulty. In Case 2, however, a notable distinction emerges: without hesitation, psychopaths say they would push the stranger, a stark contrast in behavioral tendencies. The discrepancy is matched by distinct neurological patterns. The pattern of neural activation in psychopaths and non-psychopaths is comparable when they contemplate impersonal moral dilemmas, but it diverges sharply once personal factors are introduced.

Imagine placing someone in a functional magnetic resonance imaging (fMRI) machine, presenting these dilemmas, and watching what happens as they navigate the moral minefield. In most people, right around the point where the dilemma shifts from impersonal to personal, significant activation appears in the amygdala and associated brain circuits, such as the medial orbitofrontal cortex: the emotional response kicks in. In individuals with psychopathic tendencies, by contrast, these circuits remain dark; the emotional response never arrives, and the personal character of the dilemma simply fails to register.

The Psychopath Mix

The question of what it takes to succeed in a given profession, to deliver the goods and get the job done, is not all that difficult when it comes down to it. Alongside the dedicated skill set necessary to perform one's specific duties, in law, in business, in whatever field of endeavor one may mention, exists a selection of traits that code for high achievement. In 2005, Belinda Board and Katarina Fritzon, then at the University of Surrey in England, conducted a survey to ascertain precisely what it is that makes business leaders tick: which key facets of personality separate those who turn left when boarding an aircraft from those who turn right. Board and Fritzon assembled three groups for comparison, business managers, psychiatric patients, and hospitalized criminals (both those who were psychopathic and those with other psychiatric illnesses), and assessed them with a psychological profiling test.

Their analysis revealed that a number of psychopathic attributes were actually more prevalent in the business leaders than in the so-called disturbed criminals: superficial charm, egocentricity, persuasiveness, lack of empathy, independence, and focus. The main difference between the groups lay in the more "antisocial" aspects of the syndrome: the criminals scored higher on lawbreaking, physical aggression, and impulsivity.

This observation aligns with the "mixing deck" hypothesis, which posits that the distinction between functional and dysfunctional psychopathy is not determined by the presence of psychopathic attributes in isolation, but rather by their levels and the manner in which they are integrated. Mehmet Mahmut and his colleagues at Macquarie University in Sydney have recently demonstrated that patterns of brain dysfunction, particularly those observed in the orbitofrontal cortex (the brain area responsible for regulating emotional input during decision-making), manifest as dimensional rather than discrete differences in both criminal and noncriminal psychopaths. This finding suggests that these two groups should not be regarded as qualitatively distinct populations but rather as occupying different positions along a continuous spectrum.

In a similar vein, I asked a class of first-year undergraduates: "Suppose you were employed as a manager in a job placement company. How would you describe a client with a profile that is ruthless, fearless, charming, amoral, and focused? To which line of work do you think they would be suited?" Their responses were remarkably insightful, spanning CEO, spy, surgeon, politician, and the military, as well as less conventional options such as serial killer, assassin, and bank robber. One successful CEO put the challenge of professional success this way: "Intellectual ability on its own is just an elegant way of finishing second." The corporate ladder, he went on, is a greasy pole, given the competitive nature of the climb, but leveraging others to achieve one's goals makes it scalable. The assertion finds support from Jon Moulton, a prominent London venture capitalist, who in a recent interview with the Financial Times named determination, curiosity, and a certain degree of insensitivity as his most valuable character traits. Determination and curiosity speak for themselves, Moulton explained; insensitivity is less obvious, yet it is a quality of significant value, for it allows one to keep one's composure and resilience when circumstances turn difficult.

The Creativity Crisis

In 1958, Ted Schwarzrock, an 8-year-old third grader, took part in a study that made him one of the "Torrance kids," a group of nearly 400 Minneapolis children who completed a series of creativity tasks newly designed by Professor E. Paul Torrance. Schwarzrock still vividly remembers the moment a psychologist handed him a fire truck and asked, "How could you improve this toy to make it better and more fun to play with?" He recalls the psychologist's enthusiasm at his responses, which included 25 proposed improvements, such as adding a removable ladder and springs to the wheels. Instances like this led the scholars to credit Schwarzrock with an "unusual visual perspective" and an "ability to synthesize diverse elements into meaningful products."

The prevailing definition of creativity among scholars in the field is the production of something both novel and useful, and the tests administered in the study reflect this definition: there is no single correct answer. The creative process involves divergent thinking, generating many unique ideas, and convergent thinking, combining those ideas to arrive at the best result.

In the 50 years since Schwarzrock and his contemporaries first took the tests, scholars, led initially by Torrance and now by his colleague Garnet Millar, have tracked the children longitudinally, meticulously documenting every patent earned, every business founded, every research paper published, and every grant awarded. They also recorded books authored, dances choreographed, radio shows produced, art exhibitions curated, software programs developed, advertising campaigns conceived, hardware innovations devised, music composed, public policies written or implemented, leadership positions held, invited lectures delivered, and buildings designed.

Torrance's tasks, which have become the gold standard in creativity assessment, do not measure creativity perfectly. What is striking, however, is how accurately Torrance's creativity index predicted these children's creative accomplishments as adults. Those who scored higher on Torrance's tasks often went on to become entrepreneurs, inventors, college presidents, authors, doctors, diplomats, and software developers. A recent analysis by Jonathan Plucker of Indiana University further confirmed the index's predictive power: the correlation with lifetime creative accomplishment was more than three times stronger for childhood creativity than for childhood IQ.
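
For readers who want to see what that comparison looks like in practice, here is a minimal sketch of the computation Plucker's reanalysis implies: correlating childhood creativity and childhood IQ with an index of adult creative accomplishment. The numbers are placeholders invented so the snippet runs; they are not data from the Torrance follow-up.

```python
import numpy as np

# Placeholder values only, NOT the Torrance follow-up data.
creativity_child = np.array([112, 98, 130, 105, 121, 90, 140, 101])
iq_child         = np.array([118, 102, 125, 110, 99, 108, 131, 95])
accomplishment   = np.array([14, 6, 22, 9, 17, 4, 30, 7])

r_creativity = np.corrcoef(creativity_child, accomplishment)[0, 1]
r_iq = np.corrcoef(iq_child, accomplishment)[0, 1]
print(f"creativity r = {r_creativity:.2f}, IQ r = {r_iq:.2f}")
```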

Like intelligence tests, Torrance's test, a 90-minute series of discrete tasks administered by a psychologist, has been given to millions worldwide in 50 languages. But there is a crucial difference between IQ and CQ scores. With intelligence there is the so-called Flynn effect: scores rise by roughly 10 points with each generation, as enriched environments make children smarter. With creativity, a reverse trend has been identified, reported here for the first time: American creativity scores are falling.

Kyung Hee Kim at the College of William & Mary discovered this in May, after analyzing almost 300,000 Torrance scores of children and adults. Kim found creativity scores had been steadily rising, just like IQ scores, until 1990. Since then, creativity scores have consistently inched downward, and the decline is most pronounced among the youngest children in America, from kindergarten through sixth grade.

The potential ramifications are pervasive, for the necessity of human ingenuity is undisputed. A recent IBM poll of 1,500 CEOs identified creativity as the No. 1 "leadership competency" of the future. Yet the imperative extends beyond sustaining the nation's economic growth: matters of national and international significance, preserving the Gulf of Mexico, establishing peace in Afghanistan, delivering health care, stand as urgent calls for creative solutions. Such solutions are believed to emerge from a vibrant marketplace of ideas, sustained by a population that perpetually contributes original concepts and remains receptive to the ideas of others.

The reasons behind the decline in U.S. creativity scores have yet to be definitively ascertained. One potential contributor is the amount of time children now spend with television and video games rather than engaging in creative activities. Another may be the lack of creativity development in our schools: as things stand, whether a child's creativity is nurtured is largely a matter of chance, with no concerted effort to develop the creativity of all children.

Other countries, meanwhile, are prioritizing creativity on a national scale. In 2008 the British government overhauled the secondary school curriculum, integrating idea generation across subjects from science to foreign languages, and pilot programs have begun using Torrance's test as a metric of progress. In a similar vein, the European Union designated 2009 the European Year of Creativity and Innovation, hosting conferences on the neuroscience of creativity, financing teacher training programs, and implementing problem-based learning curricula, for children and adults alike, designed to encourage real-world inquiry. And in China a widespread educational reform movement aims to replace the drill-and-kill teaching style that has long prevailed in the nation's schools with a problem-based learning approach. Plucker recently toured a number of such schools in Shanghai and Beijing.
He was amazed by a boy who, for a class science project, rigged a tracking device for his moped with parts from a cell phone. When faculty members at a prominent Chinese university asked him about prevailing trends in American education, Plucker described the nation's emphasis on standardized curricula, rote memorization, and nationalized assessments. His hosts laughed: "You're moving towards our former model, but we're moving towards yours, as quickly as we can."

Overwhelmed by curriculum standards, American teachers warn there is no room in the day for a creativity class, and students are fortunate to get an art class once or twice a week. Scientists, however, regard this objection as a non sequitur, a product of what the University of Georgia's Mark Runco calls "art bias." The age-old belief that the arts have a special claim to creativity is unfounded: when engineers and musicians were given creativity tasks, their results were indistinguishable, with the same high averages and standard deviations. In the brain, creativity appears to be supported by the same process regardless of domain.

Researchers have therefore proposed that creativity be integrated into regular education rather than confined to art class. The argument that creativity cannot be taught because children already have too much to learn rests on a false trade-off: creativity is not freedom from concrete facts; fact-finding and deep research are vital stages of the creative process, and current curriculum standards can still be met if teaching methods are revised.

To understand the optimal approach, it helps to know the new story emerging from neuroscience. The prevailing wisdom in pop psychology is that creativity occurs on the right side of the brain, but recent findings challenge this notion: attempting to be creative with the right hemisphere alone would leave one's ideas stranded, generated but out of reach. When confronted with a problem, people typically begin by focusing on evident facts and well-known solutions, searching for an answer within these familiar domains, a stage dominated by the left hemisphere. If no immediate solution appears, the right and left hemispheres work together: the right scans remote memories that may bear some relevance, surfacing a vast array of distant information that normally goes unnoticed, while the left hemisphere sifts it for patterns, alternative meanings, and high-level abstractions. Having made such a connection, the left brain must swiftly consolidate it before it dissipates. Attention snaps from defocused to extremely focused, and in a moment the brain binds the scattered thoughts into a novel concept that enters consciousness: the "aha!" moment of insight, often followed by a sense of pleasure as the brain recognizes the novelty of what it has created. The idea must then be evaluated. Is it worth pursuing?
Creativity requires constant modulation, alternating between divergent and convergent thinking, to integrate novel information with existing knowledge. Highly creative people are adept at running their brains in this bilateral mode, and the more creative they are, the more they dual-activate.

Can the ability be cultivated? Consider basketball: height is advantageous for professional players, but the rest of us can still develop a high level of proficiency through consistent practice. In the same way, certain innate features of the brain appear to predispose individuals toward divergent thinking, but convergent thinking and focused attention are also necessary, and these require different neural gifts. Crucially, shifting rapidly between these modes is a top-down function under one's mental control. Rex Jung, a neuroscientist at the University of New Mexico, has concluded that those who diligently practice creative activities learn to recruit their brains' creative networks more quickly and effectively; a lifetime of consistent habits gradually changes the neurological pattern.

A fine example emerged in January of this year with a study by University of Western Ontario neuroscientist Daniel Ansari and Harvard's Aaron Berkowitz, who studies music cognition. They put Dartmouth music majors and nonmusicians in an fMRI scanner and gave participants a one-handed, fiber-optic keyboard on which to play melodies, in some cases rehearsed, in others improvised. During improvisation, the highly trained music majors showed a distinctive pattern of brain activity: they deactivated their right temporoparietal junction (r-TPJ), a region that normally screens incoming stimuli for relevance. By deactivating the r-TPJ, the musicians blocked out distraction, focused entirely on the task, and let the notes flow spontaneously. Charles Limb of Johns Hopkins University has identified a comparable pattern in jazz musicians, and Austrian researchers have observed something similar in professional dancers visualizing an improvised dance. Ansari and Berkowitz now hypothesize that the same mechanism may operate in orators, comedians, and athletes who improvise in competitive contexts.

Notably, creativity training that aligns with these emerging scientific insights has proved surprisingly effective. Large-scale analyses of such programs were conducted independently by the University of Oklahoma, the University of Georgia, and Taiwan's National Chengchi University, and all three teams of scholars concluded that creativity training can have a strong effect. James C. Kaufman, a professor at California State University, San Bernardino, asserts that creativity can be taught. Successful programs commonly alternate periods of maximum divergent thinking with bouts of intense convergent thinking, progressing through several stages, and significant, lasting improvements are not typically achieved within the confines of a single weekend workshop.
When such practices are integrated into the daily academic or professional routine, however, the enhancement is demonstrable. For America's standards-obsessed schools, this is a compelling proposition: the crux of the matter lies in how students engage with the information they are already required to learn. A notable example is the National Inventors Hall of Fame School, a recently established public middle school in Akron, Ohio. In alignment with Ohio's curriculum requirements, the school's faculty devised a project for the fifth-grade students: devise methods for reducing noise in the library, whose windows faced a public area and, even when closed, let in too much noise. The students were allotted four weeks to formulate proposals.

Working in small teams, the fifth graders began with what creativity theorist Donald Treffinger calls "fact-finding": investigating how sound propagates through various materials and which materials are most effective at reducing noise. Next came problem-finding, anticipating potential challenges to ensure the designs' viability, and then idea-finding, generating as many concepts as possible. Draperies, plants, and large kites suspended from the ceiling would baffle sound; alternatively, the noise could be masked by playing the sound of a gentle waterfall. A proposal for double-paned glass evolved into the idea of filling the space between the panes with water. The subsequent phase was solution-finding: identifying the most effective, most cost-effective, and most aesthetically pleasing ideas. Fiberglass absorbed sound best but did not meet safety standards, and the teams debated whether an aquarium with fish would be more workable than water-filled panes. The teams then developed a plan of action, constructing scale models and selecting fabric samples. One key challenge was the realization that a janitor would need to be engaged to care for the plants and fish during vacations. To address such obstacles, the teams worked to persuade others to support their cause, and in some cases that support was so robust that teams chose to combine projects. The effort culminated in presenting the designs to a group that included teachers, parents, and Jim West, the inventor of the electret microphone.
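
For readers who think in code, here is a minimal sketch of that stage sequence as a data structure, with the library project as the running example. The class and field names are my own, chosen for illustration only, not part of Treffinger's published method.

```python
from dataclasses import dataclass, field

@dataclass
class CPSProject:
    """Stages of a Treffinger-style creative problem-solving project."""
    challenge: str
    facts: list = field(default_factory=list)        # fact-finding
    problems: list = field(default_factory=list)     # problem-finding
    ideas: list = field(default_factory=list)        # idea-finding (divergent)
    solution: str = ""                                # solution-finding (convergent)
    action_plan: list = field(default_factory=list)  # plan of action

library = CPSProject("Reduce noise in the school library")
library.facts.append("Sound passes easily through single-paned glass")
library.problems.append("Plants and fish would need care during vacations")
library.ideas += ["heavy draperies", "kites hung as sound baffles",
                  "masking noise with a gentle waterfall", "water-filled double panes"]
library.solution = "aquarium along the window wall"
library.action_plan.append("build a scale model and estimate per-unit cost")
print(library.solution)
```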

Throughout this process, the students exhibited the very definition of creativity, oscillating between divergent and convergent thinking and ultimately generating original and practical ideas. And, unbeknownst to them, they had mastered Ohio's fifth-grade curriculum along the way, from sound waves to per-unit cost calculations to the art of persuasive writing. School administrator Maryann Wolowiec observed that students rarely dismiss a subject as superfluous to their needs. Two weeks earlier, Principal Traci Buckner had been moved to tears upon receiving the results of the state's achievement test: the raw scores showed that, in its inaugural year, the school had already become one of the top three schools in Akron, despite open enrollment by lottery and 42 percent of its students living in poverty.

The school dedicates a significant portion of its daily schedule, up to three-quarters of the day, to project-based learning. Principal Buckner and her team meticulously design learning experiences that align with state-mandated curricula, employing a variety of creativity-based pedagogical approaches, including Treffinger's Creative Problem-Solving method, a program William & Mary's Kim has recognized as effective in fostering creativity in children.

This approach also means not rushing children straight to the correct answer. UGA's Runco recounts a moment during a family drive through California when his son asked why Sacramento, rather than San Francisco or Los Angeles, is the state capital. Instead of answering, Runco turned the question back, encouraging his son to come up with as many possible explanations as he could.

Research indicates that preschool children ask their parents approximately 100 questions a day. The incessant inquisitiveness can be exhausting, and parents often wish it would stop. By middle school, unfortunately, it has: children have largely ceased to ask questions, and this is precisely when student motivation and engagement plummet. The children did not stop asking questions because they lost interest; it is the other way around. They lost interest because they stopped asking questions.

Drawing from extensive research on the childhoods of highly creative individuals, conducted over several decades by Claremont Graduate University's Mihaly Csikszentmihalyi and University of Northern Iowa's Gary G. Gute, it is evident that highly creative adults frequently emerge from families that embody opposites. These families encourage uniqueness while providing stability, responding to children's needs while challenging them to develop skills. This dynamic fosters a form of adaptability, where in times of anxiety, clear rules can reduce chaos, and in moments of boredom, the pursuit of change becomes possible. It is within the space between anxiety and boredom that creativity flourishes.

Additionally, it is noteworthy that highly creative adults often come from backgrounds characterized by hardship. While hardship itself does not inherently lead to creativity, it does compel children to become more flexible, a quality that contributes to creativity.

In early childhood, distinct types of free play have been associated with high creativity. Preschoolers who engage in role-play, or the acting out of characters, have been shown to have higher measures of creativity. Voicing someone else's point of view has been demonstrated to help develop an ability to analyze situations from different perspectives. When playing alone, highly creative first graders may act out strong negative emotions, such as anger, hostility, and anguish; this solitary play is theorized to provide a secure environment for processing repressed thoughts and emotions. In middle childhood, the construction of paracosms emerges. Paracosms are fantasies that encompass entire alternative worlds, which children repeatedly revisit over extended periods, sometimes for months, and for which they sometimes even invent the languages spoken within those worlds. This type of play reaches its zenith at age 9 or 10 and is a strong predictor of future creativity. A Michigan State University study of MacArthur "genius award" winners revealed a notably high prevalence of paracosm creation during their childhoods.

Beginning in fourth grade, creativity shifts from a state of isolation to one of integration, wherein researching and studying become integral components of problem-solving. This transition, however, is not without challenges. As children are inundated with increasingly complex information during their schooling, they may experience a decline in creativity. However, when creative children have a supportive teacher—someone tolerant of unconventional answers, occasional disruptions, or detours of curiosity—they tend to excel. Conversely, when they lack such support, they tend to underperform, dropping out of high school or failing to finish college at high rates.

The attrition of creative individuals from educational institutions is not due to their possessing negative traits such as depression, anxiety, or neuroticism. Rather, it is due to their feeling discouraged and bored in the educational environment. In fact, those traits, though often associated with creativity, can impede it by reducing openness to new experiences and novelty-seeking behaviors. Instead, creative individuals tend to demonstrate active moods and positive affect, characterized by engagement, motivation, and openness to the world. This new perspective on creativity challenges the prevailing notion that it is an aberrant or exceptional phenomenon. Instead, it is seen as a natural aspect of normal brain function. Some scholars argue that a lack of creativity, rather than an abundance of it, is the true risk factor. In his research, Runco poses a question to college students: "Consider all the factors that could hinder your graduation from college." He then instructs them to select one of these factors and to devise as many solutions for the problem as possible. This is a classic divergent-convergent creativity challenge. A subset of respondents, akin to the proverbial Murphy, swiftly enumerates every potential impediment yet exhibits an utter absence of flexibility in formulating creative solutions. This incapacity to envision alternative approaches fosters a sense of despair. Runco's two-question survey has the capacity to predict suicidal ideation, even when accounting for preexisting levels of depression and anxiety.

In subsequent research, Runco found that individuals who excel in problem-finding and problem-solving tend to have more robust relationships, better stress management, and a greater ability to overcome challenges. A similar study of 1,500 middle school students revealed that those with high levels of creative self-efficacy exhibited greater confidence in their future success and the efficacy of their problem-solving abilities. These students were confident that their capacity for alternative thinking would serve them well, regardless of the challenges they encountered.

At the age of 30, Ted Schwarzrock hardly fit the trajectory one might expect of a participant in Torrance's longitudinal study. His early years did not exhibit any artistic inclinations, and his family's approach to his education did not prioritize or nurture his creative talents. The son of a dentist and a speech pathologist, he had been compelled to pursue medical school, an environment that he found to be stifling and characterized by frequent clashes with professors and superiors. However, he eventually discovered a means to integrate his creativity with his medical expertise by inventing new medical technologies. Presently, Schwarzrock possesses significant financial independence, having established and subsequently divested himself of three medical-products companies and being a partner in three additional ones. His contributions to healthcare have been multifaceted, ranging from the development of a portable respiratory oxygen device to the creation of skin-absorbing anti-inflammatories and insights into the mechanisms by which bacteria become antibiotic-resistant. Notably, his most recent project has the potential to reduce the cost of spinal surgery implants by 50%. Reflecting on his journey, Schwarzrock shares, "During my childhood, I did not perceive myself as a 'creative individual.' However, with the passage of time, I have come to recognize the significance of this perspective in understanding my experiences and emotions." In American society, creativity has historically been held in high esteem, yet its true nature remains elusive. As the nation's creativity scores continue to decline, the prevailing national strategy for fostering creativity remains largely unsatisfactory, relying on a passive hope for inspiration to materialize. However, the challenges confronting us today, and those that are likely to arise in the future, necessitate more proactive measures. Fortunately, the scientific community has made significant progress in identifying the mechanisms that facilitate creativity, offering us a roadmap to harness that elusive muse.

Book Creativity

The act of staring at a blank computer screen for a duration of two hours can be considered an exercise in creativity. When an author completes a literary work, inquiries are made into the amount of time devoted to the endeavor. The response to this question has proved remarkably challenging. Having authored nine books, I can state that each publication has been a distinct experience. However, there are two inquiries that are posed repeatedly, both by members of the press and by the general public. The first inquiry pertains to the origin of the idea behind the book. My most candid response to this query would be an admission of uncertainty, a lack of recollection, or a reluctance to disclose the full details. However, given the persistent nature of this inquiry, I have devised a semi-plausible response that I feel compelled to reiterate. It is conceivable that this habitual response represents the actual answer, and that it required 25 reiterations of the inquiry before I could discern this possibility. For instance, when confronted with the question of the genesis of the concept behind But What If We're Wrong?, I frequently cite my experience of watching a particular television series, Cosmos, while concurrently engaging with the biography of a specific author, Herman Melville.

While these two synchronized events may have indeed occurred, and may appear to be a plausible genesis for a workable idea, part of me suspects that I had, in fact, been thinking about the concept of collective wrongness unconsciously for 30 years, and that this serendipitous moment was merely the first occasion on which I decided to write a book about it. It is possible that what I classify as the inception of the idea is, in fact, the conclusion of that idea. However, the significance of this distinction is arguably negligible, as the primary motivation behind the inquiry appears to be the desire to understand the genesis of the concept, a topic of interest that is likely more pertinent to an individual who has dedicated themselves to the written word. The subsequent inquiry, while seemingly more straightforward, is of greater intrigue: "How long did it take you to write this?" This question engenders a multitude of ancillary inquiries, delving into the intricacies of the creative process. For instance, does the act of writing only encompass the mechanical typing, or does it extend to the conceptualization phase? Furthermore, does the act of staring at a blank computer screen for extended periods while consuming Mountain Dew qualify as creativity? If I conceive of a vague idea for a novel in 1996 but do not record any written work until autumn 2016, should the novel be considered as having taken 20 years or six months to develop? The commencement of the writing process remains ambiguous.

This issue is further complicated by the perception that my literary works are in a constant state of evolution, even after their publication and subsequent availability in bookstores, where they become impervious to alteration. I acknowledge the incongruity of this notion. The moment a book is physically assembled and made available in bookstores, it should be considered complete. The text should be frozen in time. The process should be considered finished. However, based on my experience, this is often not the case. Following the publication of a book, I am frequently subjected to an onslaught of inquiries regarding the intended implications of specific passages, my motives, the autobiographical or metaphorical nature of my work, and its potential composition under the influence of drugs. This phenomenon is further exacerbated by individuals who engage in literary criticism of my work and subsequently demand a response to their conjectures. It is important to note that these post-publication musings do not alter the content of the book itself. However, they do influence the reader's experience of the text, particularly for those who encounter these additional thoughts before, during, or after reading the book. This additional context, albeit subtle, can significantly impact the reader's perception of the book. Even if they do not agree with the content of these additional thoughts, readers cannot ignore the intellectual position they represent. Consequently, it often feels as though I am perpetually engaged in the process of writing, with each inquiry regarding a past work serving as a reminder of this continuous endeavor. If I were more prudent, I would opt to lead a more secluded life. However, I find myself unable to do so. I once encountered a promotional poster for Black Sabbath from the 1970s that read, "More good reviews than most. More bad reviews than all." I can relate to this sentiment. I am taken aback by the frequency with which my books are reviewed, despite the common claim that this should be a source of satisfaction, and I find it perplexing that the phenomenon continues to disconcert me, given that I have spent the majority of my life reviewing the works of others. It is a zero-sum game. The content of a review exerts negligible influence on book sales, with the exception of works by unknown authors. A favorable, high-profile review may temporarily increase sales by a negligible margin, while a negative, high-profile review can generate an almost identical commercial impact. My best-reviewed books have the lowest sales figures.

This phenomenon is exemplified by the case of Lisa Hilton, who, under her pseudonym LS Hilton, published an erotic thriller that received overwhelmingly negative reviews. Despite this, Hilton perceives a subtle influence of these reviews on the book's actual content, suggesting that the response to a book can shape its perceived meaning. This phenomenon, characterized by the perceived drift of a book from its initial conception as it receives increasing attention, is a source of disappointment for many writers. It is evident that publishers are driven by the objective of garnering maximum exposure for their publications. Consequently, a book is deemed successful only when the author relinquishes control over its interpretation and symbolism. A fundamental dichotomy emerges between the act of writing and the process of publishing: writing serves as a means to construct reality, while publishing is characterized by the gradual relinquishment of control over that very reality. This dynamic, however, is an inherent aspect of the publishing process, and as such, it is not subject to external critique. The disappointment experienced by the writer is, in essence, the price the publishing industry exacts for the loss of control over the text.

We Aren't the World

The objective is to effect a paradigm shift in the manner in which social scientists contemplate human behavior and culture. In the summer of 1995, a young graduate student in anthropology at the University of California, Los Angeles (UCLA) named Joe Henrich traveled to Peru to conduct fieldwork among the Machiguenga, an indigenous people who reside north of Machu Picchu in the Amazon basin. The Machiguenga had traditionally been horticulturalists who resided in single-family, thatch-roofed houses in small hamlets composed of extended families. Their diet consisted of local game and produce from small-scale farming, and they shared with their kin but rarely traded with outside groups. While the setting was fairly typical for an anthropologist, Henrich's research was not. Rather than engaging in traditional ethnographic methods, he opted to administer a behavioral experiment devised by economists. This experiment, akin to the renowned prisoner's dilemma game, aimed to ascertain whether isolated cultures possessed a comparable fundamental inclination towards fairness as those in Western societies. Henrich anticipated that this endeavor would substantiate a fundamental assumption underlying such experiments, one that serves as a foundational principle in the fields of economics and psychology: the notion that humans possess a uniform cognitive apparatus, a hypothesis that posits the presence of a shared evolved rational and psychological architecture. The experiment that was administered to the Machiguenga community was the ultimatum game. The protocol of this game is as follows: each round consists of two anonymous participants. In each round, the first player is endowed with a sum of money, typically set at $100, and is informed that they must offer a portion of it to the second player. The second player has the prerogative to either accept or decline the offer. However, a caveat is introduced: if the recipient declines, both parties are left with nothing. North Americans, who typically serve as the subjects in such experiments, usually offer a 50-50 split when on the giving end. When on the receiving end, they demonstrate a propensity to punish the other player for unequal splits that favor the offerer, rejecting the money even at a cost to themselves. In essence, Americans exhibit a tendency to act equitably towards strangers while punishing those who do not meet this standard. Among the Machiguenga, word rapidly disseminated regarding the young, square-jawed visitor from America distributing monetary funds. The stakes employed in the game with the Machiguenga were not negligible; they were equivalent to a few days' wages earned from episodic work with logging or oil companies. Consequently, Henrich had no difficulty finding volunteers. However, he encountered significant challenges in explaining the rules, as the Machiguenga perceived the game as profoundly unusual.
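To make the payoff structure of the game concrete, here is a minimal sketch in Python. It is not taken from the study itself; the endowment, offer levels, and acceptance thresholds are illustrative assumptions, chosen only to show how a round resolves and why rejecting a low offer costs both players.

```python
# Minimal ultimatum-game sketch. All numbers are illustrative assumptions,
# not data from Henrich's fieldwork.

def play_round(endowment, offer, min_acceptable):
    """Resolve one round: the proposer keeps (endowment - offer) and the
    responder gets `offer`, but only if the offer meets the responder's
    threshold. If the responder declines, both players receive nothing."""
    if offer >= min_acceptable:
        return endowment - offer, offer   # (proposer payoff, responder payoff)
    return 0, 0                           # rejection wipes out both payoffs

endowment = 100

# A responder who punishes lopsided splits (a stylized "North American" pattern).
print(play_round(endowment, offer=50, min_acceptable=30))  # (50, 50)
print(play_round(endowment, offer=15, min_acceptable=30))  # (0, 0): both lose

# A responder who sees no sense in refusing free money
# (a stylized version of the Machiguenga responses Henrich describes).
print(play_round(endowment, offer=15, min_acceptable=1))   # (85, 15)
```

The point of the sketch is simply the incentive structure: rejection is the only move that leaves the responder with less money, which is why the reaction Henrich describes among the Machiguenga, accepting almost any positive offer, is, in a narrow economic sense, the "rational" one.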

When Henrich initiated the game, it became immediately apparent that the Machiguenga's behavior diverged significantly from the typical North American norm. Initially, the offers from the first player were considerably lower. Moreover, when they were on the receiving end of the game, the Machiguenga seldom refused even the lowest possible amount. Henrich notes that the Machiguenga perceived it as absurd to reject an offer of free money. The potential implications of the unexpected results were quickly apparent to Henrich. He recognized that a substantial corpus of scholarly literature in the social sciences—particularly economics and psychology—depended on the ultimatum game and analogous experiments. A significant aspect of this research was the underlying assumption that the results revealed evolved psychological traits common to all humans, despite the fact that the test subjects were predominantly from the industrialized West. Henrich recognized that if the Machiguenga results were replicable and if similar disparities could be identified across other populations, this assumption of universality would need to be reevaluated. Henrich had anticipated that his contributions would be modest, adding a small branch to an established body of knowledge. However, his research revealed that he was actually disrupting the very foundation of the field. This prompted Henrich to question other assumptions about "human nature" in social science research, leading him to contemplate the potential for rethinking these principles when tested across diverse populations.

Subsequently, Henrich was awarded a grant from the MacArthur Foundation, enabling him to conduct his research in diverse populations. Collaborating with a dozen other researchers, he led a study of 14 small-scale societies spanning from Tanzania to Indonesia. The study revealed significant variations in the behavior of both players in the ultimatum game. In no society did researchers observe individuals who behaved purely selfishly (i.e., those who consistently offered the lowest amount and never refused a split). However, there was considerable variation in average offers across different societies. In some societies, particularly those that heavily utilize gift-giving to gain favor or allegiance, the first player often made overly generous offers exceeding 60 percent, which were frequently rejected by the second player. These behaviors were rarely observed among Americans.

Henrich's research garnered significant recognition, leading to his selection as a recipient of the U.S. Presidential Early Career Award in 2004. However, his work also led to contentious debates. During a job interview at the University of British Columbia's anthropology department a year later, Henrich recounts experiencing a hostile reception. Anthropology, the social science most interested in cultural differences, was particularly unsettled by his approach. The young scholar's methods, which involved the use of games and statistics to test and compare cultures with those of the West, were perceived as overly aggressive and intrusive by some members of the department. As Henrich recollects, "Professors from the anthropology department suggested that my work was unfavorable, and the word 'unethical' was mentioned." Consequently, rather than adhering to the prevailing norms, Henrich made a strategic shift in his academic affiliation. At the University of British Columbia, a group of influential individuals recognized the potential of Henrich's work and established a position for him, integrating elements of economics and psychology. It was within the psychology department that he encountered two individuals with whom he shared a similar intellectual perspective: Steven Heine and Ara Norenzayan. Collaborating, the three of them embarked on the endeavor of authoring a paper that aspired to fundamentally disrupt the prevailing paradigms of social scientists concerning human behavior, cognition, and culture.

A contemporary liberal arts education often pays lip service to the concept of cultural diversity, acknowledging the influence of social and cultural factors in shaping individual perspectives. There is a general consensus that cultural diversity is beneficial and that ethnocentrism is detrimental. However, beyond this consensus, the nuances become more complex. While the notion of embracing and celebrating individuals from diverse backgrounds appears self-evident, the subsequent implication—that individuals from distinct ethno-cultural backgrounds possess unique characteristics that enrich the body politic—can be more contentious. To circumvent the pitfalls of stereotyping, this implication is seldom articulated explicitly. Challenging liberal arts graduates to articulate their appreciation for cultural diversity often evokes a defensive response, with many resorting to the innocuous assertion that, in essence, we are all fundamentally similar.

A review of the social science curriculum of the last few decades reveals the factors that have contributed to the apparent sense of disorientation among modern graduates. The previous generation or two of undergraduates was primarily instructed by a group of social scientists who were actively atoning for the racism and Eurocentrism of their predecessors, albeit in different forms. A significant segment of the anthropological community engaged in introspective deliberations surrounding postmodernism, eschewing attempts at rationality and science, which were disparagingly characterized as instruments of cultural imperialism. Conversely, economists and psychologists circumvented the issue by adopting the expedient assumption that their purview was to study the human mind unencumbered by cultural influences. The prevailing consensus among these scholars was that the human brain, being genetically comparable across different populations, should also exhibit similar patterns of behavior, perception, and cognition. This assumption led to the conclusion that the study of human behavior could be confined to the population of undergraduates, as their behaviors were considered to be representative of the broader human condition. A survey of the top six psychology journals conducted in 2008 revealed the prevalence of this assumption. The survey found that more than 96 percent of the subjects tested in psychological studies from 2003 to 2007 were Westerners, with nearly 70 percent hailing from the United States alone. This indicates that 96 percent of the human subjects in these studies originated from countries that constitute only 12 percent of the global population.

Henrich's research on the ultimatum game exemplifies a modest yet growing countertrend in the social sciences, wherein researchers directly address the question of how profoundly culture influences human cognition. His new colleagues in the psychology department, Heine and Norenzayan, also contributed to this emerging trend. Heine examined the varied perceptions of the world, reasoning, and self-perception among Western and Eastern cultures. Norenzayan's research focused on the influence of religious belief on social bonding and behavior. The three researchers began to compile examples of cross-cultural research that, like Henrich's work with the Machiguenga, challenged long-held assumptions of human psychological universality. Some of that research went back a generation. For instance, in the 1960s, researchers discovered that aspects of visual perception differed from place to place. A seminal example of this literature is the Müller-Lyer illusion, which demonstrates that one's perception of line length is influenced by cultural and environmental factors. Researchers observed that Americans tend to perceive the line with the ends feathered outward as longer than the line with the arrow tips, whereas San foragers of the Kalahari exhibit a stronger tendency to perceive the lines as equal in length. A study that tested subjects from more than a dozen cultures found that Americans were at the far end of the distribution, experiencing the illusion more dramatically than all other subjects. The universality of research conducted in the 1950s by Solomon Asch, a pioneering social psychologist, has also been challenged. Asch discovered that test subjects were often willing to make incorrect judgments on simple perception tests to conform with group pressure. However, subsequent studies, performed across a diverse array of 17 societies, revealed that the influence of group pressure exhibited a wide spectrum of variability. Americans, once again, demonstrated a marked deviation from this norm, exhibiting a reduced propensity to conform to the group.

Subsequent research by Heine, Norenzayan, and Henrich revealed a plethora of studies suggesting substantial cultural variations across diverse geographical regions. These variations were observed in various domains, including spatial reasoning, the interpretation of others' motivations, categorization, moral reasoning, the delineation of the self and others, and other areas. The researchers hypothesized that these cultural differences were not attributable to genetic factors. The observed variations in the manner in which Americans and Machiguengans participated in the ultimatum game, for instance, were not attributed to differential evolutionary brain development. Instead, it was posited that Americans, unconsciously, exhibited a psychological tendency shared by individuals in other industrialized countries, one that had been refined and transmitted over millennia through increasingly complex market economies. In a world of frequent interactions with strangers, individuals learn that it pays to deal fairly and that perceived breaches of fairness warrant punishment; this learning is evident in the diverse outcomes observed across different cultures. Machiguengans, with their distinct historical background, exhibit a unique conception of fairness that is shaped by their cultural norms. In societies with a strong tradition of gift-giving, an alternative conception of fairness prevails: individuals often decline generous financial offers, having been shaped by cultural norms that teach them that accepting such gifts creates onerous obligations. Our economies are not shaped by our sense of fairness; rather, it is the other way around. The mounting body of cross-cultural research that the three researchers were compiling suggests that the mind's capacity to adapt to cultural and environmental settings is far greater than previously assumed. The most intriguing aspect of cultures might not lie in the observable practices, such as rituals, dietary preferences, and behavioral norms, but rather in the manner they influence our fundamental conscious and unconscious thought processes and perceptions.

To illustrate, the varied perceptions of the Müller-Lyer illusion are likely influenced by the differing physical environments people have inhabited over their lifetimes. American children, for instance, typically grow up in environments characterized by box-shaped rooms of varying dimensions. These carpentered corners serve as a sort of training ground for visual perception, requiring individuals to learn to interpret converging lines in three dimensions, a skill that is strikingly novel in the context of human history. When unconsciously translated into three dimensions, the line with the outward-feathered ends appears farther away, leading the brain to perceive it as longer. It is noteworthy that individuals who have spent considerable time in natural environments devoid of such carpentered corners tend to exhibit diminished sensitivity to this visual illusion. As the three researchers continued their work, they observed a recurrent pattern wherein one group of individuals appeared to deviate notably from other populations in terms of perceptions, behaviors, and motivations, often occupying the far end of the human bell curve. This observation led them to title their paper "The Weirdest People in the World?"
In this paper, the authors defined "WEIRD" to mean both unusual and Western, Educated, Industrialized, Rich, and Democratic. The researchers concluded that it is not merely the Western world's cultural preferences that set it apart. Rather, they argued, the unique way in which we perceive ourselves and others, and our distinctive understanding of reality, distinguishes us from other humans on the planet, as well as from the vast majority of our ancestors. A close examination of the data revealed that Americans often stood out as particularly unconventional, leading the researchers to conclude that "American participants are exceptional even within the unusual population of Westerners—outliers among outliers." This finding suggested that social scientists might have selected a less than ideal population from which to draw broad generalizations. The researchers had, in effect, been studying penguins while believing that their findings were applicable to all birds.

I recently had the opportunity to discuss the reception of their unconventional paper, which was published in the esteemed journal Behavioral and Brain Sciences in 2010, with Henrich, Heine, and Norenzayan over dinner at a modest French restaurant in Vancouver, British Columbia. The three researchers are relatively young, particularly within the context of academia, and they are family men of good humor. They recounted their apprehension as the publication date drew closer. The crux of their argument was that a considerable portion of the prevailing assumptions among social scientists concerning fundamental aspects of human cognition might be confined to a relatively narrow segment of the human population. This audacious proposition, which effectively challenged substantial bodies of research, left the researchers bracing for potential social censure within their respective academic domains. Henrich acknowledged the gravity of their situation, stating, "We were scared. We were warned that a lot of people were going to be upset."

Norenzayan interjected, "We were told we were going to get spit on." Henrich concurred: "Yes, that we'd go to conferences and no one would sit next to us at lunchtime." Notably, they appeared less concerned about their use of the pejorative acronym WEIRD to describe a significant segment of humanity, although they admitted that they could only have applied it to their own group. As Henrich put it, "The only people we could have called weird are right here at this table."

Nonetheless, the use of the term "weird" to describe the Western mind, and more specifically, the American mind, raised questions about whether it implied that our cognitive processes were not just different but somehow malformed or distorted. In their paper, the trio highlighted cross-cultural studies that suggest the "weird" Western mind is the most self-aggrandizing and egotistical on the planet, as we are more likely to promote ourselves as individuals rather than as part of a group. WEIRD minds are also more analytic, with a tendency to focus on a single object of interest rather than understanding that object in the context of its surroundings.

The WEIRD mind also appears to be unique in terms of its understanding of and interaction with the natural world. Studies show that Western urban children grow up in environments that are isolated from the natural world, with the result that their brains never form a deep or complex connection to it. Studying children from the U.S., researchers have proposed a developmental timeline for what is called "folkbiological reasoning." These studies posit that children continue to project human qualities onto other animals until approximately seven years of age, at which point they begin to understand that humans are one of many animal species. However, compared to children in Yucatec Maya communities in Mexico, Western urban children appear to reach this understanding late. Children who have constantly interacted with the natural world are much less likely to anthropomorphize other living things into late childhood.

This phenomenon can be attributed to the fact that individuals residing in Western, educated, industrialized, rich, and democratic (WEIRD) societies rarely have opportunities to engage with animals other than humans or domestic pets. Consequently, their understanding of the natural world tends to be rather simplistic and cartoonish in nature. As the report notes, studying the cognitive development of folk biology in urban children would be analogous to studying the physical growth of malnourished children.

During our dinner conversation, I acknowledged to Heine, Henrich, and Norenzayan that the concept of perceiving reality through a distorted cultural lens was disconcerting. This notion gave rise to a range of metaphysical inquiries: Is my thinking so atypical that understanding other cultures is challenging? Is it possible to shape my own psyche, or that of my offspring, to be less WEIRD and more adept at thinking like the rest of the world? Would such a change in my thinking make me happier? Henrich expressed mild concern that I was taking this research personally, stating that his intention was not for his work to be interpreted as postmodern self-help advice. He clarified, "Our primary interest lies in these questions themselves, not in their practical applications."

The three researchers emphasized that their objective was not to assert the superiority or inferiority of any specific cultural psychology, but rather to underscore the necessity of expanding the study's sample pool to encompass a more diverse range of human behaviors and cognitive processes. Despite these reassurances, I found myself unable to ignore a subtle message embedded within their research. For instance, their assertion that "weird" children develop their understanding of the natural world in a "culturally and experientially impoverished environment," and that they are, in this way, the equivalent of "malnourished children," is difficult to interpret positively.

The task that Henrich, Heine, and Norenzayan are posing for social scientists is not an elementary one: the endeavor to elucidate the influence of culture on cognition will be a formidable task. Cultures are not monolithic entities; they can be endlessly parsed. Ethnic backgrounds, religious beliefs, economic status, parenting styles, and rural versus urban or suburban upbringing are just a few of the numerous cultural differences that can influence our conceptions of fairness, how we categorize things, our method of judging and decision making, and our deeply held beliefs about the nature of the self, among other aspects of our psychological makeup. The impact of these fine-grained cultural differences on our thinking is just beginning to be explored. Recent research has demonstrated that individuals from "tight" cultures, characterized by stringent norms and minimal tolerance for deviant behavior (e.g., India, Malaysia, and Pakistan), exhibit superior impulse control and heightened self-monitoring abilities compared to those from other regions. Moreover, studies have revealed that males raised in the honor culture of the American South experience significantly more pronounced surges of testosterone in response to insults compared to those from the North. Research published late last year has also suggested the presence of psychological differences at the city level. In comparison to San Franciscans, Bostonians' sense of self-worth is more contingent on community status and financial and educational achievement. As Norenzayan notes, a cultural difference need not be substantial to be significant, underscoring the need to avoid oversimplifying complex cultural phenomena.

According to Norenzayan, contemporary psychologists have exhibited a tendency known as "physics envy," which he characterizes as a misguided aspiration to transcend the content of individuals' thoughts in order to focus exclusively on the underlying universal hardware. Norenzayan critiques this approach, stating that it is a flawed method for studying human nature because the content of our thoughts and their process are inextricably intertwined. This suggests that when studying human cognition, it is essential to consider the role of cultural ideas and behaviors, as they significantly influence thought processes. This novel approach proposes a shift in the focus of psychological research, emphasizing the examination of cultural content first, followed by an investigation of cognition and behavior. A notable illustration of this paradigm shift can be found in Norenzayan's recent research on religious belief. When Norenzayan embarked on his academic journey in 1994, having relocated from Lebanon to America four years prior, he was eager to explore the influence of religion on human psychology. He recounts, "I would meticulously browse through textbooks and indexes, searching for the term 'religion,' only to find it consistently absent. This was a source of great astonishment." This led him to question the absence of a comprehensive exploration of religion in psychology, given its pervasive influence in shaping individual and collective perceptions. Norenzayan's research has focused on the role of entrenched religious beliefs in the formation of large-scale societies. He has proposed a hypothesis suggesting a potential correlation between the proliferation of religions that adhere to the concept of "morally concerned deities"—that is, deities who exhibit concern for the moral behavior of individuals—and the emergence of large cities and nations. The hypothesis posits that the development of cooperation within large groups of unknown individuals may have been facilitated by the shared belief that an omnipotent being was perpetually observing one's actions.

The question then arises as to whether large-scale societies require religion for their development, and if so, whether these societies can continue to exist without it. Norenzayan has noted that certain parts of Scandinavia, where atheism is prevalent, appear to be thriving. This suggests that these societies may have advanced up the religious ladder, only to discard it. Alternatively, the notion of an unseen entity perpetually observing one's actions may persist in our culturally influenced thought, even in the absence of religious belief.

Norenzayan poses the question of why, if religion played such a pivotal role in human psychology, researchers have not devoted more attention to this field. He suggests that experimental psychologists might be among the least religious of academics, closely followed by biologists. This tendency, he contends, could lead to a self-perpetuating cycle of academic discussion in which researchers, largely talking among themselves, reinforce the perception that religion is of little consequence. This perspective is further evidenced by the preponderance of prominent theorists who, over the past century, have predicted the imminent obsolescence of religion as a societal phenomenon. However, the world continues to demonstrate its profound religiosity. The apprehension of ostracism expressed by Henrich, Heine, and Norenzayan following the publication of the WEIRD paper has proved unwarranted. The paper has garnered a predominantly positive response, with numerous colleagues and peers expressing their conviction that it will catalyze profound transformations within their respective fields. Richard Nisbett, a renowned psychologist at the University of Michigan, has articulated his unreserved confidence in the paper's potential to disrupt the landscape of the social sciences. He attributes its impact to its comprehensive nature and its audacious assertion of key principles.

Remarkably, the publication prompted a notable shift in perspective, with academics from various disciplines confessing the shortcomings of their own fields. Two researchers specializing in brain imaging from Northwestern University, for instance, contended that the emerging field of neuroimaging had repeated the same oversight as psychology, noting that 90 percent of neuroimaging studies had been conducted in Western countries. Researchers in the field of motor development similarly suggested that their discipline's body of research ignored how different child-rearing practices around the world can dramatically influence stages of development. Two psycholinguistics professors suggested that their colleagues had also made the same mistake: blithely assuming human homogeneity while focusing their research primarily on one rather small slice of humanity.

The crux of the challenge posed by the WEIRD paper, therefore, is not merely a call to augment cross-cultural studies in experimental human research; rather, it is an interrogation of the Western conception of human nature itself. For some time now, the remarkable success of humans in adapting to diverse environments across the globe has predominantly been attributed to our substantial brain size, which enables us to learn, improvise, and solve problems.

Henrich has proposed an alternative, termed the "cultural niche" hypothesis, challenging the prevailing "cognitive niche" hypothesis. Henrich's argument is as follows: the sheer volume of knowledge in any given culture far exceeds the individual capacity to fully comprehend it. He posits that individuals draw upon this vast cultural repository of knowledge by mimicking the behaviors and thought processes of their peers, often subconsciously. This phenomenon, as Henrich notes, is exemplified by behaviors such as the shaping of tools, adherence to food taboos, and conceptualizations of fairness. These behaviors are not necessarily the result of individual reasoning and adaptation; rather, individuals trust and absorb the accumulated knowledge embedded in their cultural context. Henrich's research also highlights the cultural nuances of pregnancy and breastfeeding practices. In his study, he observed that Fijian women exhibited a tendency to avoid certain potentially toxic fish during these periods. However, many of these women lacked awareness of the risks or possessed misguided beliefs regarding the safety of these fish. Despite this gap in individual understanding, by emulating this culturally adaptive behavior, they ensure the safety of their offspring. According to these researchers, a distinctive facet of human psychology is that our large brains are adapted to allow local culture to guide us through life's complexities. The implications of this novel perspective on the human mind are yet to be fully explored. Henrich proposes that his research on fairness may initially be applicable to individuals engaged in international relations or development. Henrich emphasizes that individuals are not merely "plug and play" entities, suggesting that one cannot simply transplant a Western court system or form of government into another culture and expect it to function identically. Similarly, those seeking to utilize economic incentives to promote sustainable land use must understand local concepts of fairness to influence behavior in a predictable manner.

The notion of a culturally shaped mind meets particular resistance in the West, where the self is presumed to be independent of others; that very preconception is part of what the research calls into question. A significant body of research in cultural psychology, particularly the comparison of Western and Eastern concepts of the self, underscores this challenge. Heine's research has been significantly influenced by a seminal paper published in 1991 by Hazel Rose Markus, of Stanford University, and Shinobu Kitayama, who is currently affiliated with the University of Michigan. In their paper, Markus and Kitayama proposed that different cultures nurture divergent perspectives on the self, particularly along a single axis: some cultures perceive the self as independent from others, while others conceptualize the self as interdependent. The interdependent self, which is more prevalent in East Asian countries such as Japan and China, is interconnected with others within a social group and prioritizes social harmony over self-expression. In contrast, the independent self, which is most prominent in America, emphasizes individual attributes and preferences, perceiving the self as separate from the group.

Heine posits that the development of the Western brain, characterized by its tendency to perceive the self as separate from others, may be associated with differences in reasoning patterns. In contrast to the global norm, Westerners, particularly Americans, predominantly engage in analytical reasoning, a tendency that involves breaking complex concepts into their constituent parts for systematic examination. A parallel example can be found in the observation that, while viewing the same cartoon of an aquarium, Japanese individuals tend to recall details of the background, such as the seaweed and bubbles, whereas Americans tend to focus on the moving fish. In a different experiment, known as the "rod and frame" task, Americans demonstrate a higher level of performance. This task involves judging whether a line is vertical despite the presence of a skewed frame around it. Success on the task is consistent with the tendency among Americans to perceive themselves as distinct from their group, a perspective that may be influenced by historical and cultural factors. Heine and others have proposed that these observed differences may be manifestations of long-standing cultural activities and trends, dating back millennia. The perception of independence or interdependence in oneself might be shaped by historical conditions, such as whether one's ancestors engaged in rice farming, a practice that necessitated collective labor and cooperation, or animal herding, a pursuit that promoted individualism and aggression. Heine further cites Nisbett's (2001) research, which utilizes a data set of 2,500 years of Greek and Chinese philosophical writing to support the hypothesis that the dichotomy between analytic and holistic reasoning styles is evident in these texts. These psychological trends and tendencies, therefore, may persist for generations, even hundreds of years after the activity or situation that initially gave rise to them has either disappeared or undergone fundamental change.

The failure of Western researchers to adequately consider the interplay between culture and cognition can be partially attributed to their own analytic and individualistic mindsets, themselves shaped by culture. The tendency to reduce human psychology to hardwiring is not surprising when one considers the type of mind that designed the studies: taking an object out of its context is characteristic of the analytic reasoning style prevalent in the West. Similarly, the impact of culture may have been underestimated because the idea that individuals are carried along by larger historical currents and unconsciously mimic the cognition of others challenges the Western conception of the self as independent and self-determined. The historical missteps of Western researchers can be seen as the predictable consequences of the WEIRD mind doing the thinking.

Happiness Is A Glass Half Empty

Contemporary society has adopted a pervasive mantra of positivity, emphasizing the importance of maintaining a positive outlook and focusing on achieving success. However, a critical examination reveals that this approach may not necessarily lead to true contentment. A poignant memorial situated in an unremarkable business park near Ann Arbor, Michigan, offers a counterintuitive perspective. Nothing about its exterior hints at what lies within, and members of the public rarely visit; yet the place serves as a stark reminder of humanity's failed aspirations. The structure resembles a vast and haphazardly organized supermarket, with grey metal shelves crammed with thousands of packages of food and household products along every aisle. What sets these displays apart becomes clear only on closer inspection: none of the items on the shelves can be found in an ordinary supermarket. Each is a "failure," a product removed from sale after a brief period because consumers did not want it. Within the realm of product design, the storehouse—operated by GfK Custom Research North America—has acquired a moniker: the Museum of Failed Products. This institution serves as a poignant testament to the unvarnished reality of consumer capitalism, offering a counterpoint to the pervasive culture of modern marketing, which is characterized by its relentless pursuit of success and its upbeat disposition. In essence, it is a microcosm of the consumer landscape's forgotten experiments: Clairol's A Touch of Yogurt shampoo finds company alongside Gillette's equally unpopular For Oily Hair Only, along with a bottle of Pepsi AM Breakfast Cola, launched in 1989 and withdrawn soon afterward. The museum houses a collection of discontinued brands, including caffeinated beer, TV dinners bearing the logo of the toothpaste manufacturer Colgate, self-heating soup cans that had a regrettable tendency to explode in customers' faces, and packets of breath mints that were withdrawn from sale due to their resemblance to the small packages of crack cocaine dispensed by America's street drug dealers. It also houses microwaveable scrambled eggs, pre-scrambled and sold in a cardboard tube with a pop-up mechanism for easier consumption in the car.

The Japanese term mono no aware, which translates as "the pathos of things," captures a sense of bittersweet melancholy at life's impermanence, as exemplified by the beauty attributed to cherry blossoms or human features precisely because of their transient nature. Something similar colours the regard of Carol Sherry, the museum's proprietor and an understatedly stylish GfK employee, for the cartons of Morning Banana Juice under her supervision and for Fortune Snookies, a short-lived line of fortune cookies for dogs. In Sherry's perspective, every failure embodies a poignant narrative, a testament to the struggles of designers, marketers, and salespeople. She is acutely aware that the success or failure of products like A Touch of Yogurt can have far-reaching consequences, including for the livelihoods of the people behind them. She expresses particular empathy for the developer who inadvertently created breath mints that resembled crack cocaine, stating, "I feel really sorry for the developer on this one." She notes that she has met him and wonders why he would ever have spent time on the streets or in the drug culture; how could he have known what the mints resembled? She then shakes her head in disbelief. "These are real people who sincerely want to do their best, and then, well, things happen."

Museum of Failed Products

The Museum of Failed Products was itself a kind of accident, albeit a happier one. Its creator, a now-retired marketing man named Robert McMath, merely intended to accumulate a "reference library" of consumer products, not failures per se. Beginning in the 1960s, he began acquiring and preserving a sample of each novel item he encountered. The collection rapidly exceeded the capacity of his office in upstate New York, necessitating a relocation to a converted granary. Subsequently, GfK bought him out and relocated the entire collection to Michigan. What McMath had not anticipated was the three-word truth that would propel his career: most products fail. According to some estimates, the failure rate is as high as 90%. By collecting new products indiscriminately, McMath ensured that his collection would consist predominantly of unsuccessful ones. The most striking aspect of the museum, however, is that it exists as a profitable business at all. It is reasonable to assume that a reputable consumer product manufacturer would possess its own collection, meticulously curated to prevent the recurrence of past missteps. However, the consistent influx of new arrivals at Sherry's door serves as a testament to the rarity of such a practice. Product developers are so engrossed in their future aspirations, and so reluctant to allocate time or energy to contemplating past failures within their industry, that they only belatedly recognize the value of accessing GfK's collection. Perhaps the most noteworthy aspect of this phenomenon is that many designers have visited the museum to examine, or have been surprised to discover, products their own companies had created and subsequently abandoned. This apparent reluctance to confront past failures can be attributed to a general aversion to dwelling on the negative aspects of business operations. Failure is pervasive, yet individuals often choose to avoid acknowledging its presence. The prevalent contemporary approaches to happiness and success, including popular philosophies that urge a focus on the positive, are founded on the notion that the route to success lies in concentrating on positive outcomes. However, since the earliest philosophers of ancient Greece and Rome, a dissenting perspective has proposed an alternative viewpoint: that our relentless pursuit of happiness, and of the achievement of specific objectives, is precisely what engenders misery and undermines our efforts. This perspective asserts that our incessant endeavor to eliminate or disregard the negative, such as insecurity, uncertainty, failure, and sadness, is what engenders those feelings of insecurity, anxiety, and unhappiness in the first place.

This perspective, however, does not necessarily culminate in despondency. Instead, it proffers an alternative approach, a "negative path" to happiness, entailing a radical shift in perspective toward the very elements that most individuals endeavor to evade throughout their lives. This entails the cultivation of a capacity to embrace uncertainty, accept insecurity, and become acquainted with failure. The prevailing notion posits that embracing a more receptive stance towards negative emotions, or at the very least, a willingness to confront them, is a prerequisite for true happiness. In the realm of self-help literature, the most prominent manifestation of this preoccupation with optimism is the technique known as "positive visualisation," which advocates for the mental rehearsal of positive outcomes as a means to enhance their likelihood of materialization. Indeed, a tendency to perceive the bright side of life may be so deeply intertwined with human survival that evolution has shaped us to adopt this perspective. In her book The Optimism Bias, the neuroscientist Tali Sharot compiles mounting evidence that a well-functioning mind may be built so as to perceive the odds of positive outcomes as greater than they really are. Research indicates that individuals who do not experience depression typically possess a less precise and more optimistic perspective on their capacity to influence events compared to those grappling with depression. However, this perspective is not without its drawbacks, and they extend beyond the disappointment experienced when circumstances do not unfold as anticipated. Over the past few years, the German-born psychologist Gabriele Oettingen and her colleagues have conducted a series of experiments aimed at elucidating the truth about "positive fantasies about the future." The results of these experiments are striking: individuals who focus intently on positive future outcomes appear to experience a reduction in motivation to achieve those very goals. For instance, experimental subjects who were encouraged to envision a high-achieving week at work often demonstrated less success in meeting those expectations. In a particularly ingenious experiment, Oettingen induced mild dehydration in participants. Subsequently, they were separated into two groups: one was instructed to envision the consumption of an icy, refreshing beverage, while the other engaged in a different exercise. The individuals who engaged in positive visualization exhibited a substantial decline in energy levels, as measured by blood pressure. Contrary to the anticipated enhancement in motivation to hydrate, the subjects responded by relaxing. This suggests a subconscious misinterpretation of imagining success as having already attained it.

While this does not mean that negative visualization is simply superior, it aligns with a fundamental tenet of Stoicism, a philosophy that originated in Athens shortly after Aristotle's death and dominated Western conceptions of happiness for nearly five centuries.

According to the Stoics, the ideal state of mind was tranquility, which differed from the excitable cheer typically associated with positive thinking. Achieving tranquility did not mean chasing enjoyable experiences; rather, it required cultivating a kind of calm indifference toward one's circumstances. One way to cultivate it, the Stoics proposed, was to turn toward negative emotions and experiences and examine them closely rather than shun them.

The Stoics further observed that people habitually misattribute the source of their emotions, blaming their distress on particular people, situations, or events. When a colleague's incessant chatter irritates you, it is natural to assume the colleague is the source of the irritation; when a cherished relative falls ill, it is understandable to see the illness as the root of the pain. Yet a closer examination of experience, the Stoics argued, forces the conclusion that neither of these external events is inherently "negative." Indeed, nothing outside one's own mind can properly be called negative or positive at all: the source of the suffering lies not in the external world but in one's beliefs about it. The colleague's behavior is not irritating in itself; it becomes a problem because of your belief that finishing your work without interruption is paramount. Even the relative's illness is only bad in light of your belief that it is a bad thing for your relatives to be ill. After all, vast numbers of people fall ill every day without causing you the slightest distress, which underscores that external events, in themselves, carry no negative charge.

This might sound like an argument for simply swapping pessimistic beliefs for sunnier ones. The Stoics, however, counseled the opposite: Seneca repeatedly advised the active contemplation of worst-case scenarios, staring them in the face. The reasoning, in part, is that ceaseless optimism leaves one more badly shaken when adversity finally arrives; but the Stoics also held that envisioning the worst carries benefits of its own. Psychologists have long recognized "hedonic adaptation," the predictable and frustrating way in which any new source of pleasure, whether as minor as a new electronic gadget or as significant as a marriage, is swiftly relegated to the backdrop of our lives, as one of the greatest enemies of human happiness. We grow accustomed to our pleasures, and they cease to deliver the same joy. Consciously reminding yourself that you could lose the things you currently enjoy counteracts this adaptation: it moves the object or experience from the background of your life back to the foreground, where it can once again give pleasure.

A second, subtler advantage of this kind of negative thinking is its power as an antidote to anxiety. The usual way of quelling worries about the future is to seek reassurance, to persuade yourself that everything will turn out fine. But reassurance is a double-edged sword. It can help in the short term, yet, like all forms of optimism, it demands constant reinforcement: reassure an anxious friend once and they will likely come back for more. Worse, reassurance can actually deepen anxiety.
When you tell a friend that the worst-case scenario they fear probably will not happen, you implicitly reinforce their belief that it would be catastrophic if it did. The coil of their anxiety tightens rather than loosens. The Stoics observed that events frequently fail to turn out as one would hope, but also that when things do go wrong, they rarely go as badly as feared. Losing a job is unlikely to end in starvation or death; the end of a relationship does not condemn you to a life of unrelenting misery. Such fears rest on irrational assumptions about the future. As the Stoic-influenced psychologist Albert Ellis once noted, "The worst thing about any future event is usually your exaggerated belief in its horror." By vividly simulating potential failures in the mind, you can turn vague fears into manageable concerns. Happiness reached through positive thinking tends to be fleeting and fragile; negative visualization, by contrast, fosters a more stable and enduring sense of calm.

Back at the Museum of Failed Products, it seems clear that a particular downside of the positive-thinking culture, an aversion to confronting failure, is responsible for many of the products on its shelves. Each one must have passed through a series of meetings at which nobody realized, or nobody was willing to say, that it was doomed. This silence may reflect a collective aversion to bad news, or the reluctance of people in authority to hear it. Even marketers who can see where a product is headed may face a perverse incentive to keep investing in it, since doing so can prop up sales for a while and protect their professional reputations. By the time the product's true condition becomes obvious, the original developers have often moved on to other projects or other firms. Little energy is spent investigating the underlying causes of failure, and everyone involved slips into a conspiracy of silence that quietly perpetuates the problem.

A further obstacle to thinking about failure, whether one's own or other people's, is the profound distortion it introduces into our understanding of what produces success. Consider the prevalence of autobiographies such as the 2006 book by the multimillionaire publisher Felix Dennis, How To Get Rich: The Distilled Wisdom Of One Of Britain's Wealthiest Self-Made Entrepreneurs. Such books are entertaining, but they perpetuate a misleading picture, insisting that wealth follows from qualities such as stubbornness and a willingness to take risks. Research by the Oxford management theorist Jerker Denrell suggests that these traits are just as characteristic of the spectacularly unsuccessful; it is simply that the unsuccessful rarely write autobiographies. Memoirs by people who took bold risks and then failed are vanishingly rare.

Fortunately, developing a healthier relationship with failure may be easier than it sounds. The Stanford University psychologist Carol Dweck's research suggests that our feelings about failure are largely shaped by our beliefs about talent and ability, and that those beliefs can be changed, improving our outlook in the process. Dweck places each of us somewhere along a continuum defined by our "implicit view," the unspoken attitude we hold about what talent is and where it comes from. People with a "fixed theory" assume that ability is innate; those with an "incremental theory" believe it is developed through challenge and diligent effort. If you strongly dread failure, you probably sit near the "fixed" end of Dweck's continuum. Fixed-theory people treat challenges as occasions to demonstrate their innate abilities, so failure reads as proof of inadequacy. A familiar example is the young athlete who, encouraged to believe in his natural gifts, fails to train seriously and so squanders his potential.
The unspoken assumption is that talent speaks for itself, so effort should not be necessary. Those who hold the incremental theory, by contrast, approach challenges as opportunities to develop their abilities. Failure, on this view, is not evidence of inadequacy but evidence of pushing one's limits and striving to grow. The dynamic resembles weight training, in which muscles grow by being worked to the edge of their current capacity, tearing and then healing back stronger. Among weightlifters, "training to failure" is not an admission of defeat but a deliberate strategy.

Dweck's studies suggest that people are not locked into one mindset or the other. Simply learning about the distinction between fixed and incremental thinking can prompt the shift; it also helps to call the distinction to mind when facing a setback, such as a poor exam result or an awkward social encounter. And for parents who want to foster an incremental mindset in their children, Dweck recommends praising effort rather than intelligence, since praise for intelligence reinforces a fixed mindset and makes children more reluctant to risk failure. The incremental mindset is the one more likely to lead to sustained success, but the deeper point is that adopting it improves well-being regardless of its effect on future achievement. It is a win-win proposition, provided you are genuinely willing to accept the possibility of failure.

The champions of positivity and optimism seem to find it hard to accept that there might be contentment in embracing failure as failure, rather than merely as a strategy for achieving success. Yet, as Natalie Goldberg, an author influenced by Zen Buddhism, argues, there is an openness and honesty in failure, a naked confrontation with reality, that grander achievements can lack. Perfectionism is a trait many people seem privately or not-so-privately proud of, as though it were not really a flaw at all. At its core, however, it is a fear-driven striving to avoid the experience of failure at any cost, and at the extreme it makes life exhausting and permanently stressful. Researchers have found that perfectionism correlates more strongly with suicide than feelings of hopelessness do. To fully embrace the experience of failure, rather than merely tolerating it as a stepping stone to glory, is to give up the constant strain of never being allowed to make a mistake, and to relax.