
About the Book
This book is based on the peer-reviewed scientific work:
Theory of Stupidity: A Formal Model of Cognitive Vulnerability (General Theory of Stupidity). Igor Sergeyevich Petrenko (December 2025).
Original Publication
Journal of scientific articles “Science, Technology and Education”
Journal Identifier: DOI-PREFIX — 10.20861
Published in the proceedings of: CXI International Scientific-Practical Conference “International Scientific Review of the Problems and Prospects of Modern Science and Education”. Boston, USA, December 2025.
This book expands and develops the model presented in the article, adding Cultural Intelligence (CQ), practical protocols, and detailed case analysis.
Central Idea
This work introduces a formal mathematical model of “Stupidity” (G) — not as an absence of intelligence, but as an architectural cognitive vulnerability. Stupidity is defined as a system failure that occurs when the demands for information filtering exceed the available attention control resources.
The book is intended for cognitive science researchers, practitioners in education and technology, as well as anyone who wants to understand the mechanisms of irrational behavior in the information age.
Preface
In this book, “stupidity” is viewed not as a deficit of intelligence, but as a functional cognitive vulnerability: a state in which an agent loses decision-making agency when the demands for signal filtering and social conformity exceed available regulatory resources.
In the 21st century, humanity faces a paradox: access to knowledge has become virtually instantaneous, yet the quality of decision-making — both at the individual and societal levels — continues to degrade. Traditional psychometrics, focusing on general intelligence (the g-factor), fails to explain why highly intelligent people fall victim to manipulation, believe in absurd conspiracy theories, or make catastrophic business and political decisions.
This book offers a new perspective on the nature of human error. Stupidity is treated here not as a lack of mind, but as a functional cognitive vulnerability (G) arising from the interaction between the brain’s biological limits and an aggressive information environment.
Relevance and Research Goals
Why do we need another theory of irrationality? Existing models (from Kahneman to Stanovich) brilliantly describe specific cognitive biases but rarely view them as a dynamic system operating under load. The research goals are formulated as follows:
1. Formalization: Translate the intuitive concept of “stupidity” into the language of measurable variables and cybernetic models.
2. Attention Crisis: Investigate how, in the modern attention economy, the cognitive resource (A) becomes the primary commodity for which algorithms compete, fragmenting human thought.
3. Social Polarization: Analyze new forms of group pressure (S) in the digital environment that reinforce conformity and suppress critical thinking (C).
4. Practice: Propose tools for “cognitive hygiene” and the design of systems resilient to the human factor.
Object and Subject of Study
The object of study is the human agent’s decision-making process under conditions of high information entropy. The subject is the mechanisms of systemic cognitive failures, which in this work are combined into the integral indicator G (Gullibility/Generic Stupidity factor).
Guide to the Book
The book is structured to move from abstract models to concrete practical solutions:
Part I (Theory) lays the foundation. It breaks down the variables of the G model: intelligence, errors, motivations, attention, noise, and social pressure.
Part II (Mechanics) analyzes how these variables interact in real time: how stress “shuts down” intelligence and why digital noise renders people suggestible.
Part III (Recovery and Resistance) is dedicated to applying the model in education, analyzing historical cases, and predictive modeling.
Part IV (Cognitive Immunity) offers practical protocols: from attention hygiene to environmental design and diagnostics.
Part V (Perspectives and Synthesis) examines the future of research, the impact of AI, and summarizes the findings.
Limitations and Assumptions
The work consciously uses simplified mathematical metaphors to make the model accessible for analysis. The G model does not claim to be the final truth in neurobiology but serves as an effective instrument (a map) for navigation in a complex world of information overload.
Having outlined the boundaries of the study, we can proceed to its substantive part. The path to understanding the nature of stupidity begins not with a blank slate, but with an analysis of why this phenomenon has eluded strict formalization for centuries. Chapter 1 examines how perceptions of human irrationality have changed — from the philosophical treatises of the past to modern discoveries in cognitive science — and determines exactly what was missing in this mosaic to create a coherent picture.
PART I: THEORY OF STUPIDITY
Chapter 1. Literature Review: From “Stupidity” to the Vulnerability Metric G
The history of studying human irrationality resembles an attempt to map fog: a phenomenon obvious to every observer yet elusive whenever one tries to pin it down. Over millennia, the term “stupidity” has drifted in complex ways: from the theological sin of ignorance in medieval treatises to a moral-ethical problem in Enlightenment philosophy, then to a clinical diagnosis in early psychiatry, and finally to an object of study in 21st-century cognitive science.
Despite this long journey, the concept long remained hostage to everyday language: a label for condemning others’ mistakes rather than a tool for understanding them. Until recently, science lacked a unified optical system capable of assembling fragmented knowledge about thought failures into a coherent picture. This chapter surveys that intellectual heritage, from ancient dialogues to modern neurobiological discoveries, to highlight the key mechanisms that made it possible to formulate the functional cognitive vulnerability model.
The Concept of “Stupidity” in the History of Philosophy
For ancient philosophy, particularly for Plato, stupidity (amathia) was not merely an absence of knowledge. It was “ignorance of one’s own ignorance” — a dangerous state of the soul where a person believes they know what they actually do not. In Plato’s Protagoras, stupidity is opposed to wisdom not as emptiness to fullness, but as a false structure to a true one.
In the Renaissance, Erasmus of Rotterdam, in The Praise of Folly, proposed a paradoxical view: he presented Folly as the foundation of human happiness and social harmony. According to Erasmus, without a certain degree of self-deception and uncritical acceptance, life would become unbearable and social bonds impossible. This is a crucial observation for the proposed model: cognitive vulnerability is often a side effect of adaptation mechanisms.
The Enlightenment returned rationality to its pedestal. Immanuel Kant defined stupidity as a lack of judgment (Urteilskraft). In his view, one can be highly educated, having mastered a mass of rules and laws, yet remain “stupid” when applying them to specific life situations. This is an important distinction: intelligence as a storehouse of tools versus intelligence as the skill to use them.
A special place in modern philosophical reflection is occupied by Carlo Cipolla’s “The Basic Laws of Human Stupidity”. His definition of a stupid person as “a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses” has become classic. Cipolla emphasized the irrationality and destructiveness of this phenomenon, which defies the logic of profit.
Nietzsche went further, linking stupidity to social instincts. In his understanding, “herd stupidity” is not a biological defect but a protective mechanism of culture that suppresses individual criticism for the sake of group stability. This thesis finds direct reflection in the Social Pressure parameter (S). Thus, philosophy paved the way for understanding stupidity as a systemic process rather than an individual character trait.
Rationality, Dysrationalia, and the Limits of IQ
For a long time, psychology was dominated by the assumption that a high Intelligence Quotient (IQ) is a universal defense against thinking errors. However, research in recent decades, particularly the work of Keith Stanovich, has shattered this myth. Stanovich introduced the term “dysrationalia” to describe the inability to think and act rationally despite an adequate level of intelligence.
Stanovich identifies two main sources of dysrationalia:
1. Cognitive Miserliness: The brain is evolutionarily programmed to minimize computational effort. Humans prefer simple heuristics to complex logical operations.
2. Mindware Gaps: The absence of necessary knowledge about probability, logic, and scientific methods that allow for effective information processing.
The gap between intelligence and rationality is explained by the fact that IQ tests measure the brain’s computational power (algorithmic level) but barely touch upon the reflective level — the ability to question one’s own beliefs and switch thinking strategies in time. David Robson, in his book The Intelligence Trap, describes how people with high IQs can be even more vulnerable to certain types of errors, as their developed minds allow them to construct more convincing (but false) justifications for their biases.
This leads to a critical division of errors:
1. Stochastic Errors (B_{err}): random glitches in logic caused by fatigue or task complexity. They correlate inversely with IQ: the higher the intelligence, the fewer such errors the agent commits.
2. Motivated Beliefs (B_{mot}): distortions caused by the desire to protect one’s identity, status, or comfortable worldview. Research shows that B_{mot} is orthogonal to IQ. Moreover, people with high intelligence are often better at rationalizing their biases, becoming “hostages of their own minds.” Thus, intelligence without rationality is a powerful engine without steering.
The Psychology of Errors: Heuristics and Metacognitive Failures
A fundamental contribution to understanding the mechanisms of irrationality was made by Daniel Kahneman and Amos Tversky. Their theory of two systems (System 1 — fast, intuitive; System 2 — slow, analytical) explains why people are prone to cognitive biases. Stupidity in this paradigm is the unjustified dominance of System 1 in situations requiring the engagement of System 2.
System 1 works like an autopilot: it instantly generates solutions based on associations and past patterns. This is evolutionarily advantageous (allowing quick reactions to threats) but in the modern complex world leads to systematic errors — heuristics. For example:
— Availability Heuristic: The probability of an event is estimated by how easily examples come to mind. If news constantly shows plane crashes, an irrational fear of flying begins, despite statistics.
— Anchoring Effect: The first piece of information received (even if random) becomes an “anchor” that distorts all subsequent judgments.
However, the most dangerous thing for rationality is not the error itself but the inability to notice it. Here a metacognitive failure known as the “bias blind spot,” described by Emily Pronin, comes into play. Experiments show that people easily spot errors in their opponents’ logic but are practically incapable of identifying the same errors in themselves. There is a sincere belief in the objectivity of one’s own judgments, while the judgments of others are perceived as distorted by ideology or emotions.
This phenomenon explains the persistence of delusions. When facts contradict a belief, the brain does not change the belief but engages defense mechanisms. This process is called “motivated reasoning.” Instead of seeking the truth, the intellect begins to work as a lawyer, selecting arguments to justify an already adopted intuitive decision. The higher the IQ, the more complex and convincing (to oneself) justifications the agent can build. Thus, intelligence turns from a tool of cognition into a tool of self-deception.
Neuroscience of Control and Overload
Modern neurobiology allows us to see the physiological substrate of these processes. Decision-making relies on the work of executive functions localized in the prefrontal cortex (PFC) of the brain. These functions — attention control (A), working memory, and impulse inhibition — have strict biological limits on energy consumption and bandwidth.
The PFC is the most energy-consuming part of the brain. Its operation requires maintaining high levels of glucose and oxygen. When the brain faces the need for prolonged attention retention or complex choice, a state of “cognitive fatigue” sets in. At this moment, metabolic resources are depleted, and the brain, striving to save energy, forcibly reduces PFC activity.
When the volume of incoming noise (D) exceeds a certain threshold, the prefrontal cortex enters “overload” mode. In this state, the brain automatically switches to more primitive, energy-efficient strategies (System 1 heuristics) controlled by subcortical structures (e.g., the basal ganglia). From a neurobiological perspective, stupidity is a temporary or chronic degradation of the brain’s executive functions under environmental pressure or due to energy deficits. This explains why even brilliant people make ridiculous mistakes under stress or sleep deprivation.
Sociology of Conformity and Collective Dynamics
Humans are social animals, and their cognitive processes are inextricably linked to group dynamics. Solomon Asch’s experiments showed that a significant percentage of people are ready to ignore obvious facts if they contradict the majority opinion. The Social Pressure parameter (S) in the proposed model reflects this weight of collective influence.
Irving Janis introduced the concept of “Groupthink” to describe situations where the striving for consensus within a group suppresses critical thinking. In such groups, illusions of invulnerability and moral superiority arise, and any doubts are perceived as betrayal. Another phenomenon — the “Abilene Paradox” — describes situations where a group makes a decision that none of its members like, simply because each individually was afraid to object to the others.
In the digital environment, these effects are amplified manifold. Information cascades and “virality” create conditions where false but emotionally charged information spreads faster than the truth. Group pressure in social networks becomes an invisible background that constantly “erodes” individual Critical Thinking (C), making the agent part of a collective irrational subject. Social networks turn S from a local factor into global noise, suppressing personal autonomy.
Attention Economy and Information Load
Herbert Simon, a Nobel laureate, noted back in the 1970s: “A wealth of information creates a poverty of attention.” In the modern attention economy, the cognitive resource (A) has become a scarce commodity for which thousands of algorithms compete. In this struggle, user attention is not just time; it is the “currency” with which influence and data are bought.
For the G model, it is fundamentally important that the digital environment does not just provide information; it actively filters it, creating “filter bubbles” and echo chambers. The growth of input signal entropy (D) manifests not only in data volume but also in its fragmentation. Modern humans live in a state of “Continuous Partial Attention,” as Linda Stone calls it. This is not effective multitasking, but constant context switching that “blows” cognitive fuses and deprives the brain of the deep immersion necessary for Critical Thinking (C) to work.
Every notification or advertising banner imposes a micro-cost on resource A. As a result, by the time a critical decision needs to be made, the agent’s cognitive budget may already be exhausted by “junk” content. Here, Brandolini’s Law (the Bullshit Asymmetry Principle) comes into play: the amount of energy needed to refute bullshit is an order of magnitude larger than the amount needed to produce it. Under conditions of high information entropy (D) and time deficits, rational fact-checking becomes too “expensive” a strategy.
The attention economy creates a situation where “being smart” (i.e., deeply analyzing information) becomes economically unfeasible for the individual in the short term. The author emphasizes: when the cost of quality analysis exceeds the benefit from it, the system chooses functional stupidity as a way to save energy. Thus, the parameter D in the G model acts as a tax on rationality paid by every participant in modern information exchange.
Language, Culture, and Narratives
Finally, one cannot ignore the role of cultural context and language. Cultural Intelligence (CQ) determines the ability to recognize manipulative narratives and understand contexts different from one’s own. Narratives are not just stories; they are cognitive frames, as George Lakoff called them. They predetermine what information a person will consider relevant and what they will ignore or deem “noise.”
In the “post-truth” era, the struggle is not for facts, but for the right to define these frames. Stupidity in this context manifests as cognitive rigidity — the inability to step outside an imposed ideological or cultural narrative, even when it begins to fatally contradict reality. Low CQ makes an agent vulnerable to “semiotic viruses” — ideas packaged in an attractive form that bypass Critical Thinking (C) filters by appealing to basic values or fears.
Culture also sets standards for what is considered “reasonable.” In some environments, functional stupidity (e.g., blind adherence to rituals or bureaucratic instructions) is rewarded as a sign of loyalty. In such cases, the Social Pressure parameter (S) merges with the cultural code, creating a powerful barrier to rational analysis.
Modern Digital Environments
Today, humanity lives in the “digital Anthropocene,” where interface architecture directly influences thought architecture. Langdon Winner used the term “technological somnambulism” to describe the unconscious acceptance of technologies that radically change the cognitive landscape. Infinite scrolling, instant notifications, and ranking algorithms that reward anger and outrage are all factors maximizing Digital Noise (D) and minimizing Attention Control (A).
The digital environment has become the ideal incubator for functional stupidity. It does not just passively broadcast content; it actively exploits the brain’s systemic vulnerabilities — from dopamine loops to confirmation bias. The use of “dark patterns” in interface design forces the user to take actions they did not plan, effectively disabling their agency.
As a result, we observe an erosion of the capacity for prolonged concentration. Cognitive vulnerability G in such conditions becomes not an unfortunate mistake, but a target state to which algorithms strive to bring the user for profit maximization. A person in a digital environment ceases to be a decision-making subject, turning into an object of algorithmic control whose reactions are predictable and automatic.
Why is the G Model Needed?
Summarizing the review, one can see that all pieces of the puzzle have already been described in different disciplines. Philosophy pointed to the problem of “false knowledge,” psychology separated intelligence and rationality, neuroscience explained brain limits, sociology — the power of the crowd, and economics — the attention deficit.
However, until now, there has been no unified language allowing these factors to be combined into a dynamic system. Why does the same person act brilliantly in one situation and commit catastrophic stupidity in another? How to predict the moment when the system will “break” under load?
Answering these questions requires a formal model. The G model views stupidity not as a static quality or personality defect, but as a dynamic state: a systemic failure arising from a specific combination of internal resources and external pressure. This is a shift from the question “who is stupid?” to the question “how and why does stupidity arise?” In the next chapter, the author moves to the cybernetic definition of the variables I, B_{err}, B_{mot}, A, D, S, E, C, and CQ, laying the mathematical foundation for the proposed theory.
Chapter 2. Definition of Stupidity and the Cybernetic Approach
The previous chapter demonstrated that the phenomenon of stupidity eludes simple definitions. Traditional attempts to classify it as a deficit of intelligence or a result of mental disorder face an insurmountable contradiction: highly intelligent and mentally healthy people systematically commit fatal cognitive errors. This indicates that the root of the problem lies not in the structure of the “tool” (the brain) itself, but in the dynamics of its operation under specific environmental conditions. This chapter proposes a change of optics: instead of viewing stupidity as a static personality trait, it will be considered as a dynamic state of a system.
To do this, it is necessary to turn to cybernetics — the science of general laws of control and information processing. The cybernetic approach allows us to abstract from biological and psychological details, focusing on the architecture of the decision-making process. In this paradigm, the mind is viewed as a control loop operating under load. Stupidity, in this case, appears not as a “breakdown,” but as a systemic failure — a situation where external pressure and internal noise exceed the throughput of control mechanisms. This approach makes it possible to move from value judgments to measurable parameters and formalize system vulnerability in the form of the G model.
Definition: G as Functional Cognitive Vulnerability
The traditional approach links stupidity to a “lack of mind.” The G model severs this link. Within the framework of the proposed theory, stupidity (G) is defined as a functional cognitive vulnerability of an information processing system, leading to decisions that contradict the agent’s long-term interests or objective reality.
The key elements of this definition require detailed analysis:
1. Functionality: Stupidity is not what an agent is, but how an agent functions at a specific moment in time under the influence of certain factors. It is a shift from a static quality (IQ) to a dynamic state. A brilliant physicist buying lottery tickets in hopes of getting rich demonstrates functional stupidity at that moment. Their computational power (I) remains high, but the control loop has switched to a vulnerability mode.
2. Vulnerability: This is a probabilistic value describing the fragility of the system. It is not that the agent is “stupid” in the colloquial sense, but that their decision-making system is vulnerable to failure under certain conditions. These conditions can be external (high noise D, social pressure S) or internal (emotional stress E, motivated bias B_{mot}). The higher the G, the less external stimulus is required for the system to produce an erroneous result.
3. Loss of Subjectivity and Agency: Stupidity is characterized by a temporary or chronic loss of control over the information processing process. The agent ceases to act based on rational analysis of facts and begins to react to external algorithms, manipulative narratives, or archaic biological impulses. In the G state, a person transforms from a subject making decisions into an object acted upon by environmental forces.
4. Contradiction to Interests and Reality: The result of the G state is a decision that either ignores physical laws and logic (contradiction to reality) or destroys the agent’s strategic well-being for the sake of immediate satisfaction or anxiety relief (contradiction to long-term interests).
Thus, G is a measure of the unreliability of the cognitive control loop, arising at the intersection of the brain’s architectural limitations and the aggressiveness of the information environment.
Cybernetics and Control Theory: Architecture of Control
The human mind can be represented as a cybernetic control system whose task is to minimize the entropy (uncertainty) of the external world to ensure survival and prosperity. In the terms of Norbert Wiener and Stafford Beer, any viable system must possess mechanisms for filtering environmental variety and feedback loops.
The architecture of this system in the context of the G model includes the following stages:
1. Input Signal: Data flow from the external environment. In the modern world, this flow is characterized by ultra-high entropy. It contains not only useful data but also noise (D), intentional disinformation, and “semiotic viruses.”
2. Filtration and Attention (A): This is the primary line of defense. Attention is a limited resource acting as a switchboard. It determines which data will be admitted for deep processing and which will be discarded. In the G state, this filter is either overloaded (attention paralysis) or captured by an external stimulus (clickbait, fear).
3. Processing (I, C, CQ): Here, decoding and analysis occur. Intelligence (I) is responsible for logical operations and calculation speed. Critical Thinking (C) acts as an “antivirus,” checking incoming data for logical consistency. Cultural Intelligence (CQ) allows for recognizing context and hidden meanings, preventing literal (and often erroneous) reading of complex signals.
4. Modulation (E, B_{mot}, B_{err}): This is the stage of “internal distortion.” Emotional state (E) can either accelerate processing or completely block rational circuits (“tunnel vision” effect). Motivated Bias (B_{mot}) acts as censorship: the system subconsciously filters out facts that threaten the stability of the agent’s worldview. Random errors (B_{err}) add stochastic noise to the process.
5. External Pressure (S): The social context creates a force field in which the system operates. Group pressure can force the system to ignore the results of its own data processing to preserve social status.
6. Output Signal and Feedback: The decision turns into action. Normally, the result of the action is checked against reality. If reality signals an error, the system corrects its algorithms.
System failure (G) occurs when load parameters exceed control parameters. In control theory terms, stupidity is a break in the feedback loop. The system stops perceiving corrective signals from reality and closes in a self-confirming cycle. Instead of adapting to the environment, the agent begins to defend their error, leading to the accumulation of systemic risk and final catastrophe.
Axioms of the G Model
The construction of the model relies on three fundamental axioms that define the boundaries of the theory’s applicability and determine the basic properties of the cognitive system.
Axiom 1: Resource Scarcity
Cognitive resource — primarily Attention (A) and operational computational power — is strictly limited by the biological parameters of the brain. Any operation of information processing, fact verification, or impulse suppression has an energetic cost. A biological system always strives to minimize energy expenditure. Therefore, the mind defaults to the path of least resistance — heuristics, stereotypes, and ready-made templates. Stupidity in this axiom appears as a result of “cognitive miserliness,” when the system refuses expensive verification in favor of a cheap but erroneous conclusion.
Axiom 2: Motivation Primacy
From an evolutionary perspective, the brain developed not as a tool for seeking objective truth, but as a tool for survival, reproduction, and social navigation. Maintaining psychological homeostasis, protecting self-esteem, and preserving group belonging (S, B_{mot}) take unconditional priority over accuracy in perceiving reality. In a conflict between an unpleasant fact and a comfortable delusion, the system will spend colossal intellectual resources (I) on rationalizing the delusion, not on refuting it. Rationality is a fragile superstructure that shuts down first when a threat to the ego or social status arises.
Axiom 3: Environmental Entropy
The information environment is not neutral; it possesses its own dynamics and tends toward chaos. Without active, energy-consuming efforts to structure and filter (C), the agent’s cognitive map of the world inevitably degrades. In the modern world, this axiom is supplemented by the factor of “environmental aggressiveness”: external systems (social media algorithms, advertising mechanisms, political technologies) intentionally increase entropy (D) to breach cognitive defenses and seize control over the agent’s behavior.
Dictionary of Variables and Intuitions
To formalize the model, a set of variables describing the state of the system is introduced. In subsequent chapters, they will receive mathematical expressions, but now it is necessary to fix their semantic content.
I (Intelligence): Pure computational power, the capacity for abstract thinking and pattern recognition. In the G model, intelligence is not immunity. It acts as a “processor” that can process both true data and false dogmas equally efficiently. High I allows for creating more sophisticated excuses for stupid decisions.
B_{err} (Stochastic Bias): Random error. “Type I” errors caused by biological imperfection: fatigue, neural noise, lapses in attention. These errors decrease with the growth of intelligence and concentration.
B_{mot} (Motivated Bias): Motivated bias. “Type II” errors arising from the desire to confirm one’s rightness or protect identity. Unlike B_{err}, this parameter often correlates with high I: the smarter the person, the better they are at fitting facts to their desires.
A (Attention): Attention resource. A dynamic parameter determining the system’s throughput. Attention is the currency the agent spends on resisting entropy. When A is depleted, the system switches to automatic reaction mode.
D (Entropy/Noise): Information noise and environmental pressure. An external factor creating load on attention. Includes data volume, arrival speed, and degree of contradictoriness. High D literally “burns out” resource A.
S (Social Pressure): Social pressure. The force of group attraction. A parameter describing how much the agent is willing to sacrifice logic and facts for the sake of conformity or status.
E (Emotional State): Emotional background. A coefficient modulating the work of all other parameters. A high level of fear or anger shuts down Critical Thinking (C) and switches the system to survival mode.
C (Critical Thinking): Verification skills. This is the “software code” of the system, allowing for the detection of logical errors and manipulations.
CQ (Cultural Intelligence): Cultural intelligence. The ability to read context, irony, subtext, and manipulative narratives. Protects against “semiotic viruses” that pass through filters of formal logic.
What the Model Explains (and What It Doesn’t Promise)
The proposed G model is developed to explain phenomena that baffle classical theories of intelligence and psychometric approaches. Its main value lies in identifying mechanisms, not just stating errors.
Spheres of Explanatory Power:
1. The Paradox of “Smart Stupidity”: Classical psychology often falters before the question: why do people with outstanding IQs become victims of primitive cults or Ponzi schemes? The G model gives a clear answer: high Intelligence (I) is merely computational power. If the filtration loop (A, C) is damaged, and Motivated Bias (B_{mot}) is directed at protecting an irrational idea, high intelligence only helps build more complex and “impenetrable” defenses for that idea. A smart person is not insured against stupidity; they simply commit it with greater sophistication.
2. The Effect of Digital Degradation: In the era of unlimited access to information, a paradoxical growth of anti-scientific theories is observed. The model explains this through the dynamics of parameters D and A. When information noise (D) grows exponentially, the attention resource (A) is depleted faster than the system can engage Critical Thinking (C). As a result, the environment “breaches” cognitive defenses, forcing the system to switch to primitive heuristics.
3. Collective Madness: Social Pressure (S) is capable of completely suppressing individual control parameters. In groups with high connectivity and rigid hierarchy, the individual critical filter (C) shuts down to preserve social homeostasis. The model allows calculating the “tipping point” after which the group turns into a single control object with an extremely high G coefficient.
What the Model Does Not Promise:
The model is not a tool for deterministic prediction of every step of a specific person. It is a probabilistic risk map. It does not replace psychiatry or classical neurobiology but works at the level of system architecture. The goal of the model is not to “heal” the mind, but to provide metrics for assessing its vulnerability under specific conditions.
Transition to Formalization
Defining variables and axioms is only the first step toward creating a working analytical tool. For the G model to stop being a set of intuitive guesses and become a rigorous theory, it is necessary to move from qualitative description to mathematical structure.
In the next chapter, the formal specification of the model will be presented. The main equation of functional cognitive vulnerability will be derived, where environmental parameters (D, S) and internal modulators (E, B_{mot}) are linked to control resources (A, C, CQ, I).
This transition will allow us to see why some factors add up while others multiply. It will become clear why simply increasing intelligence or awareness does not lead to a decrease in stupidity, and exactly how a “noisy” environment turns a rational agent into a predictable object of manipulation. The mathematical foundation will turn the model from a descriptive concept into a tool suitable for testing, simulation, and, ultimately, for designing systems of cognitive sovereignty protection.
Chapter 3. Operationalization and Measurement Design
Any theoretical model, however elegant it may seem, remains a speculative construct until its variables are translated into the language of measurable quantities. Operationalizing the G model is the process of turning abstract concepts like “information noise” or “motivated bias” into concrete metrics that can be recorded during an experiment or field observation. This chapter describes the methodological foundation upon which the empirical verification of the functional cognitive vulnerability theory is built.
Principles of Normalization: Range [0,1]
To ensure mathematical correctness and comparability of various factors, the model adopts a unified standard of measurement. Each variable is brought to a normalized range from 0 to 1.
A value of 0 means the absence of the factor or its minimal influence (e.g., complete absence of external noise D=0), while a value of 1 represents its maximum possible intensity within the human population or technical limits of the environment.
This approach allows for aggregating heterogeneous data, from IQ test results to screen time data, into a single cognitive state vector. It is important to distinguish the direction of influence: parameters like Intelligence (I), Attention (A), and Critical Thinking (C) act as protective resources (the higher the value, the higher the resilience), whereas Noise (D), Social Pressure (S), and Biases (B_{err}, B_{mot}) are interpreted as vulnerability indices.
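To make the normalization concrete, here is a minimal Python sketch that brings three heterogeneous raw measurements into the [0,1] range. The reference bounds and the inverse scoring of the attention proxy are illustrative assumptions for demonstration; the book does not fix population anchors at this point.

```python
def normalize(value, lo, hi):
    """Min-max rescaling of a raw measurement into the model's [0, 1] range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

# Reference bounds below are illustrative assumptions, not established norms.
I = normalize(128, lo=55, hi=145)        # WAIS-style IQ score -> resource I
D = normalize(9.5, lo=0, hi=16)          # daily screen-time hours -> noise proxy D
A = 1.0 - normalize(320, lo=0, hi=600)   # Stroop interference in ms: more interference, less A

print(f"I = {I:.2f}, D = {D:.2f}, A = {A:.2f}")
```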
Variables and Proxy Measurements
For ease of operationalization, we group all variables of the G model into three functional clusters: Resources (defense), Internal Vulnerabilities (distortions), and External Load (environment).
Cluster 1: Defensive Resources
These parameters ensure system stability. The higher their value, the lower the resulting stupidity index G.
1. Intelligence (I):
Definition: Computational power (“hardware”). Capacity for abstract thinking and learning.
Measurement: Standard IQ tests (WAIS-IV, Raven’s Matrices).
Dynamics: Stable characteristic (Trait).
2. Attention Control (A):
Definition: Channel bandwidth (“RAM”). Ability to maintain focus and ignore distractors.
Measurement: Attention Control Scale (ACS), Stroop tests, Flanker task.
Dynamics: Highly volatile state. Drops sharply with fatigue.
3. Critical Thinking (C):
Definition: Verification algorithms (“antivirus”). Skill in evaluating arguments and detecting logical fallacies.
Measurement: Watson–Glaser Critical Thinking Appraisal (WGCTA), Cognitive Reflection Test (CRT).
Dynamics: Skill requiring activation.
4. Emotional Regulation (E):
Definition: Cognitive damper. Ability to maintain rationality under affective pressure.
Measurement: MSCEIT (emotional intelligence), stress resilience scales.
Dynamics: Depends on physiological state.
5. Cultural Intelligence (CQ):
Definition: Semiotic filter. Ability to read context, subtext, and recognize manipulative narratives.
Measurement: Cultural Intelligence Scale (CQS), tests for understanding irony and metaphors.
Role: Critically important for protection against social engineering and propaganda.
Cluster 2: Internal Vulnerabilities
Factors generating errors “from within.”
6. Stochastic Errors (B_{err}):
Definition: Random neural noise. Attention or memory lapses.
Relation: Inversely proportional to intelligence (B_{err} \propto 1/I).
7. Motivated Bias (B_{mot}):
Definition: Systematic distortion of logic to protect existing beliefs (the intellect acting as a defense attorney rather than a judge).
Measurement: Myside Bias scales, dogmatism tests, inverse correlation with Actively Open-Minded Thinking (AOT).
Relation: Orthogonal to intelligence. Smart people often have high B_{mot}.
Cluster 3: Environmental Load
Environmental factors attacking the cognitive system.
8. Information Noise (D):
Definition: Input signal entropy. Quantity, speed, and contradictoriness of information.
Measurement: Digital Overload Index (DOI), screen time metrics, Task Switching Frequency.
Effect: When D > 0.7, it causes exponential error growth.
9. Social Pressure (S):
Definition: Force of conformity compulsion.
Measurement: Conformity scales, experimental paradigms (Asch, Milgram) in digital environments (number of likes/dislikes).
This grouping allows for clearly seeing the equation architecture: Load (Cluster 3) must be compensated by Resources (Cluster 1), while Vulnerabilities (Cluster 2) create a constant background of risk.
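For implementation purposes, the full set of variables can be collected into a single state-vector container mirroring the three clusters. The sketch below is a minimal Python illustration; the field names follow the model’s notation and the sample values are arbitrary.

```python
from dataclasses import dataclass

@dataclass
class CognitiveStateVector:
    """Normalized [0, 1] state of one agent, grouped by the three clusters."""
    # Cluster 1: defensive resources
    I: float      # intelligence
    A: float      # attention control
    C: float      # critical thinking
    E: float      # emotional regulation
    CQ: float     # cultural intelligence
    # Cluster 2: internal vulnerabilities
    B_err: float  # stochastic errors
    B_mot: float  # motivated bias
    # Cluster 3: environmental load
    D: float      # information noise
    S: float      # social pressure

# Arbitrary sample values for illustration only
agent = CognitiveStateVector(I=0.8, A=0.4, C=0.6, E=0.5, CQ=0.5,
                             B_err=0.2, B_mot=0.6, D=0.8, S=0.7)
print(agent)
```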
Research Design: Collecting the State Vector
The goal of empirical research within the G model framework is to obtain the complete cognitive state vector of an agent at a specific moment in time. The research design involves two levels of data collection:
Static Profiling: One-time measurement of the agent’s base resources (I, C, E, B_{mot}).
This creates a “cognitive passport” describing potential resilience.
Dynamic Monitoring: Real-time recording of environmental variables (D, S) and the current state of resources (A, E). This allows for seeing how vulnerability changes under load.
To detect non-linear effects (e.g., phase transition during overload), the method of controlled provocations is used. During the experiment, the level of external noise (D) or social pressure (S) is systematically increased, allowing for the fixation of the point at which the system stops coping with data processing and demonstrates a sharp jump in error probability.
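The following Python fragment sketches such a controlled provocation in code. The real task battery is replaced by a synthetic error-rate function whose threshold behavior is an assumption made purely for illustration; in an actual study, `measure_error_rate` would be backed by experimental trials.

```python
import random

def measure_error_rate(d, trials=200, threshold=0.7):
    """Synthetic stand-in for a real task battery: the underlying error
    probability is flat below the noise threshold and climbs steeply above it
    (an assumed shape, used only to demonstrate the protocol)."""
    p = 0.10 if d <= threshold else min(0.95, 0.10 + 2.0 * (d - threshold))
    return sum(random.random() < p for _ in range(trials)) / trials

def controlled_provocation(step=0.1):
    """Raise the induced noise level D step by step and flag sharp drops in performance."""
    prev = None
    for i in range(11):
        d = round(i * step, 2)
        rate = measure_error_rate(d)
        jump = prev is not None and rate > prev + 0.15
        print(f"D = {d:.1f}  error rate = {rate:.2f}" + ("  <-- sharp jump" if jump else ""))
        prev = rate

random.seed(1)
controlled_provocation()
```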
Ethical Boundaries and Minimizing Stigmatization
Scientific work with the concept of “stupidity” is inevitably fraught with the risk of ethical speculation. The history of psychometrics knows many examples where intelligence measurement tools were used for discrimination, segregation, and justification of social inequality. The G model is developed with full awareness of this historical responsibility and categorically rejects the essentialist approach. We postulate that a high G index is not an innate personality defect but represents a temporary functional failure that can happen to anyone, regardless of academic regalia or IQ level.
The central ethical principle of the research is shifting the focus from “victim blaming” to analyzing the environment. What is subject to measurement is not a static “quality of a person,” but the dynamic vulnerability of the “Agent – Environment” system at a specific moment in time. Stupidity in this paradigm is viewed as a curable state of maladaptation, often provoked by toxic information interface design or manipulative social context. Acknowledging that even a genius mind can be suppressed by super-normative noise (D) makes the theory a tool of empathy, not condemnation.
Special attention is paid to data collection protocols. Since calculating variables D (digital footprint) and B_{mot} (beliefs) requires access to sensitive personal information, researchers are obliged to observe the strictest standards of anonymity and digital privacy. It is unacceptable to use G metrics for corporate screening, loyalty scoring, or state control, as this would create incentives for simulation and distort the very essence of diagnostics.
The ultimate goal of operationalizing stupidity is purely constructive. We create measurement tools not to label, but to design individual “cognitive body armor” and systemic filters. Vulnerability diagnosis is the first step toward reclaiming agency. Only by understanding the mechanics of one’s own failure can a person regain control over decision-making in an aggressive world.
Chapter 4. Mathematical Specification of the G Model
In this chapter, the verbal description of functional cognitive vulnerability is translated into the language of mathematics. This is necessary to move from philosophical reasoning to testable hypotheses. The G formula does not claim physical precision akin to Newton’s laws, but it formalizes the relationships between variables, allowing us to model system behavior under load.
Logic of Equation Construction
The G model equation is constructed as the sum of three independent components, each describing a separate vector of vulnerability. The general structure looks like this:
G = \text{Internal Fragility} + \text{Environmental Pressure} + \text{Social Conformity}
Where:
1. Internal Fragility — errors generated by the brain itself (biological glitches and motivated distortions).
2. Environmental Pressure — errors arising from the inability to filter incoming chaos.
3. Social Conformity — errors arising from abandoning one’s own judgment in favor of the group’s opinion.
All variables are normalized in the range [0,1]. The coefficients \alpha_1, \alpha_2, \alpha_3 are weighting multipliers that can vary depending on the context (e.g., in a solitary logical puzzle task \alpha_3 \to 0, while at a rally \alpha_3 dominates).
The G Equation: Operational Logic
To ensure reliability for publication and ease of understanding, we divide the model presentation into two levels: conceptual (described below) and formal (mathematically rigorous).
Visual Anchor of the Model
Fig. 1. Full formal specification of the G model. The mathematically exact version of the formula is available in the digital appendix.
Logical Structure (Plain Text Formula)
In simplified form, the dynamics of cognitive vulnerability are described by the following equation:
G = [Internal Fragility] + [Environmental Load] + [Social Context]
Where each block is calculated as follows:
1. Internal Fragility = (Processing Errors / Intelligence) + Motivated Beliefs
2. Environmental Load = (Effective Noise) / Attention Control
3. Social Context = (Social Pressure × Suggestibility Coefficient) / Emotional Regulation
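To make this operational logic concrete, here is a minimal Python sketch of the three blocks for normalized inputs in [0, 1]. The exponential form of the effective-noise penalty, its steepness `k`, and the suggestibility coefficient `(1 - C) * (1 - CQ)` are illustrative assumptions of this sketch; the exact functional forms belong to the technical specification referenced below.

```python
import math

def g_index(I, A, C, E, CQ, B_err, B_mot, D, S,
            d_threshold=0.7, k=8.0, eps=1e-6):
    """Plain-text G formula from this section, written out in code.

    Assumed here (not taken from the paper): the exponential effective-noise
    penalty, its steepness k, and the suggestibility term (1 - C) * (1 - CQ)."""
    # Block 1: internal fragility = processing errors / intelligence + motivated beliefs
    internal = B_err / max(I, eps) + B_mot
    # Block 2: environmental load = effective noise / attention control
    d_eff = D if D <= d_threshold else D * math.exp(k * (D - d_threshold))
    environmental = d_eff / max(A, eps)
    # Block 3: social context = (social pressure * suggestibility) / emotional regulation
    suggestibility = (1.0 - C) * (1.0 - CQ)
    social = S * suggestibility / max(E, eps)
    return internal + environmental + social

# An intelligent but digitally exhausted agent (values chosen for illustration)
print(round(g_index(I=0.9, A=0.3, C=0.5, E=0.4, CQ=0.5,
                    B_err=0.1, B_mot=0.6, D=0.8, S=0.6), 2))
```

Even with high intelligence (I = 0.9), the combination of depleted attention and supra-threshold noise drives this example deep into the vulnerability range, which is exactly the asymmetry the component breakdown below unpacks.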
Technical Specification
For researchers and data specialists, the full mathematical derivation of the formula, the description of the exponential penalty function D_{eff}, and the Monte Carlo simulation results are available in the original scientific paper at:
https://aifusion.ru/en/research/theory-of-stupidity/index.html
Detailed Breakdown of Components
Component 1: Internal Fragility
In this block, we see a fundamental paradox. Ordinary information processing errors (B_{err}) are effectively suppressed by General Intelligence (I). The higher your IQ, the fewer random glitches you commit.
However, Motivated Beliefs (B_{mot}) stand apart in the formula. They are not divided by intelligence. This means that ideological rigidity or “faith in dogma” is an additive vulnerability. Moreover, as research shows, high intelligence is often used to construct more complex defenses around these beliefs, without reducing the overall G indicator.
Component 2: Environmental Load
This is the dynamic part of the model. In the numerator is Effective Noise — a function that grows non-linearly. Up to a certain threshold (0.7), the brain copes with the data flow, but after that, a “phase transition” occurs, and the load grows exponentially.
The only “shield” here is Attention (A). It stands in the denominator: if your attention resource drops (due to fatigue, stress, or interface fragmentation), the stupidity indicator rushes to infinity, regardless of how smart you are.
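A short numerical sketch, using the same assumed exponential penalty as in the earlier code, shows both effects at once: the jump once D crosses the 0.7 threshold and the blow-up as A shrinks.

```python
import math

def environmental_load(D, A, d_threshold=0.7, k=8.0):
    """Component 2 in isolation; the exponential penalty above the threshold
    is an illustrative assumption, not the paper's exact function."""
    d_eff = D if D <= d_threshold else D * math.exp(k * (D - d_threshold))
    return d_eff / max(A, 1e-6)

for D, A in [(0.5, 0.8), (0.75, 0.8), (0.9, 0.8), (0.9, 0.3)]:
    print(f"D = {D:.2f}, A = {A:.2f} -> load = {environmental_load(D, A):6.2f}")
```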
Component 3: Social Context
This component describes our vulnerability to the group. Effective pressure is reduced by Critical Thinking (C) and Cultural Intelligence (CQ), which act as filters.
The denominator here is Emotional Regulation (E). If the agent loses emotional control (fear, anger, euphoria), social pressure becomes absolute, and individual rationality shuts down in favor of collective behavior.
Interpretation of G Values
Since the formula yields a dimensionless quantity, it is important to define value ranges for interpreting the system state.
0.0 \le G < 0.3: Stability Zone. The agent controls the situation, is capable of rational analysis and noise filtering. Errors are rare and random.
0.3 \le G < 1.0: Risk Zone. Agency is preserved but becomes intermittent. The agent is vulnerable to manipulation and periodically “slips” into automatic reactions. External control or load reduction is required.
G \ge 1.0: Singularity Zone. Cognitive collapse. The system completely loses the ability to process information autonomously. Decisions are made exclusively based on external algorithms, impulses, or crowd pressure. In this state, the agent is functionally indistinguishable from an object.
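A trivial helper, sketched below, maps a computed G value onto these qualitative zones using the boundaries defined above.

```python
def interpret_g(g):
    """Map a computed G value onto the qualitative zones defined in the text."""
    if g < 0.3:
        return "stability zone: rational analysis and noise filtering intact"
    if g < 1.0:
        return "risk zone: intermittent agency, vulnerable to manipulation"
    return "singularity zone: cognitive collapse, externally driven behavior"

for g in (0.15, 0.6, 2.4):
    print(f"G = {g:.2f} -> {interpret_g(g)}")
```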
Interactive Simulation: Visualizing the G Model
For a deep understanding of the model’s dynamics, the reader is invited to use the interactive simulator available at:
aifusion.ru/research/theory-of-stupidity/simulation.html
The simulator visualizes the G equation as a three-dimensional surface where column height reflects the level of cognitive vulnerability. The X-axis corresponds to Digital Noise (D), and the Z-axis to Social Pressure (S). The color scale transitions from blue (rationality zone) through orange (risk zone) to red (singularity zone).
Simulator Control Parameters:
Section I. Environment (Load)
— D (Digital Noise): 0.0–1.0. When D > 0.7, the exponential penalty activates.
— S (Social Pressure): 0.0–1.0. Amplifies the effect of D via a multiplier.
Section II. Agent Cognitive Profile
— I (Intelligence): 0.0–1.0. Reduces stochastic errors.
— A (Attention Control): 0.0–1.0. The main denominator of the environmental component.
— C (Critical Thinking): 0.0–1.0. Filters social pressure.
— E (Emotional Regulation): 0.0–1.0. Stress buffer.
— B_{err} (Processing Errors): 0.0–1.0. Random cognitive glitches.
— B_{mot} (Motivated Beliefs): 0.0–1.0. Ideological rigidity.
Exercise 4.1: Observing Stupidity Singularity
1. Open the simulator and set initial parameters: D = 0.5, S = 0.3, A = 0.7.
2. Slowly increase D (digital noise) to 0.8, observing the growth of G.
3. Note the sharp acceleration of growth when crossing the threshold D = 0.7.
4. Now lower A (attention) to 0.3 and observe the phase transition.
5. Compare the result with the “Digital Addict” preset in the scenarios menu.
Question for reflection: which parameter (D or A) had a greater impact on the final G indicator?
The simulator contains five preset scenarios corresponding to the archetypes from section 3.4: Ideal Rationality, Smart Fanatic, Digital Addict, Bureaucrat, and Resilient Operator. Each scenario demonstrates a unique configuration of variables and allows for visually comparing theoretical profiles with their mathematical expression.
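Readers without access to the web simulator can approximate Exercise 4.1 numerically. The sketch below reuses the simplified G formula from this chapter; as before, the exponential penalty, the suggestibility term, and the default values chosen for the parameters not varied in the exercise (S, C, E, CQ, I, B_err, B_mot) are illustrative assumptions rather than the simulator’s exact internals.

```python
import math

def g_index(I, A, C, E, CQ, B_err, B_mot, D, S, k=8.0, thr=0.7, eps=1e-6):
    """Simplified G formula; penalty and suggestibility forms are assumed."""
    d_eff = D if D <= thr else D * math.exp(k * (D - thr))
    suggestibility = (1.0 - C) * (1.0 - CQ)
    return (B_err / max(I, eps) + B_mot
            + d_eff / max(A, eps)
            + S * suggestibility / max(E, eps))

base = dict(I=0.8, C=0.5, E=0.5, CQ=0.5, B_err=0.1, B_mot=0.3, S=0.3)

print("step 1 (D=0.5, A=0.7):", round(g_index(A=0.7, D=0.5, **base), 2))
print("step 2 (D=0.8, A=0.7):", round(g_index(A=0.7, D=0.8, **base), 2))
print("step 4 (D=0.8, A=0.3):", round(g_index(A=0.3, D=0.8, **base), 2))
```

Comparing the increments between the steps gives a quantitative answer to the reflection question: under these assumed settings, the drop in A costs more than the rise in D once the noise is already above the threshold.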
Conclusions for Protection Strategies
The mathematical structure of the model suggests non-obvious strategies for reducing stupidity:
1. It is useless to increase Intelligence (I) if the problem lies in the plane of Motivated Bias (B_{mot}). In this case, intervention should be aimed at reducing B_{mot} (de-ideologization), not at teaching logic.
2. In conditions of ultra-high noise (D > D_{thresh}), a linear increase in Attention (A) is ineffective. The only way to return the system to a controllable state is the physical reduction of the incoming data flow (lowering D below the threshold).
3. Critical Thinking (C) works as a multiplier. It is not just added to the defense; it blocks the passage of a harmful signal. Developing C yields a higher ROI (Return on Investment) than training Emotional Regulation (E), as C reduces the load S_{eff} itself.
Formalizing the G model allows us to see the skeleton of cognitive vulnerability, but behind dry variables lies a living and extremely aggressive reality. Mathematics has shown that when a certain entropy threshold (D_{thresh}) is reached, the system inevitably loses control. The next chapter will detail exactly how the modern attention economy and digital environment exploit these mathematical vulnerabilities to turn information noise into a tool for suppressing human agency.
PART II: MECHANISMS OF FAILURE
Chapter 5. Attention Economy and Environmental Dominance
The transition from the theoretical G model to the analysis of real cognitive failure mechanisms requires a detailed examination of the environment in which the modern agent functions. While classical psychology focused on the internal abilities of the individual, within the cybernetic approach, the load created by the external information system becomes the dominant factor. This chapter examines how the economic incentives of the digital age have led to the formation of an environment that is by default toxic to the human attention control apparatus.
The Core Idea: Attention as a Scarce Resource (A)
In the cognitive vulnerability equation, the Attention resource (A) occupies a critical position. Unlike Intelligence (I), which is responsible for the complexity of processed operations, attention acts as a filter and bandwidth regulator of the channel. It serves as the sole divisor in the environmental load component:
\text{Environmental Load} = \frac{D_{eff}}{A}
The mathematical structure of this relationship indicates that any degradation of resource A leads to a non-linear growth of vulnerability G. Loss of attention turns background information noise (D) into an active factor of agency destruction. Under conditions where signal filtering demands exceed the filter’s capabilities, the system switches to random response mode, which we define as functional stupidity.
Modern cognitive science highlights the phenomenon of “switching cost.” Gloria Mark’s research shows that after an interruption, the brain needs an average of 23 minutes to return to a state of deep concentration. Each notification is not just a loss of a second of time; it is a reset of the cognitive context, requiring colossal metabolic expenditures for restoration. Attention fragmentation in the digital environment leads to chronic depletion of resource A. As a result, the agent proves incapable of engaging Critical Thinking mechanisms (C) even with high intelligence, as the control channel is constantly overloaded.
Attention Markets as a Mechanism for D Growth
The growth of information noise (D) in the 21st century is not an accidental side effect of technological progress. It is a direct result of the functioning of the attention economy, where user time and engagement are the primary monetizable assets. Optimization algorithms underlying platforms (YouTube, TikTok) maximize Retention metrics by exploiting vulnerabilities of the human dopamine system.
The key mechanism here is Variable Ratio Reinforcement — the “slot machine” principle described by B.F. Skinner. The unpredictability of reward (a like, an interesting post) causes a dopamine spike that reinforces the habit of constantly checking the feed. This turns information consumption from a conscious choice into a compulsive reaction.
Attention retention tools work as “entropy factories.” They increase the input data stream to levels significantly exceeding the biological processing threshold. In terms of the G model, this is the artificial inflation of the variable D. When noise reaches the critical mark D > 0.7, the exponential penalty function activates, and the system’s capacity for rational inference is suppressed by the volume of the incoming signal.
Why Intelligence Doesn’t Save You: “Intelligence Without Ecology”
One of the most dangerous misconceptions is the belief that a high level of General Intelligence (I) is a reliable defense against environmental pressure. The G model formalizes the reasons why this is not the case. Intelligence effectively reduces only Stochastic Errors (B_{err}), i.e., internal glitches in logic or calculations. However, I has practically no effect on the external variable D and is only indirectly linked to attention protection A.
Here, the principle of “Cognitive Decoupling”, described by Keith Stanovich, comes into play. To make a rational decision, an agent must separate (decouple) the abstract logic of the task from the context and intuitive impulses. This operation is extremely energy-intensive and requires full use of working memory. In a quiet laboratory, a person with high I easily copes with this. But under conditions of digital noise (D), when working memory is clogged with processing incoming stimuli, the decoupling mechanism simply does not launch. A smart person turns into a “cognitive miser,” saving resources on what matters most.
Moreover, high intelligence can create a dangerous “Illusion of Competence.” An agent accustomed to relying on their mind does not notice the moment when the environment begins to manipulate them. Instead of admitting a mistake, a powerful intellect begins to work on protecting the ego, generating complex and convincing (but false) arguments in favor of the wrong decision. This is a phenomenon where I becomes the servant of B_{mot}, reinforcing rather than weakening irrationality.
Emergence of Mass Effects: D Amplified via S
In the digital environment, information noise (D) rarely exists in isolation from the social context (S). On the contrary, they form a powerful positive feedback loop. Social triggers — likes, reposts, public discussions — increase the “effective noise” for each individual agent. Group pressure is not just added to the load; it multiplies it.
The mechanism of this amplification often works through information cascades. Agents, lacking resource A for personal fact-checking, begin to copy the behavior of others, assuming that “the crowd cannot be wrong.” The effect of pluralistic ignorance arises: each group member may internally doubt the correctness of the general course but publicly supports it, seeing the (false) unanimity of the others. Under conditions of algorithmic filtering, when social networks show us only what we want to see (“filter bubbles”), this effect becomes total.
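The cascade logic can be illustrated with a toy simulation, shown below: each agent either checks a claim privately or, with a probability that stands in for depleted attention and high social pressure, simply copies the current majority. All numbers are illustrative; the point is the path dependence, where, depending on how the first few private checks go, a run can converge on a mostly correct or a mostly wrong consensus even though private checks are better than chance.

```python
import random

def cascade(n_agents=100, p_private_correct=0.6, p_copy=0.8, seed=0):
    """Toy information cascade: conform to the majority or verify privately."""
    random.seed(seed)
    votes = []  # True = holds the correct belief
    for _ in range(n_agents):
        if votes and random.random() < p_copy:
            majority_correct = votes.count(True) * 2 > len(votes)
            votes.append(majority_correct)                     # copy the crowd
        else:
            votes.append(random.random() < p_private_correct)  # independent check
    return votes.count(True) / n_agents

for seed in range(5):
    print(f"run {seed}: share holding the correct belief = {cascade(seed=seed):.2f}")
```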
Content virality is ensured by the mechanism of “Amygdala Hijack.” Emotionally charged information (fear, anger) is processed by subcortical structures faster than the prefrontal cortex can engage Critical Analysis (C). This reduces the availability of resource A for rational filtering. Under such conditions, conformity becomes not just a social choice but a biological survival strategy of the brain attempting to reduce cognitive load.
The Imperative of Interventions: Reduce D and Protect A
The analysis of environmental dominance leads to the conclusion that traditional methods of combating irrationality — such as simple informing or classical education — are insufficiently effective in modern conditions. If the environment is toxic by default, then the first step must be cognitive hygiene and a revision of choice architecture.
At the individual level, this means transitioning to practices of protecting the attention resource. It is not just about “digital detox,” but about the conscious design of one’s own environment (Epistemic Architecture). This is the creation of conditions where the correct cognitive action requires less effort than the incorrect one (Richard Thaler’s Nudge concept). For example, physically removing the phone from the workspace or using a grayscale screen mode reduces the “affordance” (attractiveness) of the device, automatically reducing D without expending volitional resources.
However, individual efforts are insufficient, as the systemic pressure is too great. Institutional measures are required, aimed at regulating “attention capture design” and limiting the most aggressive mechanisms of entropy generation. Future education must be built around the “Attention + Critique + Emotions” triad (A+C+E) as the fundamental infrastructure of resistance. Only by understanding the mechanics of one’s own cognitive failure can one hope to preserve agency in an era when the information environment is totally dominant.
Chapter 6. Stupidity as a System Regulator
In previous chapters, we viewed stupidity as a systemic failure or a side effect of information overload. However, it would be naive to assume that this process is exclusively spontaneous. In this chapter, we investigate stupidity (G) as a purposefully produced product.
Macro-actors, from institutional structures to corporate giants, are interested not in making the population literally “dumber,” but in ensuring its manageability. If our G formula describes the mechanics of cognitive failure, then systemic and marketing technologies are methods of actively exploiting that mechanics. Stupidity here becomes not a bug but a feature, ensuring system stability through the predictability of the masses.
Information Design and Reflexive Control
Traditional control methods focused on selective filtration — limiting access to data. In the internet era, the strategy has changed: the system now focuses not on what you think, but on which mode your brain is operating in.
1. Engineering of Consent (Edward Bernays): The founder of modern PR realized that the masses can be managed by capturing their irrational impulses. In terms of our model, this is a direct attack on parameter B_{mot} (motivated beliefs). By linking a specific narrative to deep emotions or identity, the operator achieves a complete shutdown of the critical filter (C). Rational intelligence (I) does not disappear but turns into a “lawyer” who retroactively justifies any absurd actions.
2. Reflexive Control (Vladimir Lefebvre): This is a technique of feeding the controlled party exactly the data that will lead it, by its own logic, to the decision the controller needs. It is a hack of parameter I through the falsification of input data. You are not forced to make a mistake; instead, a model of reality is constructed for you in which your own “logic” guarantees the result the system needs.
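A short sketch makes this distinction concrete. The option names, probabilities, and payoffs are invented for illustration: the agent’s decision rule stays impeccably rational, yet curating the data it sees is enough to steer it to the operator’s preferred choice.

```python
# Illustrative sketch of reflexive control (invented numbers): the agent's
# decision rule stays rational; only the data it receives are falsified.

def rational_choice(options):
    """Pick the option with the highest expected payoff given the reported data."""
    return max(options, key=lambda o: o["p_success"] * o["payoff"])

true_world = [
    {"name": "option A", "p_success": 0.60, "payoff": 100},   # genuinely best
    {"name": "option B", "p_success": 0.20, "payoff": 100},
]

# The operator does not break the agent's logic; it curates what the agent sees.
reported_world = [
    {"name": "option A", "p_success": 0.10, "payoff": 100},   # quietly discredited
    {"name": "option B", "p_success": 0.70, "payoff": 100},   # heavily promoted
]

print("decision on true data:   ", rational_choice(true_world)["name"])
print("decision on curated data:", rational_choice(reported_world)["name"])
```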
Information design today is not lying; it is the management of Social Pressure (S). By creating the illusion of an overwhelming majority (through bots, controlled media, and “opinion leaders”), the system activates the conformity instinct. A rise in S sharply increases G, as the fear of becoming an “outcast” paralyzes the capacity for independent analysis.
The Ghost of the Experimenter: How Coercion Becomes Tradition
To understand how external violence turns into voluntary stupidity, let us consider the classic parable of the five monkeys experiment. It is an ideal model of the institutionalization of irrationality.
The Essence of the Experiment:
A ladder with bananas at the top is placed in a cage with five monkeys. As soon as any monkey tries to climb up, all others are doused with ice water. Soon, the group begins to beat anyone who reaches for the bananas.
Then the water is turned off, and one monkey is replaced with a new one. The newcomer goes to the ladder and immediately gets a thrashing from its peers. It does not know why, but it learns the rule. Gradually, all the monkeys are replaced. Eventually, no one is left in the cage who was ever sprayed with water, yet they continue to beat anyone who reaches for the bananas, simply because that is how things are done in this cage: that is what the monkeys before them did, and the monkeys before those.
Analysis via the G Formula:
In this system, the stupidity indicator (G) becomes critical and self-sustaining even when the external source of threat (the operator with the hose) has long vanished. We observe three stages of cognitive decay:
1. Dictatorship of Parameter S (Social Pressure): In this model, the social component completely suppresses the individual one. For the “new” generation of monkeys, the threat of being beaten by peers becomes more real and significant than the biological stimulus (hunger/bananas). Parameter S turns from a behavior regulator into a despotic perception filter: the agent’s cognitive system no longer processes environmental signals; it processes only threat signals from the group. When the social penalty for dissent becomes prohibitive, rational behavior (G < 1.0) becomes biologically disadvantageous — being “stupid together with everyone” is safer for survival than being “smart alone.”
2. Atrophy of Parameter I (Intellectual Resource and Understanding Essence): A critical substitution of “knowledge” with “dogma” occurs. The first generation possessed understanding (I): “ladder = water.” In subsequent generations, understanding of cause-and-effect relationships is replaced by blind adherence to an algorithm. When knowledge of the root cause is lost, the rule turns into a “black box.” Hidden entropy (D) grows in the system, as group behavior no longer corresponds to physical reality (there is no water!), but no one possesses the intellectual resource to realize this. Stupidity here is not an error in logic; it is flawless logic working on the basis of false or missing data.
3. Blocking Epistemic Vigilance (C) and Seizure of Attention Resource (A): Critical Thinking (C) is an energy-intensive process requiring time and safety. In the aggressive environment of the “cage,” where any manifestation of curiosity is followed by a blow, the cost of engaging C becomes fatal. The attention resource (A) of all system participants is redistributed: they watch not the goal (bananas) and not environmental changes (water turned off), but each other’s behavior. Attention is spent on “police functions,” turning the group into a self-regulated prison. The system enters a mode of cognitive homeostasis, where any attempt to verify reality is perceived as an existential threat to group stability.
“The Ghost of the Experimenter” is a state of the system where stupidity ceases to be a consequence of external pressure and becomes the foundation of identity. The operator no longer needs to be present in the cage: they have delegated suppression functions to the group itself. In such an environment, G (stupidity) turns into “social glue,” and any attempt to manifest I (intelligence) or C (vigilance) is automatically classified by the system as an act of aggression subject to immediate containment by the forces of the group itself.
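The dynamics of the parable can be reproduced with a toy agent-based sketch. The update rules below are illustrative assumptions, not a model from the book: once the “water” is switched off and every original member has been replaced, the group typically goes on enforcing the rule even though no agent who experienced the punishment remains.

```python
import random

# Toy agent-based sketch of the five monkeys parable. The update rules are
# illustrative assumptions, not a model taken from the book.

rng = random.Random(0)

def make_monkey(saw_water):
    # Each agent stores only a learned rule ("climbing leads to punishment"),
    # not the reason behind it.
    return {"avoids_ladder": False, "punishes_climbers": False, "saw_water": saw_water}

group = [make_monkey(saw_water=True) for _ in range(5)]

def attempt_climb(group, water_on):
    climber = rng.choice(group)
    if climber["avoids_ladder"]:
        return
    if water_on:
        for m in group:                      # the operator punishes everyone
            m["avoids_ladder"] = True
            m["punishes_climbers"] = True
    elif any(m["punishes_climbers"] for m in group):
        climber["avoids_ladder"] = True      # beaten by peers: S does the work
        climber["punishes_climbers"] = True  # ...and the victim joins the enforcers

# Phase 1: the external threat (the operator with the hose) is real.
for _ in range(10):
    attempt_climb(group, water_on=True)

# Phase 2: the water is off; replace the group one member at a time.
for i in range(5):
    group[i] = make_monkey(saw_water=False)  # newcomer who never saw the water
    for _ in range(40):
        attempt_climb(group, water_on=False)

print("anyone left who experienced the water?", any(m["saw_water"] for m in group))
print("rule still enforced by the group?     ", all(m["punishes_climbers"] for m in group))
```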
Cognitive Hacking and Attention Economy
Modern systemic structures and corporations have moved from direct pressure to architectural environmental design. Information platforms form a space where the raw material is our attention (A).
We face cognitive hacking:
Delegation of Cognitive Functions: Recommendation algorithms make choices for the user. This reduces current cognitive load but leads to atrophy of decision-making skills. In terms of the G formula, this is a voluntary refusal to use the Attention resource (A) and Epistemic Vigilance (C).
Echo Chambers as a Homeostasis Mechanism: Algorithms feed us what we like, endlessly fueling parameter B_{mot} (see the sketch after this list). The system hides any data capable of causing dissonance. As a result, the brain unlearns how to process contradictory information, and any external signal that does not fit into the worldview is perceived as “noise” (D), causing aggression (E).
Exploitation of System 1: Social media design (infinite scroll, likes) is oriented toward instant dopamine reactions. This keeps the brain in “fast thinking” mode, where the Attention resource (A) is fragmented, making it impossible to engage the analytical apparatus (I).
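The echo-chamber loop in particular can be sketched as follows; the update rules and constants are illustrative assumptions. A recommender that optimizes engagement keeps serving content slightly beyond the user’s current position, the worldview drifts toward the feed, and the shrinking tolerance for dissonance eventually classifies any out-of-bubble signal as noise.

```python
import random

# Toy sketch of the echo-chamber feedback loop. The update rules and constants
# are illustrative assumptions, not parameters from the book.

rng = random.Random(42)

belief = 0.1       # the user's position on some issue axis, in [-1, 1]
tolerance = 0.8    # how much disagreement can be processed before it reads as "noise" (D)

for _ in range(50):
    candidates = [rng.uniform(-1, 1) for _ in range(20)]
    # The recommender predicts that engagement peaks slightly *beyond* the
    # user's current position and serves the closest available item,
    # endlessly fueling B_mot.
    target = 1.2 * belief
    shown = min(candidates, key=lambda x: abs(x - target))
    belief += 0.2 * (shown - belief)          # the worldview drifts toward the feed
    tolerance = max(0.05, tolerance * 0.97)   # the capacity to process dissonance atrophies

dissonant = -belief                           # a signal from outside the bubble
print(f"final belief:    {belief:+.2f}")
print(f"final tolerance: {tolerance:.2f}")
print("out-of-bubble signal dismissed as noise?", abs(dissonant - belief) > tolerance)
```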
The result is an agent with a chronically depleted attention resource, a disabled critical filter, and high emotional lability. This is the ideal substrate for any manipulation. We get a state of induced cognitive vulnerability characterized by the systemic breakdown of three protective levels of cognition:
1. Attention Deficit (A↓) and Stream Fragmentation: Inability to maintain focus on complex cause-and-effect relationships makes the agent a hostage of the “current moment.” Without stable A, building deep models of reality (I) is impossible. The agent stops seeing the system and begins reacting only to isolated events. This turns perception into a series of unconnected flashes, where each new “information flash” completely erases the context of the previous one.
2. Vigilance Deficit (C↓) and Epistemic Capitulation: Atrophy of the verification mechanism (C) means that the criterion of truth is no longer reliability but the “smoothness” of delivery and its fit with the current emotional background. When cognitive load exceeds the agent’s capabilities, they commit “epistemic capitulation”: a voluntary refusal to even attempt to understand the essence, delegating this right to algorithms or authoritative sources.
3. Emotional Regulation Deficit (E↑) and Affective Capture: Chronic attention fragmentation deprives the agent of the resource for emotional control (E). This makes them extremely reactive. Any complex question is instantly translated into the plane of affect (anger, fear, hatred). In this state, the brain is physiologically incapable of analytical thinking, which fixes the G indicator at maximum values.