
Preface
If you are holding this book in your hands, you clearly lack pessimism in your life. Or are you still hoping to find in it the tale of the kind elf promised by Professor N., which he never finished in the first part? You will be disappointed: the tale is over. This is the second part of the Dilogy of the Tragic, which brings the project to its completion. The Experience of the Tragic, the first book, explores the theme of the tragic for humanity. The task was to lay bare the problem and to show the full ambiguity of the contemporary human tragedy through an analysis of the limits of reason and the structure of experience.
The description of human tragedy proceeded through two independent thinkers — Professors N. and P. Professor N. radicalized the Dionysian acceptance of the world, proposing a conscious dwelling in a state of ontological uncertainty without recourse to the defensive strategies of consciousness. Professor P. criticized this position, introducing the idea of the Differentiating experience — a fundamental level of being that precedes any meaning. His position was described as ontologically oriented pessimism, free from aestheticized melancholy. But both positions — N. with his asceticism of unbelief, and P. with his analytical pessimism — reflect a dead end; ultimately the tragedy remains unresolved. N. was forced to live within uncertainty outside any beliefs, whereas P. endlessly generated new experience without finding a point of support. All forms of philosophical response proved secondary with respect to the experience that conditions them, while that experience itself remains excessive and inexhaustible.
This work completes the investigation begun earlier but takes a decisive step beyond the human perspective. If the first book concentrated on the tragic experience of human beings, this study discloses the cosmic nature of tragedy for the whole world and radically revises the ontological status of experience itself. Once the impasse of Professor N.’s and Professor P.’s positions has been acknowledged, only honesty about one’s own predicament remains.
The central thesis of this book is as follows: the Cosmos is not indifferent to life, as is commonly assumed. All living beings — temporary organizations of matter — necessarily serve to accelerate entropy on the path to general decay and equilibration. Everything in the Cosmos exists only within a single large-scale process: to bring the Cosmos closer to its own death. This conclusion requires a radical revision of both ontological foundations and ethical consequences. The book develops a processual ontology free from biocentric and anthropocentric prejudices.
Each part of the book is self-contained, and the reader may begin with any section. Nevertheless, the logic of the work is arranged to lead the reader from ontological foundations through processuality to ethical consequences and, finally, to the question of whether it is possible to exist under full awareness of the tragic nature of being. An appendix includes information about the author, since behind every philosophical position there is always a personal history that may help the reader better understand the origins of these ideas.
Like the previous book, this work relies on numerous works from various disciplines — from cosmology and thermodynamics to ethics and theology. I proceed from the premise that any contemporary intellectual work is intertextual by nature. It engages in a dialogue with predecessors and contemporaries, and knowledge is born precisely in that dialogue. Those who insist on sterile originality are either mistaken or misleading their readers.
The Cosmic pessimism developed here differs from classical philosophical pessimism in that it elevates the pessimistic intuition to a cosmological absolute. Processuality is given a strict definition through informational-thermodynamic processes, which makes it possible to look beyond the anthropocentric perspective.
With this the Dilogy of the Tragic is concluded, though it does not claim finality. The processual nature of reality means that all knowledge remains provisional. However, unlike optimistic epistemologies that read this provisionality as a promise of progress, Cosmic pessimism sees in it only confirmation that the movement of cognition itself is part of the entropic process. We do not know in order to overcome ignorance; we know because the process of knowing serves the same purpose as all other processes — the acceleration of movement toward final equilibrium and the death of the Cosmos.
Introduction
Before beginning, it is necessary to close an important topic that provoked a certain controversy among readers after the first reviews of the book The Experience of the Tragic appeared. Some saw in it an existentialism in the spirit of Kierkegaard; others, Schopenhauerian pessimism; still others perceived nihilism, despite the dualism and the refusal of straightforward answers. When I wrote the first part of the book, Existential Limits of Reason, I did not suppose that readers would interpret that section as a how-to guide or as yet another philosophy of reconciliation with reality.
The review “On the Experiences of the Tragic” by Matvey Chertovskikh, published in DARKER, No. 8, August 2025, revealed a fundamental hermeneutical misunderstanding that allowed the section on “Liminal acceptance” to be read as another existential defensive strategy. The review was instructive for me, for it exposed the gap between the meaning I intended and the way it was read.
Allow me to clarify: Liminal acceptance was not introduced to offer solutions to existential problems, but precisely to demonstrate the absence of any solution at all; this was meant to be evident already at the stage of absurd thought experiments with “Liminal acceptance.” It was a demonstration of the absence of any answer or exit from humanity’s predicament and a large-scale critique of all liberatory practices, since contemporary practical philosophy, psychology, and spiritual practices are built on the illusion of an exit from a hopeless situation.
Stoicism promises inner tranquillity through control over what is within our power. Buddhism proposes liberation from suffering through renunciation of desire. Contemporary cognitive-behavioral therapy maintains that changing thought patterns will change emotional reality. All these approaches proceed from the fundamental assumption that there exists a way to cope with the horror of existence, that there is a technique, practice, or attitude that will allow us to attain resilience in the face of the absurd.
The introduction of Liminal acceptance into the narrative was a methodological device — I deliberately constructed a framework that outwardly resembles those consolatory practices in order to demonstrate its failure. It was a trap for readerly expectations, a pedagogical provocation if you will. I proposed five steps of acceptance — acknowledgment of epistemic absurdity, ethical minimization of suffering, exposure of compensatory mechanisms, humble merging with reality, acceptance of fate and finitude — knowing that in the second part of the book, voiced by Prof. P., all these steps would be methodically deconstructed as yet another form of self-deception and knowing that it is impossible to apply these steps without confronting acedia as an inner emptiness and meaninglessness. I described examples of such strategies as instances of futile attempts to find an exit.
I will analyse some examples that Professor N. cited as excellent practices. Take, for example, the philosophy of Stoicism, which today is actively promoted in neo-Stoic circles as a panacea for contemporary anxiety. Marcus Aurelius advises focusing on what is within our power and not worrying about what is beyond it. It sounds reasonable. But if we look more closely, where will this lead in reality? Fortunately, we have real cases. In Descartes’ Error (1994) the Portuguese neuroscientist Antonio Damasio recounts the story of his patient whose personality changed after brain surgery. When Elliot, Damasio’s patient, lost the capacity to experience emotions after the removal of a brain tumour, he obtained precisely what the Stoics dream of — complete rational detachment from anxieties, passions, and emotions. As a result, he spent half an hour unable to choose the date of his next appointment, carefully weighing every “for” and “against” until he was interrupted, because there was no emotional fuel for decision-making; he went on to lose his job, his family, and all his savings. It turns out that emotions do not merely “interfere with reason” — they constitute the very structure of decision-making.
Somatic markers, as Damasio calls them, are a necessary condition for navigating the world and making decisions; without them one cannot survive long.
Stoics maintain that one can control the expression of emotions through self-discipline and a rational justification of the significance of events. But that presupposes that the brain makes decisions after we have “decided” them. Modern neuroscience shows the opposite. Decisions are made at an unconscious level fractions of a second before we “become aware” of them. Instead of genuine “control” we only observe how they unfold and rationalize them post factum. The illusion of control offered by Stoicism is precisely what Zapffe called the anchoring mechanism — the artificial creation of a foothold in the chaos of determined existence.
Moreover, the historical context of Stoicism exposes its true function. The philosophy emerged as a technique of submission to circumstances that were genuinely uncontrollable in the ancient world — disease, war, slavery, the arbitrariness of tyrants. It is no accident that Stoicism is recalled chiefly when needed, for example in times of war and uncertainty. Zeno of Citium, the founder of Stoicism, praised apatheia not as an absolute good in itself but as a survival mechanism under conditions of total vulnerability. Epictetus, a former slave, developed the doctrine of distinguishing what is within our power and what is not precisely from the experience of complete powerlessness. It is a philosophy of necessity presenting itself as a philosophy of freedom.
David Hume, in his essays, subjected Stoicism to crushing criticism. Stoics preach impassivity, yet behave as ordinary humans, subject to the same affects. Socrates would surely have disliked the Stoics, for they produce nothing but demagoguery — elegant words about virtue and self-control unbacked by real practice. Contemporary neo-Stoicism, Eastern practices, and their analogues repeat the same mistake, packaging ancient techniques of psychological defence in an attractive wrapper of “rationality” and “mindfulness” for the purpose of selling courses, guided meditations, books, scented candles, and shiny pendants.
Now let us turn to Buddhism, which is often presented as a deeper alternative to Western rationality. The Four Noble Truths proclaim that suffering arises from desire, and that the cessation of desire leads to liberation. Yet here lies a fundamental paradox that Buddhism has not resolved: the striving for liberation from desires is itself a desire. The striving for nirvana is another attachment, another form of the very clinging from which Buddhism calls us to free ourselves. Chan Buddhism recognized this problem and formulated it in the form of a kōan: “One who seeks enlightenment will never attain it.” But this recognition does not solve the problem; it merely relocates it to the plane of mystical paradox. The problem, however, lies not in paradoxes but in the original move Buddhism made when it separated from Hinduism: the denial of ātman and Brahman (anattā/anātman), to which I will return in detail at the end of the book.
Peter Wessel Zapffe, whose philosophical legacy was central to my inquiry, demonstrated that many, if not all, religious and philosophical systems function as mechanisms of distraction from the fundamental tragedy of consciousness. It is more accurate, however, to shift this idea toward ontological distraction, where the distraction itself is dictated by the incessant renewal of experience — yet this does not remove the very tragedy of consciousness that Zapffe revealed. The Buddhist doctrine of anattā (anātman), denying the existence of a permanent, substantial “self,” at first glance appears to be a close ally of philosophical currents that regard the subject as an illusion or a by-product of processes. Indeed, in The Experience of the Tragic the development of the idea of the illusoriness of the subject begins. Nevertheless, a critical, worldview-defining chasm lies between these positions. Buddhism offers a path to liberation (nirvana) through the apprehension of non-self, whereas in the pessimistic and tragic perspective the recognition of this illusoriness does not lead to salvation but only deepens the existential catastrophe. We cannot “exit” the illusion because there is no one to exit — the very structure of experience is inextricably bound up with that illusion. Anyone who has endured states of profound depersonalization or dissociation can easily rebut the concept of anātman: feeling oneself as “not-I,” one nonetheless persists; the inward gaze remains and, fundamentally, does not dissolve but recedes into the background. In the philosophical dispute between Buddhists and Hindus this exposes, and again confirms, that the most complete ontological models remain those that postulate some basic subjectivity, such as Atman and Brahman, rather than its total absence.
The problem with the concepts of Atman and Brahman is not their existence but their naïve theistic interpretation. They are mistakenly construed as a personal God or a “higher” consciousness, sometimes as a form of panpsychism, whereas in reality they describe an impersonal, unfolding process in which our local “I” arises as a transient form that, strictly speaking, does not truly persist. Depersonalization, for example, like any psychedelic experience, mirrors back to us the capitalized “Self” — the Atman — which likewise is not there. Atman is often described as the “fundamental, higher Self,” but this is only a conceptual device for describing the manifestation of impersonal Brahman in the most limited form.
This internal contradiction of Buddhism reaches its climax in the concept of śūnyatā, or universal emptiness. Śūnyatā proves to be a self-negating notion that requires the mind to perform a logically impossible act: to disappear while at the same time continuing to witness its own disappearance. It is an endless movement toward a goal that annihilates the very agent that moves toward it, thereby rendering any genuine attainment paradoxical and inexpressible. Any experience of “emptiness,” any mystical experience, immediately becomes the content of consciousness — a new object for a subject that has not been abolished. Thus Buddhism, while rightly rejecting a substantial “self,” substitutes it with a phantom chain of conditioned dharmas, yet cannot satisfactorily answer the question: for whom does this chain unfold as a coherent, lived experience?
Not to mention that “nirvana” remains an unclear construct lacking a precise description, and that by his silence regarding the avyākṛta (the unanswered or “fruitless” questions) the Buddha only exacerbates the situation. Silence in itself could be a position, but not for him. Why begin this whole teaching if, in the end, the world is presented with only a “great” revelation about the essence of which one can say absolutely nothing? Having apprehended the true nature of all things, would one busy oneself explaining it to those who are not enlightened?
The Buddha’s active proselytizing and his strategic abstention from judgments on ultimate ontological questions call into question the very possibility of a substantive transmission of “enlightenment.” Moreover, Buddhism tends to absolutize the role of meditation. Undoubtedly meditation exists in Hinduism as well, but there it remains rather a useful instrument among others. Followers of the Buddha may claim that, in their tradition, meditation is not an end in itself. However, to an outside observer Buddhism has firmly established itself as the principal ideologue and promoter of meditative practice as the route to liberation.
All meditative experiences to which liberatory significance is ascribed can be readily explained physiologically — by hyperventilation or altered respiratory patterns. Holotropic breathing and the Buteyko method demonstrate this plainly, without Eastern mystification. Indeed, one may feel a “dissolution” of the boundaries of the self, a change in the perception of body and space, but it is impossible to remove anything, for there is nothing to remove except the egocentrism born within a multiplicity of processes. A completely “pure” system does not function; without a minimal center of experience, experience simply will not arise. It is like attempting to delete the operating system on which the user himself runs — at the moment of deletion not only the interface disappears, but also the very possibility of perceiving anything. Therefore any “dissolution” remains a temporary aberration, a protection against overload, but not a genuine disappearance of the subject.

To me these Buddhist “truths” appear as a challenge to the Hindu monopoly, much like Lutheranism challenged Catholicism for minds, power, and resources. History knows many such schisms. Mahāyāna, in turn, challenged the “orthodox” Theravāda, denouncing it as the “small vehicle” and thereby extending the potential audience of salvation to all sentient beings.

Schopenhauer, inspired by Buddhism, proposed asceticism as a way of denying the will to live. Yet this solution too proves illusory. Ascetic negation is itself a manifestation of will — the desire not to desire. It is not an exit from the cycle of suffering. Philipp Mainländer went further, proclaiming suicide as the logical conclusion of Schopenhauerian philosophy. Here we confront a paradox: suicide is an active act that requires will, motivation, a desire to change one’s state. It is not liberation from experience but its radical form — the final experience before non-being, which cannot be perceived as relief because there is no one left to feel relief. I described all this in the second part, but not everyone took notice.
Now about contemporary popular psychology and its promises of “emotion regulation.” Cognitive-behavioral therapy claims that changing thought patterns will change emotional states. This presupposes that we control our thoughts. But do we control them or merely pretend to? Thoughts arise spontaneously; we do not choose them — we observe them appearing in the stream of consciousness. Mindfulness techniques teach observing thoughts without attachment, but who is this observer?
Lisa Feldman Barrett showed that emotions are conceptual constructions the brain creates to interpret bodily sensations. We do not “feel fear” as an objective state — we construct the concept of fear from a set of interoceptive signals plus context plus cultural presets. This means that the attempt to “manage emotions” is an attempt to manage one’s own interpretations, which are generated at a pre-reflective level faster than we can become aware of them.
Consider the feature film Equilibrium. In this dystopia, society achieves “liberation from emotions” through the drug Prozium. The result is a zombified existence devoid of depth, meaning, and what makes life life — emotions. The film criticizes this utopia of rationality but misses a deeper truth: even if we could eliminate emotions, we would not solve the problem of suffering. Elliot’s case shows that life stripped of emotional coloration does not become “purer” or “more rational” — it becomes dysfunctional and destructive. Emotions constitute the foundation of the mind’s functioning rather than impeding it.
That is precisely why Liminal acceptance, which I described in the first part of the book, cannot operate as a “solution.” When I speak of accepting epistemic absurdity, of humble merging with reality, of accepting fate and death — these are not techniques to be practiced in order to attain inner calm. They describe the moment when all techniques collapse, when it becomes obvious that there is no exit. Liminal acceptance is not a solution; it is the fixation of the problem in its pure form. One must understand that once Pandora’s box has been opened in the form of the awareness of all the horrors of existence, it can never be closed. For me, as for many, Ligotti was that box: no one showed the immediate reality of our world and our position in it better than he did, without metaphysical refinements, without Schopenhauer’s or Mainländer’s “Will,” a reality plain and comprehensible as it is, albeit often through the lens of the horror-fiction genre. Nothing will fill the void that has always been within you and has now suddenly been discovered. Exposure therapy is effective: you can learn to think about death without panic attacks. Accepting death is relatively easy, though this should not be taken lightly: for some it is a fear, for others even the worst thing they have ever faced in life. But emptiness and meaninglessness are something different. This is not fear of an event, but the awareness of the absence of any ground. And this emptiness is not filled. Whatever you try to fill it with — meanings, projects, attachments, achievements — it always returns, because it was never an empty space requiring completion. This is reality.
That is precisely why, in the second part of the book, Professor P. criticizes the concept of Liminal acceptance, showing that it remains within the framework of the illusion of a subject who “accepts.” But if the subject is only a temporary model generated by the brain, then who is it that accepts? Acceptance presupposes an agent; there is no separate subject that could accept anything. There exists only a continuous process in which temporary states arise and disappear. Any “strategy of acceptance” turns out to be a fiction: changes simply occur according to deterministic laws, and there is no independent center that chooses or rejects these changes.
My critique, for the sake of which this division of roles into two professors was conceived, is not directed against the practices themselves as such, but against their being sold as ontological salvation. There is a critical distinction between a method of temporary calming and a claim to solve the fundamental problem of existence. But what if we admit honestly: life is meaningless, death is inevitable, and between these two points there is nothing that would require heroic effort? If we accept this not as a premise for further “spiritual growth,” but as a final conclusion, then the logical consequence becomes the minimization of intervention. Fuss loses any meaning. Why accumulate wealth if it only concentrates suffering through exploitation? Why accumulate knowledge if it only multiplies the awareness of meaninglessness?
There exist traditions that approached this understanding without elevating it into a metaphysical system of salvation, but I will not list them here — you can easily identify them yourself. They described a way of existing with minimal resistance to processes that will in any case unfold independently of our efforts. The difference lies in the fact that they did not promise nirvana, did not promise inner peace or transformation of consciousness. If you have already understood that struggle is meaningless, then why continue to struggle?
This is not the heroic asceticism of the Buddhist type, demanding renunciation for the sake of a higher goal. It is a simple recognition that active participation in economic, social, and emotional cycles requires energy that is spent on maintaining illusions. To consume only as much as is necessary for the functioning of the biological machine. To produce only as much as is required for this minimal consumption. When an action is necessary, to perform it with sufficient strength to achieve the immediate result, without excess, without an attempt to control consequences that lie beyond the limits of direct influence. The attempt to foresee and control distant consequences is a form of megalomania, the assumption that we are capable of calculating infinitely complex chains of cause and effect. The Stoic tries to control his reactions, the Buddhist tries to control his desires, the modern person tries to control career, relationships, the future. All these attempts arise from the assumption that control is possible and desirable. But if life is meaningless and will end in death regardless of our efforts, then it is more honest to admit that we do not know what will happen and therefore must limit ourselves to minimally necessary intervention. Even seeking death is another form of active relation, requiring emotional investment. It will come in its own time, determined by the combination of biological processes and random factors. All that remains is to allow the process to proceed on its own.
It is important to understand: this mode of existence is not virtuous in the Stoic sense. Virtue presupposes a moral dimension in which actions possess intrinsic value. Moral categories have lost a universal foundation, and nevertheless a selective, contextual ethics of harm-minimization remains possible — this is already a pragmatic ethical stance, not a metaphysical truth. This is not a path to liberation in the Buddhist sense — because there is nothing to be liberated from and no one to be liberated. This is not a technique of emotion regulation in the psychotherapeutic sense — because regulation presupposes a goal, and there are no goals. It is simply a way to pass the time between birth and death that minimizes one’s own contribution to the total suffering not through heroic self-sacrifice, but through the simple recognition that any active intervention in the world is more likely to multiply suffering than to reduce it.
This is not proposed as a model for imitation; you are free to do whatever you wish until you encounter external resistance. There is no claim that this is “right” or “better.” Most people will continue to create projects, build careers, start families, participate in economic cycles, believe in the possibility of improvement. That is their path. To be convinced of the contrary would be just another form of active intervention, an attempt to impose one’s own vision. But for those who have arrived at certain conclusions about the nature of reality and see no grounds for fuss, there is the possibility simply to stop fussing.
The difference between this view and the “liberating” ontological practices is fundamental. The latter present themselves as solutions, as paths toward something better — toward inner peace, toward “liberation.” If life is meaningless and will end in death (and nothing else is given), then heroic efforts to fill it with meaning are merely another form of self-deception, requiring constant energetic expenditure to maintain yet another cognitive distortion.
I hope this clarification puts everything in its place. Thus the position presented in the book is neither existentialism nor nihilism in the conventional sense. It is a consistent pessimism that rejects not only consolatory meanings, but even the very possibility of finding consolation in any action, practice, or “acceptance.” All known philosophies of acceptance, humility, or emotion management are only more refined forms of self-deception, ephemeral constructions. The second part, written in the voice of Professor P., destroys even this final illusion, showing that experience itself is primary in relation to all these constructions of the mind, and that there is no exit. In the third part, The Experience of the Tragic, written in my own voice, I once again clarified all these points so that no doubts would remain about what was said. Nothing is more real than pessimism — not because pessimism is “realism,” but because it alone refuses consolatory illusions and self-deception. It does not deceive you — and if it does deceive you, it will certainly not harm you, for what can harm a human being more than life itself? It simply reveals reality as it is — and this reality does not require our acceptance, because it exists regardless of whether we accept it or not, whether we hide or consciously drag our existence to the final day. The best thing a human being can do in life is to do nothing. Having dealt with this question, we may begin our path.
Part I. Ontological Foundations
Flat Ontology of Process
Traditional European thought, from Christian anthropology to phenomenology, insisted on the special status of the human being, deriving it from the act of self-consciousness: “I think, therefore I exist.” This position not only consolidated anthropocentrism, but also made the subject into an ontological foundation, a point of reference for the world. However, if we look closely at how the very sense of “I” arises, it becomes clear that it is not a source but a result of information processing — a temporary and fragile product of differentiation, integration of information, metabolic and neural processes that unfold without any “observer.”
What is called consciousness is only a process, stable only so long as it is supported by flows of energy, information, and interactions. Thinking, accordingly, is a processual activity. As soon as these flows change, consciousness collapses — not because an “I” disappears, but because it never existed as a substance in the first place. Instead, there is only a non-subjective dynamics, a process that I will describe further.
Such an approach makes it possible to step beyond not only anthropocentrism, but also the very opposition of “subject — object.” The point is not to “return the human being to nature,” but to see that nature never knew the human as a separate category. Living nature is not the center of the Cosmos, but one among many temporary forms of organization of process, as vulnerable and transient as any other. Culture, ethics, creativity — none of this is abolished, but it is deprived of its transcendent status. They turn out not to be manifestations of “spirit,” but complex, historically conditioned modes of stabilizing experience, which themselves obey the laws of thermodynamics, information, and decay.
The contemporary rejection of biocentrism represents a consistent philosophical movement, beginning with the deconstruction of anthropocentrism and reaching its radical phase in the overcoming of naturocentrism as a whole. This evolution is vividly expressed in the work of Jean-Marie Schaeffer, who enacts his rejection of anthropocentrism through a critique of the “Thesis of Human Exceptionalism.” Schaeffer identifies the structures of anthropocentrism by analyzing the Cartesian cogito, demonstrating that the claim to the uniqueness of human consciousness is untenable. However, while Schaeffer stops at the boundary of the biological, disputing human exceptionality without questioning the distinctiveness of the living, other philosophers have gone considerably further. Thinkers such as Manuel DeLanda, Graham Harman, and Levi Bryant initiated a broader perspective, first rejecting consciousness as a philosophically privileged phenomenon, and then discarding the very vertical ontology that structures hierarchies among nature, life, and objects. Their aim became the development of a flat ontology, in which humans, artifacts, natural phenomena, and technological objects coexist on a single ontological plane without any center or primacy. Thus, the final point of this trajectory is a complete departure not only from anthropocentrism and biocentrism, but also from any naturocentrism that dissolves heterogeneous entities into a single whole.
But let us not get ahead of ourselves. In his book, Jean-Marie Schaeffer conducts a meticulous deconstruction of the complex of assumptions we have come to designate as the “Thesis of Human Exceptionalism,” and in this deconstruction, the center of gravity invariably falls on the problem of the “I” and on the legacy of Cartesianism. For Schaeffer, the Cartesian cogito is not merely a historical argument; it functions as a methodological device through which an entire system of epistemological and ontological privileges is defended: the self-referentiality of the statement “I think” is attributed immunity against doubt, and on this basis an extended conclusion about the nature of the human as a thinking substance is constructed. Schaeffer clearly demonstrates both the power of this device and its limits: self-referentiality indeed grants the cogito a special performative force, but this force is not equivalent to a proof of the essential nature of the “I.” Descartes sought to derive from the immediate intuition of existence not only the fact of the speaker’s being, but also a characterization of the nature in which this being is realized; Schaeffer emphasizes that such a transition from fact to essence is unjustified when taking into account modern knowledge of biology, neuroscience, and the social nature of human life.
Schaeffer’s key idea is to show that the Cartesian defense of the “I” operates as a strategy of immunization: it delineates the boundaries of what is considered admissible in human knowledge and refuses to accept “externalist” evidence originating “from the third person.” Due to this strategy, any external knowledge about the human being can easily be declared irrelevant to understanding their true nature, because the true nature is supposedly revealed only in the act of self-consciousness. Schaeffer terms this segregationism: the Cartesian defense renders philosophy and the humanities partially insulated from naturalistic explanations, and this, in his view, is precisely what makes the Thesis of Human Exceptionalism resilient, regardless of empirical advances in biology.
Through an analysis of phenomena that disrupt the authentication of mental activity — auditory hallucinations, delusions of “inserted thoughts,” and similar disorders — he demonstrates that the very fact of “I think” can be experienced as non-self; the sense of authorship of thoughts and the sense of agency are separable and susceptible to failure. These clinical cases show that the immunity of the cogito does not guarantee that we are dealing with a monolithic, inflexible center of consciousness; in practice, the act of “I” is vulnerable to distribution, fragmentation, and erroneous attribution. Consequently, even where the performative force of the cogito remains undeniable, its conclusions regarding the nature of the “I” lose their persuasive power.
The development of these ideas can be traced in the works of the philosopher and cognitive scientist Daniel Dennett. Contemporary debates on the “hard problem of consciousness” are gaining momentum today. I have already discussed David Chalmers in the previous book, but I do not share his position on consciousness, although, as you will see in the second part of this book, I acknowledge his arguments regarding cognitive functions. The hard problem of consciousness is resolved if consciousness as a phenomenon does not exist; in this sense, I am an eliminative materialist. Dennett rejects Chalmers’ panpsychism; he is a physicalist, and his approach to consciousness can be called illusionism, which is much closer to eliminativism, but it does not eliminate the phenomenon of “consciousness” — rather, it provides a new explanation, stripped of any magical properties. From his book From Bacteria to Bach and Back, it becomes clear that his critique of the Cartesian subject is even more radical than Schaeffer’s. Whereas Schaeffer analyzes the cogito as an erroneous transition from the fact of thinking to a substantial “I,” Dennett goes further — he questions the very existence of that central subject which Cartesian tradition so vigorously defends. For Dennett, the “I” is a late product of evolution, a kind of “narrative center” arising from the intertwining of multiple cognitive processes. He compares the self to a theoretical construct in physics — a convenient reference point lacking any substantiality.
In the context of Schaeffer’s critique of segregationism, Dennett’s position appears as a logical completion: if Schaeffer shows that the Cartesian “I” cannot serve as a foundation for human exceptionalism, Dennett demonstrates that this “I” simply does not exist as a unified, coherent entity. His well-known “multiple drafts” model describes consciousness as the product of distributed neural processes rather than the work of a single central observer. Interestingly, unlike radical eliminativists, Dennett preserves the self as a useful illusion — similar to how the center of mass remains a useful abstraction in physics, even though it does not exist as a discrete entity in reality.
Dennett provides a powerful conceptual apparatus for demystifying the subject, but his caution regarding elimination leaves room for more decisive conclusions. His analysis shows that the Cartesian “I” is not merely erroneous — it is the result of a kind of simplification (the word “illusion” is poorly suited here, because if it were an illusion, there would have to be a reality beyond it, which does not exist at all), created by evolution to simplify complex cognitive processes.
But if the “I” is a process, not a substance, then its ontological status must not only be downgraded but reinterpreted as a temporary pattern within the flow of mental events. Dennett stops halfway, preserving a functional role for the self; I, however, insist that disintegration is not a side effect but a fundamental property of any formation. And this brings us to the discussion of flat ontology. To describe it, we turn to Ray Brassier’s article Deleveling: Against “Flat Ontologies”, where he elaborates the essence of flat ontology in detail:
“[…] The expression ‘flat ontology’ has a complicated genealogy. It was originally coined as a pejorative term for empiricist philosophies of science by Roy Bhaskar in his 1975 book, A Realist Theory of Science. By the late 1990s, it had begun to acquire a positive sense in discussions of the work of Deleuze and Guattari. But it only achieved widespread currency in the wake of Manuel DeLanda’s 2002 book about Deleuze, Intensive Science and Virtual Philosophy. More recently, it has been championed by proponents of ‘object-oriented ontology’ and ‘new materialism’. It is its use by these theorists that I will be discussing today. I will begin by explaining the ‘four theses’ of flat ontology, as formulated by Levi Bryant. Bryant is a proponent of ‘object-oriented ontology’, a school of thought founded by Graham Harman. In his 2010 work The Democracy of Objects, Bryant encapsulates flat ontology in the following four theses:
Thesis 1: “First, due to the split characteristic of all objects, flat ontology rejects any ontology of transcendence or presence that privileges one sort of entity as the origin of all others and as fully present to itself.”
Thesis 2: “Second, […] the world or the universe does not exist. […] [T]here is no super-object that gathers all other objects together in a single, harmonious unity.”
Thesis 3: “Third, following Harman, flat ontology refuses to privilege the subject-object, human-world relation as a) a form of metaphysical relation different in kind from other relations between objects, and that b) refuses to treat the subject-object relation as implicitly included in every form of object-object relation.” The basic idea is that, unlike Descartes, Kant and other philosophers who put epistemology before ontology, flat ontology does not begin by negotiating conditions of cognitive access to the world. It begins by treating the human-world relation, i.e. our relation of cognitive access to things, as simply another thing in the world, which is to say, an inter-object relation. It refuses the claim that this epistemic or cognitive relation is inscribed in all objectifications, so that anything we say or do with objects reflects or encodes some kind of conceptual or practical transaction.
Thesis 4: “[F]ourth, flat ontology argues that all entities are on equal ontological footing and that no entity, whether artificial or natural, symbolic or physical, possesses greater ontological dignity than other objects. While indeed some objects might influence the collectives to which they belong to a greater extent than others, it doesn’t follow from this that these objects are more real than others. Existence, or being, is a binary such that something either is or is not.” These four theses taken together are supposed to entail something that has been called ‘anthropodecentrism’. Bryant explains this in the following way: In this connection, flat ontology makes two key claims. First, humans are not at the center of being, but are among beings. Second, objects are not a pole opposing a subject, but exist in their own right, regardless of whether any other object or human relates to them. Humans, far from constituting a category called “subject” that is opposed to “object”, are themselves one type of object among many. What is significant are the denials that accompany the four theses of flat ontology. According to the first thesis, there is no transcendence: forms, species, kinds, archetypes, propositions, laws, and other abstract entities are disallowed. The flatness affirmed by flat ontology is the flatness of a more or less differentiated but nevertheless level ontological playing field. According to the second thesis, there is no world: no totality, universe, One-All, etc. This claim is not peculiar to flat ontologists; other contemporary philosophers, including Markus Gabriel and Alain Badiou, defend some version of it. According to the third thesis, there is no constituting subjectivity: no pure Apperception, Geist, consciousness, Dasein, etc. Flat ontologists do not begin by identifying subjective conditions of epistemic access to reality.
According to the fourth thesis, there is no appearance/reality duality: what is, is, what is not, is not. Here we have an interesting reassertion of the Parmenidean thesis discussed in Plato’s Sophist. For Plato, philosophy or dialectics is predicated on the subversion of this Parmenidean interdiction on asserting the being of non-being or non-being of being: dialectics affirms the mixture of being and non-being. Flat ontology, in contrast, treats being as univocal: things can only be said to be in a single sense. But the claim about putting entities “on an equal ontological footing” implies that there are no degrees of being, just as there is no distinction between being and non-being, or between reality and appearance. Of course, this means that flat ontologists deny Plato’s claim that it is necessary to think the interpenetration of being and nonbeing, which is the task of dialectics.”
The critique offered by Brassier exposes the vulnerable points of flat ontology, but it does not invalidate its philosophical productivity. I accept its rejection of privileged entities, while also acknowledging the necessity of distinguishing between epistemological and ontological levels — a distinction that will become clearer in the subsequent analysis of process and experience. Flat ontology attempts to realize a thoroughgoing anthropo- and bio-decentrism. It moves far beyond the critique of human exceptionalism, subjecting to radical doubt the very idea of the privileged status of life as such. Life is no longer the center of the cosmos, but merely one among many temporary and co-equal modes of material organization, alongside other formations. This is the final point of the trajectory that begins with the critique of the Cartesian “I”: a world without a subject, without a biological center, without any hierarchy between the organic and the inorganic. Yes, it has its shortcomings; however, the essential contribution of flat ontology lies in compelling us to think beyond not only anthropocentrism, but any hierarchy grounded in the supposed “specialness” of life, mind, or metaphysical force. It is simply a necessary step.
Fractal Nature of Determinism
In the previous work I argued that human behaviour and consciousness are not the result of free choice but the lawful consequence of neurobiological, genetic, hormonal, and environmental factors. The work of Robert Sapolsky has shown that free will is an adaptive illusion necessary for social functioning yet incompatible with a scientific understanding of causation. The brain produces a sense of control, while at a deeper level all our decisions are determined by a chain of events beginning long before the moment of conscious choice.
However, the conclusion of the first part left open the fundamental question of determinism beyond human life and behaviour. For the Cosmos as a whole, classical linear Laplacean determinism now appears untenable in light of contemporary science. Quantum mechanics introduces fundamental indeterminacy at the microscopic level. Chaos theory demonstrates exponential sensitivity to initial conditions, rendering long-term prediction impossible. Self-organization and the emergence of novel forms in nature and society seem incompatible with rigid predestination. How then can determinism be preserved without collapsing into indeterminism or mysticism?
The answer requires a radical reconception of the concept of determinism itself. The linear model of causality, where one cause sequentially produces an effect, must be replaced with a fractal model in which causality is distributed, recursive, and self-similar at all scales. Fractal determinism does not deny quantum randomness, chaotic unpredictability, or spontaneous self-organization. On the contrary, it integrates them into a deeper notion of necessity in which randomness appears as a mode of manifestation of lawful structure, and novelty is a lawful consequence of complex interactions among many causal lines.
If linear determinism assumed that the future is already encoded in the initial conditions as an explicit plan, fractal determinism asserts that the future is not pre-scripted but inevitably arises from the system’s self-organization. If classical determinism sought a prime cause and a final goal, fractal determinism describes being as a self-generating process without an external source or teleological direction. If the traditional approach set necessity and chance in opposition, the new perspective treats them as complementary aspects of a single process.
This model finds expression in a number of fundamental mechanisms described within fractal geometry and the theory of complex systems. The observation of self-similarity — that form repeats across scales — emerged from practical problems of measurement. Lewis Richardson’s famous coastline paradox showed that the more finely one measures an indented line, the longer it becomes. This observation received mathematical formulation in the work of Benoit Mandelbrot, who laid the foundations of fractal geometry. Mandelbrot introduced the concept of fractal dimension, which permits the description of irregular, self-similar forms widespread in nature: coastlines, river networks, the vascular system of the lungs, and the distribution of neural activity in the brain.
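The coastline effect is easy to reproduce numerically. The sketch below is my own illustration (not drawn from Richardson or Mandelbrot): it uses the Koch curve as a stand-in for an indented coastline, shrinks the measuring “ruler” by a factor of three at each refinement, and shows the measured length growing without bound according to L(ε) ∝ ε^(1−D), with fractal dimension D = log 4 / log 3 ≈ 1.26.

```python
# A minimal numerical sketch of the coastline effect, using the Koch curve
# as a stand-in for an indented coastline (illustrative code only).
import math

def koch(points, depth):
    """Refine the polyline: each segment becomes four segments one third
    as long, with a triangular 'promontory' added in the middle."""
    for _ in range(depth):
        refined = []
        for (x1, y1), (x2, y2) in zip(points, points[1:]):
            dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
            a = (x1 + dx, y1 + dy)
            c = (x1 + 2 * dx, y1 + 2 * dy)
            # apex of the bump: the middle third rotated by 60 degrees
            b = (a[0] + dx * 0.5 - dy * math.sqrt(3) / 2,
                 a[1] + dy * 0.5 + dx * math.sqrt(3) / 2)
            refined += [(x1, y1), a, b, c]
        refined.append(points[-1])
        points = refined
    return points

def length(points):
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

coastline = [(0.0, 0.0), (1.0, 0.0)]
for depth in range(6):
    ruler = 3.0 ** -depth                 # segment ("ruler") size at this refinement
    print(f"ruler = {ruler:.4f}   measured length = {length(koch(coastline, depth)):.4f}")

# The length grows as the ruler shrinks, following L ~ ruler**(1 - D)
# with D = log 4 / log 3: Mandelbrot's fractal dimension.
print("fractal dimension D =", math.log(4) / math.log(3))
```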
Parallel to this, the theory of nonlinear dynamics and deterministic chaos developed. Concepts such as strange attractors, bifurcations, multistability, and self-organized criticality emerged. These research directions provided tools for measuring the scale-structure of systems via power laws, autocorrelations, and fractal dimension, and for modelling how simple local rules generate complex global patterns. To disclose the idea of a fractal basis for determinism it is sufficient merely to point to these terms without delving into their technical definitions. Fractal determinism takes these empirical and formal results and builds from them a new ontology of causality.
The first fundamental mechanism of fractal causality is feedback. Any action alters the environment, and the altered environment acts back upon the source of change. This cycle repeats, and through repetition a stable direction of change is formed. A classic instance of feedback is the river channel. The flow of water alters the bank; the altered bank changes the current; the current again acts on the bank. Gradually a stable form emerges, although no single act was precomputed or predetermined by an external force. The form of the channel is the result of continuous interaction between flow and bank, where each state is determined by the previous one and determines the next.
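Such a loop can be caricatured in a few lines of code. The toy model below is purely illustrative (the update rule and constants are invented, not hydrology): two very different initial channels, driven by the same flow-and-bank cycle, settle into the same stable form, although no individual step was laid out in advance.

```python
# A toy feedback loop in the spirit of the river-and-channel example.
# The rule and constants are invented for illustration only.

Q = 10.0           # fixed discharge: how much water arrives per unit time
THRESHOLD = 1.0    # flow speed above which the bank erodes

def step(width):
    speed = Q / width                      # a narrower channel forces faster flow
    erosion = 0.2 * (speed - THRESHOLD)    # fast flow widens the channel, slow flow silts it up
    return max(0.1, width + erosion)

for start in (1.0, 50.0):                  # two very different initial channels
    width = start
    for _ in range(2000):
        width = step(width)                # flow reshapes bank, reshaped bank changes flow
    print(f"initial width {start:5.1f} -> settled width {width:.3f}")

# Both runs settle at width = Q / THRESHOLD: each state is determined by the
# previous one and determines the next, yet no step was laid out in advance.
```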
Such cyclical causality dissolves the classical distinction between cause and effect as separate, sequential events. In fractal determinism cause and effect merge into a continuous process of mutual determination. The river-and-its-channel is a single process in which the division into active and passive, forming and formed, is conditional and depends on the viewpoint. This principle applies to any system. Neural networks in the brain are shaped by experience, yet the formed networks determine what experience will be perceived and how it will be integrated. Social institutions are created by people, but then the institutions shape the people who recreate them. Economic systems are produced by individuals’ decisions, but those systems define the space of possible decisions.
The second mechanism of fractal causality is threshold response. Not every perturbation triggers a process of development. To move from an insignificant state to a noticeable one, the cumulative effect must cross a certain threshold. Below the threshold perturbations are damped by the system; above it a rapid redistribution begins, often cascading in character. This property explains why many changes appear random and unpredictable. A system can accumulate tension for a long time without visible change, then suddenly transition to a new state. The concept of self-organized criticality, formulated by Per Bak, Chao Tang, and Kurt Wiesenfeld, describes systems that naturally evolve toward a critical state in which the smallest disturbance can trigger an avalanche of any size. Example: a sandpile onto which grains are slowly dropped. Most grains provoke minor readjustments, but occasionally avalanches of varying scale occur, whose distribution obeys a power law. The system reaches the critical state by itself without external tuning of parameters. This implies that many natural and social systems constantly reside on the edge between stability and chaos, where the accumulation of imperceptible changes can suddenly produce dramatic reorganization.
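The sandpile itself can be simulated directly. The following sketch is a minimal version of the Bak–Tang–Wiesenfeld model; the grid size and the number of dropped grains are arbitrary choices made for illustration. Grains are added one at a time, a site topples when it holds four grains, and the recorded avalanche sizes range from single topplings to cascades that sweep much of the grid.

```python
# A minimal Bak-Tang-Wiesenfeld sandpile (grid size and number of grains
# are arbitrary illustrative choices).
import random

N = 20                                   # side length of the grid
THRESH = 4                               # a site topples when it holds 4 grains
grid = [[0] * N for _ in range(N)]

def relax(i, j):
    """Relax the pile after a grain lands; return the avalanche size
    (the total number of topplings)."""
    size = 0
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        if grid[x][y] < THRESH:
            continue
        grid[x][y] -= THRESH
        size += 1
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < N and 0 <= ny < N:   # grains toppled off the edge are lost
                grid[nx][ny] += 1
                if grid[nx][ny] >= THRESH:
                    stack.append((nx, ny))
        if grid[x][y] >= THRESH:
            stack.append((x, y))
    return size

avalanches = []
for _ in range(50_000):
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1                      # drop a single grain
    if grid[i][j] >= THRESH:
        avalanches.append(relax(i, j))

# After a transient the pile tunes itself to the critical state: most
# avalanches are tiny, a few are enormous, with no characteristic size.
print("avalanches:", len(avalanches),
      "largest:", max(avalanches),
      "over 100 topplings:", sum(a > 100 for a in avalanches))
```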
Earthquakes, wildfires, mass extinctions, stock-market crashes, and social revolutions all display the same fractal structure in the distribution of events by scale. The frequency of events is inversely proportional to their magnitude according to a power law. This means that small events occur very often, medium events less frequently, and large events rarely — yet all are manifestations of the same process. There is no principled distinction between small and large events, between normality and catastrophe. A catastrophe is a rare but lawful fluctuation of a system that resides in a critical state.
The third mechanism of fractal causality is spatial transmission of change. When something happens in a system, its effect is felt not only at the immediate locus but also in neighbouring regions. Change propagates from one part to another as a chain of causes in which the outcome of the first step becomes the beginning of the second. If a patch of ground becomes saturated, water runs downhill and alters the moisture of adjacent patches. If pressure falls in one part of the atmosphere, air moves and changes pressure elsewhere. If an infection spreads, infected individuals contact susceptibles and transmit the pathogen.
A key parameter here is the connectivity of the system. A dense, highly branched network of links facilitates propagation of a process over considerable distances from its point of origin. If connectivity is weak or sparse, the process dies out. Thus a local event can gradually grow into a global change because the perturbation is transmitted step by step through contacting elements. Epidemics spread along networks of social contacts. Financial crises propagate through chains of debt and interdependence. In all these cases the structure of connections determines the dynamics of the process.
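A bare-bones illustration of how connectivity decides the fate of a perturbation: the sketch below (the network model, transmission probability, and sizes are my own illustrative choices) runs the same local transmission rule on random networks of increasing mean degree. Below the threshold the outbreak stays local; above it the same rule carries a single event through the system.

```python
# The same local transmission rule on networks of different connectivity.
# Illustrative parameters, not taken from the epidemiological literature.
import random

def random_graph(n, mean_degree):
    """Erdos-Renyi random graph stored as adjacency lists."""
    p = mean_degree / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def outbreak_size(adj, transmit=0.5):
    """Seed one node; every contact transmits with a fixed probability."""
    infected = [False] * len(adj)
    seed = random.randrange(len(adj))
    infected[seed] = True
    frontier = [seed]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in adj[node]:
                if not infected[nb] and random.random() < transmit:
                    infected[nb] = True
                    nxt.append(nb)
        frontier = nxt
    return sum(infected)

n = 500
for k in (1.0, 1.8, 2.2, 4.0):           # mean number of contacts per node
    sizes = [outbreak_size(random_graph(n, k)) for _ in range(20)]
    print(f"mean degree {k}: average outbreak {sum(sizes) / len(sizes):.0f} of {n} nodes")

# Below the threshold (transmission probability x mean degree < 1) outbreaks
# stay local; above it the same rule lets a single event sweep the network.
```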
The fourth mechanism of fractal causality is historical dependence. Any change leaves consequences: it alters form, distribution, conditions, or behaviour of elements. These consequences do not vanish but enter the current conditions, becoming part of the context for subsequent processes. Therefore the same impact produces different effects at different times because the environment has already been modified by past events. Soil scorched by fire absorbs moisture differently and supports different vegetation. A society that has passed through crisis responds differently to risk and uncertainty. An organism that has recovered from infection acquires immunity or chronic damage.
The concept of path dependence, developed in economic history and evolutionary economics, describes how past decisions constrain future possibilities. For example, the QWERTY keyboard layout persisted through historical contingency, reinforced by market mechanisms rather than by optimality: its wide adoption created an infrastructure of training and production that made switching to alternative layouts uneconomical. Technological standards, institutional structures, and cultural norms display the same inertia. Once established, they become self-sustaining, even when the original causes of their emergence have disappeared.
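Lock-in of the QWERTY kind is captured by the classic Pólya urn scheme, sketched below with invented labels (“standard A,” “standard B”). Each newcomer adopts whichever standard is already more widespread, so early, essentially accidental adoptions are amplified into a stable outcome that reflects the history of the particular run rather than the merits of either standard.

```python
# A Polya-urn sketch of path dependence and lock-in. The labels and numbers
# are invented; the point is the mechanism, not any particular market.
import random

def adoption_run(steps=100_000):
    counts = {"standard A": 1, "standard B": 1}      # one early adopter each
    for _ in range(steps):
        total = counts["standard A"] + counts["standard B"]
        # each newcomer adopts a standard with probability proportional to
        # how many have already adopted it (increasing returns to adoption)
        if random.random() < counts["standard A"] / total:
            counts["standard A"] += 1
        else:
            counts["standard B"] += 1
    return counts

for run in range(5):
    c = adoption_run()
    share = c["standard A"] / (c["standard A"] + c["standard B"])
    print(f"run {run}: final share of standard A = {share:.2f}")

# Every run follows the same lawful rule, yet each settles on a different
# stable share: the outcome is fixed by the accumulated history of that run,
# not by any intrinsic superiority of A or B.
```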
Path dependence differs radically from linear determination. In linear determinism the past determines the future through a continuous chain of causes. In fractal determinism the past determines the future through accumulated structure — through the context in which current processes unfold. Evolutionary biology illustrates this with special clarity. Organisms carry in their structure the traces of the entire history of life on Earth. Each adaptation is overlaid upon preceding structures, modifying them but not cancelling them. Hence evolution does not proceed toward an optimum but wanders across a landscape of possibilities, where every step is constrained not only by present conditions but by the whole prior trajectory.
The fifth and most fundamental mechanism of fractal causality is scale invariance, or self-similarity. This is the property whereby the same processes repeat across different scales. Small and large obey the same rules: change produces a response, the response alters conditions, and the cycle repeats. The difference is only in size and energy, not in the principle of organization. Thus microscopic processes, for example turbulence in a droplet, develop according to the same logic as large atmospheric vortices. In economics this is especially striking: short-term price fluctuations are structured the same way as long-term trends, as Mandelbrot demonstrated in his studies of financial time series.
Fractality means that the form of behaviour is preserved when the scale of observation is increased or decreased. Only the level at which it manifests changes, not the structure of the process. This is a deep property of natural systems that went long unnoticed because traditional science focused on characteristic scales and sought specific laws for each level of organization. The fractal approach shows that many systems lack a characteristic scale. They are self-similar across scales, from the microscopic to the macroscopic.
Power laws, which describe the distribution of events by magnitude, are the mathematical expression of scale invariance. If the probability of an event is inversely proportional to its magnitude to some exponent, then changing the measurement scale preserves the shape of the distribution. This contrasts with the normal distribution, which involves a characteristic scale (a mean) and where deviations from the mean decrease exponentially. In systems governed by power laws there is no typical event: small, medium, and large occurrences form parts of a single continuum.
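The point can be stated compactly. Writing the power law with exponent α and rescaling the variable by an arbitrary factor λ:

\[
p(x) = C\,x^{-\alpha}
\quad\Longrightarrow\quad
p(\lambda x) = C\,(\lambda x)^{-\alpha} = \lambda^{-\alpha}\,p(x),
\]

so a change of scale multiplies the distribution by a constant but leaves its shape untouched; there is no scale at which the law looks different. The normal distribution, by contrast, carries the characteristic scale σ in its exponent, beyond which deviations are exponentially suppressed.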
The population distribution of cities, the distribution of incomes, the distribution of firm sizes, the distribution of citations of scientific papers, the distribution of solar flare intensities, and the distribution of earthquake magnitudes all exhibit power-law behaviour. This indicates that the processes generating these distributions are self-organizing and scale-invariant. There is no external regulator prescribing an optimal city or firm size. The system itself generates the entire hierarchy of scales through local interactions and feedback.
This assemblage of feedback, transmission of perturbations, path dependence, and self-similarity constitutes the essence of fractal determinism. It is not founded on a single originating cause as in the classical Laplacean approach, nor does it admit pure randomness as in indeterminism. Everything is subject to necessity, but that necessity is not one-dimensional: it is multilayered, recursive, and self-repeating.
A crucial consequence of fractal determinism is a reconsideration of the relation between quantum mechanics and the macroscopic world. In standard quantum mechanics the state of a particle in superposition is fundamentally probabilistic: the outcome of a measurement cannot be predicted even with full knowledge of the wave function. This appears to contradict determinism. However, the process by which an uncertain quantum state gives rise to a definite macroscopic outcome is described by the theory of decoherence, which proceeds in a fully deterministic manner.
Decoherence arises from the irreversible interaction of a quantum system with its environment. The environment consists of an enormous number of degrees of freedom — photons, air atoms, molecules of the measuring apparatus. These interactions cause a rapid suppression of the interference terms of the wave function, after which the system behaves as if it were in one of the classical states. Decoherence occurs extremely rapidly for macroscopic objects, on timescales on the order of 10⁻²⁰ seconds, which renders quantum superposition practically unobservable. Decoherence does not solve the measurement problem in quantum mechanics. It does not explain why one particular outcome is observed rather than another. That remains fundamentally random in the Copenhagen interpretation. However, decoherence explains why the macroscopic world appears classical and deterministic. Quantum uncertainty does not penetrate the macroscopic world not because quantum mechanics ceases to apply, but because interaction with the environment makes interference unobservable.
In nonlinear and chaotic systems, small quantum fluctuations at the initial stage can be exponentially amplified and lead to observable macroscopic differences in the final state. Here, however, it is necessary to distinguish predictability from determinism. A system can be deterministic — its state uniquely determined by initial conditions and laws of evolution — and yet unpredictable because of sensitivity to initial conditions. In chaotic systems the exponential divergence of trajectories makes long-term prediction impossible even with infinitely precise knowledge of initial conditions, since any finite precision will be exhausted in finite time.
Thus, although the outcome of an individual quantum event — for example, the decay of an atom — is regarded as fundamentally random, its macroscopic consequences can be deterministically predicted once decoherence has turned that outcome into a classical fact. Moreover, for ensembles of quantum events the probabilistic predictions of quantum mechanics become deterministic in the thermodynamic limit. The law of large numbers guarantees that fluctuations in relative frequency diminish as the number of events grows. Therefore macroscopic observables, which average over an enormous number of microscopic events, behave deterministically with precision beyond any practical means of measurement.
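As a toy illustration of this statistical averaging (the decay probability and ensemble sizes below are arbitrary choices, not data about any real nucleus), one can simulate independent random "decay" events and watch the relative frequency settle down as the ensemble grows:

```python
import numpy as np

rng = np.random.default_rng(0)
p_decay = 0.3  # assumed per-event decay probability for this toy example

# Relative frequency of decays in ensembles of increasing size.
for n in [10**2, 10**4, 10**6]:
    events = rng.random(n) < p_decay           # n independent random events
    freq = events.mean()                       # observed relative frequency
    # The spread of the frequency scales as 1/sqrt(n) (law of large numbers).
    expected_spread = np.sqrt(p_decay * (1 - p_decay) / n)
    print(f"n={n:>9}: frequency={freq:.5f}, typical fluctuation≈{expected_spread:.5f}")
```

The fluctuation of the frequency shrinks in proportion to one over the square root of the number of events, which is exactly the sense in which macroscopic averages over quantum events become effectively deterministic.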
Fractal determinism integrates quantum randomness as one mechanism through which necessity is realised at the macroscopic level. But it does not cancel determinism, since macroscopic processes remain determined by decoherence and statistical averaging. Thus quantum mechanics is compatible with fractal determinism, even if it is incompatible with linear Laplacean determinism.
Weather systems exemplify fractal determinism with particular clarity. The atmosphere is a turbulent fluid governed by nonlinear hydrodynamic equations. These equations are deterministic but generate chaotic dynamics because of nonlinear interactions and feedbacks. Edward Lorenz demonstrated that even a simple model of atmospheric convection exhibits sensitivity to initial conditions, making long-term prediction impossible. The famous “butterfly effect,” according to which the flap of a butterfly’s wings can alter the weather weeks later, illustrates this sensitivity.
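A minimal numerical sketch of this sensitivity, assuming the standard Lorenz parameter values (σ = 10, ρ = 28, β = 8/3) and an arbitrarily chosen perturbation of 10⁻⁸, looks as follows:

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz convection model."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta integration step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 4000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # the "butterfly": a perturbation of 1e-8

for i in range(steps):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
    if i % 500 == 0:
        print(f"t={i * dt:5.1f}  separation={np.linalg.norm(a - b):.3e}")
# The separation grows roughly exponentially until it saturates at the size
# of the attractor: full determinism without long-term predictability.
```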
Unpredictability, however, does not imply absence of determinism. The atmosphere remains a deterministic system in which each state is uniquely determined by the previous one. Unpredictability arises from the impossibility of measuring initial conditions with infinite precision and from the exponential growth of errors. At the same time the atmosphere displays a fractal structure on all scales: vortices of every size interact with one another, transferring energy from large scales to small via the turbulence cascade. The distribution of energy across scales follows Kolmogorov's power law, which is a signature of scale invariance.
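The scaling referred to here can be written in one line (ε denotes the mean rate of energy dissipation per unit mass, k the wavenumber; the statement concerns the inertial range of the cascade):

```latex
% Kolmogorov's five-thirds law for the inertial range of the turbulent cascade:
E(k) \;\sim\; C_{K}\,\varepsilon^{2/3}\,k^{-5/3}
% The pure power-law form has no preferred scale, which is precisely
% the scale invariance invoked in the text.
```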
Climatic patterns such as El Niño exhibit self-organized criticality. The ocean – atmosphere system accumulates heat in the western Pacific until a threshold is reached, after which heat is rapidly redistributed eastward. This event affects weather worldwide via teleconnections — atmospheric waves propagating thousands of kilometres. The frequency and intensity of El Niño events are not regular, but they conform to statistical regularities that reflect the fractal structure of the climate system.
Financial markets display the same fractal logic. Prices and volumes reflect the continual operation of feedbacks. Market participants' actions alter liquidity and sentiment; this changes subsequent decisions, which in turn affect prices. Most trades do not alter the dynamics, but when volumes align and accumulate, a single signal can trigger a chain reaction and grow into a global change of trend. Market crashes are lawful, though rare, fluctuations of a system in a critical state. The market constantly balances on the edge between stability and chaos, where the accumulation of imperceptible changes can suddenly produce a reconfiguration and a consequent collapse or rally. Therefore markets cannot be predicted exactly, but the probability of large moves can be assessed from signs of concentration of volume and volatility.
The fractal structure of markets arises spontaneously from the interaction of many participants, each acting on limited information and private aims. The market self-organizes into a critical state without external parameter tuning. This demonstrates the universality of self-organizing mechanisms that operate in systems of different natures: physical, biological, economic, social.
All this composes the logic of fractal determinism. The world is not predetermined in the Laplacean sense. The future is not encoded in initial conditions as an explicit plan awaiting execution. Yet the world inevitably unfolds according to its own internal links, feedbacks, path-dependencies, and scale-invariances. Chance is not the opposite of necessity but a form of its manifestation. Any deviation is built into the overall web of causes as a lawful fluctuation of a self-organizing system.
The synergetic aspect appears here as continuous self-renewal and self-organization. The Cosmos, understood fractally, constantly reproduces its own differences and generates new forms from the interaction of already existing ones.
Thus everything that exists is not merely the result of external causes but an active mode of being participating in its own self-determination. Human beings, like any other systems, are links in a large recursive pattern in which inner and outer, cause and effect, subject and object cease to be rigid oppositions. A choice deemed “free” is the outcome of the most complex interaction of genetic, hormonal, neural, and environmental factors operating across multiple temporal scales. In sum, not only human behaviour but all processes in nature and society are the results of multilayered, recursive causality in which each event is determined by the system’s entire prior history and by the whole network of current interactions.
This perspective naturally leads to a form of cosmic fatalism. Everything happens as it can happen because other outcomes are impossible within the given configuration of causal links. Every possibility is already the realization of one of the fractal directions of a system’s development, and any deviation is a lawful consequence of internal interaction. This fatalism does not exclude novelty; on the contrary, it makes novelty a necessary effect of complex interactions. The new does not arise in spite of determinism; it is its direct manifestation, a form born of the interplay among many causal threads, folds, and irregularities of being.
Therefore the idea of freedom, if it is possible at all, can only be understood as the recognition of one’s inclusion in an endless process of determination. Freedom is not the capacity to act independently of causes; it is rather an understanding of the causes that operate through us.
Fractal determinism unites self-motion, chance, necessity, and recursion into a single ontological schema in which everything — matter, consciousness, event — turns out to be different levels of the same self-generating process. There is no external observer, no position outside or above being from which one could judge it. Everything that exists is internal to being; the external is merely a shift of scale, a transition to another level of organization that likewise remains internal. Hence every difference is an internal differentiation of being with itself — a differentiation of the unified process into many local processes, each of which reflects the whole.
All that occurs is the infinite self-diffusion of being within its internal dimensions, the unfolding of potential differences into actual forms. If one regards such determinism as fatalism, it is a fatalism not of predestination by an external will or divine design but of the internal inevitability of a self-generating process. It is the fatalism of infinite self-similarity, in which every act repeats the structure of the whole, reproduces it at its own scale, yet never coincides with it completely, always adding a new difference, a new fold in the fabric of being.
This view has profound implications for the understanding of human existence. If our actions are determined not by a linear chain of causes but by a fractal chain of interactions, then the question of responsibility must be reformulated. We bear responsibility not because we could have chosen otherwise in some metaphysical sense, but because we are part of a causal network through which further trajectories of the system are realized — a system that already possesses, in its own terms, a predetermined end. Our actions have consequences that propagate through networks of connection and affect future states. In this sense we are responsible as active participants in the process of self-organization, not as free agents standing outside causality.
Sapolsky emphasized that understanding the biological determinism of behaviour requires a revision of systems of punishment and reward. Fractal determinism extends this conclusion. If action and behaviour are properties of a multiplicity of interacting factors operating across different spatio-temporal scales, then the point of intervention must shift from the individual to the environment. We are not dealing with a “bad person” but with a person in a bad system (a bad society). This idea is taken to its ethical limit by Dietrich Bonhoeffer, who argued that society bears guilt for crime not merely because it failed to prevent it, but because its indifference produced it.
Fractal determinism, therefore, is not merely an update of classical determinism in the light of contemporary science, but a rethinking of the very nature of causality. It shows that determinism is compatible with unpredictability, necessity with contingency, lawfulness with novelty. It integrates the findings of quantum mechanics, chaos theory, self-organization theory, and complex-systems theory into a single philosophical picture in which being is understood as a self-generating, self-organizing process possessing an internal necessity that manifests across all scales through self-similarity.
Cosmic Pessimism
The Birth of the Universe from Nothing
In The Experience of the Tragic I intentionally set aside the question of the formal origin of cosmological eventfulness. It lay outside the project of the book and threatened to expand into an unwieldy survey of specific cosmological scenarios. Now, however, we can allow ourselves to pay closer attention to this topic.
The material Universe rarely becomes the explicit subject of contemporary pessimism. And when it does, such works usually leave no real trace of the Universe or cosmology. Of course, the Universe is studied by science, but science is not concerned with the character of the Universe as a place or with its relation to life — that is not its task. And yet it is science that inadvertently pushes philosophy toward transformation and toward a new self-definition.
Today, the philosophy of pessimism, it seems to me, faces an urgent problem: to comprehend the nihilistic character of the Universe itself. Moreover, all the conclusions of modern physics point either to scepticism or to direct pessimism regarding the Universe as such. The Universe, like the world as a whole, is not indifferent to our existence — it is actively hostile to it.
The early pessimists of the late nineteenth century, while not speaking directly of the Cosmos, simply lacked sufficient knowledge about it. What was there to say when the expansion of the Universe was only announced in 1929? With the rise of philosophical nihilism in the twentieth century and the rapid development of the sciences, the idea of “indifference” — of the nihilistic nature of the Cosmos and the meaninglessness of life and of all living things — prevailed. At least, virtually all contemporary pessimistic works and authors repeat this mantra.
Philipp Mainländer perhaps came closest to cosmic pessimism: he distinguished in the world a tendency toward destruction and self-negation. Yet, alas, his metaphysics and the spirit of his age prevented him from thinking about the Cosmos impartially — just as Schopenhauer’s notion of the “Will” hindered Schopenhauer himself.
In the twentieth century Peter Wessel Zapffe, reflecting on the over-equipment of our reason, argued that this very over-equipment makes us especially vulnerable to an indifferent, "silent" nature. We are forced to invent defensive mechanisms for reason itself. Subsequent pessimists of the twentieth and twenty-first centuries — from Cioran to Ligotti, from Benatar to others — continue this line: the Cosmos is taciturn and indifferent.
However, there are strands of pessimism that reject the nihilistic reading of the Cosmos. Thus, in the Russian-speaking EFILism community it is asserted:
“The Universe was not created by an intelligent Being. Rather, its arising is conditioned by the fundamental necessity of reality itself. Perhaps the state of “nothing” (the absolute absence of everything) is logically impossible. Something must always exist.
The primordial state was not a true “nothing” but a quasi-stasis. Think of it as a potentially unstable equilibrium in which events were temporarily absent, while the intrinsic properties of reality (nomological laws) already contained the potential to disturb that calm. Put simply, the Universe is not a “creation” but a breach or disruption of an original, albeit fragile, equilibrium that occurred thanks to internal, necessary laws. It is like the “division of zero by zero” — not something someone did, but something that happened because of the fundamental properties of being itself. Just as gravity simply is, so too the existence of the Universe may be equally fundamental and require no external cause. But note an important point: most of these conclusions rest on the postulate of the Universe’s indifference — on the phenomenology of human experience of that “background,” rather than on detailed cosmology or an analysis of the world’s telos.
After that “beginning” the Universe does not move toward any goal or plan. It exists and unfolds as a purposeless but strictly deterministic chaos. It is an endless process of decay, recombination, and motion of matter and forces. There is no “plan,” “purpose,” or “meaning” in this dynamics. Events simply “happen” in accordance with unchanging physical laws and inertia. Imagine billions of falling dominoes where each fall is perfectly determined by the previous one, yet the entire chain has no final aim other than simply being. Life, including consciousness, is not some special, necessary, or “desirable” part of the Universe. It is merely a temporary, local “mutant of chaos” — an immensely complex but ephemeral configuration of matter and forces.”
Here the line of thought is strictly pessimistic, and the phrase "fundamental necessity" points toward a deterministic orientation. Yet contradictions soon appear, returning us once again to natural nihilism and the supposedly indifferent orientation of the cosmos. Let us return, however, to the question: why are works devoted specifically to "cosmic pessimism" either nonexistent or given that title only in a speculative sense? The idea of an indifferent Universe has become so familiar, even clichéd, that no one bothers to challenge it. Attention is focused on the "indifferent" role of the cosmos, while the direction of thought itself remains oriented toward the human being and the human place in the Universe. Such works are, of course, more accessible and engaging to the reader, but this does not justify their titles. There are many reasons for this: a lack of philosophical interest in the cosmos, the Wittgensteinian principle that one should remain silent about what cannot be spoken of, and so on. But one of the key "problems" behind the absence of works on cosmic pessimism — apart from the difficulty of the topic itself — is, in my view, the unexamined and erroneous assumption of the cosmos's indifference to life, from which further premises follow.
I will now try to show that the Universe has a direction and that this direction can, with some caution, be given the anthropocentric label “meaning” — though it is more accurate to call it a “process.” That the cosmos is, in fact, not quite so indifferent. And that in philosophy, especially in existential philosophy, it should be discussed not within the framework of nihilism, but strictly within the framework of cosmic pessimism.
To begin with, contemporary cosmology proposes several competing hypotheses about the origin of the Universe, yet in all of them the question inevitably arises concerning the source of the primary energy that gives rise to being. The most widespread hypotheses are those in which our Universe arises from the “remnants” of a preceding one — whether as a singularity of a pre-universe, a flare at the point of gravitational collapse, or a tunneling effect within a multiverse. There are even concepts linking the birth of new universes to the interiors of black holes; the very idea of “cosmological natural selection” has received some development through formal analogies with biology. Yet even here a “parent” Universe is required, within which a black hole capable of generating another Universe first appears. All of this stems from the fact that we have not yet succeeded in unifying quantum mechanics and gravity, nor resolved the fundamental questions concerning the nature of space, time, and energy. But even if we were to uncover all the secrets of the Universe — what then? Would that solve the fundamental problem of our situation?
Modern science is increasingly turning to bold hypotheses such as the multiverse, attempting to explain the emergence of being through the existence of countless worlds or prior cosmic entities. Progress demands a willingness to consider the most daring ideas, even when they initially appear speculative. Perhaps the many-worlds interpretation, or other multiverse hypotheses, will prove closer to the true nature of reality. Yet there also exists a more economical explanation, which will be discussed below. Whichever approach proves more accurate, for us this Universe remains the only reality accessible within the limits of our lives. And although we are confined to this reality, that very limitation does not nullify the fruitfulness of bold conjectures. We will most likely be unable to gain direct knowledge of any other reality.
We must return to the question of why anything exists at all — the question "Why is there something rather than nothing?", posed since the time of Parmenides and taken up by Leibniz, Wittgenstein, and, of course, Heidegger, who called it "the fundamental question of metaphysics." Among all cosmological hypotheses, one in particular is especially compelling to me — the most radical and, paradoxically, the most internally consistent. Contemporary cosmology allows for the possibility that the Universe could have arisen without any external cause. This is one of the working hypotheses in modern theoretical physics; it describes the emergence of spacetime from absolute Nothingness. Of course, this is neither the only nor the final account of the origin of the Universe. Its appeal lies not in "solving" the metaphysical riddle of being, but in showing how stable, differentiable configurations can arise from physical non-structure without recourse to transcendental explanations.
This absence of structure is understood as a pre-cosmic state — absolute NOTHING, in which space and time themselves do not yet exist. These ideas were developed by the cosmologist Alexander Vilenkin, who showed that, in the case of a closed universe with zero total energy, "nothing prevents such a universe from spontaneously arising from nothing." Much later, the physicist Lawrence Krauss gave this hypothesis popular form in his book A Universe from Nothing, where he describes the birth of the Universe while understanding Nothingness as a vacuum in which quantum fluctuations arise. Critics were right to note that Krauss's "Nothing" is a conceptual substitution, and that it says nothing about the origin of the vacuum itself. Krauss sidestepped this question, but the implied answer could be formulated as follows: the vacuum, like the quantum fluctuations within it, emerged from absolute Nothingness. He maintains that observational evidence is consistent with "a universe that could have, and plausibly did, arise from a deeper nothing — including the absence of space itself…". Both authors emphasize the impersonal, anti-anthropological character of such a scenario: the process of the world's emergence from nothing presupposes no external design and no purpose of its creation.
But how can something arise from absolute nothing? The key lies in the property of Nothingness itself: it is paradoxically unstable. Contrary to the intuitive image of emptiness as something absolutely static and eternal, quantum physics shows that a state of complete absence is not rest but a tense indeterminacy. “Nothing” cannot remain nothing, because the very notion of “remaining” already presupposes time — and there is no time there. This follows mathematically from the fact that, in the absence of spacetime, there are no constraints capable of holding non-being in its “zero” state. And here a profound irony becomes apparent: the energetically most favorable state is not emptiness, but existence. A universe with a zero energy balance (where the positive energy of matter is offset by negative gravitational energy) is physically “simpler” than absolute Nothingness, because it resolves the fundamental contradiction of non-being.
Yet the Universe that comes into being is not in equilibrium. It is born in a state of extremely low entropy — ordered, non-equilibrium, saturated with free energy. From that moment on, its irreversible movement begins toward the very stable state that is physically more favorable than non-being: toward maximum entropy, toward heat death, toward absolute equilibrium. The Universe seems to be "completing" its transition out of nothingness by approaching the most stable configuration. It cannot return to non-being — thermodynamics forbids it. All that remains is to move forward, dissipating energy, increasing disorder, and drawing nearer to a state in which nothing further happens, everything is balanced, still, and inert.
Thus, to reiterate — "Absolute Nothing" is understood as a state of radical absence of space and time, not merely as a vacuum with quantum fluctuations within a given geometry, as in Krauss's formulation. In such a state, there are no classical fields, particles, or "arrow of time," yet quantum-cosmological methods allow one to define its wave function. Using the equations of quantum field theory, it can be shown that even from this "zero" configuration there exists a nonzero probability of transition to a state with a finite geometry. It is worth noting that the reader will likely wonder, as I did, how physical laws can be applied to "Nothing" if there is nothing in "Nothing." This is indeed intriguing, but Vilenkin himself addresses this question in his book Many Worlds in One: The Search for Other Universes, which grew out of his 1982 article "Creation of Universes from Nothing," in Chapter Seventeen:
"This means that there is simply no space and time, they are, in a precise sense, unreal — 'immaterial', they are pure 'nothing'; they are simply a manifestation of the uncertainty principle, a foam of probabilities that space-time has one metric or another, topology, number of dimensions, etc. The concept of a universe materializing out of nothing boggles the mind… yet the state of 'nothing' cannot be identified with absolute nothingness. The tunneling is described by the laws of quantum mechanics, and thus 'nothing' should be subjected to these laws. The laws must have existed, even though there was no universe."
The answer, of course, is not bad and refers us back to Plato, to the world of ideas and things in philosophy, but it explains nothing. And how are we to interpret the statements that "the state of 'nothing' cannot be defined as absolute nonexistence," yet at the same time "it is pure 'nothing'"? Further reading clarifies this somewhat. Vilenkin suggests that apparently, in complete "Nothing," only the laws of physics exist, but he cannot give a definite answer to the question of where they come from, and he proposes that everything comes from God, as he mentions toward the end of the book. Presumably, the question of the existence of physical laws in nothing did not concern him greatly. There is no problem with the fact that Vilenkin saw a divine origin for "nothing," since, as we will see later, he is not the only one who approached the hypothesis of a universe from "nothing." But for now, let us return to the tunneling of geometry. It follows that, if Vilenkin's conclusions are true or close to true, the geometry of spacetime itself can "tunnel" through a barrier of zero size, in a manner analogous to how a particle with nonzero amplitude penetrates a classical potential barrier. Vilenkin formalized and described how a closed universe could arise via quantum tunneling from literally "nothing" into de Sitter space, after which inflationary expansion begins. From the perspective of the wave function, this corresponds to a boundary condition imposed at vanishing geometry (the so-called "tunneling wave function"). After tunneling, a finite-size "bubble" appears. If this bubble surpasses a critical scale, it does not collapse but expands, entering an inflationary phase. Modern studies, including those incorporating quantum gravity (for example, in loop quantum cosmology), continue to develop this idea of universe tunneling at zero scale factor.
Alongside the tunneling scenario — and in fact nearly a decade earlier — the hypothesis of a universe with zero total energy was proposed. According to this hypothesis, the positive energy of matter (mass, fields, kinetic energy) is exactly balanced by the negative energy of the gravitational field. In 1973, Edward Tryon suggested that our universe is a large-scale fluctuation of the quantum vacuum, with its total energy equal to zero because the energy of matter is precisely offset by gravitational potential energy.
If the positive material energy and the negative energy of curvature exactly compensate each other, then the “appearance” of the universe requires no external energy source. As Stephen Hawking noted, in the creation of mass, exactly as much “negative” energy arises as the positive energy taken, so that the total energy remains zero.
The zero-total-energy hypothesis assumes that positive contributions (energy of mass, fields, and kinetic energy) are counterbalanced by negative contributions associated with the gravitational field. This does not contradict fundamental laws of nature. The mass – energy equivalence law remains valid: mass is still equivalent to energy, and the creation of mass does not imply arbitrary appearance of positive energy outside the equations. The law of local conservation of energy and momentum, as formulated in general relativity, holds: no spontaneous loss or creation of energy – momentum occurs in any local region of spacetime. Einstein’s equations are also not violated — the balance of positive and negative contributions emerges as a solution to these equations under the chosen boundary conditions.
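A deliberately crude Newtonian estimate, in the spirit of Tryon's original argument rather than a general-relativistic calculation, shows why such a cancellation is at least plausible:

```latex
% Rest-mass energy versus gravitational self-energy of a region
% of mass M and radius R (order-of-magnitude, Newtonian heuristic):
E_{\mathrm{tot}} \;\approx\; M c^{2} \;-\; \frac{G M^{2}}{R}
\;\approx\; 0
\quad\text{when}\quad
R \;\approx\; \frac{G M}{c^{2}}
% i.e. when the region is comparable in size to its own gravitational
% radius -- roughly the relation suggested for the observable Universe.
```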
However, it should be noted that the energy of the gravitational field in general relativity cannot be unambiguously localized; its evaluation uses global constructions and special definitions (for example, ADM energy for asymptotically flat spaces) or relies on specific boundary conditions. Therefore, the statement that “the total energy of the universe is zero” is correct only within a particular model and chosen method of calculation, and this should not be forgotten when approaching the issue critically.
The combination of the tunneling mechanism and the zero-energy concept provides a mathematically consistent scenario. A quantum transition from a state in which space and time are absent generates a finite volume of spacetime filled with matter and radiation. In this emergent configuration, the positive energy of matter is automatically balanced by the negative gravitational energy, so the total energy remains zero. Thus, the birth of the universe is described not as “taking energy from nowhere” but through calculations of amplitudes and energy contributions. This process can be schematically reduced to three stages: (1) a quantum tunnel from “nothing” creates a small fragment of the universe; (2) particles and fields (positive energy) materialize within it, while the geometry contributes negative gravitational energy; (3) with successful compensation, a stable universe of zero total energy emerges.
Quantum amplitudes provide only a nonzero, but generally very small, probability of nucleation. Most “attempts” at universe creation are either reversible or generate unstable configurations that immediately collapse back. However, statistically, even a single successful realization is sufficient: the emergence of at least a few “bubble” universes is guaranteed. Among them, our universe corresponds to a “fortunate” case — it has grown stably and evolves into the cosmos we know. Such a qualitative selection (“the anthropic principle” in the broad sense) means that we observe precisely the universe in which complex structures and observers could arise, even though it is extremely improbable among the infinite set of fluctuations.
According to quantum field theory, the vacuum is the ground state of quantized fields with minimal possible energy, but it is not completely empty. Due to Heisenberg’s uncertainty principle, short-lived energy fluctuations occur as virtual particle – antiparticle pairs continuously appear and annihilate. Usually, these pairs quickly vanish, “returning” their energy to the vacuum without disturbing the overall balance. However, occasionally an exceptional fluctuation occurs, with very high local energy and order. Such a fluctuation can give rise to a stable region — the embryo of a future universe. If the bubble volume exceeds a critical radius, it no longer collapses and begins to expand exponentially: its own space expands autonomously, engulfing more surrounding vacuum and initiating inflation. Ultimately, the entire observable cosmos forms from a statistically extremely rare but allowed quantum anomaly.
Thus, assuming the formal definition of “nothing” as zero geometry, quantum mechanics allows constructing a self-consistent picture: from “absolute nothing,” a bubble of spacetime arises with positive matter energies compensated by negative gravitational energy, summing to zero. Multiple fluctuations and statistical selection explain why we find ourselves in the one stable universe where observers could emerge.
The already existing universe then evolves according to the laws of thermodynamics. According to the second law, any isolated physical body (or system) tends toward the most probable macroscopic equilibrium state — maximum entropy. The directional tendency of entropy toward equilibrium is the universe’s fundamental “task,” with interesting implications. For instance, the concept of time is closely linked to entropy. We perceive the arrow of time: events unfold in one direction rather than the reverse. Many philosophers and physicists believe this phenomenon arises from the entropy gradient. Current understanding holds that the universe’s entropy was extremely low at the beginning and has been continuously increasing ever since. The standard interpretation is that “earlier moments in time are simply moments of lower entropy.” In this way, the direction of time can be “eliminated” as a fundamental property: it coincides with the direction of increasing entropy. If entropy were somehow to decrease (practically impossible), our perception of time could reverse. Alternatively, some approaches in quantum gravity suggest that time itself may be emergent or unnecessary at a fundamental level.
A similar "primary" role is attributed to information and matter. John Wheeler, in the "It from bit" hypothesis, asserts that the physical reality of the "bit" is primary, and matter can be seen as emerging from sufficient information. By this logic, everything we consider material (vacuum, fields, particles) is essentially wrapped in informational structures, making matter effectively materialized information. Here, information is defined in the Shannon sense as a measure of system uncertainty: Shannon entropy equates to the amount of uncertainty in a message. Landauer's principle, formulated in 1961 by Rolf Landauer, states that in any computational system, regardless of its physical realization, the erasure of one bit of information releases a minimum amount of heat. In simpler terms, when information is erased, the entropy of the system (or environment) increases, consistent with the second law of thermodynamics. Energetically inert, non-self-sustaining — "non-living" — structures exemplify this process especially clearly.
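The quantitative content of Landauer's principle is modest but concrete; taking room temperature (about 300 K) purely for illustration:

```latex
% Minimum heat released by erasing one bit of information at temperature T:
E_{\min} = k_{B}\,T\,\ln 2
\;\approx\; \left(1.38\times 10^{-23}\,\tfrac{\mathrm{J}}{\mathrm{K}}\right)
\times 300\,\mathrm{K} \times 0.693
\;\approx\; 2.9\times 10^{-21}\,\mathrm{J}
```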
It is worth noting that the use of the term “non-living” here indicates a distinction among systems with common roots. To avoid speculative conflation of living and non-living systems, I will use the neutral term “structures,” emphasizing only the qualitative differences in their organization: physical systems inherently tend toward the decay of order. According to the second law, every physical body “ages”: metals rust, chemical compounds break down, hot bodies cool and evenly distribute heat. Over time, all closed systems approach thermodynamic equilibrium — a state of maximum entropy and maximum uncertainty. Even atoms and molecules are not eternal: many nuclei are radioactive and decay spontaneously, releasing energy and increasing the number of accessible states of the system (energy gradients are leveled). On an astrophysical scale, this is manifested in the life cycles of stars: first stable structures form (e.g., a stable star), then after fuel exhaustion, decay — a supernova or collapse into a black hole — leads to a final increase in the universe’s entropy. These processes underline the inevitable loss of information in any physical system over time.
Highly ordered informational structures, what we call "living," are collections of physical objects maintaining themselves only through continuous exchange of matter and energy with the environment. Evolution has selected mechanisms allowing living beings to resist increasing entropy: for example, replication of DNA and regeneration systems maintain informational stability of the species. Yet evolution proceeds through random "failures" — mutations, which are manifestations of entropic chaos in the genetic code. Mutations are inevitable disruptions in the transmission of hereditary information — eventually leading to organismal death, but precisely through this destructive process, new variants arise that are temporarily resilient to further degradation. Selection preserves these randomly emergent organisms. Thus, life as a whole is an arena of constant struggle against entropy, in which new informational structures arise within destructive processes. For highly ordered informational structures, the destruction of information is compounded by the tragedy of the struggle to preserve it: the frameworks for storing and transmitting information (replication) are sacrificed, and in this struggle for survival even the fittest bearers of interest accumulate irretrievable losses of information at the scale of the organism.
Bearers of interest (Russian: носители интереса; Norwegian: interessebærere) — a philosophical concept introduced by the Norwegian philosopher Peter Wessel Zapffe in his work On the Tragic (1941). For Zapffe, a bearer of interest was a human whose reflective consciousness recognizes the tragic nature of their own position in an indifferent world; creating a new bearer is equivalent to imposing upon them an inevitable burden, making reproduction morally impermissible. In contemporary philosophical tradition, especially within antinatalism, the term acquires an expanded meaning, encompassing all systems capable of potential suffering.
Unlike biological antinatalism, which focuses on the suffering of all living beings, or ecological antinatalism, concerned with reducing anthropogenic impact on the planet, the sentiocentric approach (from the Latin sentio, meaning “to feel”) shifts the focus to any sentient phenomenon. Its subject is not life as such (bios), but the capacity for feeling — that is, possessing interests that may be violated. Thus, the category of bearers of interest includes not only humans and animals but any actual or hypothetical entities capable of subjective experience and, consequently, of suffering, including advanced forms of artificial intelligence (AGI).
Hereditary information in DNA is susceptible to damage — from radiation, free radicals, replication errors, etc. — and these disruptions manifest at the systemic, molecular, and cellular levels. DNA repair mechanisms are not perfect, and defects accumulate with age. This is most clearly visible in the case of telomeres — the terminal repeats of chromosomes: with each cell division telomeres shorten, and when shortening reaches a critical threshold, programmed cell death (apoptosis) is triggered. In other words, the biological "safeguard" of genetic information is gradually exhausted, producing aging and the death of cellular populations. The organism's neuronal systems are also subject to informational decay. With age the brain exhibits neuronal death and loss of synaptic connections; cellular homeostasis and mitochondrial function become disrupted. Mutations and protein aggregates accumulate in parallel, and the effectiveness of DNA repair mechanisms declines. Because memory and cognitive functions are implemented through vast networks of neurons, the loss of even a portion of information in neuronal connections leads to deterioration of the entire system's performance. Just as a hardware device loses function when connections are severed, so biological neural networks degrade: network disconnection leads to the loss of reproducible information. Taken together, these processes — the accumulation of molecular "failures," the loss of structures and connections — constitute the physiological side of aging and disease, when the organism's informational organization disintegrates.
In all these examples — from inorganic, self-sustaining structures to highly ordered informational architectures — the dynamics of informational decay play the decisive role. Fundamental physical limits tie the quantity of stored information to a system’s energy and size. Thus, the Bekenstein bound shows that the maximum amount of data in a given region of space is determined by its energy and dimensions, demonstrating the inextricable connection of information with gravity and the structure of space. Indeed, atomic and molecular structures possess well-defined symmetries and energy levels (which give us the periodic table of the elements), while larger-scale organizations — star clusters, galaxies — form under the influence of gravity and energy exchange. Along the evolutionary trajectory the amount of information steadily increased — from elementary particles to deoxyribonucleic acid and neural networks. Yet at each link in this chain that pyramid of order was maintained at the expense of energy and accompanied by an inevitable converse: entropic destruction.
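For reference, the bound mentioned above can be stated compactly (R is the radius of a sphere enclosing the system, E the total energy it contains):

```latex
% Bekenstein bound on the entropy, and equivalently the information,
% that a bounded physical region can hold:
S \;\le\; \frac{2\pi k_{B} R E}{\hbar c},
\qquad\text{or, in bits,}\qquad
I \;\le\; \frac{2\pi R E}{\hbar c \ln 2}
```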
Organisms are too complex to resist the pressure of entropy indefinitely; therefore the eventual decay of an individual organism is unavoidable. Life merely postpones the fatal outcome by copying information into new carriers. This may be understood as the necessary consequence of the irreversible increase of entropy in nature. Every system, from atoms to brains, contains a temporal gradient of order, and the end of that structure is always associated with its informational decay.
From relatively recent research on the role of entropy in information one may recall the “second law of infodynamics,” although it should be noted that this idea appears highly speculative. In 2022 M. Vopson and S. Lepadatu of the University of Portsmouth proposed an idea they termed the “second law of infodynamics.” Their argument rests on a combined information-theoretic and empirical approach. Methodologically they begin by explicitly separating the total entropy of the physical subsystem under consideration into a component associated with thermodynamic microstates and a component interpreted as informational, that is, Shannon entropy.
The authors analyze operations on digital media and algorithmic transformations (copying, compression, filtering, error correction), measuring symbol distributions before and after such operations and calculating the change in the Shannon entropy of those distributions. They then consider biological molecular sequences and replication processes: statistical analysis of DNA and RNA sequences, as well as population dynamics, allows assessment of how replication with error correction and natural selection affect the statistical uncertainty of genetic information in populations. These measurements serve to illustrate that, in the practice of many information-relevant subsystems, there are tendencies toward local decreases in Shannon entropy as a result of operations intended to preserve, order, or compress representations of information. The key step in their work is to compare the directional changes of Shannon entropy with the energetic and thermodynamic accounting of those operations. Vopson and Lepadatu apply principles related to Landauer’s principle and the bounds linking information to energy and system size to show that local decreases in Shannon entropy can be accompanied by an equivalent or greater increase of thermodynamic entropy in the environment as a result of work expended to order the system and dissipate heat. Their calculations again confirm that, when all energy flows are accounted for, total entropy does not decrease; consequently, local trends toward reduced informational uncertainty do not contradict the classical second law of thermodynamics.
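A toy version of this kind of bookkeeping, with invented symbol strings and a purely hypothetical "error-correcting" pass standing in for the operations the authors actually measured, might look like this:

```python
from collections import Counter
import math

def shannon_entropy(symbols):
    """Shannon entropy (bits per symbol) of an observed symbol sequence."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A noisy sequence and the same sequence after a (hypothetical) error-correcting
# pass that restores a repetitive reference pattern.
noisy     = "ATGCGTTAGCATCGGATACGTTAGCCTAGGCATT"
corrected = "ATGATGATGATGATGATGATGATGATGATGATGA"

print(f"H(noisy)     = {shannon_entropy(noisy):.3f} bits/symbol")
print(f"H(corrected) = {shannon_entropy(corrected):.3f} bits/symbol")
# The corrected sequence has lower Shannon entropy; the work spent imposing
# the ordered pattern is dissipated as heat (Landauer's bound), so the total
# thermodynamic entropy of system plus environment still does not decrease.
```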
On the basis of this model and the collected examples the authors extrapolate the obtained specific results to a more general tendency: informational subsystems that possess mechanisms of copying, correction, and selection tend to maintain or locally reduce Shannon entropy at the cost of external energy supply and heat dissipation. I will return later to the role of entropy reduction by living systems; for now it is sufficient to say briefly that all such activity serves to increase and accelerate overall entropy. The authors formulate this as a practical-directionality principle of infodynamics. In their conclusion they write:
"The second law of infodynamics states that the information entropy of systems containing information states must remain constant or decrease over time, reaching a certain minimum value at equilibrium. This is very interesting because it is in total opposition to the second law of thermodynamics, which describes the time evolution of the physical entropy that must increase up to a maximum value at equilibrium. […] second law of infodynamics is universally applicable to any system containing information states, including biological systems and digital data. Remarkably, this indicates that the evolution of biological life tends in such a way that genetic mutations are not just random events as per the current Darwinian consensus, but instead undergo genetic mutations according to the second law of infodynamics, minimizing their information entropy. […] Therefore, the second law of infodynamics is not just a cosmological necessity, but since it is required to fulfill the second law of thermodynamics, we can conclude that this new physics law proves that information is indeed physical."
However, once again, their conclusion does not contradict the second law of thermodynamics, since any local decrease in Shannon entropy requires external energy costs and is accompanied by heat dissipation, as a result of which the total entropy of the “system plus environment” does not decrease.
They are careful to note that systematic signs of "optimization" and compression of informational representations may strengthen the appeal of hypotheses about the computational or algorithmic nature of reality, that is, hypotheses of simulation and virtual worlds (their article is in fact titled "The Second Law of Infodynamics and Its Implications for the Simulated Universe Hypothesis"). However, these observations alone do not prove such a hypothesis; they only create an additional empirical context in which such interpretations become the subject of legitimate scientific and philosophical discussion.
Death of the Universe
Continuing the discussion of cosmic pessimism, one cannot avoid the final outcome — the death of the Universe. The death of the Universe, like its birth, is a question under constant scientific revision, and we have various scenarios for the cessation of all processes in the Universe.
Within contemporary cosmological models the most likely scenario is considered to be heat death of the universe, or the “Big Freeze.” According to this hypothesis, which follows from extrapolating the second law of thermodynamics to the whole Universe, a closed system must, over time, approach a state of maximal entropy — complete thermodynamic equilibrium. If the Universe is flat or open and expands forever, and recent observations indicate a positive cosmological constant (dark energy), then its evolution will tend toward exactly such a state.
Against the backdrop of this most probable and terminal end, the popular philosophical conception of the eternal return stands out as a particularly vivid and cruel contrast; it is one of the central ideas of Friedrich Nietzsche’s philosophy. It should be said at once that because the state of heat death is not the same phase as an inflationary or false vacuum, a repeat fluctuation of comparable scale is practically impossible; the Universe does not return to the beginning, to before the Big Bang. It is only approaching the most probable macroscopic state, a state of high entropy equilibrium, in which any potential return to the beginning has already been completely erased and cannot be repeated. In the model of eternal return the Universe does not reach a finale such as heat death or a Big Crunch, but cyclically undergoes innumerable phases of birth and decay. After each “end” a new “birth” inevitably follows, along with the revival of macrostructures capable of supporting life and consciousness. Every configuration of matter, every fleeting moment of being, will be reproduced again and again, to infinity.
The idea of the “eternal” produces a harsher prospect than final “nonexistence”: if heat death allows one to imagine — however abstractly — a point at which suffering finally ceases, eternal return deprives us even of that delay. All moments of joy and terror, every pain and every pleasure, will be replayed indefinitely, without euphemistic “last breaths” and without release from the dissipative cycle. Eternal return implies an unending history already determined by deterministic mechanics.
Another possibility, the Big Crunch scenario, although unlikely in light of current data, represents a collapse of everything back into a singularity, in which all the complexity and diversity accumulated during the Universe's evolution collapse into an unstructured state. Here even the possibility of fluctuations disappears; everything is reduced to zero — not as the attainment of some good or rest, but as the result of catastrophic collapse in which all traces of existence and suffering vanish. If the driving force were the dynamics of dark energy with parameters leading to a Big Crunch, suffering would take the form of rapid yet merciless destruction of the very fabric of space. All order — from galaxies to elementary particles — would be torn apart, without remainder, without mitigation, without meaning. This scenario is no less tragic, but the tragedy is expressed not in slow extinction but in a swift rupture when the very nature of interactions ceases to exist.
But there is also a seemingly paradoxical idea about what follows heat death, arising at the junction of thermodynamics, cosmology, and the philosophy of the observer: the hypothesis of the "Boltzmann brain." It is related to ideas of simulation and virtuality, and is no less intriguing. It is named for Ludwig Boltzmann, one of the founders of statistical physics, who suggested that our ordered Universe might be a gigantic random fluctuation in originally chaotic, thermally equilibrated matter. A "Boltzmann brain" is a thought experiment representing the extreme development of that logic. If in an eternal and essentially equilibrium Universe (for example, after heat death) arbitrarily unlikely fluctuations are possible, then it is statistically far more probable that a single, fully formed self-aware brain with the illusion of memory and an external world would spontaneously arise than that a whole ordered Universe like ours would fluctuate into existence. Such a brain would appear for an instant, experience a subjective episode (including perhaps reading these lines), and immediately dissolve back into chaos. From this point of view, with vastly greater probability we should be such ephemeral "Boltzmann brains" rather than products of 13.8 billion years of cosmic evolution in a sustained low-entropy environment. The fact that we observe a large, complex, and self-consistent Universe serves as a strong argument against the classical Boltzmann scenario. Yet in the modern context of an eternally exponentially expanding Universe immersed in a de Sitter vacuum, this paradox acquires a new resonance. Although in such a vacuum large fluctuations are vanishingly rare, its duration is infinite. Over infinite time even the most improbable event can (and will) occur an infinite number of times. Over the infinite horizon of such an "eternal dusk," the total number of randomly arising "Boltzmann brains" may exceed by many orders of magnitude the number of "ordinary" observers born in the stellar epoch. This creates a serious problem for the predictive power of cosmology: if most rational perspectives in the Universe are fleeting groundless illusions, how justified is our trust in any, even the most reliable, scientific data obtained by observation?
Some interpretations of quantum mechanics (for example, the many-worlds interpretation) offer potential ways to resolve this paradox, since branching of reality alters probability accounting. Others regard it as an indication that our current cosmological model is incomplete and that the de Sitter phase is not absolutely stable or eternal. In any case, the Boltzmann brain serves as a stringent test of the internal consistency of our theories about the final fate and the very nature of reality. It exposes the tension between impersonal statistical physics and the fact of a structured observer’s existence.
This is, in its way, an attractive hypothesis among the several scenarios for the end of the Universe; however, in all these cases the ultimate fate of the Universe does not imply deliverance from suffering through attainment of some meaning or transcendent order. Suffering is the inevitable byproduct of processes that drive toward maximization of entropy. The final state is not triumph or liberation but the complete degeneration of the potential for experience. But we will not be there then (although the question of where we are still remains unresolved); our lives will end earlier, and this text will not reach the last people. Life will probably end long before the Universe completes its history, and all that remains is to remember that, in this light, nothing is important for human beings — and to remember not for survival, not for descendants, but out of an inner honesty in the face of one's own position within the structure of reality.
The sensation of illusory consciousness, arising as a by-product of processes of ordering and replication, is doomed to disappear long before the final stages of cosmological degradation arrive. Humanity, biological life, even artificial forms of organized complexity — all are local and temporary phenomena. It is most likely that all life will terminate within the next billions of years — through the fading of stars, the collapse of biospheres, catastrophic fluctuations, or simply the statistical exhaustion of the conditions necessary to sustain Differentiating experience. The final stages of the Universe’s history will play out without a witness, in absolute emptiness or in a rapid disintegration. No “we” will remain there, no observer.
And yet — it is important to remember one’s own disappearance within the context of the thermodynamic fate of all being: memento mori. Cultural memes are high-level informational structures subject to the same physical constraints as any other organized system. Thus the very existence of cultural structures, the memory of suffering, the symbolic fixation of experience — all are mere temporary islands of order, whose price is paid in heat release and entropic damage to the surroundings. Even if suffering vanishes as a phenomenon of subjective experience, its trace remains as an informational imprint included in the overall thermodynamic accounting; every pain, every act of memory, every signal transmitted between structures leaves an irreversible mark on the state of the system. Thus the tragic ceases to be a characteristic unique to the human or the living: it is simply a localized loss of differences, an entropic decay of configuration, whether that configuration is a star, a cell, a neural network, or a text. A structure capable of distinguishing is destroyed, and in that destruction an irrevocable loss of information occurs — information that cannot be reproduced, restored, or separated from the general stream of irreversible states. That alone suffices to regard the Universe not as neutral but as tragically consummated.
Many find it difficult to imagine that Nothing — the very reverse of our world — could have given rise to all that exists. We react to "Nothing" according to temperament. Before we reconcile ourselves to the thought of our own future demise, it frightens us; then comes the understanding that, like death itself, it is no more than an idea that a person can never feel or fully comprehend. Fear of "Nothing" drove Heidegger to existential dread: he transformed "Nothing" into an almost sacred image, into the ultimate cause of philosophical constructions. Heidegger surrounded "Nothing" with an aura of mystical depth, as if attempting to elevate his concept above science itself. Such an approach is readily reproduced in pseudoscientific concepts: insert a couple of his "existential" terms into a text and the hypothesis automatically acquires an appearance of grandeur. But this is not Heidegger's fault; he explored the topic of "Nothing" well, in his characteristic manner and within the framework of existentialism. We must, however, abandon biocentrism. Nothing is not material and cannot be a "thing" or a "substance" — it is our anthropic interpretation of the absence of presence within certain bounds of perception, used to describe a phenomenon much as zero is used in arithmetic. When we pronounce the word "nothing," we do not summon some secret entity; we merely indicate the absence of something. The meaning of the word is determined by its use in language, and "nothing" is used to show that there is no object, no quality, no process. Objections may arise that because we use the word, it is in some way contained within our world. When we speak of some creature — say, a "dragon" — we in fact assemble familiar elements: wings (bat), scales (reptile), fiery breath (fire), and so on. All those components already exist in our language and experience, so although the "dragon" is mythological, it rests upon them: we can describe each attribute by means of actually existing concepts. "Nothing" does not operate in this way. It is not a combination of known images and not the negation of one or two properties (as in "airless," "lifeless"). It is the sign of absolute absence — the point at which all categories end. When we say "nothing," we are not constructing a new object from already known elements; we are placing a minus sign before all names: not substance, not space, not time, not quality, and not quantity. For that reason "nothing" cannot be "filled" or "decomposed" — it itself is the zero in the ontological account. Moreover, when we refrain from attributing any secret powers or substances to "nothing," the fear of it immediately dissipates: it is simply an indication of absence, not some hidden entity that could harm or enslave us.
We must come to terms with the fact that actual cosmic “nothing” is governed by strict laws and mathematical equations, not by a philosopher’s existential anguish. Once the fear of “Nothing” recedes, it becomes striking to observe how defenders of personhood and metaphysics keep constructing ever more elaborate edifices — multiverses or many-worlds interpretations — merely to preserve a sense of control. Perhaps such compromises are inevitable while our conceptual instruments remain imperfect.
Critique of the Teleology of Nonbeing
Thus we have concluded that the Universe is moving toward a state of maximal entropy — toward heat death, toward complete thermodynamic equilibrium. Entropy steadily increases, order disintegrates, information disperses. One might conclude, simply and grimly, that all being tends toward nonbeing, that all existence is drawn to its own annihilation. Is not heat death a return to Nothing? Is not maximal entropy the disappearance of any determinacy, any difference, any existence as such?
If one accepts this logic, a whole spectrum of teleological speculations opens before us. The origins of this dark tradition can be discerned in ancient pessimism — in the teaching of Hegesias of Cyrene, nicknamed “the Teacher of Death,” or in the ascetic movements of ancient India such as that of the Ājīvikas. Yet it reaches systematic form only in the nineteenth century, when the philosophy of Philipp Mainländer was born from this premise; it continues to attract some contemporaries. Frankly, for any pessimist the idea seems tempting. I myself would be glad to draw such a conclusion. It offers ontological simplicity: a world that arose from nothing and strives to return there sounds aesthetically pleasing and, in a sense, consoling. But is that really so? Before turning to the arguments, I will briefly define what I mean by the different senses of the word “nonbeing” — this will remove confusion and set clear bearings for the subsequent discussion.
Metaphysical (ontological) “nothing.” The absolute absence of being in the most radical sense: the lack of any entity, field, energy, or structure. In our physical world this concept belongs to metaphysical speculation, since we cannot verify it empirically, but we can speak of “nothing” as the origin of the Universe, not as its terminus. The teleology of nonbeing in the title of this section is, in this case, an ontological explanation of the world’s development by final, purposive causes — namely: attainment of nonbeing, total annihilation, absolute “nothing.” Contemporary science, when considering hypothetical scenarios for the end of the Universe, concludes that the world may sink into darkness as an extreme equilibrium, but it does not assert the arrival of absolute “nothing.” The Universe may indeed have arisen from nonbeing, as I argued earlier, and the tendency to move toward a stable state via entropy is observable; yet the likely outcome of entropy — heat death — will not lead to absolute nonbeing but to a state of thermodynamic equilibrium that can be called “equilibrium being.” This is a world deprived of directed processes, but still existent in the sense of distributed energies, where no interaction is possible and no experience exists.
Physical (thermodynamic) “nonbeing” — “equilibrium being.” This is the outcome of thermodynamic and cosmological scenarios (for example, heat death): a state in which directed processes disappear, the flow of available energy is leveled, and conditions for interactions and experience are effectively absent. It is still “something” — distributed energy and particles — but not suitable for life or meaning. In the text this designation will serve to denote the scientifically intelligible, empirical result of entropic processes.
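Stated in conventional thermodynamic shorthand (a schematic gloss for an isolated system, given only to fix the terminology used above), “equilibrium being” is the maximum-entropy state in which no gradients remain and no work can any longer be extracted:

\[
\frac{dS}{dt} \;\ge\; 0, \qquad S \longrightarrow S_{\max}, \qquad \nabla T = \nabla p = \nabla \mu = 0, \qquad W_{\text{extractable}} \longrightarrow 0 .
\]

Energy and particles are still present (hence “equilibrium being” rather than metaphysical “nothing”), but none of that energy remains available to drive a directed process.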
Phenomenological (subjective) “nonbeing.” Nonbeing as the loss of subjective experience: the disintegration of personality, “death in life,” the state of sleep in which the coherent stream of the “I” vanishes. This is a local, relative “nonbeing” for one who previously possessed inner integrity and interests; it is neither identical to metaphysical “nothing” nor to the thermodynamic state.
If one speaks of nonbeing purely philosophically rather than scientifically, one may of course imagine it in various forms. For example, a system that called itself “I” can be destroyed to such an extent that what follows for it can be called the nonbeing of consciousness — “death in life” — certainly not absolute void, but metaphysically the end of a structure whose internal self-experience provided a sense of unity. The same can be said of sleep and death. Death as the disintegration of a personality (including its physical disintegration) may be called nonbeing, but not death per se, since death is no more than the redistribution of matter into a new state; what disintegrates is the structure that was a Bearer of interest, and for that structure the event is “nonbeing.” Sleep may be called “nonbeing” for the mind: an experience of temporary loss of self-connected narratives, when the stream of conscious representations is interrupted while the body continues to function; yet sleep is not absolute nonbeing — it merely demonstrates that the state of “I” can temporarily vanish and reemerge without destruction of the bearer. This distinction between metaphysical “nothing” and the physical is critical for an honest pessimism. It would be an oversimplification to speak of absolute nothing within existing being; when philosophy nevertheless speaks of such nonbeing, it is engaging in pure speculation or deliberate simplification.
The Universe will not “become nothing” in the absolute sense — roughly speaking, matter and fields will not vanish entirely. The only place in contemporary scientific hypotheses and theories that even touches on “nothing” within already existing being (as distinct from the nothing before being) is the false-vacuum scenario and its decay; but this is a complex, technically detailed model full of “ifs” and “buts,” and reducing it to support for a single philosophical intuition would be dishonest. From a scientific perspective we can only say that physics allows several possible end scenarios for the Universe, and none of them provides direct grounds for asserting “nothing” as the outcome. The desire to see the Universe return to absolute Nonbeing is an aesthetic, not a scientific, requirement. However attractive the idea, this distinction must be kept in mind so that pessimism does not dissolve into mysticism.
Despite all my conclusions based on contemporary cosmology, I broadly agree with Philipp Mainländer. His intuition of a fundamental vector of being — a tendency toward self-annihilation, toward rest — indeed finds partial confirmation in the concept of heat death. The world moves not to metaphysical nothing but to a state in which any possibility of experience disappears. This is an infinite, cold, homogeneous expanse of extremely rarefied particles with nearly zero energy. It is not “nothing,” but neither is it “something” fit for life or meaning. It is the final rest — the physical cessation of all processes. Here appears the key ethical divergence with Mainländer. Yes, life is a form of local organization that transforms ordered energy and accelerates entropy growth. But therein lies the problem: the path to rest passes through endless worlds of suffering. The emergence of every new subject entails the emergence of a new center of pain. Accepting Mainländer’s axiom — that suffering is absolute ontological evil — we confront the question of which actions are appropriate within the ontology asserted here.
At first glance it may seem that individual self-elimination accelerates the world’s tendency toward rest, but from a physical point of view the opposite holds. The destruction of a complex dissipative structure — an organism — removes a set of local processes that continuously generate entropy; if life actively transforms energy, then the cessation of life weakens precisely this local dynamic. Individual self-destruction therefore does not accelerate cosmic decay but slows it, and although death releases a certain amount of energy at once, that one-time release is insignificant in the long-term perspective, as the rough estimate below illustrates. An ethical position that accepts universal rest as the guiding direction cannot recommend actions that contradict its own physical foundations. A consistent position that treats suffering as the supreme evil derives, as one of its rational strategies, a restriction on the creation of new Bearers of interest. Yes, from the standpoint of cosmology this would only imperceptibly slow entropic decay. But the aim here is not to accelerate processes; it is to reduce the number of Bearers whose very emergence is inevitably accompanied by suffering. If suffering is acknowledged as an absolute evil, then preventing its new manifestations is a direct ethical duty.
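The rough estimate just mentioned, with deliberately crude, illustrative figures (the 100 W metabolic rate, the 300 K ambient temperature, the fifty-year span, and the roughly 10⁹ J of stored chemical energy are assumptions chosen only for scale): a living body produces entropy continuously at about

\[
\dot S_{\text{metabolic}} \;\approx\; \frac{\dot Q}{T} \;\approx\; \frac{100\ \text{W}}{300\ \text{K}} \;\approx\; 0.3\ \tfrac{\text{W}}{\text{K}},
\qquad
S_{50\ \text{yr}} \;\approx\; 0.3\ \tfrac{\text{W}}{\text{K}} \times 1.6\times10^{9}\ \text{s} \;\approx\; 5\times10^{8}\ \tfrac{\text{J}}{\text{K}},
\]

whereas dissipating the body’s stored chemical energy in a single burst contributes only about

\[
S_{\text{one-time}} \;\approx\; \frac{10^{9}\ \text{J}}{300\ \text{K}} \;\approx\; 3\times10^{6}\ \tfrac{\text{J}}{\text{K}} .
\]

On these assumptions the continuous entropy production of a lifetime exceeds the one-time release by roughly two orders of magnitude, which is the quantitative sense in which removing a dissipative structure slows, rather than speeds, the local approach to equilibrium.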
There also exists an alternative logic, formulated by Ulrich Horstmann. If life is an instrument of entropy, then one should produce as many Bearers of interest as possible in order to accelerate the decay of the Universe: accumulate arsenals and apply them more actively, bring forth new forms and species of Bearers of experience — including synthetic and artificial ones — carry out cosmic expansion to populate every region that can be reached; in short, do everything that generates ever more entropy, everything humanity has in fact been doing since its emergence. This is a consistent position under one condition — the rejection of the axiom of suffering as an absolute evil. But following it turns the Cosmos into a factory of torment — into a grand concentration camp, where the acceleration of entropy is purchased at the price of endless victims. It is the victory of soulless physics over the possibility of ethics.
In any case, this is a choice that is no real choice, made within the framework of the inevitable. The very process of the transformation of matter will continue until thermodynamic equilibrium is reached, with us or without us.
Speculative Realism
When speaking of the contemporary rejection of an anthropocentric view of philosophy and being, one immediately recalls Speculative Realism — a movement in philosophy that has attracted attention and, at least for a time, raised high hopes. The movement is defined as a constellation of philosophical positions, often in disagreement with one another, that challenge correlationism and affirm the autonomy of object-being with respect to human cognitive activity. Within this tendency, tasks are formulated that aim at constructing an ontology oriented toward the “world-in-itself” and, in some cases, toward the “world-without-us.” The principal problem requiring analytical clarification is how such ontological demands relate to the criteria of admissible inquiry and explanatory rigor. There are many variations and participants — speculative materialism proper, object-oriented ontology, transcendental materialism, and also transcendental nihilism. Here I will concentrate only on the tenets of Speculative Realism as interpreted by Eugene Thacker and Ray Brassier, the thinkers closest to the concerns of this work. Both insist on the necessity of thinking the world beyond the subjective perspective by emphasizing the “world-without-us.” Eugene Thacker, in a series of essays and monographs, the best known of which is In the Dust of This Planet: Horror of Philosophy (2011), employs the idea of the “world-without-us” as a methodological and aesthetic provocation, revealing the limits of anthropocentric thought. He writes that “to abandon the anthropological view means that the world must be considered not merely as a world-for-us or a world-in-itself, but as a world-without-us.” Ray Brassier develops an ontological line under the banner of strict scientific naturalism; his conclusions are set out in Nihil Unbound: Enlightenment and Extinction (2007). Brassier seeks to bring philosophy into alignment with the results of the natural sciences and to call into question teleological or transcendental interpretations of being.
One must note several positive contributions of Speculative Realism. First, the movement refocused philosophy on the question of how far our ontological constructions are conditioned by the character of human experience, language, and epistemic practices. Second, for Thacker the idea of the “world-without-us” became a powerful tool for criticizing anthropocentrism in the humanities tradition, exposing the rhetorical and methodological privileges of the human perspective. Third, Brassier proposed a variant of naturalistic reflexivity that urges philosophy to take account of the findings of the natural sciences and to refrain from unverifiable teleologies.
The first limitation concerns the very act of articulating a world outside the subject. Formulations that purport to posit the world as existing “outside” the subject are themselves produced in a language that, by its nature, belongs to the subject. This generates an epistemological dilemma: the assertion of a “world-without-us” simultaneously appeals to, and is constructed through, subjective methodological procedures.
Thacker presents the “world-without-us” as a way of revealing the limits of subject-bound understanding and as a “mode of the world’s existence.” In addition to this mode he distinguishes the “world-for-us” — the ordinary, humanly intelligible world — and the “world-in-itself,” a paradoxical notion that, the moment we think about it and attempt to affect it, ceases to be a world-in-itself and becomes a world-for-us. Thacker’s formalization is productive for diagnostic tasks and for posing questions about the place of human thought within a broader picture of being. The empirical applicability of his position, however, is limited by the absence of criteria compatible with scientific methods of inquiry. Claims about the impersonality of the world remain rhetorical unless accompanied by distinguishing criteria and procedures for testing the ultimate hypotheses.
Brassier, for his part, argues for aligning philosophical language with the results of the natural sciences, and in this domain his approach achieves a high degree of productivity. Conceptual consistency leads him to recognize the large-scale impersonality of cosmic processes. The force of his argument lies in liberating philosophy from aprioristic assumptions about the teleology of the world. To build a more complete theory, however, interdisciplinary connections are required — to information theory, to the thermodynamics of information recording and erasure, and to studies of self-regulation in nonlinear systems — and I have not found their further development in Brassier’s writings. Thacker emphasizes aesthetic, literary, and generic devices (horror, demonic metaphor, cultural texts) in order to pose the question of the “world-without-us,” yet he, too, cannot reach firm conclusions about the existence of a “world-without-us” without resorting to metaphysical means.