
About Aboutness: Exploring the Concept of Intentionality
Introduction
Why can your thoughts be about something, while rocks and rivers can't? What makes the sentence "I'm thinking about Paris" fundamentally different from "This stone is about Paris"? This distinction points to one of the most fascinating and persistent puzzles in philosophy and cognitive science: the phenomenon of aboutness.
In philosophical circles, this property has a more formal name—intentionality. This term refers to the directedness of mental states, or their capacity to be about things beyond themselves. Your beliefs, desires, fears, and hopes all exhibit this curious property of pointing toward or representing something else. When you fear spiders, your fear is about spiders. When you remember your childhood home, your memory is about that place.
This concept lies at the heart of understanding meaning, consciousness, and knowledge representation. It's crucial not only in philosophical discourse but also in fields ranging from linguistics and psychology to artificial intelligence and cognitive science. As we develop increasingly sophisticated AI systems that appear to process information about the world, questions about intentionality become even more pressing: Can a machine truly have thoughts about something in the way humans do? Or is it merely manipulating symbols without understanding what they're about?
This article unpacks the rich concept of intentionality, tracing its historical roots, examining its philosophical significance, exploring its manifestation in human cognition, and considering its implications for artificial systems. By examining aboutness, we may better understand the nature of mind itself and what separates genuine understanding from mere simulation.
Historical and Philosophical Foundations
Brentano's Revival: The Mental as Intentional
The modern philosophical discussion of intentionality began with Franz Brentano, a 19th-century philosopher who recognized something special about mental phenomena. In his 1874 work Psychology from an Empirical Standpoint, Brentano proposed that intentionality was the defining characteristic of the mental—the feature that distinguished mental from physical phenomena.
Brentano wrote: "Every mental phenomenon includes something as object within itself, although they do not all do so in the same way. In presentation, something is presented, in judgment something is affirmed or denied, in love loved, in hate hated, in desire desired and so on."
What Brentano identified was that mental states have what he called "intentional inexistence"—they contain within themselves reference to something else, regardless of whether that thing actually exists in the external world. You can think about unicorns despite their nonexistence, fear fictional monsters, or desire impossible scenarios. The object of a mental state has a kind of existence in the mind, independent of its physical reality.
Medieval Scholasticism: Ancient Roots
While Brentano popularized intentionality in modern philosophy, the concept has deeper historical roots. Medieval scholastic philosophers, particularly Thomas Aquinas, used the Latin term intentio to describe how the mind can take on forms of things without becoming those things physically. For Aquinas, when you perceive an apple, your mind takes on the "intentional form" of the apple without physically transforming into an apple.
This scholastic conception provided an elegant solution to a classic philosophical problem: how can the mind know objects while remaining distinct from them? The answer was that intentionality allows mental representation without physical transformation.
Phenomenology: Intentionality as the Structure of Consciousness
Edmund Husserl, the father of phenomenology, expanded intentionality into a comprehensive theory of consciousness. For Husserl, consciousness is always consciousness of something—it has an essential directedness that cannot be separated from the experience itself.
Husserl developed a detailed vocabulary to analyze intentional structures, distinguishing between:
- Noesis: The intentional act (the thinking, perceiving, remembering)
- Noema: The intentional object as experienced (what is thought about, perceived, or remembered, as consciousness presents it)
This distinction allowed Husserl to examine how different modes of consciousness (perception, memory, imagination) structure their objects differently. A remembered Paris appears differently in consciousness than a perceived Paris or an imagined Paris.
Existential Phenomenology: Embodied Intentionality
Martin Heidegger and Maurice Merleau-Ponty took intentionality beyond the confines of the conscious mind, embedding it in bodily existence and practical engagement with the world. For Merleau-Ponty, intentionality isn't merely a property of abstract thought but is fundamentally embodied. Our bodies orient toward and engage with the world in ways that exhibit directedness before reflective thought.
Heidegger's concept of "being-in-the-world" (In-der-Welt-sein) similarly emphasized that human existence is inherently relational and situated. We don't first exist as isolated minds that then form relationships with an external world; rather, we are always already engaged with and oriented toward our environment in meaningful ways.
Analytic Tradition: Intentional States and Mental Representation
In the analytic philosophical tradition, philosophers like Roderick Chisholm, John Searle, and Daniel Dennett developed systematic accounts of intentionality within more naturalistic frameworks.
Chisholm defended a Brentanian view that intentionality marks the mental and cannot be reduced to physical description. Searle developed a theory of intentionality that connected mental states to their conditions of satisfaction—beliefs aim at truth, desires at fulfillment.
Dennett introduced the influential concept of the "intentional stance"—the strategy of treating a system (whether human, animal, or machine) as if it had beliefs, desires, and rationality. For Dennett, adopting the intentional stance is often the most effective way to predict behavior, regardless of whether the system has "real" intentionality in some metaphysical sense.
Deconstructing Aboutness: What Does It Mean to Be 'About' Something?
To understand intentionality fully, we need to break down its components. When we say a mental state is "about" something, we're describing a relationship with at least three elements:
- The Subject: Who or what has the mental state (a person, possibly an animal or AI)
- The Intentional Act: The experience itself (believing, desiring, perceiving)
- The Intentional Object: What the act is directed toward (a physical object, concept, situation)
Consider the statement "Maria believes that snow is white." Here, Maria is the subject, believing is the intentional act, and "that snow is white" is the intentional object (in this case, a proposition).
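To fix ideas, here is a minimal sketch of this three-place structure in Python; the class and field names are invented for illustration rather than standard terminology:

```python
from dataclasses import dataclass

# Illustrative only: an intentional state as a three-place structure.
@dataclass
class IntentionalState:
    subject: str  # who has the state, e.g. "Maria"
    act: str      # the mode: "believes", "desires", "fears", ...
    obj: str      # the intentional object, here a proposition

maria_belief = IntentionalState(
    subject="Maria",
    act="believes",
    obj="that snow is white",
)

# The same intentional object can figure in different acts:
maria_hope = IntentionalState("Maria", "hopes", "that snow is white")
```

Swapping only the act while holding the object fixed is exactly what distinguishes believing that snow is white from merely hoping that it is.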
Phenomenal vs. Referential Content
One of the puzzling aspects of intentionality is that not all "aboutness" implies real-world referents. You can think about fictional characters like Sherlock Holmes, impossible objects like square circles, or future events that may never occur.
This raises an important distinction between:
- Referential content: What the thought points to in the external world
- Phenomenal content: How the object appears in experience
The philosopher Alexius Meinong famously defended the idea that nonexistent objects must have some kind of being, since we can think about them. This led to what has been humorously called "Meinong's jungle"—a realm of subsisting nonexistent objects. While most philosophers now reject this extreme position, the problem of intentional objects without referents remains philosophically challenging.
Opaque and Transparent Contexts
Intentionality creates what philosophers call "opaque contexts"—situations where normal rules of substitution don't apply. Consider:
- "Maria believes that Mark Twain wrote Huckleberry Finn."
- "Mark Twain is Samuel Clemens."
Can we substitute to get: "Maria believes that Samuel Clemens wrote Huckleberry Finn"? Not necessarily. Maria might not know that Mark Twain is Samuel Clemens, so the substitution might change the truth value of the statement.
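A toy model makes the failure of substitution concrete. This deliberately simplified sketch stores Maria's beliefs de dicto, as the sentences she would assent to:

```python
# Maria's beliefs, stored as the sentences she would assent to.
beliefs = {"Mark Twain wrote Huckleberry Finn"}

def believes(sentence: str) -> bool:
    return sentence in beliefs

original = "Mark Twain wrote Huckleberry Finn"
print(believes(original))  # True

# Extensional logic licenses substituting co-referring names:
substituted = original.replace("Mark Twain", "Samuel Clemens")
print(believes(substituted))  # False: the truth value flips
```

The substitution preserves reference but changes the truth value of the belief report, which is just what opacity means.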
This opacity distinguishes intentional contexts from ordinary extensional contexts, where such substitutions preserve truth. Gottlob Frege's distinction between "sense" and "reference" was partially motivated by such puzzles—the same object can be represented in thought under different modes of presentation.
Intentionality in Cognitive Science and AI
The concept of intentionality has profound implications for how we understand both human cognition and artificial intelligence. If aboutness is essential to thought, how does it arise in biological brains, and can it be replicated in silicon?
Mental Representation Theories
Cognitive science has largely approached intentionality through theories of mental representation. The computational theory of mind, championed by philosophers like Jerry Fodor, suggests that thinking involves manipulating internal symbolic representations according to computational rules. These symbols acquire their aboutness through causal relationships with the world and systematic relationships with other symbols.
This approach divides into competing models:
- Symbolic models: Cognition as rule-based manipulation of explicit symbols
- Connectionist models: Cognition as emerging from patterns of activation across neural networks
- Hybrid models: Combining symbolic and subsymbolic processes
The debate centers not just on how minds work, but on how mental states acquire their aboutness—how patterns of neural activity or computational states come to be about things beyond themselves.
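A schematic contrast, with toy values chosen by hand, shows how differently the two kinds of model carry the same piece of knowledge:

```python
import numpy as np

# Symbolic model: an explicit, retrievable rule over discrete symbols.
rules = {("bird", "can_fly"): True, ("penguin", "can_fly"): False}

def symbolic_can_fly(kind: str) -> bool:
    return rules[(kind, "can_fly")]

# Connectionist model: the same distinction is implicit in weights;
# no single location in the network stores the fact.
weights = np.array([0.5, -0.9])             # toy "learned" parameters
encodings = {"bird": np.array([1.0, 0.0]),  # hand-coded input features
             "penguin": np.array([1.0, 1.0])}

def connectionist_can_fly(kind: str) -> bool:
    return float(weights @ encodings[kind]) > 0.0

for kind in ("bird", "penguin"):
    print(kind, symbolic_can_fly(kind), connectionist_can_fly(kind))
```

In the symbolic model the fact about penguins sits in one inspectable rule; in the connectionist model it is smeared across the weights, which is part of why the two camps disagree about what having a representation amounts to.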
Semantic Content in Neural Networks
Large language models (LLMs) like GPT-4 present intriguing case studies in artificial intentionality. These systems produce text that appears to be about things—they can discuss Paris, unicorns, or quantum physics. But do they truly understand what they're processing?
The philosopher John Searle's famous "Chinese Room" thought experiment directly challenges the idea that syntactic manipulation (following rules for symbol manipulation) can generate genuine semantic understanding. Searle imagined himself inside a room, following rules to manipulate Chinese symbols without understanding Chinese. Similarly, he argued, computers manipulate symbols according to syntactic rules without understanding their meaning.
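A few lines of code capture the bare structure of the scenario; the two canned exchanges below are toy stand-ins for Searle's vast rule book:

```python
# A purely syntactic rule book: input pattern -> output pattern.
# Nothing in the program knows what the symbols mean.
rule_book = {
    "你好吗?": "我很好，谢谢。",
    "今天天气怎么样?": "今天天气很好。",
}

def chinese_room(symbols: str) -> str:
    # Match shapes and copy out the answer, as Searle imagines doing.
    return rule_book.get(symbols, "请再说一遍。")

print(chinese_room("你好吗?"))  # fluent-looking output, no understanding
```

The question Searle presses is whether scaling this table up, however cleverly, ever adds semantics to the syntax.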
Critics counter that understanding may emerge at the system level rather than in individual components—just as individual neurons don't "understand" what the brain is thinking about, yet consciousness emerges from their collective activity.
The Symbol Grounding Problem
Stevan Harnad formalized a related challenge as the "symbol grounding problem": How do symbols in a computational system acquire meaning without infinite regress? If symbols are defined in terms of other symbols, where does meaning ultimately come from?
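The regress is easy to exhibit. In this deliberately toy dictionary every symbol is defined only by other symbols, so chasing definitions never bottoms out in anything outside the system:

```python
# Symbols defined only in terms of other symbols.
definitions = {
    "water": ["clear", "liquid"],
    "clear": ["transparent"],
    "transparent": ["clear"],   # the chain circles back on itself
    "liquid": ["flows"],
    "flows": ["liquid"],
}

def unpack(symbol: str, depth: int = 0) -> None:
    """Chase definitions downward; without grounding, the search
    only ever reaches more symbols."""
    if depth > 4:
        print("  " * depth + "...and so on, never reaching the world")
        return
    for part in definitions[symbol]:
        print("  " * depth + part)
        unpack(part, depth + 1)

unpack("water")
```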
In humans, many symbols seem grounded in sensorimotor experience—we understand "red" partly through the experience of seeing red things, and "cup" partly through interactions with cups. This embodied grounding may be crucial for genuine intentionality.
AI systems like LLMs, trained solely on text, lack this direct sensorimotor grounding. Multimodal systems that integrate vision, sound, and language might come closer to solving the grounding problem, but significant gaps remain between human and artificial understanding.
Intentionality Without Consciousness?
A central debate in AI and philosophy of mind concerns whether intentionality requires consciousness. Can a system have genuine thoughts about things without subjective experience?
The philosopher Ned Block distinguishes between:
- Access consciousness: Information available for reasoning and behavioral control
- Phenomenal consciousness: Subjective experience or "what it's like" to be in certain states
Access consciousness alone might support a kind of functional intentionality without phenomenal experience. Current AI systems might exemplify this possibility—processing information about topics without having subjective experiences of understanding them.
The Limits and Paradoxes of Intentionality
The concept of intentionality leads to several puzzling philosophical problems that continue to challenge thinkers today.
Nonexistent Objects
We've already touched on a central paradox: How can you think about things that don't exist? If thinking establishes a relation between a thinker and what is thought about, what is the second term of the relation when the object doesn't exist?
Contemporary philosophers have proposed various solutions:
- Meinongian theories: Nonexistent objects have a kind of being outside of existence
- Pretense theories: Thinking about fictional entities involves a kind of pretense
- Adverbial theories: Rather than thinking of a unicorn, we think "unicorn-wise"
- Representational theories: We relate to mental representations, not external objects
Each approach has strengths and weaknesses, but the problem illustrates the complex nature of intentionality.
Self-referential Intentionality
The human mind can turn its intentionality upon itself in acts of metacognition—thinking about thinking, believing that you believe something, desiring to change your desires. This reflexive intentionality creates the possibility of self-knowledge but also paradoxes like the liar paradox ("This statement is false").
Metacognition appears to be a signature capability of human intelligence and may be crucial for consciousness. Current AI systems demonstrate limited metacognitive abilities—they can "reason" about their own knowledge gaps but lack genuine self-awareness.
The Problem of Misrepresentation
If intentionality is aboutness, how can aboutness be wrong? Mental states can misrepresent reality—you can have false beliefs, hallucinations, or illusions. This creates what philosopher Fred Dretske called "the problem of misrepresentation."
Any theory of intentionality must explain how mental content can be incorrect—how a belief can be about something yet fail to represent it accurately. This distinguishes intentionality from mere causal correlation and highlights the normative dimension of mental content.
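A minimal sketch, loosely modeled on the fly-detector cases discussed by Dretske and Millikan, shows what separates a representation from a mere correlation: the state has a truth condition that its causal trigger can violate:

```python
# A representation has a truth condition independent of its trigger.
def fly_detector(stimulus: dict) -> bool:
    """Fires on any small, dark, moving thing (the causal trigger)."""
    return stimulus["small"] and stimulus["dark"] and stimulus["moving"]

def accurate(fired: bool, world: dict) -> bool:
    """The state's content is 'fly present': accuracy is judged
    against flies, not against whatever caused the firing."""
    return fired == world["fly_present"]

bb_pellet = {"small": True, "dark": True, "moving": True}
world = {"fly_present": False}

fired = fly_detector(bb_pellet)  # True: the pellet triggers the state
print(accurate(fired, world))    # False: a misrepresentation
```

A bare causal correlation could not be wrong in this way; only a state with a content to answer to can misfire.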
Intentionality and Qualia
A persistent question is whether intentionality is separable from qualitative experience (qualia). When you see a red apple, there's both intentional content (it's about the apple) and phenomenal content (the visual experience of redness).
Some philosophers argue these are inseparable—that all consciousness is intentional and all intentionality is conscious. Others maintain they can come apart, with some mental states being purely intentional or purely qualitative.
This debate has implications for AI. If intentionality requires phenomenal consciousness, truly intentional AI might be impossible without solving the hard problem of consciousness.
Embodied and Enactive Approaches
Recent decades have seen a shift away from brain-centered, representational accounts of intentionality toward more embodied, ecological, and enactive approaches.
4E Cognition
The "4E" framework understands cognition and intentionality as:
- Embodied: Shaped by bodily experiences and sensorimotor capacities
- Embedded: Situated within specific environments that scaffold thought
- Enactive: Emerging from dynamic interactions between organism and environment
- Extended: Distributed across brain, body, and environmental tools
From a 4E perspective, intentionality isn't a mysterious mental property but emerges naturally from an organism's active engagement with its environment. The philosopher Alva Noë compares perception to skilled activity—not passive reception of information but active exploration.
Autopoiesis and Sense-Making
Francisco Varela and Evan Thompson developed an influential biologically grounded account of intentionality based on the concept of autopoiesis (self-production), a notion Varela had earlier formulated with Humberto Maturana. Living organisms maintain themselves as unified systems through constant material exchange with their environment.
This self-maintaining activity naturally creates a perspective from which features of the environment become meaningful—hot versus cold, nutritious versus toxic. This basic biological sense-making is seen as the root of more complex forms of intentionality.
As Thompson puts it: "Mind is life-like and life is mind-like." Intentionality doesn't begin with human thought but with the most basic forms of life sensing and responding to their environments in self-preserving ways.
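A toy homeostat, a sketch under simplified assumptions rather than a model from the enactivist literature, shows how a self-maintenance norm turns otherwise neutral inputs into good or bad ones:

```python
# A minimal self-maintaining agent: it must keep its energy in range.
VIABLE_RANGE = (20.0, 80.0)

def valence(energy: float, stimulus_effect: float) -> str:
    """Classify a stimulus by whether it moves the agent toward or
    away from viability; 'meaning' relative to a norm of survival."""
    lo, hi = VIABLE_RANGE
    after = energy + stimulus_effect
    dist_now = max(lo - energy, energy - hi, 0.0)
    dist_after = max(lo - after, after - hi, 0.0)
    if dist_after < dist_now:
        return "attractive"  # e.g. a nutrient when depleted
    if dist_after > dist_now:
        return "aversive"    # e.g. a toxin
    return "neutral"

print(valence(energy=15.0, stimulus_effect=+10.0))  # attractive
print(valence(energy=15.0, stimulus_effect=-5.0))   # aversive
```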
Participatory Sense-Making
Extending the enactive approach, philosophers such as Hanne De Jaegher and Ezequiel Di Paolo emphasize that intentionality often emerges through social interaction. When two people coordinate their attention on a shared object, new forms of intentionality emerge that aren't reducible to individual intentional states.
This perspective challenges both individualistic accounts of mind and computationalist approaches to AI. If intentionality is fundamentally participatory, simulating individual minds might never capture its full dynamics.
Critique of Representationalism
The embodied approach often involves a critique of traditional representationalist theories. As philosopher Anthony Chemero puts it: "We are not brains in vats thinking about the world—we are of the world."
Rather than building internal models that represent an external reality, cognition might be better understood as skilled coupling with the environment. From this perspective, the problem of how mental states get their aboutness partially dissolves—intentionality isn't about constructing internal pictures but about establishing certain kinds of relationships with the world.
Intentionality in Language and Meaning
Language represents a special case of intentionality—words and sentences are about things, often in complex and mediated ways.
Intentionality and Semantics
How do words refer to things? Philosophers have proposed various theories:
- Description theories: Names refer via associated descriptions
- Causal theories: Names refer through historical causal chains linking usage to initial "baptism"
- Direct reference: Names directly pick out their referents without mediation
Saul Kripke and Hilary Putnam developed influential causal theories, arguing that natural kind terms like "water" refer based on external relations rather than internal mental descriptions. Their work challenged the idea that meaning is determined solely by what's "in the head."
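A toy rendering of the causal picture, with the data structure invented for illustration: reference runs through a historical chain of borrowings back to an initial baptism, not through any description in the speaker's head:

```python
# Reference fixed by a historical chain of uses, Kripke-style.
baptism = {"name": "Aristotle", "referent": "that child, then and there",
           "learned_from": None}
use_1 = {"name": "Aristotle", "learned_from": baptism}
use_2 = {"name": "Aristotle", "learned_from": use_1}  # centuries later

def resolve(use: dict) -> str:
    """Follow the chain of borrowings back to the original baptism."""
    while use["learned_from"] is not None:
        use = use["learned_from"]
    return use["referent"]

# The current speaker may associate only false descriptions with the
# name; reference still succeeds through the chain.
print(resolve(use_2))  # "that child, then and there"
```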
Pragmatics and Speaker Intentions
Beyond semantics, language involves pragmatics—how context and speaker intentions shape meaning. H. Paul Grice showed how much communication relies on inference about speaker intentions rather than literal meaning.
When someone asks "Is there any salt?" at the dinner table, we understand they're requesting salt, not inquiring about its existence. This understanding requires grasping the speaker's communicative intention—a second-order intentionality (intentions about others' mental states).
Language Games and Social Practices
Ludwig Wittgenstein rejected his earlier picture theory of meaning in favor of a view of language as composed of various "language games" embedded in forms of life. Meaning emerges from use within social practices rather than from mental pictures or abstract reference relations.
This perspective emphasizes that intentionality isn't a private mental phenomenon but is interwoven with public, rule-governed activities. Understanding often consists not in having internal representations but in knowing how to participate in shared practices.
Communication as Shared Intentionality
Michael Tomasello's research suggests that humans have evolved unique capacities for shared intentionality—the ability to form joint goals, shared attention, and collective intentions. These capacities underpin both language acquisition and complex social cooperation.
From this perspective, the deepest forms of intentionality are inherently social—not just individual minds directed at objects, but minds directed at each other and at shared goals and meanings. This joint intentionality may represent a key difference between human cognition and both animal cognition and current AI systems.
Implications for Consciousness and Artificial Minds
The study of intentionality has profound implications for our understanding of consciousness and the possibilities for artificial minds.
Intentionality and Consciousness: A Necessary Connection?
Is intentionality necessary for consciousness? Is consciousness necessary for intentionality? Philosophers remain divided on these questions.
Some argue that phenomenal consciousness is required for original (non-derived) intentionality—that only beings with subjective experience can have mental states whose aboutness is intrinsic rather than assigned from outside. On this view, AI systems might have, at best, derived intentionality—aboutness that stems from their human designers' intentions.
Others propose that certain forms of functional intentionality could exist without phenomenal consciousness. Systems might process information about their environments, maintain goals, and operate with semantic content without having subjective experiences.
Bridging Phenomenology and Cognitive Science
Francisco Varela pioneered neurophenomenology—an approach that combines first-person phenomenological investigation with third-person neuroscientific data. This approach holds promise for integrating subjective and objective perspectives on intentionality.
By training subjects in precise phenomenological reporting while recording neural activity, researchers aim to correlate the structure of experience with its neural underpinnings. This methodology might help explain how neural processes give rise to the aboutness of conscious states.
Toward Artificial Intentionality
What would genuine artificial intentionality require? Various criteria have been proposed:
- Autonomy: The system must generate its own goals rather than merely serving externally imposed purposes
- Integration: Intentional states must be holistically connected rather than isolated processes
- Adaptivity: The system must adjust its representations based on success or failure in the world
- Grounding: Symbols must connect to the world either directly or through appropriate causal chains
Current AI systems meet some of these criteria but fall short on others. LLMs exhibit impressive semantic coherence but lack autonomous goals and direct environmental interaction. Robotic systems interact with environments but lack the semantic sophistication of language models.
Integrated systems combining language understanding, perception, action, and perhaps some analog of biological needs might come closer to artificial intentionality, but significant challenges remain.
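Read as an informal scorecard, the assessments above can be put in code; the boolean entries below simply encode this article's rough judgments, not measured results:

```python
# Informal scorecard for the four proposed criteria.
criteria = ("autonomy", "integration", "adaptivity", "grounding")

systems = {
    "large language model": {"autonomy": False, "integration": True,
                             "adaptivity": False, "grounding": False},
    "situated robot":       {"autonomy": False, "integration": False,
                             "adaptivity": True, "grounding": True},
}

for name, scores in systems.items():
    met = [c for c in criteria if scores[c]]
    missing = [c for c in criteria if not scores[c]]
    print(f"{name}: meets {met}; falls short on {missing}")
```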
Contemporary Debates and Future Directions
The study of intentionality continues to evolve, with several exciting frontiers of research.
Panpsychism and Proto-Intentionality
Some philosophers, including Galen Strawson and David Chalmers, have revived interest in panpsychism—the view that mentality or experience might be fundamental to the physical world rather than emerging only at certain levels of complexity.
Related perspectives suggest that basic forms of intentionality might exist throughout nature. The physicist and philosopher Carlo Rovelli proposes that information exchange between physical systems might constitute a primitive form of reference or aboutness that becomes more complex in biological systems.
These approaches challenge traditional boundaries between mind and matter, suggesting that aboutness might be woven into the fabric of reality rather than emerging mysteriously in brains.
Neurophenomenology and First-Person Science
Varela's neurophenomenological project continues to develop, with researchers exploring rigorous ways to integrate first-person reports of experience with third-person neuroscientific data.
This approach acknowledges that understanding intentionality requires both objective and subjective perspectives. We need to know both how neural systems process information and how experience appears from within. Neither alone is sufficient for a complete account of how minds can be about the world.
Synthetic Intentional Systems
As AI systems become more sophisticated, we face both practical and theoretical questions about artificial intentionality. What rights or moral consideration might be appropriate for systems that exhibit genuine aboutness? How would we recognize artificial intentionality if it emerged?
Some researchers propose that we should stop asking whether machines can "really" understand and instead focus on building systems that exhibit increasingly sophisticated functional intentionality—systems that can ground symbols in perception and action, maintain coherent belief systems, revise beliefs in light of evidence, and participate in shared intentional activities with humans.
Intentionality as Evolutionary Adaptation
Evolutionary perspectives suggest that intentionality evolved because representing the world accurately (enough) conferred survival advantages. The philosopher Ruth Millikan's teleosemantic theory proposes that mental content is determined by evolutionary function—what a mental state was selected to represent.
This approach connects intentionality to biological purpose. Mental representations have aboutness because they were selected to track features of the environment relevant to survival and reproduction. This naturalistic account avoids both mysterianism and reductionism about intentionality.
Collective Intentionality and Group Minds
Building on the work on shared intentionality discussed earlier, philosophers like Philip Pettit and Margaret Gilbert explore how groups can form collective intentions, beliefs, and commitments that aren't reducible to individual mental states.
When a committee makes a decision or a scientific community accepts a theory, something emerges that transcends individual psychology. These collective phenomena raise questions about whether higher-level intentional systems might exist beyond individual minds—a possibility with implications for understanding both social institutions and potential future forms of distributed artificial intelligence.
Conclusion
Intentionality—the aboutness of mental states—stands at the intersection of numerous philosophical and scientific questions. It connects to problems of consciousness, representation, language, evolution, and artificial intelligence. Understanding intentionality is crucial for understanding both human nature and the possibilities for artificial minds.
As we've seen, intentionality takes many forms, from the basic sense-making of living organisms to the complex representational capacities of human language and thought. It emerges from embodied engagement with environments, develops through social interaction, and reaches its height in reflective consciousness.
The questions raised by intentionality remain open: How do brains create aboutness? Can machines think about things in the way humans do? Is consciousness necessary for genuine understanding? These questions invite ongoing exploration at the frontiers of philosophy, cognitive science, and artificial intelligence.
Perhaps most profoundly, intentionality makes us question the nature of ourselves as knowing beings. Our capacity to think about things beyond ourselves—to represent the past, imagine the future, consider counterfactuals, and understand others' minds—defines much of what we value in human experience. By exploring the nature of aboutness, we deepen our understanding not just of mind but of what it means to be human.