Journal Article, Volume 7, 2024

Can Turing Machines Possess Intrinsic Intentionality?

Zhen Wang


Suggested Citation

Zhen Wang. “Can Turing Machines Possess Intrinsic Intentionality?” A Priori, vol. 7, 2024, pp. 39–52.

Abstract

This paper explores whether Turing machines, particularly artificial intelligence (AI) systems, can exhibit intrinsic intentionality, defined as the capacity to interpret one's internal processes and generate outputs that are meaningful to oneself. The paper first discusses Searle's Chinese Room Argument (1980), which denies the possibility of machines' intrinsic intentionality, and then the theory of syntactic semantics, which suggests otherwise: internalized syntactic processes, on this view, suffice to create intrinsic intentionality. Rapaport (2007) used Helen Keller's experience to illustrate how the internalization of symbols may create intrinsic intentionality. Finally, this paper raises objections to syntactic semantics as a route for Turing machines to acquire intrinsic intentionality. It argues that, without phenomenal experience, an AI's symbols can only be about other intrinsically meaningless tokens. Drawing on Jackson's Knowledge Argument (1982), the paper contends that intrinsic intentionality requires a mental process to be about a phenomenal experience.

For humans, our mental activities have meaning. To say that all raccoons are mammals is not merely the logical proposition that all As are Bs. For us, a raccoon means a bandit-looking furry creature with four limbs and various other characteristics. We can visualize one with our mind's eye and imagine how it moves or sounds. Computers are Turing machines that manipulate inputs according to sets of instructions. An artificial intelligence system may contain a class called mammal with a subclass called raccoon in its storage. But does a raccoon mean anything to this system when it processes one? I will first discuss Searle's Chinese Room Argument as a negative answer to this question. Then, I will present and evaluate the theory of syntactic semantics, which holds that internalized syntactic processes are meaningful on their own. Finally, I will argue against syntactic semantics on the grounds that grasping meaning requires intrinsic intentionality, which in turn requires phenomenal consciousness.

Two Types of Intentionalities

In keeping with influential works in the philosophy of mind, I use the term intentionality to mean "the power of a process to be directed at or about certain things like objects, properties, and states of affairs." There are two types of intentionality: original and derivative. A book, for example, can refer to many objects or concepts through its text, but it does so only when a reader interprets it. The book therefore has merely derivative intentionality, which affords its interpretability; that intentionality was given by the book's author and is reconstructed by its readers. Original intentionality is the capability of assigning representations to objects and interpreting objects from representations, so it exists only in the book's interpreters. For the purposes of this essay, I will refer to original intentionality as intrinsic intentionality.

The Chinese Room Argument and Intentionality

The problem of machines and meaning is not about derivative intentionality. The outputs of machines such as a calculator or a large language model (LLM) can usually sustain human interpretation: their symbols can be translated into a human language, and their syntax can be defined to allow only interpretable outputs. If you take care of the syntax, the derivative intentionality will take care of itself. However, it is far from clear whether a machine can possess intrinsic intentionality, the power to interpret its internal processes and produce sensible output that is also meaningful to itself. A famous argument against the possibility of artificial intelligence (AI) having intrinsic intentionality is the Chinese Room Argument proposed by Searle (1980). In the thought experiment, a person who knows no Chinese follows formal rules to manipulate Chinese symbols and produces fluent replies without understanding a word of them. Searle asked whether the human mind works like such a Turing machine, a purely formal system, and concluded that if it did, we would not even be able to interpret our own languages.

Syntactic Semantics

In response to Searle (1980), Dietrich et al. (2021) discuss the syntactic semantics theory of intentionality. Proponents of syntactic semantics believe that a formal system is sufficient to generate intrinsic intentionality. Rapaport (2007) uses the life story of Helen Keller to argue that intentionality arises when all semantics are properly internalized. Keller was blind and deaf from early childhood, yet she learned to communicate using finger gestures and signs, and eventually English. Rapaport argues that Keller had been living in a version of Searle's Strange Language Room for almost her entire life. Once she mastered the syntax, it became obvious that she did understand English and that English meant something to her. Dietrich et al. (2021) summarize the key to syntactic semantics as the internalization of external symbols: once the agent appropriately internalizes them, the symbols are intrinsically intentional.

Meaningless Symbols Do Not Produce Meaning

I propose that a mental process is intrinsically intentional if and only if it is about a phenomenal experience. A problem with Rapaport's (2007) analogy, that Helen Keller lived in a Strange Language Room, is that she lived the human experience. She experienced emotions and sensations: her concepts of water, cake, coldness, and the textures of objects were all grounded in the sensations they cause. This is vastly different from a pure symbol manipulator such as an AI. All of an AI's symbols refer only to other symbols, whereas Keller's finger spellings could refer to phenomenal experiences.

Conclusion

To build an AI that thinks like humans, intrinsic intentionality is a feature that must be included. The human mind is intrinsically intentional because we can interpret what our own mental activities are about. Searle's Chinese Room Argument (1980) is meant to demonstrate that no amount of syntactic manipulation can give rise to intrinsic intentionality. Searle further argues that only biological brains can generate intrinsic intentionality, but he does not give sufficient evidence for this claim. It therefore seems promising that the syntactic semantics theory could tackle the challenge posed by Searle (1980). Rapaport (2007) proposes that appropriately internalizing symbols into a system is sufficient to create intrinsic intentionality, whether the system is a human brain or a Turing machine. I argue that Rapaport understates the importance of Keller's phenomenal experience as a human being: it was the human experience that provided something onto which her symbols could be grounded. I propose that a process is intrinsically intentional if and only if it is about a phenomenal experience. I am not convinced that any Turing machine-based AI has phenomenal experience; thus, such systems are not intrinsically intentional. However, I do not exclude the possibility of AI acquiring phenomenal experience someday.