This is my senior thesis on AI and human uniqueness from an Aristotelian-Thomistic perspective! It’s a bit messy at points, but it’s my first time writing something at this scale, and in my humble opinion the topic itself makes up for whatever issues it has. Enjoy.
“Although Alchemy has now fallen into contempt, and is even considered a thing of the past, the physician should not be influenced by such judgements.”
— Paracelsus
Introduction
The Artificial Intelligence developer is closer to his goal than the alchemist was to his. Since lead could never be brought any closer to gold, alchemy was found desperately wanting. The same is not true of AI. A new headline breaks every few days claiming that AI is encroaching ever further on the faculty that sets humanity apart—intellect. However, this new artificial intellect is fool’s gold. Regardless of the size of the computer or the sophistication of the program, AI will never be rational.
There are few technologies as entrenched in popular discourse as AI. A recent study found that a “substantial proportion (67%) of people attribute some possibility of phenomenal consciousness to ChatGPT and believe that most other people would as well” (Colombatto & Fleming 4). This should not be misinterpreted as saying that most people believe ChatGPT has human-level consciousness. At the very least, it does show that current AI models simulate human behavior so convincingly that the distinction between real intellect and artificial simulation deserves thorough exploration. Before we begin to dissect the issue of AI and human uniqueness and establish the insurmountable difference between artificial and human intellect, we must begin by defining several key terms. Both “AI” and “human uniqueness” are heavily loaded terms and deserve unpacking.
Artificial Intelligence
What is artificial intelligence? The layperson’s definition is likely along the lines of “a computer that does what people do” or “a machine that thinks,” neither of which are especially helpful to disambiguate the term itself. If we were to use the first definition, we might conclude that a washing machine is artificially intelligent, as it senses the amount of clothes it contains and then washes them, tasks that are historically unique to humans. However, this is certainly not the generative sort of AI dominating the current zeitgeist.
The second candidate is a bit more helpful. Its main issue is that it assumes computers can “think” in some way similar to a human, meaning this definition leaves the present thesis dead in the water. Some have tried to downplay the significance of “thinking.” Even the father of computer science, Alan Turing, believed that the question of whether machines could think was “too meaningless to deserve discussion” (Turing 442). From Turing’s perspective, once a machine could pass his “imitation game,” now known as the Turing Test, one could only conclude that the machine was thinking, thereby answering the initial question (Turing 442). This pragmatic sort of framing is common but rather unhelpful when trying to answer the question at hand. In fact, recent developments in generative AI, especially Large Language Models (LLMs), show that the Turing Test is inadequate to determine whether machines can think (Jones and Bergen). At best, passing the Turing Test is a necessary but not sufficient condition for determining whether a computer can think. At worst, it adds confusion by equating a machine’s performance with its essence.
Where does this leave us? The man who coined the term “Artificial Intelligence” in 1955, Stanford professor John McCarthy, defined it as “the science and engineering of making intelligent machines” (Manning). Dario Amodei, CEO of Anthropic, recognizes that intelligence is difficult to define in his 2024 essay Machines of Loving Grace. He defines it as “a general problem-solving capacity” that “includes abilities like reasoning, learning, planning, and creativity” (Amodei). This assumes that AI can actually do these things rather than merely simulate them, but it does provide a clear picture of what those at the forefront of AI development are striving to create. A concise definition, not dependent on what tech CEOs would like AI to be, is that AI is any sort of program designed to simulate the capabilities of the human intellect.
In this paper, the AI discussed largely fits under the umbrella of Machine Learning (ML), the process by which AI can be programmed to improve its ability to complete certain tasks. Within this umbrella, the most prominent type is the artificial neural network, which aims to replicate the behavior of neurons within the human brain (Manning). The AI models that rose to prominence beginning with ChatGPT in November of 2022 are all LLMs, a specific type of neural network. With this sort of AI, the supposed threat to human uniqueness is quite clear: if AI does what humans do and operates like a human brain, AI seems very close indeed to moving man from a place of secure dominion over creation to competition with a technological brother.
Human Uniqueness
Many of the most contentious philosophical debates boil down to who or what humans are and what we ought to do in light of that. The school of philosophy that gives the most consistent and applicable framework to answer this question is the Aristotelian-Thomistic school of thought, developed by Thomas Aquinas through a synthesis of Aristotle’s thought and a Christian worldview. The primary reason I have chosen to defend human uniqueness Thomistically is not because it is explicitly Christian, but because it is an internally consistent philosophical system which is concordant with the human experience. Following this tradition, the human person is defined as a rational animal. The human being is not only rational but also “a sensitive, living, and corporeal substance” (Eberl 2). “Sensitive” here means being able to perceive the world, not being emotionally flappable. Part of what makes humanity human are the corporeal, animal aspects. In fact, this is half the definition. However, the other half of the definition is rationality, which implies some level of immateriality, as will be explored later. The synthesis of animality and rationality, the corporeal and immaterial, is exactly what makes us human beings.
From a Christian perspective, humanity is distinct not only in rational capacity, but also in calling (Bjork 99). While this is true and ought not be discounted, minimizing capacity to focus on purpose builds a shaky philosophical foundation, as capacity informs purpose. As Aristotle writes, the “virtue of a faculty is related to the special function which that faculty performs” (Nicomachean Ethics VI 1.7). A “virtuous” plant will be one that uses its vegetative capacities well. It makes no sense to claim that a plant is a bad plant because it cannot play piano—the ability to play piano is far beyond the capacity of a plant. However, if a plant withers when it ought to grow, our natural intuition is to regard it as somehow defective. This intuition is correct and can be applied to the “rational animal” as well.
This may seem like an abstruse way to talk about what makes humans different from creation. However, it is necessary to analyze this specific intellectual capacity in order to delineate between humanity and the rest of creation. This capacity is unique to humanity among material creation. Making the argument for human uniqueness the other way around, starting with the assumption that only human beings can be in relationship with God, is more difficult to defend. This view must hold that AI could be nearly indistinguishable (in true ability, not just appearance) from humanity, but would not have a personal relationship with God. This raises the question of why that should be the case. The argument provides no mechanism by which humanity is by nature unique or why God could not call an AI into a relationship with him. In a world where humanity and AI have the same capacity, we could not assume that they would not differ in purpose or calling. In summary, capacity and purpose are inextricably linked, meaning that an analysis of capacity is enough to establish a philosophical basis for purpose.
To clarify, the image of God is not reducible to having an intellect and does not mean being better at math than a calculator. Living into our identity as images of God includes virtuous exercise of all of our capacities, not just the intellect. This means loving and willing what is good, and exercising our animal capacities well. To be human means being more than animal, but not less. God is not a gnostic; He created humans corporeally for a reason. Possessing an intellect is also qualitatively different from coming up with a correct solution to a math problem, beating a chess grandmaster, or creating an image or poem. Simulating the abilities of humanity, even becoming better than humanity at certain tasks, is radically different from truly having the abilities of humanity. The case that AI is rational ought not be made pragmatically or through equivocation, and the definitions given provide a firm foundation on which to make the argument of this thesis. To summarize, AI aims to become rational by way of simulation, and what sets humans apart are our animality and rationality.
Definitions aside, unease about AI has grown rapidly. With the release of each new AI model, AI seems to encroach more and more onto humanity’s turf without showing signs of stopping. It is an incredibly powerful technology, and while overhyped, is in the process of altering our relationship with learning, art, work, and other people. Artificial advancement in any of these spheres of life is worth discussion. However, the area that is most pressing is that of rational life itself. Will AI ever become a rational animal? To put it simply, no. AI poses no threat to a proper Christian understanding of human uniqueness and personhood. To establish this truth, we must first establish more fully the Aristotelian-Thomistic position on the philosophy of hylomorphism, the soul, and rationality, as well as why this view ought to be preferred over the alternatives. Secondly, we will consider the implications of this view in the context of computation and AI. Finally, we will analyze how the impossibility of a rational, animal AI affects, or does not affect, a consistent Christian philosophy of personhood.
Personhood as Rational Soul
What does it mean for humans to be unique? I have already posited that the best definition of human uniqueness is an Aristotelian-Thomistic one, but what does this conception actually entail? Answering this requires an exploration of hylomorphism, the Aristotelian theory that everything physical is made up of form and matter (Feser 178). The distinction between form and matter can be thought of this way: Take an acorn, for example. It is made up of matter, and that matter is currently actualized in the form of an acorn. However, it does not have to remain this way. That acorn contains the potential to become an oak sapling. That is, it can retain its matter but take on a different form. There is such a thing as an acorn, but whatever matter makes up an acorn will not remain an acorn forever. It is also possible for an acorn to remain an acorn, even if a few bits of it are broken off. In this way, form accounts for the “permanence, unity, and actuality” in the world, while matter accounts for the “changeability, diversity, and potentiality” in the world (Feser 180). While the theory is not without challenges, it is an intuitive way to reconcile the notions of being and becoming that occupied early philosophers like Heraclitus and Parmenides (Sproul 20-22). A full defense of hylomorphism is beyond the scope of this project, but a more in-depth exposition can be found in Feser (2024).
An additional aspect of hylomorphism is needed to understand its relation to artificial intelligence: the distinction between substantial and accidental form. Substantial form has to do with that which is created from prime matter, or matter without form (Hochschild 26). This is admittedly hard to imagine. Everything with substantial form (a substance) is made directly from prime matter, and not by combining other substances. Again, this statement is counterintuitive on its face but makes sense upon inspection. What is the difference between a sperm and egg glued together and a zygote? The former has accidental form, and the latter has substantial form. The distinction in proper terms is that a thing with substantial form “has properties and causal powers that are irreducible to those of its parts,” while a thing with accidental form does not (Feser 183). A sperm and egg glued together are still just a sperm and egg. A zygote is no longer a sperm and egg. It has new properties and causal powers.
Most naturally occurring things have substantial form, while most man-made objects have only accidental form. Water, an oak tree, and a dog all have substantial form, because their attributes can’t be reduced to the sum of other substances. All of these things have a sort of existential inertia by which they remain what they are and function as themselves. A computer, on the other hand, has only accidental form—a computer’s causal powers can be reduced to the function of the other substantial forms contained within it (the materials within the semiconductors, LEDs, and other parts). There is no need to refer to causal powers beyond those of the materials to explain or define what is happening in the computer. The same is not true of the acorn. While the acorn contains cells with their own function, the ability to grow into an oak tree is not reducible to those cells and can only be explained by looking at the acorn holistically.
Let us consider the distinction between the form of non-living and living substances. It is intuitively clear that there is a difference between a rock and a roly-poly, beyond the fact that they are composed of different chemical elements. The bug is alive. It has been common, as science has become increasingly materialist, to minimize the distinction between living and non-living things (Carroll). There is, however, a difference.
This difference, according to Aristotelian-Thomistic metaphysics, is that all living things possess a soul—not an ethereal, otherworldly blue shape superimposed over the physical living thing, but an animating principle. The soul is the form of a living thing (Feser 494). No more than this. The difference between something living and something dead is that the living thing has a certain form: that of a living thing. It is often difficult for the Westerner living in a post-Cartesian world to think of the soul defined this broadly, but the Hebrew and Greek words nephesh and psychē, often translated “soul,” refer simply to the animating principle of a living thing, not our modern conception of the human soul (Feser 494). The most important distinction between the two views is that the Aristotelian soul possesses a “unity not a duality” (Boylan). It is not that a person possesses or inhabits a body, but that a person is a unity of body and soul. With our conception of the soul disambiguated, let us explore why the counterintuitive notion that every living thing possesses a soul has powerful explanatory power and is not as strange as it first appears.
The idea that any living thing possesses an animating principle (or soul) means that it is self-moving (Feser 210). This is the traditional distinction between the living and non-living. The best way to draw this distinction is by asking whether a substance is self-moving (possessing immanent causation) or not (possessing only transeunt causation). In this case, “self-moving” does not necessitate movement through space, but rather development toward a certain end.
Edward Feser defines immanent causation as such: “A causal process is immanent when it originates within the agent and terminates within it in a way that tends towards the agent’s own self-perfection or completion” (210). Any living thing acts in such a way as to “complete” the living thing in some way. A complete acorn is a healthy oak tree, a complete wolf is one that exists in a pack and has enough to eat, and a complete human is a rational animal in community with others and God. The causal powers of a living thing stand in stark opposition to the causal powers of something like a boulder. It and other non-living things exhibit only transeunt causation, where an event or state of affairs brings about another event or state of affairs through the object (Chisholm 7). For example, the wind may cause an avalanche, but this is much different from a human beginning one with dynamite. The wind is not an agent and is itself the result of transeunt causation. The avalanche does not “terminate in anything like completion or self-perfection” (Feser 211). On the other hand, the development of the human or acorn will. This distinction between immanent and transeunt causation will become more important as we consider the position of AI within this philosophical framework. For now, we will look at the distinctions between classes of living beings and why these different beings have different standards of self-perfection.
Not all souls are created equal. Aquinas distinguishes three classes of souls, set apart by their causal powers in reference to the organ that performs the action and the object to which the action is directed (Summa Theologiae I q. 78 a. 1). At the very bottom is the vegetative soul, whose “vegetative power acts only on the body to which the soul is united,” according to Aquinas (ibid). Earlier he explains that this power includes those operations of the soul that are “performed by a corporeal organ, and by virtue of a corporeal quality” (ibid). This means that all things which possess a vegetative soul (e.g. plants or fungi) operate on a fully material level. That’s not to say they do not have form, but that their interaction with the world is fully material, and they cannot sense or experience the world in any way.
Animals possess not only vegetative powers, but also the animal powers: sentience, appetite, and locomotion. These powers are distinct from and irreducible to the vegetative powers. For our purposes, the most important is sentience, but any sentient animal has the other two powers to some degree (Feser 227). Sentience can be thought of as the ability to sense and have the sensation of sensing—essentially, consciousness. Animals have differing degrees of sentience, but each one has the experience of qualia (Feser 230). These include “hot and cold, wet and dry” and every other sense (ST I q. 78 a. 1). The exact number of qualia is not important. The most important idea for our purposes is that qualia cannot be adequately accounted for from a reductionist point of view.
Thomas Nagel gives a more modern perspective in his essay, What Is It Like to Be a Bat? He comes to the conclusion that the subjective experience of being a bat is unknowable with a reductionist perspective on mind. He conducts a thought experiment by imagining being a bat:
In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat (Nagel 439).
This issue has come to be known as “the hard problem of consciousness,” a phrase originating from David Chalmers’ seminal work, Facing Up to the Problem of Consciousness. In it, he posits that many of the so-called problems of consciousness are relatively easy to explain, but the hard problem is “how it is that these systems are subjects of experience” (Chalmers 201). While quite difficult to account for through reduction to underlying processes, subjective experience fits nicely into the Aristotelian-Thomistic context, where part of the definition of animality is having qualia. Thousands of pages have been written on this topic, but since the absolutely necessary parts of the animal soul have been covered, we ought to move on to an exploration of the rational soul.
If sentience is not unique to human beings, what makes us distinct from animals? While we ought not beg the question and assume we are qualitatively different from animals, there are excellent philosophical arguments to substantiate this. Central to this idea is the third, highest genus of soul: the rational soul. Thomas writes that “there is yet another genus in the powers of the soul, which genus regards a still more universal object—namely, not only the sensible body, but all being in universal” (ST I q. 78 a. 1). What he means here is that not only can a being with a rational soul sense objects that can be hot or cold or red or blue, it can understand intellectually the concept of “hot” or “blue.” It can comprehend universal concepts, including those of math and logic, that aren’t tied to specific, sensible objects. While the human substance is composed of body and soul, the intellect is fully incorporeal (Van Dyke 190).
This can be shown true by evaluating the nature of thoughts. One of the stronger arguments for the immateriality of the intellect is James Ross’s argument from the indeterminacy of the physical, here articulated by Feser:
1. Formal thought processes can have an exact or unambiguous conceptual content.
2. Nothing material can have an exact or unambiguous conceptual content.
3. So, formal thought processes are not material (Feser 308).
Premise one can be explained like this: whenever one uses addition, one can be completely confident that the result is correct, so long as one does the math right. We can also assume that formal logic will always be valid. For example, if all men are mortal, and Socrates is a man, we can conclude with complete certainty that Socrates is mortal. While the individual premises could be wrong, the structure of the argument will always be valid. To say otherwise is to claim that formal thought processes are not determinate. Denying the determinacy of math and logic undermines essentially all systematized knowledge, or at the very least philosophy and science. Like many skeptical arguments, the argument against Ross’s first premise is self-defeating; to deny that math and logic are determinate assumes that there is some determinate claim being denied. Moreover, the act of denial is itself an act of logic (Feser 314-15). The most the skeptic can do is cast doubt on the argument; they cannot refute it logically.
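The independence of validity from content can even be exhibited mechanically. As a minimal sketch in the Lean proof assistant (the names are illustrative), the syllogism below is certified purely from its form: the domain U and the predicates Man and Mortal are left entirely uninterpreted, yet the proof goes through.

```lean
-- The syllogism is valid for ANY domain U and ANY predicates Man, Mortal:
-- the proof appeals only to the argument's structure, never to the
-- meaning of the terms.
example (U : Type) (Man Mortal : U → Prop) (socrates : U)
    (h1 : ∀ x, Man x → Mortal x)  -- all men are mortal
    (h2 : Man socrates)           -- Socrates is a man
    : Mortal socrates :=
  h1 socrates h2
```

Of course, on the thesis being defended here, the machine checking this proof does not thereby grasp the concept of validity; the determinate content resides in the minds that wrote and read the formalism.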
Premise two is also relatively straightforward. As was just explored, math and logic have determinate conceptual content. The second premise claims that for this to be true, these concepts cannot be defined or carried out materially. If we attempt to use something material to define addition, we cannot know with certainty that we are actually referring to addition and not something else. It may be helpful to consider the inverse. If the human thought process was fully material, would we be able to define or access these universal concepts unambiguously? Premise two says no.
Imagine an adding machine. If it functions correctly, it will be doing what we call adding when two addends are inputted. However, we cannot assume it really is adding, rather than something else—“quadding.” Quadding would be indistinguishable from adding, except that when one of the addends reaches a certain magnitude, the result is 57, even when this would not be the answer using addition (Harris 845). The number 57 and the magnitude of the addend is irrelevant. The point is that there always exists some addend which would produce an answer that would show that one had not been adding but instead quadding. It is impossible to know which operation is being carried out unless one is looking at that machine in light of a determinate concept of addition. In the same way, it would be impossible to tell whether a fully material human mind is adding or quadding, even for the mind itself. This means that no concept could be determinate for that mind, leading to the issues discussed under the first premise.
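The adding/quadding scenario can be sketched in a few lines of code (a hypothetical illustration: the value 57 follows the example above, and the bound is arbitrary, since the argument only requires that some such divergence point exist):

```python
def add(x, y):
    """Ordinary addition."""
    return x + y

def quadd(x, y, bound=10**9):
    """'Quaddition': agrees with addition until an addend reaches the
    bound, after which the result is always 57 (cf. Harris 845)."""
    if x >= bound or y >= bound:
        return 57
    return x + y

# On every input anyone has actually checked, the two functions agree,
# so no finite record of a machine's outputs settles which operation
# its physical states "really" compute.
for x, y in [(2, 3), (10, 7), (123, 456)]:
    assert add(x, y) == quadd(x, y)
```

The point of the sketch is not the particular bound but that some divergent function like quadd always exists; only a determinate concept of addition, held by a mind, distinguishes the two.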
Another instance proving this point is the written or spoken word. Consider the word “music” (the specific word is not important). The organization of letters on the page means something not because the letters themselves have determinate conceptual content, but because it is the agreed upon way to communicate the determinate concept of music. What is fundamental are the concepts of addition and music, as anything material can only represent the concepts, not define them unambiguously. To summarize, nothing material can possess truly determinate conceptual content.
These two premises are brought together in the conclusion. If we cannot be certain that a material machine is adding and not quadding, it does not have determinate conceptual content. The human mind possesses determinate conceptual content (e.g. it can be certain it is adding or reasoning validly; it knows what “music” is). Therefore, the human mind is not a material machine. There are several other arguments for the immateriality of the rational soul, but the argument from the indeterminacy of the physical is especially apt for a discussion of artificial intelligence.
While the Aristotelian-Thomistic conception of the soul seems most reasonable, there are plenty of competing theories. One objection from Christians is that Christian definitions and beliefs should not be tied to a theory formed outside the faith. This is a reasonable objection insofar as it is true that, looking purely at Aristotle, we do not find reference to the image of God or human purpose as image bearers. However, just as non-Christians can do excellent science, so can unbelievers do great philosophy. As Thomas writes, philosophy is the handmaiden of theology (ST I q. 1 a. 5). Truth is not contradictory. Thus, anything true grasped by Aristotle will be in accordance with Biblical truth, and may be used to gain a fuller understanding of the universe. As always, there is much more that could be said, but two main theories of the soul deserve refutation: Cartesian Dualism and denial of the soul altogether.
Cartesian Dualism (or Substance Dualism) holds that the body and mind are two separate substances (Robinson). According to René Descartes, the fully immaterial mind puppeteers the fully material body through the pineal gland in the brain (Robinson). Descartes was influenced by the increasingly mechanistic philosophy and science of his day, and sought to affirm the uniqueness of humanity by focusing on epistemology rather than metaphysics (Blum). While this is a worthy goal, dualism is deeply flawed. It alienates the mind from the world, meaning there is no valid mechanism by which we are able to trust our experiences and perception (Feser 262). Why should our perception of the world be intelligible when there is no reason for our mental image of “a car” to be tied to an actual car? On the contrary, if a person is made up of form and matter as soul and body, this removes that causal gap. As has been covered, soul and body are two necessary aspects of a single substance, not two separate substances that are linked together in some way (e.g. through the pineal gland). This view allows for the world to be knowable because the self is directly interfacing with reality and because perception of reality is tied—causally—to reality.
There are many routes the denier of the soul can take, but the most common view is reductionism, which denies hylomorphism and claims that the functions of the soul can be fully explained by material processes. It has already been shown that human rationality is only possible with an incorporeal soul, meaning rationality without such a soul is impossible. This means that the reductionist ought only trust their reasoning because it seems to have worked in the past rather than because it is exercising formal logic or entertaining determinate concepts. They thus have no business making arguments against the existence of the soul that advocate for stances beyond agnosticism on the issue. Even if the reductionist accepts this, the issues do not end there. Because reductionism holds that the processes of the mind are reducible to the transeunt causal processes of molecules within the brain, the reductionist must hold that there is no qualitative difference between what happens in a living, thinking human being and what happens in the water flowing down a stream. This entails no free will or moral responsibility, and is contrary to the conscious human experience. The issue of potentiality and actuality that hylomorphism puts to rest then needs to be answered in another way as well. Now that the rationality and animality of the human and the immateriality of the soul have been established, we have everything needed to establish the impossibility of artificial rational animals and a material intellect.
The Impossibility of Artificial Souls
It is not only improbable for an artificial agent to possess a soul, but impossible under the framework established above. I do not mean to establish a tautology or frame AI out of the picture as a way to establish the impossibility of an artificial soul. It would be illogical to use Thomistic philosophy only because it best disproves artificial personhood. I am making the inverse argument. Thomistic philosophy best explains the soul, especially human rationality, so the same principles ought to be applied to AI. These principles show that an artificial soul and intellect are impossible.
One does not simply create an artificial soul. Remember that a soul exhibits immanent causation and is part of a substance as the form of the matter that makes up the substance. The animating principle of every living thing is the soul, so to create an ensouled artificial intelligence requires both that the man-made object have substantial form and that it be alive.
Any AI system is an artifact, and so cannot have substantial form. In this context, an artifact is the combination of multiple substantial forms into an object with accidental form (Feser 182). A car is a great example of an artifact. Everything about it is reducible to the causal powers of the individual substances within it (the wheels, the tires, the gas). With few exceptions, everything man-made falls under the category of artifact (Feser 182).
While it may be possible to create an object with substantial form, it would be impossible to engineer an ensouled computer. First, anything with substantial form cannot be made up of parts fastened together. The form has priority, in that it “comes first” and actualizes the matter (Goyette 787). One cannot assemble independent substances together and expect that a single substance will emerge. In the same way that Frankenstein’s monster is science fiction, so is the idea that combining parts can yield something with substantial form. It is true that living things have parts. However, “the form does not result from assembling the parts” in the case of living beings (Goyette 787). Secondly, any living being by nature possesses immanent causation and thus has the ability to “self-perfect.” In this way, any part within it will be ordered by the form and does not on its own possess substantial form (Rooney 1). The form causes the parts to function together as one being, not like the transeunt causation of a computer where each function the computer performs is mechanistic and caused by what came before it (quantum computers are a bit different, but the same principle applies). To put it very simply: we cannot combine distinct, substantial parts and expect that thing to become self-perfecting. This is contrary to Thomistic metaphysics as well as the human experience. There is a difference between a live person and a dead body, even though all the parts remain. Something went wrong within the person such that the matter that made them up could not sustain a soul any longer. Every living being begins with the causal powers and organization of a soul, so we cannot put parts together and expect that it will then have a soul.
Suppose we try to bypass this issue and jump straight to the creation of an artificial intellect, a machine that truly thinks; the issues do not get easier. As was shown earlier, an artificial intelligence that would threaten human uniqueness would be one that does not merely appear rational, but is rational. As stated above, rationality is a function of the soul, and so would not be possible for something that exhibits only transeunt causation. Feser gives three issues that any theory of artificial intellect must account for: "grasping concepts, putting concepts together into propositions, and reasoning logically from one proposition to another" (403). These three capacities build on each other, so we ought to begin with the first.
We have no reason to assume an artificial intelligence can grasp concepts, for the same reason I do not believe my laptop understands the words I am currently typing. While Large Language Models and other machine learning models work slightly differently from the typical computer, critiques of computational intelligence apply to them as well. A computer follows a series of if-then algorithmic rules using electrical signals. Take a calculator, for example. It is possible to run a complicated program to find the derivative of a polynomial, but that does not entail that the calculator following the algorithm understands differentiation (Feser 406). In fact, AI is unintelligent to the degree that not only does it not understand what the program is doing, but it also cannot understand that it is running a program in the first place (Feser 407). If a human is asked to follow a series of steps, they may not understand where the steps are leading, but at the very least they understand the steps themselves and exercise their will to carry them out. A computer does not understand the steps and certainly does not have a will. It follows them passively and transeuntly. No matter how powerful the processor in the computer becomes, it will not suddenly begin "thinking" rather than "processing" once it reaches a certain threshold. Recall also the earlier discussion of quaddition and addition: a computer is fully material, and nothing material can possess determinate conceptual content. Thus AI cannot possess an intellect for the same reason that our intellects must be immaterial.
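The calculator example can be made concrete. The following sketch (my own illustration, not drawn from Feser) differentiates a polynomial by blindly applying the power rule as a symbol-shuffling procedure; at no point does anything in the program "grasp" what a derivative is.

```python
# A minimal sketch of purely mechanical symbolic differentiation.
# A polynomial is represented as a list of coefficients [c0, c1, c2, ...]
# standing for c0 + c1*x + c2*x^2 + ... . The program applies the power
# rule d/dx(c*x^n) = c*n*x^(n-1) as a list-manipulation rule, nothing more.

def differentiate(coeffs):
    # The coefficient of x^(n-1) in the derivative is n times the
    # coefficient of x^n in the input; the constant term drops out.
    return [n * c for n, c in enumerate(coeffs)][1:] or [0]

# 3 + 2x + 5x^2  ->  2 + 10x
print(differentiate([3, 2, 5]))  # [2, 10]
```

The program reliably produces correct derivatives, yet it is exhaustively described by the formal rule it follows; there is no further fact about it in virtue of which it understands differentiation rather than merely instantiating it.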
In its hardware and software (and certainly in public perception), AI is distinct from the traditional computer. LLMs operate using neural nets, which aim to simulate the processing of neurons in the brain and operate in parallel rather than serially (Feser 412). They are also trained on "large-scale datasets of text" in order to predict what ought to come next when writing a sentence or paragraph (Feuerriegel et al. 114). This sort of AI is impressive: it can generate incredibly plausible and often true responses to anything a user may ask. However, it operates through prediction rather than logical reasoning, meaning it is not truly "thinking." This is not an issue that will improve with time, especially with LLMs (Feuerriegel et al. 117). Generative AI is probabilistic by nature, and so, just like any program, will not "become rational" once it reaches a certain size.
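The point about prediction can likewise be illustrated in miniature. The toy model below (my own sketch; real LLMs use neural networks trained on vast corpora, not bigram counts) generates text by choosing the word most often observed to follow the current word. The principle is the same: output is selected by learned statistics, not by reasoning about truth.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in the
# training text, then "generate" by picking the most frequent successor.
# No representation of meaning or truth is involved anywhere.

def train_bigrams(text):
    words = text.split()
    following = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        following[w][nxt] += 1
    return following

def predict_next(model, word):
    # Return the statistically most likely continuation, or None.
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

Scaling this up, with richer statistics and billions of parameters, changes how plausible the output is, not what kind of process produces it.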
These issues raised are not novel. Professor Geoffrey Jefferson presciently addresses this AI optimism, which was prevalent even in 1949:
It is not enough, therefore, to build a machine that could use words (if that were possible), it would have to be able to create concepts and to find for itself suitable words in which to express additions to knowledge that it brought about. Otherwise it would be no more than a cleverer parrot, an improvement on the typewriting monkeys which would accidentally in the course of centuries write Hamlet (1110).
Although we live in a world where a fully AI-generated paper can pass a peer-review process, such an AI cannot and will not create concepts (Sakana AI). Jefferson continues, noting that even the machines of 1949 needed "very intelligent staffs to feed them with the right problems," anticipating modern prompt engineering (1110). The reason truly intelligent machines have not appeared in the decades since Jefferson wrote is that what prevents their creation is not a technological problem but an ontological one.
Because LLMs are not living substances, they cannot be sentient. Therefore, the question of whether an AI can be rational is shown invalid even before it can be raised, as substantial being is a prerequisite for rationality. If, for the sake of argument, we allow for an intellect without this prerequisite, machine rationality can still be shown impossible through the fact that LLMs are material and predictive.
The argument could be made that these proofs claiming that an artificial intellect could not exist overstep the proper bounds of philosophical knowledge. This counter-argument aims to dismiss rather than refute the arguments raised. Turing’s analysis addressed earlier falls in this vein (442). He says in response to Jefferson's argument, which Turing names The Argument from Consciousness, that he “does not think these mysteries [of consciousness] necessarily need to be solved before we can answer the question” of whether machines can think (447). Turing claims that the only way we know anyone is conscious is by the fact that they appear conscious, framing Jefferson’s position as solipsistic. While Turing’s framing is a bit uncharitable and defensive, a softened version of Turing’s pragmatic view is quite common among AI optimists.
We have already seen the issues with this. A calculator simulates the behavior of the human intellect, yet it will never know what a number is. The alternative to Turing's view, therefore, is not solipsism, as Turing claims; it is to see philosophy and logic as a valid source of knowledge. Turing and others assume that the philosophical claims will someday be proven wrong, while at the same time leaving them largely unaddressed. As Feser explains, current quantum physics, biology, and neuroscience all support a Thomistic worldview, for reasons that go beyond the scope of this paper. The Thomistic philosophical framework has proved strikingly resilient against past threats, so it is only logical to apply it to present and future concerns as well. However, if AI-personhood empiricists demand Cartesian certainty before they concede that AI cannot be a person, they will be disappointed. Every argument is by nature dubitable, so to require absolute certainty will only lead to radical skepticism. In light of this, the arguments presented ought to be enough to hold the view that AI cannot be a person.
Implications for Christianity
The central conclusion that can be drawn from this paper is that the proper Christian conception of the soul remains safe from philosophical and empirical threats. That is not to say that Christians should dismiss radical ideas about AI personhood out of hand. Turing presents an intense critique of an argument he names The Theological Objection (443). He summarizes the objection as follows:
Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think (443).
This objection is framed in Cartesian dualist terms. Turing responds by saying that if a soul is something God implants into a being, there seems to be no issue with God giving a machine a soul if he saw fit. This is an issue for the Cartesian, but not for the Thomist. If the soul is viewed as an intrinsic animating principle and the form of the body, rather than as a self implanted into a body-puppet, Turing's response loses its force.
Later on, Turing appeals to the failure of theological proof-texting to refute Copernican theory (444). While he is correct to critique that sort of refutation, the position of the Catholic Church at the time of Galileo is hardly the paragon of the Christian relationship to scientific advancement. Galileo himself offers a lengthy rebuttal to the Church's exegesis in his letter to Grand Duchess Christina of Tuscany. He first affirms "that the holy Bible can never speak untruth—whenever its true meaning is understood," but that "if one were always to confine oneself to the unadorned grammatical meaning, one might fall into error" (Galilei). He continues by explaining that, when describing the physical world, the Bible speaks in terms familiar to the common people of its time, so that they would not be turned from theological truth by explanations of the natural world ahead of their understanding (Galilei). The central thrust of Galileo's argument is that truth in one field will not contradict truth in another. This is as true for the development of AI, philosophy, and theology as it was for heliocentrism and theology. Although mysteries of faith ought not be minimized, the existence and nature of the soul is not a mystery, and thus can and ought to be defended by more than proof texts.
The current view among Christian denominations is broadly consistent with the conclusions of this paper. While many do not frame human personhood in the Thomistic way, each affirms human uniqueness. The Southern Baptist Convention's Ethics & Religious Liberty Commission issued a statement in which they "affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation" (ERLC). The statement claims to speak for evangelicals generally, and it appears representative. The ecumenical World Council of Churches' statement on AI focuses on the ethical implications of AI use rather than the nature of the technology itself, yet still implicitly affirms the dignity of humanity (WCC). As one might expect, the most comprehensive ecclesiastical defense of the Thomistic position on AI is that of the Catholic Church. It affirms that human beings are called to be rational, embodied, relational, truth-seeking stewards of the world, which includes rightly defining and managing AI (Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education). Each ecclesiastical view is generally consistent with Biblical truth and the principles presented here. So long as Christians do not repeat the errors of the Galileo Affair, a healthy dialogue between the sciences and theology is not only possible but probable.
Conclusion
In conclusion, the supposed threat of AI to the proper understanding of human uniqueness does not stand up under scrutiny. This is not to say AI ought to be minimized altogether. It has carried out, and will certainly continue to carry out, tasks we once believed only humans could do. We live in a world in which people prefer artificially generated poems to those of human greats (Porter & Machery). It is probable that AI will become increasingly entrenched in our daily lives. We may reach a day when it is seriously difficult to affirm the uniqueness of human rationality. And yet, that rationality itself urges us to do so. Just as a many-sided polygon will never be a circle, so too will AI never be truly human (Feser 216). We were created in the image of God, so the danger is not that machines may compromise that fact by rising to meet us at the throne of God, but that humanity may be dragged away from the foot of the cross. We must recognize that "the danger is not in the multiplication of machines, but in the ever-increasing number of men accustomed from their childhood to desire only what machines can give" (Bernanos 829). Like any groundbreaking technology, AI may augment humans, replace humans, or even alter humans, but it will never be human.
Some may say this leaves humanity in a place of unsteadiness. The reductionist would affirm that this is just as well, that the believer in the Imago Dei has been deluded and is finally being proven wrong beyond a doubt. This is a hyperbolic framing. Just as the stock market occasionally goes through a correction, some of the current Christian unease can be attributed to a correction on human uniqueness, from a point of pride to a more reasonable place. The point of this essay is not to encourage humanity to puff up its chest and deny that AI could threaten human ability. Human uniqueness ought not be equated with ability. If human worth were best measured by ability, social Darwinism would be the norm; if one is not prepared to accept social Darwinism, one ought not worry about humanity's worth. AI does not even threaten to surpass the worth of an acorn. No matter how often or emphatically the alchemist boasts of his ability, he will never truly succeed in changing lead to gold, nor circuitry to a soul.
Works Cited
“The AI Scientist Generates Its First Peer-Reviewed Scientific Publication.” Sakana AI, 12 Mar. 2025, sakana.ai/ai-scientist-first-publication/.
Amodei, Dario. "Machines of Loving Grace." Oct. 2024, http://darioamodei.com/machines-of-loving-grace. Accessed 23 Mar. 2025.
Aristotle, and H. Rackham. Aristotle in 23 Volumes. Vol 19, the Nicomachean Ethics. Harvard University Press, 1934, http://data.perseus.org/citations/urn:cts:greekLit:tlg0086.tlg010.perseus-eng1:6. Accessed 21 Mar. 2025.
Bernanos, Georges. "La révolution de la liberté." Le Chemin de la Croix-des-Âmes, Rocher, 1987, p. 829.
Bjork, Russell C. "Artificial intelligence and the soul." Perspectives on Science and Christian Faith, vol. 60, no. 2, June 2008, pp. 95+. Gale Academic OneFile, link.gale.com/apps/doc/A179693744/AONE?u=anon~afed192d&sid=googleScholar&xid=31f8ec7e. Accessed 23 Jan. 2025.
Blum, Paul Richard. "Substance dualism in Descartes." Introduction to philosophy: Philosophy of mind, Rebus Press, 2019, https://press.rebus.community/intro-to-phil-of-mind/chapter/substance-dualism-in-descartes-2/.
Boylan, Michael. “Aristotle: Biology.” Internet Encyclopedia of Philosophy, https://iep.utm.edu/aristotle-biology/#H5. Accessed 4 Mar. 2025.
Carroll, William. “The Limits of Life: Biology and the Philosophy of Nature.” Public Discourse, The Witherspoon Institute, 26 Feb. 2014, www.thepublicdiscourse.com/2014/02/12102/.
Chalmers, David J. "Facing Up to the Problem of Consciousness." Journal of Consciousness Studies, vol. 2, no. 3, 1995, pp. 200-219.
Chisholm, Roderick M. "Human freedom and the self." The Lindley Lecture. University of Kansas Philosophy Department, 1964.
Colombatto, Clara, and Stephen M. Fleming. "Folk Psychological Attributions of Consciousness to Large Language Models." Neuroscience of Consciousness, vol. 2024, no. 1, 2024, niae013.
Dicastery for the Doctrine of the Faith and Dicastery for Culture and Education. "Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence." Vatican: The Holy See. 28 Jan. 2025, www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html. Accessed 24 Mar. 2025.
Eberl, Jason T. "Aquinas on the nature of human beings." The Review of Metaphysics (2004): 333-365.
ERLC Staff. “Artificial Intelligence: An Evangelical Statement of Principles.” The Ethics & Religious Liberty Commission of the Southern Baptist Convention, 11 Apr. 2019, https://erlc.com/policy-content/artificial-intelligence-an-evangelical-statement-of-principles/.
Feser, Edward. Immortal Souls: A Treatise on Human Nature. Editiones Scholasticae, 2024.
Feuerriegel, Stefan, et al. "Generative AI." Business & Information Systems Engineering, vol. 66, no. 1, 2024, pp. 111-126.
Galilei, Galileo. "Letter to the Grand Duchess Christina of Tuscany, 1615." Modern History Sourcebook, Fordham University, https://sourcebooks.fordham.edu/mod/galileo-tuscany.asp. Accessed 24 Mar. 2025.
Goyette, John. "St. Thomas on the Unity of Substantial Form." Nova et Vetera (English Edition) 7.4 (2009).
Harris, Joshua Lee. "Indeterminacy and the Immateriality of Thought." Revista Portuguesa de Filosofia vol. 80, Fasc. 3, 2024, pp. 841-862.
Hochschild, Joshua. "Form, Essence, Soul: Distinguishing Principles of Thomistic Metaphysics." Distinctions of Being: Philosophical Approaches to Reality. American Maritain Association, 2012, pp. 21-35.
Jefferson, Geoffrey. "The Mind of Mechanical Man." British Medical Journal, vol. 1, no. 4616, 1949, pp. 1105-1110.
Jones, Cameron R. and Benjamin K. Bergen. "Large Language Models Pass the Turing Test." arXiv, 31 Mar. 2025, doi:10.48550/arXiv.2503.23674. Accessed 7 Apr. 2025.
Manning, Christopher. Artificial Intelligence Definitions. Stanford University Human-Centered Artificial Intelligence, September 2020, https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf.
Nagel, Thomas. "What Is It Like to Be a Bat?" The Philosophical Review, vol. 83, no. 4, Oct., 1974, pp. 435-450, http://www.jstor.org/stable/2183914. Accessed 23 Jan. 2025.
Porter, Brian, and Edouard Machery. "AI-Generated Poetry Is Indistinguishable from Human-Written Poetry and Is Rated More Favorably." Scientific Reports, vol. 14, 2024, article 26133.
Robinson, Howard. “Dualism.” The Stanford Encyclopedia of Philosophy (Spring 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.), https://plato.stanford.edu/archives/spr2023/entries/dualism/.
Sproul, R. C. The Consequences of Ideas. Crossway, 2000.
Thomas Aquinas. Summa Theologica. Translated by Fathers of the English Dominican Province. Westminster, MD: Christian Classics, 1981.
Turing, A.M. “Computing machinery and intelligence.” Mind, vol. 59, no. 236, 1950, pp. 433-460.
Van Dyke, Christina. "Not properly a person: The rational soul and ‘thomistic substance dualism’." Faith and philosophy 26.2 (2009): 186-204.
WCC Central Committee. "Statement on the Unregulated Development of Artificial Intelligence." World Council of Churches, 24 Mar. 2025, www.oikoumene.org/resources/documents/statement-on-the-unregulated-development-of-artificial-intelligence#_ftnref6. Accessed 24 Mar. 2025.