
Theory of Mind in Human Infants

1 Introduction


Many non-human animals can use perceived behavior to alter their own subsequent actions. Humans, however, appear to go beyond merely reacting: they actively attribute the observed behavior to an underlying ‘mind’ and thus attempt to make sense of it (Fitch 2010). Understanding this mind opens the door to predicting hitherto unobserved behavior on the basis of what has previously been observed - a strategy often referred to as the intentional stance (Dennett 1988). The capacity to take such a stance at all may be termed Theory of Mind (Wellman 2018).


Once assumed to be found only in humans, theory of mind (ToM) is considered to be the ability to attribute beliefs and desires to another individual. It is crucial to various domains relevant to our existence, language being one of them, and it is also the capacity upon which the intentional stance is predicated (Jackendoff 2010; Fitch 2010; Dennett 1989). When and how theory of mind develops ontogenetically is thus of special interest to various disciplines, not least to evolutionary linguistics, for which a thorough understanding of human ontogeny is required as a basis for comparative efforts (Townsend et al. 2016). While ToM was initially thought to be present only in children from the age of four on, its proposed onset has since been pushed steadily towards infancy, making infant ToM a divisive and hotly debated focus of developmental cognitive science (Milligan et al. 2007; Sodian 2011; Rakoczy 2012; Wellman 2018; Poulin-Dubois et al. 2018). This review attempts to clarify the issue and to contextualize the viewpoints involved in the current debate.


2 Intentionality and Theory of Mind


2.1 The Intentional Stance


First coined by the philosopher Daniel Dennett, the concept of the intentional stance fits into a larger framework of three strategies (or stances) which humans use to predict other entities’ behavior. Observing the physical state of an entity and deducing its future actions by means of knowledge about the physical world and natural law is termed the physical stance (Dennett 1971). This strategy is applied by everybody in a multitude of everyday settings; simply acknowledging gravity as a force and accordingly not placing an object that may roll onto a sloped surface (unless that movement is desired) constitutes an example of taking the physical stance. According to Dennett, the design stance denotes prediction based on knowledge of how the entity is designed to behave. Predicting a chess computer’s next move based on knowledge of the algorithms under which it operates would illustrate the taking of a design stance.


Finally, the third strategy proposed, the intentional stance, is characterized by Dennett as follows:


“The intentional stance is the strategy of prediction and explanation that attributes beliefs, desires, and other “intentional” states to systems - living and non-living - and predicts future behavior from what it would be rational for an agent to do, given those beliefs and desires.” (Dennett 1988)


Central to this stance is attributing, to the entity whose behavior is to be predicted, a mind capable of holding beliefs and desires - and thus the ability to harbor intentions derived from them. Furthermore, the object of observation must be assumed to be a rational actor, behaving in a goal-oriented way (Dennett 1971). The actor is then assumed to have intent, and this is the key to predicting its behavior.

On a side note, it may be added that the design stance includes an implicit attribution of intentionality - not on the part of the actor, but on the level of the designer. Taking the design stance thus requires an indirect understanding that the entity whose behavior is to be predicted is acting in accordance with the intention its designer endowed it with. It follows that the design stance cannot function without abstracted intentionality, whereas the physical stance can.

Revisiting the aforementioned analogy of chess, the prediction of a next move can be attempted with any of the three stances. A physical approach would involve the circuitry of the computer and an understanding of the entire chain of causation leading to the decision for every move - clearly impractical in such a case. The intentional stance, however, would involve assuming that the chess computer has goals, such as winning the game. To achieve that, it must pursue shorter-term goals, such as blocking the bishop’s advance in a defensive maneuver. Thus, in using the intentional stance to predict the next move, one ascribes desires and beliefs to the chess computer and subsequently infers its intent.


2.2 Attribution and Prediction


How the intentional stance and theory of mind are interrelated, if they are not indeed the same thing, is dealt with inconsistently in the literature (Rakoczy 2012). On the one hand, these closely linked notions appear to be used as different terms for the same phenomenon (Gergely et al. (1995), for example, appear to employ the expressions interchangeably). On the other hand, authors such as Dennett (1991) do not address ‘theory of mind’ per se, nor its relation to the intentional stance. In fact, Dennett has stated his dislike for the term ‘theory of mind’ and implied that it is synonymous with, or replaceable by, his ‘intentional stance’ (Jahme 2013).

Returning to the seminal paper which introduced the term, Premack & Woodruff (1978) state that “in saying that an individual has a theory of mind, we mean that the individual imputes mental states to himself and to others (either to conspecifics or to other species as well)”. Here the distinction from the intentional stance seems clear: the latter includes the subsequent prediction and explanation of behavior based on the mental state attributed. However, the phrase which appears to be the source of at least some of the confusion can be found a few lines down: “It seems beyond question that purpose or intention is the state we impute most widely” (Premack & Woodruff 1978). According to Gärdenfors (2009), “adopting the intentional stance means that desires, beliefs and goals are allowed as causes. By ascribing these entities as causal factors to other individuals, we explain their behaviour”. This view clarifies how the two concepts can be reconciled in terms of how a rational actor’s behavior is explained. The question that must then be raised is whether imputing intention qualifies as prediction.

The attribution of a mental state to a rational actor entails acknowledgment of consciousness. Even in the case of the chess computer, a certain consciousness is attributed, albeit not in the same thorough way as with conspecifics. To then predict behavior on the basis of imputed desires requires imagining oneself in the other’s place, assuming that which appears rational is equally seen as rational by the counterpart (Ayer 1956). The more different the base assumptions are, the more challenging this becomes (Nagel 1974; Csibra 2017).

Under the assumption that a rational actor will act in a certain way, presupposing an intentional mental state can be seen, all else being equal, as constituting a prediction. In most cases, however, it is likely to be an inaccurate and clumsy one, as attributing the mental state ‘X desires Y’ is a far cry from directly inferring ‘X will pursue Y’; mitigating circumstances might be involved (Dennett 1978). In this highly simplified case, the ceteris paribus clause appears to carry too much of the argument’s weight. Yet, provided that more information about the world, about the entity whose behavior is to be predicted, and about that entity’s view of the world is available to the individual attempting a prediction, the prediction might be successful. What exactly is needed to make a prediction varies from case to case.


A helpful framework for classifying the possibilities is provided by the orders of intentionality, beginning at the zeroth level and, in theory, being open-ended. A system lacking mental states altogether, and for which the presupposition of a mental state is not necessary to predict its behavior, is termed a zeroth-order intentional system. This level includes a diverse range of natural objects, but potentially also many living organisms (Fitch 2010). To predict such an object’s behavior, the physical stance may be better suited, as when attempting to predict in which direction a rock will roll when dropped on a hillside. First-order intentionality, then, denotes a system “whose behavior is predictable by attributing (simple) beliefs and desires to it” (Dennett 2009). This corresponds to many other animals with some sort of consciousness and a comparatively larger brain; they have mental states, and cognitive ethology operates under this assumption. Holding a belief about another entity’s mental state constitutes second-order intentionality, a characteristic of human cognition (Fitch 2010). In this case, the agent whose behavior is to be predicted is considered a rational actor with a mental representation of the world and an intentional mental state. If the actor’s information state about the world corresponds to reality, and the predictor’s representations of both that state and the actor’s intention are accurate, a successful prediction is possible (Sodian 2011). It is this that the aforementioned ceteris paribus clause de facto contains.
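The nesting of these orders can be made explicit in a simple bracket notation. The following is merely an illustrative shorthand, anticipating the schema (adapted from Rakoczy (2012)) used in Section 3; it is not Dennett’s own notation:

zeroth order: (p) - no mental state is involved; the system simply is in, or brings about, the situation p
first order: agent R (p) - the agent stands in some intentional relation R (believing, desiring, ...) to p
second order: observer R1 [ agent R2 (p) ] - the observer stands in a relation R1 to the agent’s standing in a relation R2 to p

Each additional order embeds the previous one, which is why the hierarchy is, in principle, open-ended.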

On a simple level, attribution of intention is sufficient to predict an agent’s actions. On a deeper level of analysis, however, the agent has a mental representation of the world in which it acts, which in turn interacts with the beliefs, desires, and intentional mental states it holds. Yet it must be noted that this is not a logical biconditional relation: the mere prediction of behavior is no sure indicator of the attribution of intention, as there are other possible stances which enable prediction (Dennett 1971, 1978; Bloom & German 2000). Fully taking the intentional stance thus requires the kind of second-order intentionality exhibited by this case - termed a representational theory of mind (Sodian 2011).


2.3 Theory of Mind


As previously touched upon (see Section 2.2), the definition of ‘theory of mind’ is by no means obvious, nor is it uniform in the literature. The definition with which Premack & Woodruff (1978) coined the term (described above) leaves room for interpretation and potential inaccuracy; Premack (1988) later clarified it. Unfortunately, the clarification runs contrary to the distinctions made above:


“In 1978, we raised the question ‘Does the chimpanzee have a theory of mind?’ by which we meant, does the ape do what humans do: attribute states of mind to the other one, and use these states to predict and explain the behaviour of the other one.” (Premack 1988)


The situation would be much clearer if Premack & Woodruff (1978) had plainly stated their intention for, and usage of, ‘theory of mind’ - essentially equal to the intentional stance. From a primatological point of view, it makes sense to include prediction in the definition, as without it, testing the hypothesis would be significantly more challenging. It is thus possible that the element of prediction was incorporated out of experimental practicality, while from a philosophical or psychological definitional perspective it need not necessarily be included. It might be more serviceable to terminologically separate the level of attribution of intention from the subsequent level of prediction. As it stands, the inaccuracy remains, since a simple attribution of a mental state is not congruent with making a prediction, as discussed in the previous section. In the literature, however, many authors either neglect to define their usage of ‘theory of mind’ (Fitch 2010; McMahon & McMahon 2012; Poulin-Dubois et al. 2018), avoid the term (Song et al. 2008), or clearly state their definition to be the attribution of mental states or beliefs, excluding prediction as a necessary characteristic (Wimmer & Perner 1983; Baron-Cohen et al. 1985; Byrne 1996; Mosconi et al. 2005; Carruthers 2020). Some authors might implicitly take prediction to be equal to attributing intent, while others clearly state, in line with Premack (1988), that ToM encompasses both attribution and subsequent prediction (Blijd-Hoogewys & van Geert 2017). Owing to the definition of the term given in Premack & Woodruff (1978), as well as its usage and adoption from anthropology to psychology (Wimmer & Perner 1983; Baron-Cohen et al. 1985), in the present paper ‘theory of mind’ is taken to mean the attribution of a mental state, excluding explicit prediction, but including ‘belief’ and ‘intent’ as attributable mental states. ToM is thus placed just short of Dennett’s intentional stance, as a prerequisite for taking that stance, because the latter explicitly contains prediction.


2.4 Further Levels of Description


The (apparent) inconsistencies pertaining to the usage of terms mentioned in Section 2.2 are not limited to high-level terms such as ‘theory of mind’ or the ‘intentional stance’. Rakoczy (2012) suggests that there is, in reality, less conflict between positions within the field of ToM development than there appears to be.


“Researchers pro and con infant theory of mind respond actually to different readings of the question, giving answers seemingly in conflict with each other but actually quite compatible – answers to different questions.” (Rakoczy 2012)


Conceptual clarity appears to be of critical importance in this debate and, as has been the case in various other fields (e.g. Plesser (2018)), could reconcile conflicting positions.


One distinction that needs to be made is between personal and subpersonal levels of description or explanation, a concept that also goes back to Dennett (Drayson 2014). The first is more general and belongs to the realm of what is known as ‘folk psychology’: “At this level people are said to act (rather than simply behave)” (von Eckardt 2012). Individuals are described as possessing mental states, propositional attitudes, and emotions, as well as exhibiting various cognitive capacities, such as perceiving, understanding and speaking language, or remembering. Crucially, these states are attributed to the whole person and to the actions of the whole person (Rakoczy 2012).

The subpersonal level, however, is more abstract and ‘scientific’, as it deals with “information-processing and neurophysiological descriptions and explanations common in cognitive science, and describe[s] parts of the information processing system or the brain” (Rakoczy 2012). An example of a term which is ambiguously situated within both levels of description is ‘representation’ (Rakoczy 2012; von Eckardt 2012).


A further distinction, pointed out by Rakoczy (2012), concerns ‘propositional attitudes’ versus ‘subdoxastic states’. The former is a category of intentional mental states, including beliefs, desires, intentions, hopes, and fears, and is termed such because “they involve the having of an attitude to a content or proposition” (Carston 2006). An individual may stand in one of various relations to a ‘representation of a situation’, i.e. a ‘proposition’, of which these mental states are a selection. She can desire that p, hope that p, believe that p, etc. - p standing in for any proposition.


“Propositional attitudes are often explained as functional roles: The belief that p can be explained as the mental representation of p that plays a certain functional role for the thinker’s behavior. If I search for my pencil on the desk, for example, I will do so partly because I believe it is there. The belief that my pencil is on the desk hence plays a certain functional role in my behavior and can therefore be characterized as a belief. Likewise, believing that something is true, probable, possible, false, or supposed can be characterized as different propositional attitudes.” (Vosgerau 2006)


Stich (1978) notes that beliefs are generally inferred from other, more basal beliefs and that, to avoid circularity, there must be a baseline of beliefs not inferred from any other belief. These non-inferred beliefs arise from “a heterogeneous collection of psychological states that play a role in the proximate causal history of beliefs, though they are not beliefs themselves” (Stich 1978), which have been termed subdoxastic, i.e. ‘below the level of belief’ (Rakoczy 2012). An example of a subdoxastic state could be the vague grammatical ‘knowledge’ untrained individuals have, which allows them to distinguish grammatical from ungrammatical phrases in their language without being able to explain why (what is meant here does not pertain to any generative notion of innateness, but simply to mental states which are below the level of consciousness and devoid of representation in the way beliefs are). To define the usage of ‘belief’ here, ‘believing’ shall be taken to be “a relation between an organism and a propositional content, a relation that obtains through the organism’s being related to (“having”) a mental representation which represents that content” (Nelkin 1989). Subdoxastic informational states differ from regular beliefs in three crucial ways, outlined in Rakoczy (2012): (1) beliefs are inferentially integrated, i.e. they combine with other beliefs to form yet new beliefs; (2) beliefs are accessible to consciousness; (3) beliefs necessarily have conceptual content, while the informational content of subdoxastic states does not rely on this condition.


3 Infant Theory of Mind


Using the distinctions and clarifications outlined in Section 2, an attempt may now be made to contextualize infant theory of mind research. In broad terms, there are two camps: those who support the notion of ToM competence in children under the age of 4 (e.g. Onishi & Baillargeon (2005)), and those who reject it (e.g. Wellman et al. (2001); Ruffman & Perner (2005)). Precise definitions of such concepts as ‘belief’ and ‘false belief’ come to bear in this discussion, as will be addressed. Overall, the presence of ToM after the age of 4 (or 5) appears to be uncontroversial (Milligan et al. 2007; Sodian 2011; Wellman 2018). Sodian (2011) distinguishes two conceptual systems, based on a simple or a more sophisticated interpretation (respectively) of what exactly the attribution of intentionality entails, as discussed in Section 2.2. Either a simple goal-oriented intention suffices to explain or predict an agent’s action, or the “attribution of intentional states requires a representational ToM: The agent who has desires and beliefs about a certain state of the world mentally represents this state of the world” (Sodian 2011). This representational model of reality may not be congruent with an inter-subjective representation, or may simply not correspond to the one held by the observer - either way, the observer would interpret this as a false belief.

To better illustrate the relationships between observer, agent, and the mental states involved, the following schematic form (adapted from Rakoczy (2012)) may provide a helpful scaffolding:

observer R1 [ agent R2 (p) ]


The above schema describes an observer who has a “cognitive relation (R1) to the situation that some [agent] has some cognitive relation R2 to a situation (p)” (Rakoczy 2012). In the case of a second-order intentional relation (i.e. a representational ToM), both R1 and R2 are propositional attitudes (Sodian 2011; Rakoczy 2012). The question remains, however, what exactly R1 and R2 are in human infants.
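As a concrete illustration (the wording of the relations is mine, chosen merely for exposition), consider the false-belief scenario discussed in Section 3.2, in which a toy is moved from box A to box B while the agent is absent. Rendered in the schema, a representational ToM would require the infant observer to hold simultaneously:

infant BELIEVES [ agent BELIEVES (the toy is in box A) ]
infant BELIEVES (the toy is in box B)

That is, the observer must represent the agent’s representation of the situation as deviating from what the observer takes to be reality - precisely the “differentiation of mental representation and reality” (Sodian 2011) that false-belief tasks aim to probe.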


3.1 Intentional Action and Goal-Directedness


Various studies indicate that both the understanding and even the anticipation of goal-directed action emerge early in human development. In a landmark paper, Woodward (1998) describes infants as young as 5 months old showing signs of having represented an action in terms of its goal, rather than its spatiotemporal properties. The experimental paradigm involved habituation to a hand reaching for one of two objects. For testing, the objects’ positions were reversed. The infants reacted more strongly to a new-object event than to a new-path event, i.e. the grasping of the same object, which was now in a different location.

Subsequent research within this same experimental paradigm found evidence that infants track an agent’s preferences and also what the observed agent can see (Luo & Baillargeon 2007; Luo & Johnson 2009). Furthermore, Phillips et al. (2002) and Sodian & Thoermer (2004) showed that “infants also encode referential gestures, such as pointing and looking at a specific object, as goal-directed” (Sodian 2011). Subsequent developments in eye-tracking, and the adaptation of the Woodward paradigm to such techniques (Corbetta et al. 2012), further supported the notion of infant understanding of goal-directed action. Cannon & Woodward (2012) reported findings which, they say, indicate that infants use their analysis of goal-directedness “to generate rapid, on-line predictions about others’ next actions when the context has changed” (Cannon & Woodward 2012). Infants also differentiate between animate and inanimate ‘graspers’: 9-month-old infants do not encode the movements of a grasping claw as goal-directed unless they are shown how a human operates the mechanical grasping device; 12-month-olds, however, interpret the claw’s movements as goal-directed regardless (Hofer et al. 2005). Interestingly, the ability to discern the intent or goal-directedness of an action initially depends on the action’s success: infants 10-12 months of age were able to recognize the goal-directedness of an action regardless of its outcome, while 8-month-olds could not do so for unsuccessful attempts (Brandone & Wellman 2009; Brandone et al. 2014). In general, the observation of failed attempts sheds light on the mental states present in infants. Infants of 15-18 months infer goals when imitating a failed goal-directed action, and by the age of 14 months they are able, in an imitation experiment, to distinguish between accidental and intentional actions (Sodian 2011). By the age of 9-11 months, and at the latest at 15 months, infants assign thematic roles to agents, such as ‘giver’ and ‘recipient’ (Schöppner et al. 2006; Tatone et al. 2019).


Mere attribution of goal-directedness cannot be equated with attribution of intentional states. Infants appear to have a certain sensitivity to agency and animacy in actions, but the claim that they understand and predict behavior on the basis of a mental representation of the agent’s beliefs and representations of the world is not supported by such findings alone (Sodian 2011; Wellman 2018). The data from failed goal-directed action, however, do elucidate the issue: the children indeed appear to predict behavior and attribute an intention on some level; their behavior is much harder to explain if this were not the case. There appears, furthermore, to be a strong social component involved in the development of the observed capacity, as triadic social interaction (involving an adult social partner) strongly contributed to the understanding of intentional action (Brandone et al. 2020). A recent study involving 14-month-old infants showed that they “anticipated an individual’s future actions based on her past collaborative behavior” (Krogh-Jespersen et al. 2020).

Sodian (2011) takes such evidence as an indication of the attribution of intentional mental states, claiming that “infants attribute concrete action goals, as well as motivational states and dispositions, to agents, and they integrate a precise representation of an agent’s perception with a representation of the agent’s goals”. Based on the findings outlined above, this statement might be somewhat over-reaching, as there are multiple possible explanations, among them tracing the infants’ behavior back to some subdoxastic understanding of human action, or the attribution of some vague concept of ‘knowledge’ instead of ‘belief’ proper, akin to what has been reported for chimpanzees (Sodian 2011; Kaminski et al. 2008). Specifically, the results in this line of inquiry seem to have shed some light on R1: the child does appear to have some sort of representation of the observed agent. The question remains, however, what R2 is. Depending on what R2 actually is, and on what exact definition of theory of mind one uses, the question of infant ToM can receive different answers. It is here that Rakoczy (2012) sees one of the origins of misunderstanding.


3.2 Epistemic States


As was illustrated in Section 2.2, mere attribution of intent is not necessarily enough to make a prediction of behavior. The predictions described in the previous section are explainable by subdoxastic attitudes, falling short of ‘belief’ - which does not mean, however, that they must be explained that way. The standard method for testing this has become the analysis of false belief, as “reasoning about false beliefs requires a differentiation of mental representation and reality” (Sodian 2011). An influential study was Onishi & Baillargeon (2005), a looking-time and violation-of-expectation paradigm which “used a novel nonverbal task to examine 15-month-old infants’ ability to predict an actor’s behavior on the basis of her true or false belief about a toy’s hiding place” (Onishi & Baillargeon 2005). According to the authors, the results support the notion that “children appeal to mental states — goals, perceptions, and beliefs — to explain the behavior of others” from a young age. Two other studies, Buttelmann et al. (2009, 2014), found that infants’ interpretation of an adult’s behavior differed depending on whether the adult was present when an object was moved (from one box to another, for example). In the first study, an adult tried to open a box (A). Depending on whether the adult had seen a toy being moved from box (A) to another box (B), the infants tested reacted differently. If the adult had observed the transfer, the infant attempted to help the adult open box (A). If not, the infant proceeded to retrieve the toy from box (B). The child thus appeared to infer that the adult in the first scenario, knowing where the toy was, must be looking for something else. This appears to clearly demonstrate a representation of false-belief understanding in 18-month-old infants (Buttelmann et al. 2009; Sodian 2011). A further study, employing a similar paradigm, even concluded that “the results suggest that infants’ false-belief understanding is as sophisticated as that of preschool children” (Buttelmann et al. 2015), negating the four-year age boundary proposed by other authors. Studies such as Onishi & Baillargeon (2005) are based on a violation-of-expectation paradigm; by virtue of this, they indicate whether or not an infant was surprised by the behavior observed, rather than showing the infant’s consideration of an agent’s false belief and its consequences (Sodian 2011). This problem may be addressed by using anticipatory looking as a measure, as was done by studies such as Southgate et al. (2007) and Neumann et al. (2008). Developmental continuity from infancy to preschool age has been proposed, based largely on anticipatory looking measures (Thoermer et al. 2012).


As with many notions in this field of inquiry, there is a strong counterargument. Dörrenberg et al. (2018) conducted a broad replication and validation study, attempting to assess the merits of different approaches to the study of infant theory of mind. The results were not in favor of anticipatory looking, as it seemed not to correlate with other false-belief results. The study’s underlying rationale was that if all these tests purportedly measure the same phenomenon, they should converge and correlate in some systematic way. Kulke et al. (2018) likewise failed to replicate the results presented by Southgate et al. (2007), both in children and in elderly adults. Experiments such as those outlined previously, and crucially the conclusions drawn from them, rely on the notion that representing another’s false belief is crucial to attributing a representational theory of mind (i.e. second-order intentionality) (Sodian 2011). This, however, is not necessarily the case. Firstly, as addressed in Section 2.1 and Section 2.2, mere prediction is not a reliable indicator of second-order intentionality, as other methods might render the same or similar results. Bloom & German (2000) give the following example, which would constitute a case of the physical stance (see Section 2.1): “Suppose A knows the chocolate is in the basket and observes B searching for food. A might expect B to look in the basket, not because A is attributing a belief to B, but because the chocolate actually is in the basket”. The following succinct passage elucidates a further issue with the false-belief paradigm:


“The more serious problem is that false belief tasks are inherently difficult. This is because any false belief task requires, at minimum, that the child reasons about a belief that is false. As Leslie, among others, has pointed out, beliefs are supposed to be true. This is what they are for (Leslie 1994). Hence, even for a child who clearly understands that beliefs can be false, getting the right answer places non-trivial processing demands. To put it another way, to succeed at the false belief task, the child has to override useful and simple heuristics, such as ‘people will act in accord with their desires’ ” (Bloom & German 2000)


It thus appears that passing a false-belief task demands more than what the paradigm intends to test. This makes it very difficult to assess varying reports of performance across its different variations. Sodian (2011) suggests that infants might succeed on belief-based paradigms because of “automatic, subconscious reactions to a set of relevant behavioral and situational cues” - a proposition very much like the subdoxastic states discussed in Section 2.4. Sodian (2011) further suggests the evolutionary importance of predicting and explaining agents’ behavior as a possible explanation for such a subconscious and possibly innate endowment. This line of reasoning, however, is problematic, as it seems to amount to an ill-advised teleological reverse conclusion - all too common in evolutionary psychology. The usefulness of a trait neither necessitates nor explains its evolution. Apart from requiring a demonstrated fitness benefit, the notion proposed here would imply that other social animals perform on a level similar to human infants - a prediction that does find support in some research conducted on other primates (Flombaum & Santos 2005; Krupenye et al. 2016). Further criticism of the false-belief paradigm, such as that found in Bloom & German (2000), appears to be based on differing interpretations of what exactly constitutes ToM, a combination of the elements addressed in Section 2.


To what extent the false-belief paradigm tests for the presence of a theory of mind greatly depends on the definition applied (see Section 2.3). Sodian (2011) states: “If infants possess a ToM, then they should be able to make action predictions based on an agent’s false belief”. This view clearly takes ToM to include a predictive element, and not only the ability to attribute a mental state or ‘belief’. This is a focal point of the criticism leveled against false-belief tasks by Bloom & German (2000) and an illustration of the importance of terminological clarity. It is also anything but clear whether false-belief tasks indicate what exactly R2 is - for the above reasons, this remains ambiguous.


4 Conclusions and Outlook


It remains unclear to what extent infants possess a theory of mind, and the reasons for this are varied. A not inconsiderable amount of the blame may be placed on terminological and theoretical background issues, as discussed in Section 2. It appears that this theoretical foundation, upon which subsequent research rests, must be shared by different authors, or at least that discrepancies must be clearly communicated (or, indeed, first identified). As lamented by Rakoczy (2012), the seemingly conflicting answers given by different researchers may well be answers to different questions. Thus, apart from clarifying the terminological field and clearly categorizing who is saying what about exactly which notion, an overarching interpretation of all these ‘answers’ must be found.

A framework which hopes to break the apparent stalemate proposes a third path. Instead of maintaining either that theory of mind is present before the age of 4 or only after it, various authors suggest a marriage of both ideas, termed a ‘two-systems approach’. A crucial pivot point for distinguishing early versus late ‘ToM’ is proposed to be ‘aspectuality’, illustrated as follows:


“[...] Consider a popular film. Lois Lane is yet to discover that Clark Kent is Superman. She simultaneously believes that Superman is with her and that Clark is elsewhere. She has incompatible beliefs about one and the same person under two different aspects. Only S2 is capable of tracking Lois’s beliefs, which essentially involve aspectuality. Suppose that success on implicit FB tasks is a consequence of S1 only, whereas success on explicit FB tasks requires S2. In that case, infants’ performance should exhibit the limits of S1. Infants may succeed on many implicit FB tasks that do not essentially involve aspectuality, such as the many tasks that involve simple mistakes about location only. But where FB tasks essentially involve aspectuality, infants should not succeed.” (Fizke et al. 2017)


In the study conducted by Fizke et al. (2017), children were tested on aspectuality, and the results appear to indicate support for a two-systems approach to the question of infant ToM. Significantly more research is required, however.


Manuel Rüdisühli


5 References


Andrews, K. 2000. Our Understanding of Other Minds: Theory of Mind and the Intentional Stance. Journal of Consciousness Studies 7(7). 12–24.

Ayer, Alfred Jules. 1956. The Problem of Knowledge. Macmillan.

Baron-Cohen, Simon, Alan M. Leslie & Uta Frith. 1985. Does the Autistic Child Have a “Theory of Mind”? Cognition 21(1). 37–46. doi:10.1016/0010-0277(85)90022-8.

Blijd-Hoogewys, Els M. A. & Paul L. C. van Geert. 2017. Non-Linearities in Theory-of-Mind Development. Frontiers in Psychology 7. doi:10.3389/fpsyg.2016.01970.

Bloom, Paul & Tim P. German. 2000. Two Reasons to Abandon the False Belief Task as a Test of Theory of Mind. Cognition 77(1). B25–B31. doi:10.1016/S0010-0277(00)00096-2.

Brandone, Amanda C., Suzanne R. Horwitz, Richard N. Aslin & Henry M. Wellman. 2014. Infants’ Goal Anticipation during Failed and Successful Reaching Actions. Developmental Science 17(1). 23–34. doi:10.1111/desc.12095.

Brandone, Amanda C., Wyntre Stout & Kelsey Moty. 2020. Triadic Interactions Support Infants’ Emerging Understanding of Intentional Actions. Developmental Science 23(2). e12880. doi:10.1111/desc.12880.

Brandone, Amanda C. & Henry M. Wellman. 2009. You Can’t Always Get What You Want: Infants Understand Failed Goal-Directed Actions. Psychological Science 20(1). 85–91. doi:10.1111/j.1467-9280.2008.02246.x.

Buttelmann, David, Malinda Carpenter & Michael Tomasello. 2009. Eighteen-Month-Old Infants Show False Belief Understanding in an Active Helping Paradigm. Cognition 112(2). 337–342. doi:10.1016/j.cognition.2009.05.006.

Buttelmann, David, Harriet Over, Malinda Carpenter & Michael Tomasello. 2014. Eighteen-Month-Olds Understand False Beliefs in an Unexpected-Contents Task. Journal of Experimental Child Psychology 119. 120–126. doi:10.1016/j.jecp.2013.10.002.

Buttelmann, Frances, Janina Suhrke & David Buttelmann. 2015. What You Get Is What You Believe: Eighteen-Month-Olds Demonstrate Belief Understanding in an Unexpected-Identity Task. Journal of Experimental Child Psychology 131. 94–103. doi:10.1016/j.jecp.2014.11.009.

Byrne, Richard W. 1996. Machiavellian Intelligence. Evolutionary Anthropology: Issues, News, and Reviews 5(5). 172–180.

Cannon, Erin N. & Amanda L. Woodward. 2012. Infants Generate Goal-Based Action Predictions. Developmental Science 15(2). 292–298. doi:10.1111/j.1467-7687.2011.01127.x.

Carruthers, Peter. 2020. Representing the Mind as Such in Infancy. Review of Philosophy and Psychology. doi:10.1007/s13164-020-00491-9.

Carston, Robyn. 2006. Language of Thought. In Keith Brown (ed.), Encyclopedia of Language & Linguistics (Second Edition), 559–561. Oxford: Elsevier. doi:10.1016/B0-08-044854-2/04780-5.

Corbetta, Daniela, Yu Guan & Joshua L. Williams. 2012. Infant Eye-Tracking in the Context of Goal-Directed Actions. Infancy 17(1). 102–125. doi:10.1111/j.1532-7078.2011.00093.x.

Csibra, Gergely. 2017. Cognitive Science: Modelling Theory of Mind. Nature Human Behaviour 1(4). 1–1. doi:10.1038/s41562-017-0066.

Dennett, Daniel C. 1971. Intentional Systems. The Journal of Philosophy 68(4). 87–106. doi:10.2307/2025382.

Dennett, Daniel C. 1978. Beliefs about Beliefs [P&W, SR&B]. Behavioral and Brain Sciences 1(4). 568–570.

Dennett, Daniel C. 1988. Précis of The Intentional Stance. Behavioral and Brain Sciences 11(3). 495–505. doi:10.1017/S0140525X00058611.

Dennett, Daniel C. 1989. The Intentional Stance. MIT Press.

Dennett, Daniel C. 1991. Consciousness Explained. Little, Brown and Company.

Dennett, Daniel C. 2009. Intentional Systems Theory. In Brian McLaughlin, Ansgar Beckermann & Sven Walter (eds.), The Oxford Handbook of Philosophy of Mind, 339–350. Oxford University Press.

Dörrenberg, Sebastian, Hannes Rakoczy & Ulf Liszkowski. 2018. How (Not) to Measure Infant Theory of Mind: Testing the Replicability and Validity of Four Non-Verbal Measures. Cognitive Development 46. 12–30. doi:10.1016/j.cogdev.2018.01.001.

Drayson, Zoe. 2014. The Personal/Subpersonal Distinction. Philosophy Compass 9(5). 338–346. doi:10.1111/phc3.12124.

Fitch, W. Tecumseh. 2010. The Evolution of Language. Cambridge University Press.

Fizke, Ella, Stephen Butterfill, Lea van de Loo, Eva Reindl & Hannes Rakoczy. 2017. Are There Signature Limits in Early Theory of Mind? Journal of Experimental Child Psychology 162. 209–224. doi:10.1016/j.jecp.2017.05.005.

Flombaum, Jonathan I. & Laurie R. Santos. 2005. Rhesus Monkeys Attribute Perceptions to Others. Current Biology 15(5). 447–452. doi:10.1016/j.cub.2004.12.076.

Gärdenfors, Peter. 2009. The Social Stance and Its Relation to Intersubjectivity. Frontiers of Sociology. 289–305.

Gergely, György, Zoltán Nádasdy, Gergely Csibra & Szilvia Bíró. 1995. Taking the Intentional Stance at 12 Months of Age. Cognition 56(2). 165–193. doi:10.1016/0010-0277(95)00661-H.

Hofer, Tanja, Petra Hauf & Gisa Aschersleben. 2005. Infant’s Perception of Goal-Directed Actions Performed by a Mechanical Device. Infant Behavior and Development 28(4). 466–480. doi:10.1016/j.infbeh.2005.04.002.

Jackendoff, Ray. 2010. Your Theory of Language Evolution Depends on Your Theory of Language. In The Evolution of Human Language - Biolinguistic Perspectives, 63–72. Cambridge: Cambridge University Press.

Jahme, Carole. 2013. Daniel Dennett: ’I Don’t like Theory of Mind’ – Interview. The Guardian.

Kaminski, Juliane, Josep Call & Michael Tomasello. 2008. Chimpanzees Know What Others Know, but Not What They Believe. Cognition 109(2). 224–234. doi:10.1016/j.cognition.2008.08.010.

Krogh-Jespersen, Sheila, Annette M. E. Henderson & Amanda L. Woodward. 2020. Let’s Get It Together: Infants Generate Visual Predictions Based on Collaborative Goals. Infant Behavior and Development 59. 101446. doi:10.1016/j.infbeh.2020.101446.

Krupenye, Christopher, Fumihiro Kano, Satoshi Hirata, Josep Call & Michael Tomasello. 2016. Great Apes Anticipate That Other Individuals Will Act According to False Beliefs. Science 354(6308). 110–114. doi:10.1126/science.aaf8110.

Kulke, Louisa, Mirjam Reiß, Horst Krist & Hannes Rakoczy. 2018. How Robust Are Anticipatory Looking Measures of Theory of Mind? Replication Attempts across the Life Span. Cognitive Development 46. 97–111. doi:10.1016/j.cogdev.2017.09.001.

Leslie, Alan M. 1994. Pretending and Believing: Issues in the Theory of ToMM. Cognition 50(1). 211–238. doi:10.1016/0010-0277(94)90029-9.

Luo, Yuyan & Renée Baillargeon. 2007. Do 12.5-Month-Old Infants Consider What Objects Others Can See When Interpreting Their Actions? Cognition 105(3). 489–512. doi:10.1016/j.cognition.2006.10.007.

Luo, Yuyan & Susan C. Johnson. 2009. Recognizing the Role of Perception in Action at 6 Months. Developmental Science 12(1). 142–149. doi:10.1111/j.1467-7687.2008.00741.x.

McMahon, April & Robert McMahon. 2012. Evolutionary Linguistics. Cambridge University Press.

Milligan, Karen, Janet Wilde Astington & Lisa Ain Dack. 2007. Language and Theory of Mind: Meta-Analysis of the Relation Between Language Ability and False-Belief Understanding. Child Development 78(2). 622–646. doi:10.1111/j.1467-8624.2007.01018.x.

Mosconi, Matthew W., Peter B. Mack, Gregory McCarthy & Kevin A. Pelphrey. 2005. Taking an “Intentional Stance” on Eye-Gaze Shifts: A Functional Neuroimaging Study of Social Perception in Children. NeuroImage 27(1). 247–252. doi:10.1016/j.neuroimage.2005.03.027.

Nagel, Thomas. 1974. What Is It Like to Be a Bat? The Philosophical Review 83(4). 435–450. doi:10.2307/2183914.

Nelkin, Norton. 1989. Propositional Attitudes and Consciousness. Philosophy and Phenomenological Research 49(3). 413–430.

Neumann, Annina, Claudia Thoermer & Beate Sodian. 2008. False Belief Understanding in 18-Month-Olds’ Anticipatory Looking Behavior: An Eye-Tracking Study. International Journal of Psychology 43(3-4).

Onishi, Kristine H. & Renée Baillargeon. 2005. Do 15-Month-Old Infants Understand False Beliefs? Science 308(5719). 255–258. doi:10.1126/science.1107621.

Phillips, Ann T., Henry M. Wellman & Elizabeth S. Spelke. 2002. Infants’ Ability to Connect Gaze and Emotional Expression to Intentional Action. Cognition 85(1). 53–78. doi:10.1016/S0010-0277(02)00073-2.

Plesser, Hans E. 2018. Reproducibility vs. Replicability: A Brief History of a Confused Terminology. Frontiers in Neuroinformatics 11. doi:10.3389/fninf.2017.00076.

Poulin-Dubois, Diane, Hannes Rakoczy, Kimberly Burnside, Cristina Crivello, Sebastian Dörrenberg, Katheryn Edwards, Horst Krist, Louisa Kulke, Ulf Liszkowski, Jason Low, Josef Perner, Lindsey Powell, Beate Priewasser, Eva Rafetseder & Ted Ruffman. 2018. Do Infants Understand False Beliefs? We Don’t Know yet – A Commentary on Baillargeon, Buttelmann and Southgate’s Commentary. Cognitive Development 48. 302–315. doi:10.1016/j.cogdev.2018.09.005.

Premack, David. 1988. “Does the Chimpanzee Have a Theory of Mind?” Revisited. In Richard W. Byrne & Andrew Whiten (eds.), Machiavellian Intelligence. Oxford: Clarendon Press.

Premack, David & Guy Woodruff. 1978. Does the Chimpanzee Have a Theory of Mind? Behavioral and Brain Sciences 1(4). 515–526. doi:10.1017/S0140525X00076512.

Rakoczy, Hannes. 2012. Do Infants Have a Theory of Mind? British Journal of Developmental Psychology 30(1). 59–74. doi:10.1111/j.2044-835X.2011.02061.x.

Ruffman, Ted & Josef Perner. 2005. Do Infants Really Understand False Belief?: Response to Leslie. Trends in Cognitive Sciences 9(10). 462–463.

Schöppner, Barbara, Beate Sodian & Sabina Pauen. 2006. Encoding Action Roles in Meaningful Social Interaction in the First Year of Life. Infancy 9(3). 289–311. doi:10.1207/s15327078in0903_2.

Sodian, Beate. 2011. Theory of Mind in Infancy. Child Development Perspectives 5(1). 39–43. doi:10.1111/j.1750-8606.2010.00152.x.

Sodian, Beate & Claudia Thoermer. 2004. Infants’ Understanding of Looking, Pointing, and Reaching as Cues to Goal-Directed Action. Journal of Cognition and Development 5(3). 289–316. doi:10.1207/s15327647jcd0503_1.

Song, Hyun-joo, Kristine H. Onishi, Renée Baillargeon & Cynthia Fisher. 2008. Can an Agent’s False Belief Be Corrected by an Appropriate Communication? Psychological Reasoning in 18-Month-Old Infants. Cognition 109(3). 295–315. doi:10.1016/j.cognition.2008.08.008.

Southgate, V., A. Senju & G. Csibra. 2007. Action Anticipation Through Attribution of False Belief by 2-Year-Olds. Psychological Science 18(7). 587–592. doi:10.1111/j.1467-9280.2007.01944.x.

Stich, Stephen P. 1978. Beliefs and Subdoxastic States. Philosophy of Science 45(4). 499–518.

Tatone, Denis, Mikołaj Hernik & Gergely Csibra. 2019. Minimal Cues of Possession Transfer Compel Infants to Ascribe the Goal of Giving. Open Mind 3. 31–40. doi:10.1162/opmi_a_00024.

Thoermer, Claudia, Beate Sodian, Maria Vuori, Hannah Perst & Susanne Kristen. 2012. Continuity from an Implicit to an Explicit Understanding of False Belief from Infancy to Preschool Age. British Journal of Developmental Psychology 30(1). 172–187. doi:10.1111/j.2044-835X.2011.02067.x.

Townsend, Simon W., Sonja E. Koski, Richard W. Byrne, Katie E. Slocombe, Balthasar Bickel, Markus Boeckle, Ines Braga Gonçalves, Judith M. Burkart, Tom Flower, Florence Gaunet, Hans Johann Glock, Thibaud Gruber, David A.W.A.M. Jansen, Katja Liebal, Angelika Linke, Ádám Miklósi, Richard Moore, Carel P. van Schaik, Sabine Stoll, Alex Vail, Bridget M. Waller, Markus Wild, Klaus Zuberbühler & Marta B. Manser. 2016. Exorcising Grice’s Ghost: An Empirical Approach to Studying Intentional Communication in Animals. Biological Reviews. doi:10.1111/brv.12289.

von Eckardt, Barbara. 2012. The Representational Theory of Mind. In Keith Frankish & William Ramsey (eds.), The Cambridge Handbook of Cognitive Science, 29–50. Cambridge University Press.

Vosgerau, Gottfried. 2006. The Perceptual Nature of Mental Models. In Carsten Held, Markus Knauff & Gottfried Vosgerau (eds.), Advances in Psychology, vol. 138, Mental Models and the Mind, 255–275. North-Holland. doi:10.1016/S0166-4115(06)80039-7.

Wellman, Henry M. 2018. Theory of Mind: The State of the Art. European Journal of Developmental Psychology 15(6). 728–755. doi:10.1080/17405629.2018.1435413.

Wellman, Henry M., David Cross & Julanne Watson. 2001. Meta-Analysis of Theory-of-Mind Development: The Truth about False Belief. Child Development 72(3). 655–684. doi:10.1111/1467-8624.00304.

Wimmer, Heinz & Josef Perner. 1983. Beliefs about Beliefs: Representation and Constraining Function of Wrong Beliefs in Young Children’s Understanding of Deception. Cognition 13(1). 103–128. doi:10.1016/0010-0277(83)90004-5.

Woodward, Amanda L. 1998. Infants Selectively Encode the Goal Object of an Actor’s Reach. Cognition 69(1). 1–34.

