In The Enigma of Reason: A New Theory of Human Understanding (2017) Hugo Mercier and Dan Sperber (M&S) give an evolutionary account of reason. M&S claim that reason is a module that forms intuitive inferences based on reasons. I analyse M&S’s view within the broader debate surrounding modularity. This debate concerns Jerry Fodor’s (1983) account of modularity and that of his opponents who support a massively modular (MM) view of the mind (Cosmides & Tooby, 2006; Pinker, 1997; Sperber, 2001). While Fodor claims that cognitive processes such as reasoning are domain general, proponents of MM – including M&S – claim that cognitive processes are modular. The term ‘module’ is understood differently in this debate. For M&S, a biological module performs a cognitive function by manipulating information computationally.
I claim that M&S fail to show that reason is a module. In order to argue that reason is a module, M&S must address the fact that we appear to reason about a broad range of issues. As such, reason appears to be domain general, not domain specific. M&S respond by arguing that the reason module exhibits virtual domain generality. I argue that virtual domain generality faces a serious problem because different reasons result from diverse modules with diverse syntax. This syntax must be physically instantiated to be causal. It is not possible for the reason module to produce adaptive outputs based upon this diverse syntax. M&S can respond by having diverse modules duplicated in the reason module. These sub-modules would then be responsible for processing their relevant syntax. However, this results in M&S’s reason module collapsing. Consequently, reasoning is not the result of a distinct reason module. Instead, reasoning is the result of diverse modules performing distinct functions.
This paper proceeds as follows. Section one outlines M&S’s view. Section two discusses the apparent domain general nature of our reasoning, with reference to Fodor. Section three explores M&S’s response to the problem of domain generality, specifically their argument for virtual domain generality. Section four highlights the problems with virtual domain generality. Section five concludes.
Section One: M&S’s view
M&S give an evolutionary account of reason that contrasts with the standard view of reason. The standard view holds that reason is a type of ‘superpower’ that helps individuals make better decisions and gain greater knowledge. M&S argue that this view poses a double enigma. Firstly, if reason is a ‘superpower’, why didn’t it evolve in other species? Vision, for example, evolved in many species because it is a useful adaptation. If reason leads to better knowledge that helps achieve goals then, like vision, it should be more widespread in the animal world. Secondly, the view that reason is a superpower is at odds with empirical studies showing that humans are bad at reasoning. Our reasoning is often “flawed, biased, and prone to mistakes” (Mercier & Sperber, 2019, p. 69). In contrast to the standard view, M&S argue that reason evolved to fulfil a dual social function: to produce reasons to justify ourselves to others, and to assess the strength of others’ reasons. As such, they give an interactionist account of reason in contrast to the intellectualist approach of the standard view.
M&S argue that reason is a type of inference performed by a module. Animals use inferences to draw conclusions from information they already have. For example, an animal will use information about its environment to anticipate what its prey will do, and the inferences it forms then guide its behaviour. Inferences are possible because of regularities in the environment: animals have modules that have evolved or developed to respond to regularities that matter for their reproductive success. M&S define a cognitive module as a biological module that performs a cognitive function. A module takes in information and implements a computational procedure to produce an output, such as an inference. Modules use information in the form of representations. A representation is an object that has the function of carrying information; in the brain, representations can be realised in the activity of groups of neurons.
M&S claim that the reason module forms intuitive inferences about reasons. They suggest that there is no single inferential mechanism, but rather many mechanisms that have evolved to respond to diverse problems. While these are largely instinctual in animals (and, thus, the result of innate fixed patterns), in humans some inferences can be experienced consciously in the form of intuitions (that is, we can be aware of these intuitions, so they are not pre-lingual as instincts are). Intuitions are often judgements or decisions that we feel justified in, even though we are unconscious of the processes that lead to them. M&S argue that reason is a type of ‘intuitive inference’ about ‘reasons’. As such, reason is metarepresentational: a ‘metarepresentation’ carries information about representations, and in this case the representations in question are reasons.
I now discuss a significant challenge to M&S’s position – that is, the apparent domain general nature of our reasoning.
Section Two: Domain Generality
Our reasoning appears to be domain general because we can reason about a broad range of issues that involves a broad range of information. For example, I can have a reason to vote for the Greens because of climate change. I can have a reason to go to the shops on Saturday because I have more time. I can have a reason to help my brother because he is my kin. When I am reasoning about an issue, I can draw on diverse information from any part of my belief system in order to arrive at a conclusion. This suggests that reasoning is not modular, because modular processes process limited inputs to produce limited outputs.
Fodor (1983) highlights the domain general nature of reasoning and draws a distinction between modular and non-modular (or central) processes. In The Modularity of Mind (1983) Fodor argues that some functions of the mind – such as perception and language – are modular. He defines modular processes as domain specific, informationally encapsulated, fast and mandatory in their application, with fixed neural architecture and specific breakdown patterns. The encapsulation and domain specificity of modules is illustrated by perceptual illusions. For example, the Müller-Lyer illusion does not disappear once we learn that the lines are the same length, because our beliefs do not affect the perceptual process that produces the illusion. In other words, the processes involved in perception take in limited and specific information and process it to produce an output (Fodor, 1985).
Modular processes can be understood within the framework of the Computational Theory of Mind (CTM). CTM holds that the mind performs like a Turing machine, an idealised computer. Fodor is interested in how Turing-style computation manipulates symbols. He suggests that thinking occurs in a ‘language of thought’ (LOT, or ‘Mentalese’), which involves primitive mental representations that can combine into more complex representations. According to Fodor, mental life involves Turing-style computation in which the symbols of Mentalese are manipulated according to mechanical rules. The symbols of Mentalese are realised in neural states; thus, computational processes occur via neural processes (Rescorla, 2017).
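To make the idea of purely syntactic processing concrete, here is a minimal sketch (my own toy illustration, not Fodor’s formalism): ‘Mentalese’ sentences are represented as nested tuples of symbols, and an inference rule fires on the shape of those tuples alone, regardless of what the symbols mean.

```python
# Toy illustration of rule-governed symbol manipulation: the rule below
# matches only the structural form ('if', P, Q) plus P, and derives Q.
# It never consults the meaning of 'rain' or 'wet-ground'.

def modus_ponens(premises):
    """Derive Q from ('if', P, Q) and P by pattern matching on syntax alone."""
    derived = []
    for sentence in premises:
        if isinstance(sentence, tuple) and len(sentence) == 3 and sentence[0] == 'if':
            _, antecedent, consequent = sentence
            if antecedent in premises:
                derived.append(consequent)
    return derived

beliefs = [('if', 'rain', 'wet-ground'), 'rain']
print(modus_ponens(beliefs))  # ['wet-ground']
```

The rule would work identically on any symbols with the same structure, which is the sense in which the syntactic ‘shape’ of a representation, not its content, drives the computation.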
Fodor argues that central processes – or cognitive processes – like thinking, decision making and reasoning, involve domain general architecture and are not modular (1975). Instead, cognition is seen to be ‘global’ and unencapsulated. Global properties include consistency, coherence and explanatory power. Additionally, global processes are Quinean and isotropic. ‘Quinean’ refers to the idea that our belief systems are holistic. As such, we can only determine how coherent a belief is in the context of the whole system of beliefs an individual has. Being ‘isotropic’ suggests that belief systems are not informationally encapsulated because any part of a system may be used to confirm or disconfirm any other part. The isotropic nature of belief systems is illustrated in the way we often need to process information from diverse parts of the system to understand jokes or solve problems (Bermúdez, 2010).
Fodor argues that it is computationally mysterious how non-modular or global processes work. More strongly, he states that it is not possible to understand global processes such as analogical reasoning. On CTM, information is processed in Mentalese: representations are manipulated and transformed based upon their syntactic properties, which are intrinsic physical properties. Intrinsic properties are not context sensitive – a representation’s intrinsic properties do not vary with the other representations or cognitive processes it interacts with. Such processes can be understood in terms of LOT because the syntactic structure of a physical symbol is like a ‘shape’ or ‘key’ that determines how the sentence will be processed. But global properties such as consistency, coherence and explanatory power are context sensitive and rely on extrinsic properties. Consequently, global processes such as reasoning cannot be understood in terms of LOT and so, according to Fodor, cannot be understood computationally (Bermúdez, 2010).
M&S must address the apparent domain general nature of reasoning if they are to establish that reason is a module. I discuss this next.
Section Three: M&S’s response to the problem of domain generality
In order to argue that reason is a module, M&S must respond to Fodor’s claim that reasoning is domain general. They do this by claiming that the reason module exhibits virtual domain generality. Before discussing virtual domain generality, it is helpful to give some background on the massively modular view of mind (MM): M&S’s view is a development of MM, the position that reacts against Fodor’s view.
3.1 Massive Modularity
In contrast to Fodor, evolutionary psychologists such as Cosmides and Tooby (2006), Sperber (2001) and Pinker (1997) claim that the mind is massively modular. As such, these theorists argue that cognitive processes are modular. Proponents of MM combine CTM with psychological nativism and a Neo-Darwinist account of evolution. Psychological nativism draws on Chomsky’s “poverty of the stimulus” argument to suggest that the mind has innate content, and a Neo-Darwinist account of the mind claims that the cognitive architecture of the mind is a Darwinian adaptation (Fodor, 2000). Proponents of MM hold a less strict version of modularity than Fodor does: Darwinian modules are defined by only the first two characteristics of Fodorian modules – that is, domain specificity and informational encapsulation (Bermúdez, 2010).
Proponents of MM argue that Darwinian modules are cognitive mechanisms that have evolved to solve problems that our Pleistocene hunter-gatherer ancestors faced. Evolutionary psychologists such as Cosmides and Tooby (2006) argue that domain specific mechanisms are better than domain general mechanisms at solving adaptive problems: different problems required different solutions, and these solutions required implementation by different mechanisms. If two adaptive problems require different solutions, two specialised mechanisms will outperform a single general one, because a general solution sacrifices effectiveness. Domain specific mechanisms are fast, reliable and efficient because each is specialised to perform one task rather than competing tasks. Darwinian modules include modules for solving social problems, such as kin detection, cheater detection and mate selection modules, as well as modules for face recognition, gaze following and emotion detection (Bermúdez, 2010).
M&S’s view is developed from an MM view of mind. However, M&S appear to present a third position in the modularity debate. Proponents of MM argue that reasoning is the result of many different modules performing different tasks: reasoning is a complex system, not a single mechanism. So, while they claim that cognition is modular, they do not hold that there is a ‘reason’ module. In contrast, M&S claim that there is a reason module. This is a metarepresentational module that represents intuitions that have arisen from domain specific modules, and it is these domain specific modules that contain our ‘reasons’. For example, I may have a reason to help my brother because he is my kin, and this may be represented in a kin selection module. The reason module will then metarepresent the fact that I will help my brother because he is my kin as a reason. According to M&S, we can have reasons bearing on a wide variety of issues; however, these reasons are all the same type of information – that is, they are reasons.
M&S appeal to the ‘virtual domain generality’ of the reason module to address the apparent domain general nature of reasoning. I discuss this next.
3.2 Virtual domain generality
In order to show that reason is a module, M&S must respond to Fodor’s argument for why reason is not modular. Fodor claims:
1) Reasoning involves domain general architecture
2) CTM cannot account for domain general architecture
3) Hence, reasoning cannot be understood computationally
4) But: modular processes are computational
5) Hence, reasoning cannot be modular.
Premise 1) of Fodor’s argument states that our reasoning requires domain general architecture. Recall that Fodor claims our belief systems are Quinean and isotropic. This means that the coherence of a belief depends upon the whole system of beliefs and that any part of the system can be used to confirm or disconfirm any other part. Our reasoning is domain general because it draws on information from any part of our belief system. As such, it requires domain general architecture.
M&S respond to premise 1) of Fodor’s argument by claiming that the reason module is ‘virtually domain general’. According to M&S, cognitive modules exploit regularities in a specific domain, and the domain of the reason module is ‘reasons.’ But reasons can be about almost anything (for example, I can have reasons to help my brother, or to vote for the Greens). If modules exploit regularities, it is difficult to see what the regularities are in these examples: my reasons to help my brother would seem to rely on different information from my belief system than my reasons to vote for the Greens. However, M&S claim that the regularities exploited by the reason module are not inside the representations that are the reasons (for example, the content of my reason to vote for the Greens – climate change). Instead, the regularities that the reason module exploits are found in the metacognitive intuitions it produces – that is, the intuitions that are formed about reasons. So, while my reasons to help my brother and to vote for the Greens rely on different information from my belief system, each results in an intuition, and it is the regularities of these intuitions that are tracked by the reason module. As such, the reason module performs a domain specific function (tracking these intuitions). What it metarepresents regarding these diverse states of affairs is their shared property of being reasons (Mercier & Sperber, 2018).
M&S support this claim by noting that our intuitions about representations can differ from our intuitions about the things those representations represent; nevertheless, representations can give us information about the things they represent. For example, in the case of mindreading, we use metarepresentations that represent the mental states of others. So, if I form a belief that you believe that there is milk in the fridge, my belief is a metarepresentation that represents your belief that there is milk in the fridge. Here, I gain knowledge about the representation that is your mental state. But I also gain knowledge about the content of that representation – what it is about (assuming it is true, and I am justified in believing it to be true). In this case, the content of the mental state relates to the fact that there is milk in the fridge. So, my metarepresentation of your mental state can give me information about both your belief and the state of the world that your belief is about.
M&S claim that the content of the representations that the reason module represents is virtually domain general. As such, these representations can be about any number of potential states that give information about my reasons for different courses of action. But all these states fall under one type – they are ‘reasons’ – and this is what allows them to be accepted by the reason module. According to M&S, the fact that they are ‘reasons’ is like a key that enables the reason module to process them. As such, they can be broad – or domain general – in their content.
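The ‘key’ idea can be sketched as follows (a hypothetical illustration of mine; the names `Reason` and `reason_module` are not M&S’s): content from any domain can enter the module so long as it is wrapped in the metarepresentational format ‘this supports that’, and it is the wrapper, not the content, that the module recognises.

```python
# Toy sketch: the reason module accepts any content whatsoever, provided it
# arrives bearing the metarepresentational 'key' (here, the Reason wrapper).
from dataclasses import dataclass
from typing import Any

@dataclass
class Reason:
    support: Any      # e.g. 'he is my kin', 'climate change is worsening'
    conclusion: Any   # e.g. 'help my brother', 'vote for the Greens'

def reason_module(item):
    """Accept only inputs bearing the Reason 'key'; the content may be anything."""
    if not isinstance(item, Reason):
        raise TypeError('not in the format the reason module recognises')
    # The module's domain-specific work: form an intuition that the support
    # counts in favour of the conclusion.
    return f'intuition: {item.support!r} supports {item.conclusion!r}'

print(reason_module(Reason('he is my kin', 'help my brother')))
```

On this picture the module’s single, domain specific task is recognising and evaluating the wrapper; it is indifferent to what fills it, which is the sense in which it is only virtually domain general.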
Section four: Problems with virtual domain generality
M&S’s appeal to virtual domain generality faces a significant challenge. M&S appeal to virtual domain generality to show how the reason module deals with the broad range of information that we can reason about. However, virtual domain generality faces a serious problem because it is difficult to understand how the reason module can produce adaptive outputs based on the broad range of syntax it is fed. This argument can be understood as follows:
1) Information must be physically instantiated to be causal
2) Physically instantiated information has a specific syntax
3) Modules produce adaptive outputs based on limited and specific syntax (the domain specificity of modular processes means that these processes “carry out very specific and circumscribed information-processing tasks” (Bermúdez, 2010, p. 288))
4) Hence, the reason module produces adaptive outputs based on limited and specific syntax
5) But: the reason module is fed a broad range of syntax related to the broad range of issues we can reason about
6) Hence, the reason module cannot produce adaptive outputs.
I unpack this argument in what follows.
Cognitive modules are physically instantiated in the brain. Planer (2019) defines cognitive mechanisms (or modules) as “neurally instantiated causal systems that operate on information carrying patterns of neural activity” (Planer, 2019, p. 22). As such, these modules exchange information via the firing of distinct populations of neurons. For example, if I have a module that contains my reason for helping my brother (because he is my kin), then the reason module can only form an intuition based upon the content of this module because that content is physically instantiated. This is required if the information is to causally affect my behaviour.
Information that is physically instantiated has a specific syntax. M&S argue that the reason module does not process the content of the module that is my ‘reason’ for helping my brother (that is – because he is my kin). In other words, if my reason for helping my brother is represented in my kin selection module, then the reason module does not metarepresent the content of the kin selection module. Instead, the reason module tracks a regularity that is an intuition that has arisen from this module. But, if this were the case, it is difficult to understand my behaviour to help my brother. This behaviour is the result of the information that he is my kin. As such, this specific information must be represented in order that I act in an appropriate way. And this information has a specific syntax.
The syntax of my ‘reasons’ (my brother is my kin, so I will help him) affects the outcome of the reason module. On the standard view of how modules work, the syntax of a representation is like a shape or key that fits a module: only a specific shape or procedure can occur in each module, and different modules use different syntax. If a module is fed the wrong syntax it will not perform its function properly. The following example from Planer (2017) illustrates this. Say a module has evolved to determine whether another person is friendly or dangerous. This module takes the identity of a person as input (either x, y, or z) and combines this with information about whether they have hit or hugged someone. If person x ‘hits’ someone, they are dangerous; ‘dangerous’ is the output of the module. This module uses the syntax of its inputs to generate its outputs. If it were fed inputs with a different syntax it could not perform its function. In other words, in order to operate effectively the module “exploits specific structural features of the set of input expressions it was designed to operate on” (Planer, 2017, p. 792).
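Planer’s example can be rendered as a toy program (my own illustrative sketch, with hypothetical names, not Planer’s code): the module works only on inputs of the exact form it was built for, and fails on anything else.

```python
# Toy sketch of Planer's (2017) friendly/dangerous module: it accepts only
# inputs with a fixed syntax - an identity in {x, y, z} paired with an action
# in {'hit', 'hug'} - and classifies the person from that structure alone.

VALID_IDENTITIES = {'x', 'y', 'z'}
VALID_ACTIONS = {'hit', 'hug'}

def threat_module(identity, action):
    """Output 'dangerous' or 'friendly' from the module's fixed input format."""
    if identity not in VALID_IDENTITIES or action not in VALID_ACTIONS:
        # Wrong syntax: the module cannot perform its function at all.
        raise ValueError('input lacks the syntax this module was built for')
    return 'dangerous' if action == 'hit' else 'friendly'

print(threat_module('x', 'hit'))  # 'dangerous'
print(threat_module('y', 'hug'))  # 'friendly'
```

The `ValueError` branch makes the key point explicit: a module fed expressions outside its designed input set does not produce a degraded output, it simply cannot operate.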
The problem for the reason module is that it remains wholly mysterious how it can produce cognitively or behaviourally adaptive outputs while it is being fed a broad range of diverse syntax. For example, if the reason module is fed information that relates to my various subjective reasons for diverse issues (such as my reasons to vote for the Greens and to help my brother), then how does the reason module know what to do with this information? As the reason module is a module it can only perform a limited procedure or range of procedures. This is explained by CTM where limited syntax is used computationally to produce a limited output (for example, the cheater detection module tracks individuals and processes if they ‘cheat’ in their social interactions). However, the reason module must include a broad range of syntax that is associated with all the ‘reasons’ it metarepresents. This is because syntax must be physically instantiated to be causal.
In effect, it seems M&S must respond to this problem by positing sub-modules within the reason module that deal with specific syntax. For example, my reason to help my brother because he is my kin has a specific syntax that has arisen from a specific module, such as a kin selection module. Therefore, to process this information it is necessary to posit a kin selection module within the reason module that can process the syntax related to my reason to help my brother or not. This module would rely on specific syntax that is physically instantiated.
In other words, it is necessary to presuppose sub-modules within the reason module if the reason module is to produce adaptive outputs. As a result, it is these sub-modules that perform the procedures whose outputs determine an organism’s behaviour. And because the reason module now consists of sub-modules performing domain specific tasks, it is no longer a metarepresentational module.
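The collapse can be made vivid with a toy sketch (all module names hypothetical, my own construction): once each kind of syntax requires its own sub-module, the ‘reason module’ reduces to a routing table, and the sub-modules do all the real work.

```python
# Toy sketch of the collapse: the 'reason module' below does nothing but
# dispatch each input to a sub-module built for that input's specific syntax.
# The metarepresentational module has become a thin wrapper over an MM
# architecture of domain specific mechanisms.

def kin_submodule(data):
    """Processes kin-selection syntax only."""
    return 'help' if data.get('relation') == 'kin' else 'no obligation'

def voting_submodule(data):
    """Processes political-preference syntax only."""
    return 'vote Green' if 'climate change' in data.get('concerns', []) else 'undecided'

SUBMODULES = {'kin': kin_submodule, 'voting': voting_submodule}

def reason_module(domain, data):
    """Pure dispatch: each domain's syntax is handled by its own sub-module."""
    return SUBMODULES[domain](data)

print(reason_module('kin', {'relation': 'kin'}))                  # 'help'
print(reason_module('voting', {'concerns': ['climate change']}))  # 'vote Green'
```

Nothing in `reason_module` itself metarepresents anything; every adaptive output is produced by a domain specific sub-module, which is precisely the MM picture the argument says M&S’s view collapses into.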
Consequently, M&S’s reason module has collapsed. The result is a MM view of mind where reasoning is the result of many modules performing distinct functions rather than being the product of a ‘reason’ module. As such, reasoning is not the result of a single mechanism but is the result of multiple mechanisms performing distinct tasks. For proponents of MM there is nothing computationally mysterious in this result. Diverse modules perform domain specific procedures and our reasoning is a result of this complex system producing outputs. However, for M&S, this result undermines their claim that reason is a module.
M&S fail to show that reason is a module because they are unable to address the domain general nature of our reasoning. M&S argue that the reason module deals with the broad range of information that our reasoning involves because reasons have in common the fact that they are ‘reasons’. This information allows them access to the reason module. However, our diverse reasons come from diverse modules that have different syntax. This syntax must be physically instantiated if the reasons are to have a causal effect on behaviour. This is problematic because the reason module cannot produce adaptive outputs based upon the diverse syntax it receives from different modules. M&S can respond to this problem by having diverse modules duplicated in the reason module. The reason module then consists of sub-modules that perform specific functions. However, this result collapses into MM, where reasoning is not the result of a module, but is the result of many such modules performing distinct functions.
Bermúdez, J. L. (2010). Cognitive Science: An Introduction to the Science of the Mind. Cambridge, UK: Cambridge University Press.
Cosmides, L., & Tooby, J. (2006). Origins of Domain Specificity: The Evolution of Functional Organization. In J. L. Bermúdez (Ed.), Philosophy of Psychology: Contemporary Readings. New York, US: Routledge.
Fodor, J. (1975). The Language of Thought. Harvard, US: Harvard University Press.
Fodor, J. (1983). The Modularity of Mind. Massachusetts, US: MIT Press.
Fodor, J. (1985). Precis of the modularity of mind. Behavioral and Brain Sciences, 8(1), 1-42.
Fodor, J. (2000). The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology (Vol. 51). Massachusetts, US: MIT Press.
Mercier, H., & Sperber, D. (2017). The Enigma of Reason: A New Theory of Human Understanding. Harvard, US: Harvard University Press.
Mercier, H., & Sperber, D. (2018). Why a modular approach to reason? Mind and Language, 33(5), 533-541.
Mercier, H., & Sperber, D. (2019). Précis of The Enigma of Reason. Teorema: Revista internacional de filosofía, 38(1), 69-76.
Pinker, S. (1997). How the Mind Works. New York, US: W. W. Norton & Company.
Pinker, S. (2005). So How Does the Mind Work? Mind & Language, 20(1), 1-24. doi:10.1111/j.0268-1064.2005.00274.x
Planer, R. (2017). How language couldn’t have evolved: a critical examination of Berwick and Chomsky’s theory of language evolution. Biology and Philosophy, 32(6), 779-796.
Planer, R. (2019). The evolution of languages of thought. Biology and Philosophy, 34(5), 1-27.
Rescorla, M. (2017). The Computational Theory of Mind. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Stanford, US: Stanford University.
Sperber, D. (2001). In Defence of Massive Modularity. In E. Dupoux (Ed.), Language, Brain, and Cognitive Development: Essays in Honor of Jacques Mehler. Massachusetts, US: MIT Press.
Notes
1. Named after the philosopher Willard Van Orman Quine.
2. Indeed, the claim by proponents of MM is often stronger, to the effect that there is no domain general processing. For example, Pinker (2005) states that all mental life consists of information processing or computation. As such, he disagrees with Fodor that there is computationally mysterious domain general processing.