Preface to a Forethought: In 2007, John-Dylan Haynes discovered pre-forethought. Every minute of every day, our brains plan thousands of mundane actions that allow our lives to flow seamlessly. How and where the brain stores these intentions appears to have been revealed by John-Dylan Haynes. For the first time, researchers were able to read participants’ intentions from their brain activity, concluding that the brain makes a decision approximately one to seven seconds before consciousness becomes aware of it. This was made possible by a new combination of functional magnetic resonance imaging (fMRI) and a set of sophisticated computer algorithms. Haynes and like-minded scientists believe that what philosophy calls free will is in fact a matter of pre-mapped intentions: critical decisions made prior to all conscious and non-conscious decisions.

Free Will: Science has always had a problem with free will, because it implies no cause and effect, and science relies entirely on empirical data. Free will defies quantification; it flies in the face of an organised, rational, systematic view of thought processing and creation. The work of Haynes and his colleagues went far beyond simply confirming previous theories about the lack of free will. The study revealed fundamental principles about the way the brain stores intentions.
“The experiments show that intentions are not encoded in single neurons but in a whole spatial pattern of brain activity,” says Haynes.
They furthermore reveal that different regions of the prefrontal cortex perform different operations. Regions towards the front of the brain store the intention until it is executed, whereas regions further back take over when subjects become active and start doing the calculation.
“Intentions for future actions that are encoded in one part of the brain need to be copied to a different region to be executed,” says Haynes.
Haynes wasn’t the first neuroscientist to explore unconscious decision-making. In the 1980s, Benjamin Libet, a neuropsychologist at the University of California, San Francisco, rigged up study participants to an electroencephalogram (EEG) and asked them to watch a clock face with a dot sweeping around it. When the participants felt the urge to move a finger, they had to note the dot’s position. Libet recorded brain activity several hundred milliseconds before people expressed their conscious intention to move. Haynes’s 2007 study modernized the earlier experiment: where Libet’s EEG technique could look at only a limited area of brain activity, Haynes’s fMRI could survey the entire brain; where Libet’s participants decided simply on when to move, Haynes’s test forced them to decide between two alternatives. Critics point out that Haynes and his team could predict a left or right button press with only 60% accuracy at best.
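To see why critics call 60% only slightly better than chance, a quick binomial calculation helps. This is a hedged sketch: the trial count of 100 below is a hypothetical illustration, not a figure from Haynes’s study.

```python
from math import comb

def binom_p_value(successes: int, trials: int, p_chance: float = 0.5) -> float:
    """One-sided probability of getting at least `successes` correct
    predictions out of `trials` if the decoder were merely guessing."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical: 60 correct predictions over 100 two-choice trials.
print(round(binom_p_value(60, 100), 3))  # statistically above chance...
print(round(binom_p_value(55, 100), 3))  # ...whereas 55/100 would not be
```

On these hypothetical numbers, 60/100 clears the conventional significance bar, yet remains very far from the certainty that the phrase “reading intentions” suggests.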
Philosophers question the assumptions underlying such interpretations. “Part of what’s driving some of these conclusions is the thought that free will has to be spiritual or involve souls or something,” says Al Mele, a philosopher at Florida State University in Tallahassee. If neuroscientists find unconscious neural activity that drives decision-making, the troublesome concept of mind as separate from body disappears, as does free will. This ‘dualist’ conception of free will is an easy target for neuroscientists to knock down, says Walter Glannon, a philosopher at the University of Calgary. “Neatly dividing mind and brain makes it easier for neuroscientists to drive a wedge between them,” he adds.
But Wait There’s More ->
The trouble is, most current philosophers don’t think about free will like that, says Mele. Many are materialists — believing that everything has a physical basis, and decisions and actions come from brain activity. So scientists are weighing in on a notion that philosophers consider irrelevant.
Mele says, “The majority of philosophers are comfortable with the idea that people can make rational decisions in a deterministic universe.” Philosophers debate the interplay between freedom and determinism — the theory that everything is predestined, either by fate or by physical laws — but Roskies says that results from neuroscience can’t yet settle that debate. They may speak to the predictability of actions, but not to the issue of determinism.
Neuroscientists also sometimes have misconceptions about their own field, says Michael Gazzaniga, a neuroscientist at the University of California, Santa Barbara. In particular, scientists tend to see preparatory brain activity as proceeding stepwise, one bit at a time, to a final decision. He suggests that researchers should instead think of processes working in parallel, in a complex network with interactions happening continually. The time at which one becomes aware of a decision is thus not as important as some have thought.
In Libet’s pioneering experiment in this field, conducted in the 1980s, he asked each subject to choose a random moment to flick their wrist while he measured the associated activity in their brain, in particular the build-up of electrical signal called the readiness potential. Although it was well known that the readiness potential preceded the physical action, Libet asked how it corresponded to the felt intention to move. To determine when a subject felt the intention to move, he asked her to watch the second hand of a clock and report its position when she felt the conscious will to move.
Libet found that the unconscious brain activity leading up to the conscious decision by the subject to flick his or her wrist began approximately half a second before the subject consciously felt that she had decided to move. Libet’s findings suggest that a subject’s decisions are first made on a subconscious level and only afterward translated into a “conscious decision”, and that the subject’s belief that it occurred at the behest of her will is only due to her retrospective perspective on the event.
[Figure: typical recording of the readiness potential.] Benjamin Libet investigated whether this neural activity corresponded to the “felt intention” (or will) to move of experimental subjects. In a variation of this task, Haggard and Eimer asked subjects to decide not only when to move their hands, but also which hand to move. In this case, the felt intention correlated much more closely with the lateralized readiness potential (LRP), an event-related potential (ERP) component that measures the difference between left- and right-hemisphere brain activity. Haggard and Eimer argue that the feeling of conscious will must therefore follow the decision of which hand to move, since the LRP reflects the decision to lift a particular hand.
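The LRP described here can be made concrete with a small sketch. The standard double-subtraction averages the left-right electrode difference with the sign tied to the responding hand, so asymmetries unrelated to the movement cancel out. The electrode names and toy values below are illustrative assumptions, not data from Haggard and Eimer.

```python
def lrp_contribution(c3, c4, moved_left):
    """Per-trial hemisphere difference, signed so that activity
    contralateral to the moving hand always enters with the same sign.
    c3/c4: voltage samples (microvolts) over left/right motor cortex."""
    if moved_left:
        return [r - l for l, r in zip(c3, c4)]  # right hemisphere is contralateral
    return [l - r for l, r in zip(c3, c4)]      # left hemisphere is contralateral

def average_lrp(trial_diffs):
    """Average the signed per-trial differences into one LRP waveform."""
    n = len(trial_diffs)
    return [sum(samples) / n for samples in zip(*trial_diffs)]

# Toy trials: the contralateral cortex grows more negative before movement.
left = lrp_contribution(c3=[0.0, -0.5], c4=[0.0, -2.5], moved_left=True)
right = lrp_contribution(c3=[0.0, -2.5], c4=[0.0, -0.5], moved_left=False)
print(average_lrp([left, right]))  # negative deflection: [0.0, -2.0]
```

Because the sign flips with the hand, any hemisphere bias that is the same on left- and right-hand trials averages out, leaving only the movement-locked asymmetry.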
The interpretation of these findings has been criticized by Daniel Dennett, who argues that people have to shift their attention from their intention to the clock, and that this introduces temporal mismatches between the felt experience of will and the perceived position of the clock hand. Consistent with this argument, subsequent studies have shown that the exact numerical value varies depending on attention. Despite the differences in the exact numerical value, however, the main finding has held. Philosopher Alfred Mele criticizes this design for other reasons. Having attempted the experiment himself, Mele explains that “the awareness of the intention to move” is an ambiguous feeling at best. For this reason he remains skeptical of interpreting the subjects’ reported times for comparison with their readiness potential.
A more direct test of the relationship between the readiness potential and the “awareness of the intention to move” was conducted by Banks and Isham (2009). In their study, participants performed a variant of Libet’s paradigm in which a delayed tone followed the button press. Afterwards, the participants reported the time of their intention to act (Libet’s “W”). If W were time-locked to the readiness potential, W would remain uninfluenced by any post-action information. However, the findings from this study show that W in fact shifted systematically with the time of the tone presentation, indicating that W is, at least in part, retrospectively reconstructed rather than pre-determined by the readiness potential.
A study conducted by Jeff Miller and Judy Trevena (2009) suggests that the readiness potential (RP) signal in Libet’s experiments doesn’t represent a decision to move, but is merely a sign that the brain is paying attention. In this experiment the classical Libet experiment was modified by playing an audio tone indicating to volunteers that they should decide whether or not to tap a key. The researchers found the same RP signal in both cases, regardless of whether or not volunteers actually elected to tap, which suggests that the RP signal doesn’t indicate that a decision has been made. In a second experiment, the researchers asked volunteers to decide on the spot whether to tap the key with the left hand or the right while monitoring their brain signals, and they found no correlation between the signals and the chosen hand. This criticism has itself been criticized by free-will researcher Patrick Haggard, who cites literature distinguishing two different circuits in the brain that lead to action: a “stimulus-response” circuit and a “voluntary” circuit. According to Haggard, researchers applying external stimuli are testing neither the voluntary circuit nor Libet’s hypothesis about internally triggered actions.
Discussed below (In the section “Timing intentions compared to actions”) is one study that has replicated many of Libet’s findings, whilst addressing some of the original criticisms. It is also worth noting that, in 2011, Itzhak Fried replicated Libet’s findings at the scale of the single neuron. This was accomplished with the help of volunteer epilepsy patients, who needed electrodes implanted deep in their brain for evaluation and treatment anyway. Now able to monitor awake and moving patients, the researchers replicated the timing anomalies that were discovered by Libet and are discussed in the following study.
Manipulating the unconscious has previously been demonstrated via transcranial magnetic stimulation (TMS), which uses magnetism to safely stimulate or inhibit parts or hemispheres of the brain. Related experiments have shown that neurostimulation can affect which hand people move, even though the experience of free will remains intact. Ammon and Gandevia found that it was possible to influence which hand people move by using TMS to stimulate frontal regions involved in movement planning in either the left or right hemisphere of the brain.
Scientists were able to change which hand subjects normally chose to move without subjects noticing the influence.
Right-handed people would normally choose to move their right hand 60% of the time, but when the right hemisphere was stimulated they would instead choose their left hand 80% of the time (recall that the right hemisphere of the brain is responsible for the left side of the body, and the left hemisphere for the right). Despite the external influence on their decision-making, the subjects continued to report that they believed their choice of hand had been made freely. In a follow-up experiment, Alvaro Pascual-Leone and colleagues found similar results, but also noted that the transcranial magnetic stimulation must occur within 200 milliseconds, consistent with the time-course derived from the Libet experiments.
It should be noted that, despite his findings, Libet himself did not interpret his experiment as evidence of the inefficacy of conscious free will — he points out that although the tendency to press a button may build up for 500 milliseconds, the conscious will retains a right to veto any action at the last moment. According to this model, unconscious impulses to perform a volitional act are open to suppression by the conscious efforts of the subject (sometimes referred to as “free won’t”). A comparison is made with a golfer, who may swing a club several times before striking the ball: the action simply gets a rubber stamp of approval at the last millisecond. Max Velmans argues, however, that “free won’t” may turn out to need as much neural preparation as free will.
Unconsciously cancelling actions: The possibility that this human “free won’t” is also the prerogative of the subconscious is being explored.
“Slightly better than chance — this isn’t enough to claim that you can see the brain making its mind up before conscious awareness,” argues Adina Roskies.
Adina Roskies is a neuroscientist and philosopher who identified two major divisions in neuroethics: the ethics of neuroscience and the neuroscience of ethics. Research falling under the first area, the ethics of neuroscience, is focused on the ethics of practice of neuroscience and “the implications of our mechanistic understanding of brain function for society… integrating neuroscientific knowledge with ethical and social thought”. The neuroscience of ethics borrows from the field of neurophilosophy and examines the neurological foundations of moral cognition. Roskies works on free will at Dartmouth College in Hanover, New Hampshire.
“Besides, all it suggests is that there are some physical factors that influence decision-making,” which shouldn’t be surprising. Philosophers who know about the science, she adds, don’t think this sort of study is good evidence for the absence of free will, because the experiments are caricatures of decision-making. “Even the seemingly simple decision of whether to have tea or coffee is more complex than deciding whether to push a button with one hand or the other,” says Roskies.
Haynes stands by his interpretation, and has replicated and refined his results in two studies. One uses more accurate scanning techniques to confirm the roles of the brain regions implicated in his previous work. In the other, which is yet to be published, Haynes and his team asked subjects to add or subtract two numbers from a series being presented on a screen. Deciding whether to add or subtract reflects a more complex intention than that of whether to push a button, and Haynes argues that it is a more realistic model for everyday decisions. Even in this more abstract task, the researchers detected activity up to four seconds before the subjects were conscious of deciding, Haynes says.
BBC Video: Neuroscience and Free Will, presented by Marcus du Sautoy, Professor of Mathematics at the University of Oxford, with John-Dylan Haynes.
“The opposition of ‘freedom’ versus ‘brain’ is far too clumsy — firstly because the brain is part of our person, and secondly because brain processes are consistent with all our beliefs and values. If someone sometimes says, ‘My brain has decided, so I cannot be sure it was really me,’ that is nonsense,” says John-Dylan Haynes.
For philosophers, a major obstacle to accepting the premise that the brain relies on intentions rather than free will to make decisions is the worry that understanding how brains cause behavior will undermine our views about free will and, consequently, about moral responsibility. The potential ethical consequences of such a result are sweeping. Philosophers are a long way from being convinced that brain scans can demolish free will so easily. Many have questioned the neuroscientists’ results and interpretations, arguing that the researchers have not quite grasped the concept that they say they are debunking.
One proposed explanation is a ‘forward model’ of motor control. The idea is that our conscious self does not cause all behaviours. Instead, the conscious self is alerted (through various sensations) to behaviours that the rest of the brain and body are already planning and performing. To be clear, this model does not deny that consciousness affects behaviour; it does not forbid conscious experience from being used as input by unconscious processes — information that might modify a behaviour in progress. The key is that the unconscious processes play a much larger role in behaviour, because they are what first initiate actions. This model thus challenges some conceptions of free will — particularly libertarian free will — but not so much compatibilist free will, since self-awareness may only recognize a feeling of will, which appears just before an action.
It may be possible, then, that our intuitions about the role of our conscious “intentions” have led us astray; it may be that we have confused correlation with causation by believing that conscious awareness necessarily causes the body’s movement. This possibility is bolstered by findings in neurostimulation and brain damage, as well as by research into introspection illusions. Such illusions show that humans do not have full access to various internal processes. The discovery that humans possess a determined will would have implications for moral responsibility. Neuroscientist and author Sam Harris believes this is the case, but says we should never have expected that we had libertarian free will.
Harris argues that “Thoughts simply arise in the brain. What else could they do? The truth about us is even stranger than we may suppose: The illusion of free will is itself an illusion.”
A recent study by Masao Matsuhashi and Mark Hallett claims to have replicated Libet’s findings without relying on subjective report or clock memorization on the part of participants. The authors believe that their method can identify the time [T] at which a subject becomes aware of her own movement. Matsuhashi and Hallett argue that this time not only varies, but often occurs after early phases of movement genesis have already begun, as measured by the readiness potential. They conclude that a person’s awareness cannot be the cause of movement, and may instead only notice the movement.
The researchers hypothesized that, if our conscious intentions are what cause movement genesis (i.e. the start of an action), then our conscious intention should always occur before any movement has begun. Otherwise, if we ever become aware of a movement only after it has already started, our awareness could not have been the cause of that particular movement. Simply put, conscious intention must precede action if it is its cause.
To test this hypothesis, Matsuhashi and Hallett had volunteers perform self-paced finger movements while “stop-signal” sounds played randomly. The volunteers had to stop their movement if they heard a signal while aware of any intention to move. Whenever there was an action, the authors documented and graphed any tones that occurred before that action. The graph of tones before actions therefore shows tones at only two points: (1) before the subject is even aware of their movement genesis (or else they would have stopped, or “vetoed”, the movement), and (2) after it is too late to veto the action.
By looking to see when tones started preventing actions, the researchers could estimate how many seconds before most actions the subject normally becomes aware. This moment of awareness — as shown in the graph below — is dubbed [T]. It can be found by looking at the border between tones and no tones. That is how they estimate the timing of the conscious intention to move without relying on the subject’s knowledge or requiring them to focus on a clock. The last step of the experiment is to compare time [T] for each subject with their event-related potential (ERP) measures, which reveal when their finger movement genesis first begins.
What the researchers found was that the time of the conscious intention to move [T] normally occurred too late to be the cause of movement genesis. See the example of a subject’s graph below on the right. Although it is not shown on the graph, the subject’s readiness potential tells us that their actions started at −2.8 seconds, which is even before their time [T] (−1.8 seconds). Matsuhashi and Hallett concluded that the feeling of the conscious intention to move does not cause movement genesis; both the feeling of intention and the movement itself are the result of unconscious processing.
This study is similar to Libet’s in some ways: volunteers were again asked to perform finger extensions in short, self-paced intervals. In this version of the experiment, the researchers introduced randomly timed “stop tones” during the self-paced movements. If participants were not conscious of any intention to move, they simply ignored the tone. If, on the other hand, they were aware of their intention to move at the time of the tone, they had to try to veto the action, then relax for a bit before continuing the self-paced movements. This experimental design allowed Matsuhashi and Hallett to see when, relative to a finger movement, any tones occurred. The goal was to identify their own equivalent of Libet’s W — their own estimate of the timing of the conscious intention to move — which they would call [T].
Testing the hypothesis that ‘conscious intention occurs after movement genesis has already begun’ required the researchers to analyse the distribution of tones before actions. The idea is that, after time T, tones will lead to vetoing and thus be under-represented in the data. There would also be a point of no return, P, where a tone falls too close to movement onset for the movement to be vetoed. In other words, the researchers expected to see the following on the graph: many tones while the subjects are not yet aware of their movement genesis, followed by a drop in the number of tones during the period in which the subjects are conscious of their intentions and are stopping any movements, and finally a brief increase in tones again, when the subjects do not have time to process the tone and prevent an action — they have passed the action’s “point of no return”. That is exactly what the researchers found.
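The inference can be illustrated with a toy simulation. Tones landing between the awareness time T and the point of no return P trigger vetoes and so vanish from the “tones before actions” data; the gap in the tone histogram then recovers T and P. All numbers here are hypothetical, and this is a sketch of the logic, not the authors’ analysis code.

```python
import random

T_TRUE, P_TRUE = -1.8, -0.2  # hypothetical awareness time and point of no return (s)

def observed_tone_times(n_tones=20000, window=(-4.0, 0.0)):
    """Tones that survive into the data: a tone between T and P causes
    a veto, so no action follows and the tone is never plotted."""
    lo, hi = window
    tones = (random.uniform(lo, hi) for _ in range(n_tones))
    return [t for t in tones if not (T_TRUE <= t <= P_TRUE)]

def estimate_gap(times, bin_width=0.1, window=(-4.0, 0.0)):
    """Histogram the tone times and read T and P off the empty stretch."""
    lo, hi = window
    n_bins = round((hi - lo) / bin_width)
    counts = [0] * n_bins
    for t in times:
        counts[min(int((t - lo) / bin_width), n_bins - 1)] += 1
    empty = [i for i, c in enumerate(counts) if c == 0]
    t_est = lo + empty[0] * bin_width          # awareness time T
    p_est = lo + (empty[-1] + 1) * bin_width   # point of no return P
    return t_est, p_est

random.seed(0)
t_hat, p_hat = estimate_gap(observed_tone_times())
print(t_hat, p_hat)  # close to -1.8 and -0.2
```

The border of the empty stretch nearest the action gives the point of no return, and the border farther from the action gives the estimate of T, mirroring the “border between tones and no tones” described above.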
This field of research — from both angles — remains highly controversial; there is no consensus among researchers about the significance of the findings, their meaning, or what conclusions may be drawn. In the light of recent studies, there lies the real possibility — a greater likelihood in the eyes of some researchers — that the experience of “will”, and its role in human choice, requires re-conceptualization. While this would carry enormous implications for moral responsibility in general, all of the research above is still new enough to warrant only tentative conclusions.
How the brain constructs consciousness is still a mystery, and cracking it open would have a significant bearing on the question of free will. Numerous models have been proposed — for example, the Multiple Drafts Model, which argues that there is no central Cartesian Theater where conscious experience is represented, but rather that consciousness is located all across the brain. This model would explain the delay between the decision and conscious realization, since experiencing everything as a continuous ‘filmstrip’ lags behind the actual conscious decision. In contrast, models of Cartesian materialism, which imply that there might actually be special brain areas that store the contents of consciousness, have gained little recognition in neuroscience; this does not, however, rule out the possibility of a conscious will. Other models, such as epiphenomenalism, argue that conscious will is an illusion, and that consciousness is a by-product of physical states of the world. Work in this sector is still highly speculative, and there is no single model of consciousness favored by researchers.
Although humans clearly make choices, the role of consciousness (at least, when it comes to motor movements) may need re-conceptualization. Only one thing is certain: the correlation of a conscious “intention to move” with a subsequent “action” does not guarantee causation. Recent studies cast doubt on such a causal relation, and so more empirical data is required.
Most modern philosophers of the mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, especially in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences. Other philosophers, however, adopt a non-physicalist position which challenges the notion that the mind is a purely physical construct. Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues. However, they are far from having been resolved, and modern philosophers of mind continue to ask how the subjective qualities and the intentionality (aboutness) of mental states and properties can be explained in naturalistic terms.
Our perceptual experiences depend on stimuli which arrive at our various sensory organs from the external world and these stimuli cause changes in our mental states, ultimately causing us to feel a sensation, which may be pleasant or unpleasant. Someone’s desire for a slice of pizza, for example, will tend to cause that person to move his or her body in a specific manner and in a specific direction to obtain what he or she wants. The question, then, is how it can be possible for conscious experiences to arise out of a lump of gray matter endowed with nothing but electrochemical properties.
A related problem is how someone’s propositional attitudes – beliefs and desires – cause that individual’s neurons to fire and his muscles to contract. These comprise some of the puzzles that have confronted epistemologists and philosophers of mind from at least the time of René Descartes.
Dualist solutions to the mind-body problem: Dualism is a set of views about the relationship between mind and matter (or body). It begins with the claim that mental phenomena are, in some respects, non-physical. One of the earliest known formulations of mind-body dualism was expressed in the eastern Sankhya and Yoga schools of Hindu philosophy, which divided the world into purusha (mind/spirit) and prakriti (material substance). Specifically, the Yoga Sutra of Patanjali presents an analytical approach to the nature of the mind.
In Western philosophy, the earliest discussions of dualist ideas are in the writings of Plato and Aristotle. Each of these maintained, for different reasons, that humans’ “intelligence” (a faculty of the mind or soul) could not be identified with, or explained in terms of, their physical body. However, the best-known version of dualism is due to René Descartes (1641), and holds that the mind is a non-extended, non-physical substance, a res cogitans or mental substance. Descartes was the first to clearly identify the mind with consciousness and self-awareness, and to distinguish this from the brain, which was the seat of intelligence. He was therefore the first to formulate the mind-body problem in the form in which it still exists today.
The most frequently used argument in favour of dualism is that it appeals to the common-sense intuition that conscious experience is distinct from inanimate matter. If asked what the mind is, the average person would usually respond by identifying it with their self, their personality, their soul, or some other such entity. They would almost certainly deny that the mind simply is the brain, or vice-versa, finding the idea that there is just one ontological entity at play to be too mechanistic, or simply unintelligible. Many modern philosophers of mind think that these intuitions are misleading and that we should use our critical faculties, along with empirical evidence from the sciences, to examine these assumptions to determine whether there is any real basis to them.
Another important argument in favor of dualism is that the mental and the physical seem to have quite different, and perhaps irreconcilable, properties. Mental events have a subjective quality, whereas physical events do not. So, for example, one can reasonably ask what a burnt finger feels like, or what a blue sky looks like, or what nice music sounds like to a person. But it is meaningless, or at least odd, to ask what a surge in the uptake of glutamate in the dorsolateral portion of the hippocampus feels like.
The Argument from Reason holds that if, as monism implies, all of our thoughts are the effects of physical causes, then we have no reason for assuming that they are also the consequent of a reasonable ground. Knowledge, however, is apprehended by reasoning from ground to consequent. Therefore, if monism is correct, there would be no way of knowing this—or anything else not the direct result of a physical cause, and we could not even suppose it, except by a fluke.
Philosophers of mind call the subjective aspects of mental events ‘qualia’ or raw feels. There is something that it is like to feel pain, to see a familiar shade of blue, and so on. There are qualia involved in these mental events that seem particularly difficult to reduce to anything physical.
If consciousness (the mind) can exist independently of physical reality (the brain), one must explain how physical memories are created concerning consciousness. Dualism must therefore explain how consciousness affects physical reality. One possible explanation is that of a miracle, proposed by Arnold Geulincx and Nicolas Malebranche, where all mind-body interactions require the direct intervention of God. Another possible explanation has been proposed by C. S. Lewis. Although at the time Lewis wrote Miracles, quantum mechanics and physical indeterminism were only in the initial stages of acceptance, he stated the logical possibility that, if the physical world were proved to be indeterministic, this would provide an entry (interaction) point into the traditionally viewed closed system, where a scientifically described physically probable/improbable event could be philosophically described as the action of a non-physical entity on physical reality.
The zombie argument is based on a thought experiment proposed by Todd Moody, and developed by David Chalmers in his book The Conscious Mind. The basic idea is that one can imagine one’s body, and therefore conceive the existence of one’s body, without any conscious states being associated with it. Chalmers’ argument is that it seems very plausible that such a being could exist, because all that is needed is that all and only the things that the physical sciences describe about a zombie must be true of it. Since none of the concepts involved in these sciences make reference to consciousness or other mental phenomena, and any physical entity can by definition be described scientifically via physics, the move from conceivability to possibility is not such a large one. Others, such as Dennett, have argued that the notion of a philosophical zombie is an incoherent, or unlikely, concept. It has been argued under physicalism that one must either believe that anyone, including oneself, might be a zombie, or that no one can be a zombie — following from the assertion that one’s own conviction about being (or not being) a zombie is a product of the physical world and is therefore no different from anyone else’s. This argument has been expressed by Dennett, who argues that “Zombies think they are conscious, think they have qualia, think they suffer pains—they are just ‘wrong’ (according to this lamentable tradition), in ways that neither they nor we could ever discover!”
Monist solutions to the mind-body problem: In contrast to dualism, monism does not accept any fundamental divisions. The fundamentally undivided nature of reality has been central to forms of eastern philosophy for over two millennia. In Indian and Chinese philosophy, monism is integral to how experience is understood. Today, the most common forms of monism in Western philosophy are physicalist. Physicalistic monism asserts that the only existing substance is physical, in some sense of that term to be clarified by our best science. However, a variety of formulations (see below) are possible. Another form of monism – idealism – states that the only existing substance is mental. Although pure idealism, such as that of George Berkeley, is uncommon in contemporary Western philosophy, a more sophisticated variant called panpsychism, according to which mental experience and properties may be at the foundation of physical experience and properties, has been espoused by philosophers such as Alfred North Whitehead and David Ray Griffin.
Phenomenalism is the theory that representations (or sense data) of external objects are all that exist. Such a view was briefly adopted by Bertrand Russell and many of the logical positivists during the early 20th century. A third possibility is to accept the existence of a basic substance which is neither physical nor mental. The mental and physical would then both be properties of this neutral substance. Such a position was adopted by Baruch Spinoza and was popularized by Ernst Mach in the 19th century. This neutral monism, as it is called, resembles property dualism.
Behaviorism dominated philosophy of mind for much of the 20th century, especially the first half. In psychology, behaviorism developed as a reaction to the inadequacies of introspectionism. Introspective reports on one’s own interior mental life are not subject to careful examination for accuracy and cannot be used to form predictive generalizations. Without generalizability and the possibility of third-person examination, the behaviorists argued, psychology cannot be scientific. The way out, therefore, was to eliminate the idea of an interior mental life (and hence an ontologically independent mind) altogether and focus instead on the description of observable behavior.
Parallel to these developments in psychology, a philosophical behaviorism (sometimes called logical behaviorism) was developed. This is characterized by a strong verificationism, which generally considers unverifiable statements about interior mental life senseless. For the behaviorist, mental states are not interior states on which one can make introspective reports. They are just descriptions of behavior or dispositions to behave in certain ways, made by third parties to explain and predict others’ behavior.
Philosophical behaviorism has fallen out of favor since the latter half of the 20th century, coinciding with the rise of cognitivism and cognitive psychology. Cognitivists reject behaviorism due to several perceived problems. For example, behaviorism could be said to be counter-intuitive when it maintains that someone who reports experiencing a painful headache is merely talking about behavior.
Each attempt to answer the mind-body problem encounters substantial problems. Some philosophers argue that this is because there is an underlying conceptual confusion. These philosophers, such as Ludwig Wittgenstein and his followers in the tradition of linguistic criticism, therefore reject the problem as illusory. They argue that it is an error to ask how mental and biological states fit together. Rather it should simply be accepted that human experience can be described in different ways—for instance, in a mental and in a biological vocabulary. Illusory problems arise if one tries to describe the one in terms of the other’s vocabulary or if the mental vocabulary is used in the wrong contexts. This is the case, for instance, if one searches for mental states of the brain. The brain is simply the wrong context for the use of mental vocabulary—the search for mental states of the brain is therefore a category error or a sort of fallacy of reasoning.
“…the current work is in broad agreement with a general trend in neuroscience of volition: although we may experience that our conscious decisions and thoughts cause our actions, these experiences are in fact based on readouts of brain activity in a network of brain areas that control voluntary action… It is clearly wrong to think of [the judgement of will] as a prior intention, located at the very earliest moment of decision in an extended action chain. Rather, W seems to mark an intention-in-action, quite closely linked to action execution”
The conclusion must be that the truth lies somewhere toward the middle of these two disciplines; where exactly that middle is . . . .