cognitive critique


Volume 1, 2009


Volume 2, 2010



Volume 3, 2011



Volume 4, 2011

  •   1
    Gibson’s reasons for realism
  • C. Wade Savage
  • Full Text PDF (385kb) | Full Text HTML


    Over a period of some fifty years, in three books and a mountain of articles, James J. Gibson developed what he called a theory of direct visual perception, a theory which, he believed, makes reasonable the common sense position that has been called by philosophers direct or naïve realism (Gibson 1967, p.168). His theory is novel, iconoclastic, and vastly important both for psychology and philosophy. I am as eager as he to defend some version of direct perceptual realism, and as dissatisfied as he with most theories currently in vogue. But I am not yet persuaded that his theory is what he claims it to be, and I would like to present my doubts in this paper. Like Gibson, I will concentrate on visual perception.

    In brief, Gibson's theory is that visual perception is not a process of inferring from or organizing visual sensations produced by light falling on the retina, but rather a process in which the total visual system extracts (picks up) information about the environment from the light at the eye(s) of the organism as it explores its environment. He objects to theories that base perception on sensations (sense impressions, sense-data) or postulate some operation that converts sensations into percepts. His alternative information-based theory of perception assumes that sensory impressions are occasional and incidental symptoms of perception and are not required or normally involved in perception.

  •  15
    The rationality of preference construction (and the irrationality of rational choice)
  • Claire Hill
  • Full Text PDF (569kb) | Full Text HTML


    Economics, and law and economics, assume that preferences are fixed rather than constructed. The assumption is unrealistic, they acknowledge, but is nevertheless useful for generating accurate predictions. They assume as well that having fixed preferences, with the other attributes accorded to them under rational choice theory, is normatively desirable. Both of these assumptions are false. It is critical in many cases to acknowledge constructed preferences; moreover, such preferences are often not normatively undesirable. More can and should be done to develop a more nuanced conception of preferences; such an account should take into account the extent to which the 'discovery' metaphor underlying the idea of fixed preferences distracts from the needed inquiries into how preferences are 'created': the importance of the process, and the role of narratives and classification. While the task is difficult, the payoffs potentially include better approaches to difficult public policy problems.

  •  61
    Time, observation, movement
  • Myrka Zago, Mauro Carrozzo, Alessandro Moscatelli &
    Francesco Lacquaniti
  • Full Text PDF (423kb) | Full Text HTML


    Traditionally, a sharp distinction was made between conscious perception of elapsed time, considered a key attribute of cognition, and automatic time processes involving basic sensory and motor functions. Recently, however, this dichotomous view has been challenged on the ground that time perception and timed actions share very similar features, at least for events lasting less than a second. For both perception and action, time estimates require internally generated and/or externally triggered signals, because there is no specific sensory receptor for time in the nervous system. We argue that time can be estimated by synchronizing a neural time base either to sensory stimuli reflecting external events or to an internal simulation of the corresponding events. We review evidence in favor of the existence of distributed, specialized mechanisms, possibly related to brain mechanisms which simulate the behavior of different categories of objects by means of distinct internal models. A critical specialization is related to the animate-inanimate distinction which hinges on different kinematic and kinetic properties of these two different categories. Thus, the time base used by the brain to process visual motion can be calibrated against the specific predictions regarding the motion of biological characters in the case of animate motion, whereas it can be calibrated against the predictions of motion of passive objects in the case of inanimate motion.

  • 87 
    Game theory in neuroscience
  • Hyojung Seo, Timothy J. Vickery & Daeyeol Lee
  • Full Text PDF (1.49mb) | Full Text HTML


    Decisions made during social interaction are complex due to the inherent uncertainty about their outcomes, which are jointly determined by the actions of the decision maker and others. Game theory, a mathematical analysis of such interdependent decision making, provides a computational framework to extract core components of complex social situations and to analyze decision making in terms of those quantifiable components. In particular, normative prescription of optimal strategies can be compared to the strategies actually used by humans and animals, thereby providing insights into the nature of observed deviations from prescribed strategies. Here, we review the recent advances in decision neuroscience based on game theoretic approaches, focusing on two major topics. First, a number of studies have uncovered behavioral and neural mechanisms of learning that mediate adaptive decision making during dynamic interactions among decision agents. We highlight multiple learning systems distributed in the cortical and subcortical networks supporting different types of learning during interactive games, such as model-free reinforcement learning and model-based belief learning. Second, numerous studies have investigated the role of social norms, such as fairness, reciprocity and cooperation, in decision making and their representations in the brain. We predict that in combination with sophisticated manipulation of socio-cognitive factors, game theoretic approaches will continue to provide useful tools to understand multifaceted aspects of complex social decision making, including their neural substrates.


  • 121 
    Two decades of functional imaging: from nuclear spins to cortical columns
  • Kamil Ugurbil
  • Full Text PDF (2.34mb) | Full Text HTML


    Since its introduction in 1992, there has been a revolution in the ability to image brain function with functional magnetic resonance imaging (fMRI), going from early experiments demonstrating relatively coarse images of activity in the visual cortex to mapping cortical columns and to "brain reading" that constructs mental experiences of an individual, all exploiting the fact that we are endowed with a complex paramagnetic molecule sequestered in our blood vessels and that neuronal activity has spatially specific metabolic and physiologic consequences. These two decades of fMRI have been marked by incessant improvements in instrumentation, innovative developments in image acquisition and reconstruction methods, and a significant expansion in our knowledge of neurovascular coupling. Collectively, this body of work has recently brought us to the point of depicting functional activity in three dimensions, in the entire human brain, with submillimeter resolution. Some aspects of these accomplishments and the rationale for their pursuit are reviewed and presented together with a personal history of the development of fMRI.



Volume 5, 2012

  • 1
    Consciousness Revealed?
  • Andrew C. Papanicolaou
  • Full Text PDF (525kb) | Full Text HTML


    The purpose of this essay is to address and reconcile the conflicting messages deriving from the functional neuroimaging literature regarding whether the brain mechanism of consciousness and the neuronal correlates of concepts and of transient experiences are visualizable, and to what extent they have been visualized. It is argued, first, that the likelihood of visualization of different aspects of mentation can be deduced unambiguously from the fundamental principles and facts that comprise the formal structure of the functional neuroimaging methods. Second, it is shown that to the degree that such aspects do have neurological validity and the formal structure of the methods holds true, the likelihood of visualizing their neuronal correlates varies from extremely high for all psychological functions, including consciousness, to practically null for conscious experiences constituting the stream of consciousness. Third, the various claims broadcast through the professional journals regarding visualization of the neuronal networks of consciousness and its products are scrutinized. The results of this scrutiny are that, thus far, and in spite of tremendous technical achievements on the part of the researchers in the area, none of these claims is correct. The validity of these results and what they bode for future research is left for the reader to evaluate, in the context of the formal structure of the methods and the pragmatic constraints that condition their implementation.

  • 37
    Listening To Bob Dylan
  • Alex Lubet
  • Full Text PDF (458kb) | Full Text HTML


    What does it mean to say that one is listening to a piece of music, to a performance, or an artist? Initially, it may seem that there is little or no difference between these three questions. Upon closer reflection, though, there may be significant differences and these may be contingent on a variety of variables pertaining to musical content.

    This paper examines the case of listening to Bob Dylan and argues that it may provide a unique listening protocol. This case is examined in the context of a larger theory of music listening that considers issues including memory, erudition, focus versus diffusion, and verbal versus non-verbal musicality. Aspects of the research are grounded in the field of disability studies in music, and intermittent mental disability in particular. The potential of a role for experimental research based on this theory of listening in the making is discussed.

  • 59 
    Towards a Science of Embodiment
  • Guerino Mazzola, Alex Lubet & Romina De Novellis
  • Full Text PDF (642kb) | Full Text HTML


    We propose here a science of human body/embodiment. We discuss our motivations, provide scientific and artistic precedents/foreshadowings, and then offer an overall picture of this nascent science, as well as a sample, unsolved problem from the field of disability studies. We imagine a science whose core subject is the connection of the body — as it appears in an action-perception paradigm derived from mirror neurons, the embodied artificial intelligence (AI), and embodied theories of dance, music, and painting — to the cognition of emotions, language, mathematics and logic. We propose to construe the missing link between these two poles, action-body and cognitive stratum, by use of a theory of gestures developed from music theory. We conclude with some suggestions for the foundation of a science of human embodiment and an associated scientific journal.

  • 87
    How complex are neural interactions?
  • Bagrat Amirikian
  • Full Text PDF (655kb) | Full Text HTML


    A fundamental goal of systems neuroscience is to understand how the collective dynamics of neurons encode sensory information and guide behavior. To answer this important question one needs to uncover the network of underlying neuronal interactions. Whereas advances in technology during the last several decades made it possible to record neural activity simultaneously from a large number of network elements, these techniques do not provide information about the physical connectivity between the elements being recorded. Thus, neuroscientists are challenged to solve the inverse problem: inferring interactions between network elements from the recorded signals that arise from the network connectivity structure. Here, we review studies that address the problem of reconstructing network interactions from high-dimensional datasets generated by modern techniques, and focus on the emerging theoretical models capable of capturing the dominant network interactions of any order. These models are beginning to shed light on the structure and complexity of neuronal interactions.



Volume 6, 2012

  • 1
    The Nature-Nurture Controversy: A Dialectical Essay
  • Matt McGue & Irving I. Gottesman
  • Full Text PDF (406kb) | Full Text HTML


    For millennia, scholars, philosophers and poets have speculated on the origins of individual differences in behavior, and especially the extent to which these differences owe to inborn natural factors (nature) versus life circumstances (nurture). The modern form of the nature-nurture debate took shape in the late 19th century when, based on his empirical studies, Sir Francis Galton concluded that nature prevails enormously over nurture. However, Galton's interest in eugenics undermined early research into the genetic origins of behavior, which did not re-emerge until the latter half of the 20th century, when behavioral geneticists started to publish their findings from twin and adoption studies. The current consensus is that the nature versus nurture debate represents a false dichotomy, and that progress beyond this fallacy will require investigating how genetic and environmental factors combine to affect the biological systems that underlie behavioral phenotypes.

  • 11
    Building Layouts as Cognitive Data: Purview and Purview Interface
  • John Peponis
  • Full Text PDF (5.11mb) | Full Text HTML


    This paper offers a computational description and provisional statistical profile of properties of building layouts that contribute to making them intelligible. Buildings are arrangements of boundaries that demarcate interior space according to the organization of human life and activity. At the same time a building is usually a continuously connected interior. Thus, the implicit question addressed in practice by the design of every building layout is as follows: how can spatial demarcation and differentiation be consistent with our capacity to cognitively map a building as a whole? Everyday experience suggests that spaces with characteristic names such as corridors, courtyards, atria, halls and hallways act as references. They provide us with a more expansive visual field than the rooms devoted to particular uses; more importantly, they often afford an overview of the connections to adjoining spaces. This is generalized by the idea of the purview interface: the interface between ordinary spaces confining perception to a limited part of the interior, and prominent spaces providing overview not only of area but also of connections. The purview interface can function as a rudimentary foundation of layout intelligibility. I show that prominent overview spaces are strategically distributed so that all other spaces are only very few visual turns away from the nearest overview space. The strategic distribution of overview spaces limits the minimum number of turns around corners that are necessary in order to move between any two spaces in the building. This affords anchoring cognitive maps on overview spaces which can act as effective references when navigating between any two spaces.

  • 71
    A Functionalist View of Metacognition in Animals
  • Nisheeth Srivastava & C. Wade Savage
  • Full Text PDF (388kb) | Full Text HTML


    We present a functionalist analysis of the current state of cognitive science research into the phenomenon of metacognition. We begin from the assumption that cognitive intelligence is an organ just like any other biological organ, with the defined function of allowing intelligent entities to maintain homeostasis with a changing but predictable environment. This understanding leads to the conclusion that human cognition is functionally derived to generate accurate preference relations about the world. Examining empirical evidence from recent research using definitions of metacognition and theory of mind that emerge from our functionalist understanding of cognition, we conclude decisively in favor of the existence of metacognition in non-human animals.



Volume 7, 2013


Recent work on decision-making suggests that we are a conglomeration of multiple decision-making information-processing systems. In this paper, I address consequences of this work on decision-making processes for three philosophical conceptualizations that can be categorized as forms of dualism.

  1. A rejection of Cartesian dualism. Although most scientists reject the existence of a separate non-physical being, the importance of this still has not been fully appreciated in many other fields, such as legal or philosophical scholarship.
  2. A rejection of the software analogy. Many researchers still argue that we are software running on neural hardware. I will argue that this is a modern form of dualism and that this hypothesis is not compatible with the data.
  3. A rejection of Augustinian dualism. Many researchers identify human cognition with only one of the multiple decision-making systems, which leads to concepts such as "emotion made me do it." I will argue that this is a poor description of the human individual.

All three of these errors are still used in making decisions in current law and legal scholarship. As scientific results begin to undercut these dualist interpretations, it becomes dangerously easy to reject intention, free will, and responsibility. I argue that taking a more realistic perspective on human decision-making will provide a more reasonable basis for legal decisions.


The concept of linguistics has isolated language from other natural systems, and linguistics has remained overwhelmingly a descriptive, rather than a theoretical science. The idea of language universals has thus been a matter of finding shared properties among the highest levels in the organization of language, rather than an attempt to understand the connections of language to other natural systems at all levels of its organization. Language basics such as discreteness have been treated as axiomatic or refractory. The origin of language has been treated as an exercise in continuity, rather than an attempt to understand the organization of language. These trends have been driven by the personalities of Edward Sapir, Zellig Harris and Noam Chomsky, more than by scientific considerations. Sapir’s eagerness, Harris’s indecision, and Chomsky’s magnetism have generated a perfect scientific storm.


Language and mind share a common source in nature with the number system and algebra. An equation represents the property of symmetry expressed in the form of symbols, where the equals sign represents the axis of symmetry, and the numbers are attached to the symmetrical halves. At the same time, an equation is a simple declarative sentence whose main verb is the equals sign, and which asserts, "I am symmetrical," and "It is true that I am symmetrical." Equations with arithmetical operators are generated on the basis of a dynamically developing cascade of symmetrical fractal subunits occurring in successive tiers. Ordinary sentences are generated by modifying the equation's symmetrical infrastructure into an asymmetrical configuration, thus (1) allowing ordinary sentences to accept verbs other than the equals sign and nouns other than numbers, and (2) introducing syntax by giving meaning to word order, i.e., a=b and b=a are the same because equations are symmetrical, while John keeps bees and Bees keep John are different because ordinary sentences are asymmetrical. Since only human beings possess language, the uniquely human property of mind consists of the sense of truth-and-falsity expressed through declarative sentences. The necessity of discreteness for algebra and language, together with the periodic nature of the "phoneme" chart and the periodic table of the integers, suggests that language and algebra represent the quantum property of matter manifested at a human scale.


Important institutions and events merit occasional reflection—reflection on their accomplishments, on the antecedents to current events and situations, and on the people who brought them about. Cognitive Critique is an appropriate place for reflecting on recent events related to the Center for Cognitive Sciences. It was founded as the Center for Research in Human Learning (January 1964), and with shifting research trends became the Center for Research in Learning, Perception and Cognition (1987), and then the Center for Cognitive Sciences (1996). As of academic year 2013-14, the Center for Cognitive Sciences has reached an incredible milestone: 50 years of continuous funding by grants from the National Science Foundation, the National Institute of Child Health and Human Development, and the colleges at the University of Minnesota. This remarkable record reflects special leadership, effective training, and highly successful research and scholarship by the faculty and student members of the Center. It certainly is cause for reflection and celebration.

Sadly, however, the past year also records the loss of the Center's first two Directors, James J. Jenkins and Herbert L. Pick. These two intellectual leaders of the Center modeled wide-ranging scholarship, commitment to true fellowship, the importance of working across disciplinary boundaries to advance our understanding, and abiding faith in the value of training the next generation. Their personal strengths shaped the Center for all time, led to many successes, and guided later directors. Herein, we shall reflect on these two seminal figures in the Center's history.




Online ISSN: 1946-7060
Cognitive Critique is published by the Center for Cognitive Sciences at the University of Minnesota.
©2016 Regents of the University of Minnesota. All rights reserved. The University of Minnesota is an equal opportunity educator and employer.
Updated March 25, 2016