SELF-DECISIONS OF THOUGHT
BY: RICHARD J. KOSCIEJEW
Whereas disciplines such as psychology and linguistics were in a position to contribute to the birth of cognitive science only after undergoing internal revolutions, and artificial intelligence had first to be created, neuroscience had a much longer and more continuous history. The idea that the brain was not merely the organ of mental processes but might be decomposed into component systems which performed different specific functions in mental life was a product of the nineteenth century. The challenge to neuroscience, then and now, is to parse the brain into its functional components, and the still more difficult task is to figure out how those components work together as a system. The implication for cognitive science is that information about the distinct functions performed by brain components can be incorporated into models of cognitive activities. Exploring this functional decomposition and localization depends in part on the development of appropriate tools. Throughout the 1980's such tools increasingly made it possible to determine the structure of the brain and to relate its constituent components to mental life.
Before scientists could make claims about the brain's functional architecture, they needed a basic account of its structure. At the end of the nineteenth century major advances were made at both the micro and the macro levels. During the 1940's and 1950's, advances in understanding the brain contributed to researchers' thinking about how concepts such as information and computation might provide a basis for understanding mental processes. At the micro level the crucial breakthrough was the discovery that nerve tissue is made up of discrete cells - neurons - and that there are tiny gaps (synapses) between the axons that carry impulses away from one neuron and the dendrites of other neurons that pick up those impulses.
In the 1880's Camillo Golgi introduced silver nitrate to stain brain slices for microscopic examination. Silver nitrate had the unusual and useful feature of staining only certain cells in the specimen, thereby making it possible to see individual cells, with their associated axons and dendrites, clearly.
Processes at the micro level of the neuronal substrate would figure prominently in understanding cognitive processes such as learning (which is widely thought to involve changes at synapses that alter the ability of one neuron to excite or inhibit another) and would become an inspiration for computational modelling.
The development in the 1940's and 1950's of computational analyses of neuronal systems was the beginning of brain-like computational modelling. A key figure in this development was Warren McCulloch, a neurophysiologist whose collaborations with Walter Pitts showed that networks of idealized neurons could compute any propositional logical function, and who claimed that, if supplemented with a tape and a means for altering symbols on the tape, such networks were equivalent in computing power to a Turing machine. (Even before the first digital computer was built, Turing (1936) had proposed an abstract machine for performing computations.) The units of the network were intended as simplified neurons: a McCulloch-Pitts unit receives excitatory and inhibitory inputs from other units or from outside the network. The state of a network of these units emerges over a number of cycles: on a given cycle, if a unit receives any inhibitory input, it is blocked from firing; if it receives no inhibitory input, it fires if the sum of its equally weighted excitatory inputs exceeds a specified threshold. A unit with this design is appropriate not only as a model of a simplified neuron but also as a model of an electrical relay - a basic component of computers - thus making a logical link between the brain and computers: the units could be associated with propositions, and because of their binary nature, their activation states could be associated with truth values.
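To make this firing rule concrete, here is a minimal sketch in Python of a single unit of this kind. The function name and the AND example are my own illustrations, not drawn from McCulloch and Pitts:

    def mcculloch_pitts_unit(excitatory, inhibitory, threshold):
        # One cycle of the unit described above: any active inhibitory input
        # vetoes firing; otherwise the unit fires when the number of active,
        # equally weighted excitatory inputs meets the threshold.
        if any(inhibitory):
            return 0
        return 1 if sum(excitatory) >= threshold else 0

    # With a threshold of 2, the unit behaves as a logical AND of two inputs,
    # illustrating the link between binary units and truth values.
    assert mcculloch_pitts_unit([1, 1], [], threshold=2) == 1
    assert mcculloch_pitts_unit([1, 0], [], threshold=2) == 0
    assert mcculloch_pitts_unit([1, 1], [1], threshold=2) == 0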
The focus on perception continued in the central parts of Donald Hebb's 1949 book, The Organization of Behaviour. Its concern with stimulus and response - and with what occurs in the brain in the interval between them - points to one of the main emphases of Hebb's analysis: an alternative to the opposition between the more localizationist 'switchboard' theories emphasizing sensory-motor connections and the anti-localizationist approaches of the Gestalt theorists and his own mentor, Lashley. The key to his alternative was the notion of neuronal cell assemblies, which consisted of interconnected, and hence self-reinforcing, sets of neurons which represent and transform information in the brain.
Another kind of advance involved linking different macro-level brain areas with specific cognitive functions. This required overcoming the view, widely shared in the eighteenth century, that the brain, especially the cerebral cortex, operated holistically, without any localized differentiation of function.
The major credit for promoting the idea that the macrostructure of the brain is divided into distinct functional areas is due to Franz Joseph Gall. Working in the early nineteenth century, he proposed that protrusions or indentations in the skull indicated the size of the underlying brain areas. He further thought that it was the size of brain areas that determined how great a contribution they made to behaviour. Accordingly, he proposed that by correlating protrusions and indentations in individuals' skulls with their excesses and deficiencies in particular mental and character traits, he could determine which brain areas were responsible for each mental or character trait. Phrenology, the name given to Gall's views by his one-time collaborator Johann Spurzheim, has been much derided as quackery. Nonetheless, Gall's fundamental claim that differentiation of structure corresponds to differentiation of function in the cortex came to be widely accepted, so much so that those espousing localization of function in the latter part of the nineteenth century were often referred to as neophrenologists.
One problem that researchers faced in attempting to localize mental functions in the brain was the lack of any standardized way of designating parts of the brain. The folding of the cortex creates 'gyri' (hills) and 'sulci' (valleys): anatomists have named some of them and used the most prominent sulci to divide the brain into lobes - the temporal, occipital, parietal, and frontal lobes. But each lobe itself contains a number of anatomically distinct regions. Using such criteria as responses to various stains and the distribution of cells between cortical layers, a number of researchers at the end of the nineteenth century produced more detailed atlases of the brain. Of these, that by Korbinian Brodmann (1909) became the most widely adopted, and his numbering of brain regions is still widely employed today.
One of the earliest and most fruitful sources of information about the function of brain areas is the study of deficits that ensue when a neural structure is damaged. (If brain regions were massively interconnected and the brain functioned as an undifferentiated whole, it would be impossible to work out the function of a particular part of the brain from what happens when that part is damaged.) Cognitive psychologists attempt to understand behaviour in terms of the hidden processes involved in the representation and manipulation of information and knowledge. This approach provides an ideal link between brain and behaviour, since the function of the brain can also be described in terms of the representation and manipulation of knowledge. Cognitive neuropsychology, which is the application of the cognitive approach to the study of patients with damaged brains, now flourishes, especially in Europe. Many practitioners, like their forebears at the end of the nineteenth century, consider the study of single cases particularly informative.
Neuropsychology is the study of patients with abnormal brain function. Because such patients often exhibit highly selective impairments - for example, short-term memory may be impaired while long-term memory remains intact - they are thought to provide evidence that the mind is modular. The assumption is that if the mind were instead a uniform general-purpose system, damage to the brain would have more uniform effects on function.
An alternative interpretation, which seems to challenge this modular conclusion, claims that the selective nature of impairments reflects a hierarchical, rather than an independent, relationship among cognitive functions. On such a view, more complex tasks are simply ones that require a greater quantity of largely homogeneous cognitive resources. By decreasing the overall amount of brain power available, the claim is, brain damage has a disproportionate impact on the cognitive functions demanding the most resources.
However, this nonmodular view is incompatible with an important source of evidence arising from double-dissociations. In a double-dissociation, two patients or groups of patients exhibit complementary deficits. According to Timothy Shallice (1988), a dissociation occurs when a patient performs extremely poorly on one task . . . and at a normal level, or at least at a very much better level, on another task. In a double-dissociation a second patient shows the reverse performance pattern on the same two tasks. If the selective impairment of brain function reflected merely the quantity of cognitive resources required for different functions, one would expect all patients to exhibit the same pattern of cognitive deficits.
Yet, as Shallice argues, double-dissociations do not provide decisive evidence in favour of modularity. Although, assuming the mind to be modular, we would expect double-dissociations, Shallice cautions that we can draw no definite conclusion about whether the mind is modular simply because we find such dissociations.
The assumption that double-dissociations imply modularity is nonetheless pervasive in neuropsychology. Shallice and his colleagues attribute this view to two related factors. First, David Plaut (1995) points to a failure to distinguish the claim that double-dissociations and modularity fit together naturally from the claim that the former genuinely imply the latter. Plaut acknowledged that modularity may be seen as a natural interpretation of the evidence from double-dissociations in the sense that, on such an interpretation, the taxonomy of cognitive abilities mirrors that of processing mechanisms. The idea is that this isomorphic relationship is the simplest, and so, intuitively, the most natural interpretation. But, Plaut claims, to move from a claim about what seems intuitively natural to an assertion about how nature actually works - or has to work - is illegitimate.
To legitimize this move, one would need additionally to assume that general-purpose and modular systems exhaust the types of possible cognitive systems. The acceptance of this claim is the second factor that, in Shallice and Plaut's view, accounts for the widespread belief that modularity can be inferred simply from the existence of double-dissociations. Since, as noted above, a general-purpose architecture is incompatible with the existence of double-dissociations, the inference would be justified if these were the only two types of cognitive architecture available. But Shallice's point is precisely that they are not. Shallice identifies a number of nonmodular (or partly modular) processing systems that might give rise to the pattern of deficits exhibited in double-dissociations, including overlapping processing regions, coupled systems, semi-modules, and multilevel systems. Once such a repertoire of alternative explanations is recognized, the mere elimination of general-purpose systems does not license the conclusion that the mind is modular.
Another type of nonmodular system that has seemed especially unlikely to generate double-dissociations is a connectionist network. (Connectionism, artificial life, and dynamical systems are all approaches to cognition which are relatively new and have been claimed to represent paradigm shifts. Just how deep the shifts are remains an open question, but each of these approaches certainly seems to offer novel ways of dealing with basic questions in cognitive science.) Yet, in recent work by Plaut and Shallice, a connectionist network lesioned to simulate brain damage does just that. The network, which was trained to pronounce written words on the basis of their meanings, exhibited just such a double-dissociation following damage.
Although questions remain regarding the relevance of connectionist models for understanding human cognitive processes, this result is important, because it provides an example of distinct behavioural deficits that do not correspond to distinct structures in the system. That is, although a lesion in one location generates a concrete, but not abstract, word-reading deficit, and a lesion in a different location generates an abstract but not concrete word-reading deficit, the system does not contain distinct components for pronouncing concrete words and abstract ones.
Instead, the complementary deficits arise from the differential contribution that different components of the system make to the pronunciation of these two word classes. The direct pathway from orthography to meaning is relatively more involved in the pronunciation
of abstract words than concrete words. Thus, damage to this pathway has a disproportionately strong effect on abstract words. The pronunciation of concrete words, by contrast, relies more on what is known as the clean-up pathway. Thus, the ability to pronounce concrete words correctly is more impaired by a lesion to this pathway than the ability to pronounce abstract words correctly. It is important to emphasize that in the normally functioning network both pathways are involved in the pronunciation of both classes of words. As Plaut points out, 'it would be a mistake to claim that the direct pathway is specialized for abstract words while the clean-up pathway is specialized for concrete words'.
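The sense in which a network can be "lesioned" is easy to state computationally. The Python sketch below is not the Plaut and Shallice model itself (the two weight matrices and their sizes are arbitrary placeholders); it only illustrates what damaging one pathway while sparing another amounts to, namely removing a random fraction of the connections in that pathway:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-ins for two pathways in a trained network; the values are
    # random placeholders, not trained weights.
    W_direct = rng.normal(size=(10, 20))    # stand-in for the "direct" pathway
    W_cleanup = rng.normal(size=(20, 20))   # stand-in for the "clean-up" pathway

    def lesion(weights, proportion, rng):
        # Simulate damage by zeroing a random proportion of connections.
        mask = rng.random(weights.shape) >= proportion
        return weights * mask

    # Damage the direct pathway while leaving the clean-up pathway intact;
    # in the model described above this disproportionately harms abstract words.
    W_direct_damaged = lesion(W_direct, proportion=0.3, rng=rng)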
The observation that cognitive function can be damaged selectively is intimately connected with the claim that, in such cases, regions of the brain are damaged selectively as well. The discipline of neuropsychology only gets off the ground once a correlation between cognitive activity and brain damage is identified. While there is much hesitation regarding how narrowly this correlation should be specified, it is a commonplace assumption in neuropsychology that functionally independent cognitive systems are physically localized. Indeed, one reason why information-processing models are attractive to neuropsychologists is that, in addition to providing a much needed intermediate level between brain and behaviour, they can, as Shallice (1988) puts it, 'be easily "lesioned" conceptually'.
The principle that different brain regions perform different functions - localization of function - has had vocal opponents. In the 1820's Marie-Jean-Pierre Flourens voiced objections to Gall, and a century later the neuropsychologist Karl Lashley argued that higher cognitive processes (ones involved in memory and learning) were not localized but distributed. Lashley (1929) introduced two alternative principles, equipotentiality and mass action. 'Equipotentiality' refers to the ability of brain regions to take on different functions as needed (e.g., if the region that previously performed a function were damaged), while 'mass action' refers to the idea that the ability to perform higher functions relates to the total amount of available cortex, not to any one part. Lashley's failure to localize memory traces is recounted in his often cited paper, 'In search of the engram', in which he described repeated failures to localize such major functions as habitual behaviour. Nonetheless, despite the doubts of Flourens, Lashley, and others, most researchers have assumed that - at some level of detail in the analysis of function - functions are localized in the brain. To obtain evidence for particular localizations, researchers have had to develop a number of research techniques.
Even so, neuropsychology moves on. More recent studies using highly sensitive implicit measures of on-line comprehension and memory have called this dissociation into question. The findings of Lorraine Tyler, a neuropsychologist at Birkbeck, London, illustrate some of these newer data. Tyler (1992) directly addressed the separability of comprehension and memory as distinct, sequentially ordered stages by testing whether the so-called comprehension deficit of Wernicke's aphasics is truly specific to comprehension or reflects both a comprehension deficit and a memory deficit that shows up only when testing relies on after-the-fact, explicit measures based on conscious judgments about prior comprehension.
However, the study of patients with memory problems has shown that there are so many different kinds of memory that this term may cease to have much value. The most obvious features for categorizing the different kinds of memory are 'content' and 'time'. We can remember many different kinds of things: telephone numbers or patterns, the meaning of words or what we had for breakfast yesterday, a poem by T.S. Eliot or how to ride a bicycle. The lengths of time for which we can remember things also vary: we may remember what we had for breakfast for a few days, but how to ride a bicycle for the rest of our lives.
Early studies of memory suggested that there was an important distinction between short-term memory (minutes) and long-term memory (hours, days, years), but they assumed that material passed through the short-term store into long-term memory. However, in the 1970's a series of patients who had a severe, specific impairment of short-term memory and no impairment of long-term memory were described by Elizabeth Warrington and her colleagues. One of these cases is R.W. R.W. had a meningioma removed from the temporo-parietal region of his left hemisphere when he was in his twenties. Some 20 years later he still has a severe deficit of short-term memory for spoken words. He cannot remember a string of random letters or numbers longer than about two items. In spite of this handicap he has no problem producing or understanding speech, and his short-term memory for letters or numbers presented visually is normal. Because his short-term memory deficit is so circumscribed, he has no difficulty with his responsible job as a medical orderly. Practical problems arise for him only when he is given a telephone number or a long unusual name over the telephone.
What this and similar cases suggest is that there are different short-term stores for different kinds of material, and that these short-term stores function independently of long-term memory. R.W. has a specific problem with short-term storage of verbal material (words, numbers, letters) presented through the auditory modality. This phonological store seems to be located in the left inferior parietal region of the brain.
Evidence that short-term memory involves a number of modality-specific stores which can function independently of one another can also be found in normal people, but this demonstration depends upon subtle and ingenious experimentation. Alan Baddeley and his colleagues have developed a model of working memory in normal people and have demonstrated the existence of independent components by using the dual task paradigm. For example, concurrent articulation (saying 'blah blah blah') interferes with short-term memory for words but not with short-term memory for visual patterns. We can then presume that this interference occurs because the brain system involved in articulation overlaps with the system concerned with short-term memory for words but not with the system concerned with short-term memory for patterns.
Many of the developments prior to the actual birth of cognitive science involved researchers who resisted the label 'artificial intelligence' and referred instead to 'complex information processing'. The very idea of information processing has an ecumenical conceptual feel to it, suggesting that just as humans process information, so too could computers, and perhaps even in the same way. Psychology itself had prepared the ground: James's concern with individual differences contrasted with Wundt's structural, Kantian emphasis on the intrapsychic, and his engagingly written Principles of Psychology, ten years in the writing, is still frequently quoted. From its beginnings with Wundt and James, psychology developed quickly as a discipline, and it came to incorporate tools from the mathematicians and engineers who were laying the foundations for artificial intelligence. The new enterprise was christened artificial intelligence; however, as so often happens at christenings, the name served both to unify and to divide. 'Artificial' suggested that the form of intelligence exhibited by computers might differ from that of humans. Two important pioneers of artificial intelligence were Marvin Minsky and John McCarthy. Minsky, after completing a dissertation on neural networks at Princeton in 1954, was drawn to modelling intelligence by writing programs for von Neumann-style computers. McCarthy also did his doctoral work at Princeton, first doing research on finite automata - devices that progress through a finite number of different states - before he too became attracted to modelling intelligent processes on digital computers. Indeed, at the time, neither Minsky nor McCarthy was strongly committed to the idea that artificial intelligence would be particularly revealing about human cognition; Newell and Simon, who had the only working program, were more concerned with human cognition and more directed toward psychology. Minsky and McCarthy secured funding from the Rockefeller Foundation for a research project on artificial intelligence, whose participants included Nathaniel Rochester and Oliver Selfridge, who were using digital computers to simulate neural networks. The organizers and most of the other participants, however, took advantage of the computer's ability to manipulate symbols to simulate thinking more directly, on the premise that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
One of the basic challenges in cognitive science is how to account for behaviour. If human behaviour were either entirely random or limited to a fixed repertoire of actions which could be memorized, then there would be little to explain. What makes the problem so interesting is that behaviour is patterned, and is often productive (we generalize these patterns to novel circumstances).
A natural way to account for the patterned nature of cognition is to assume that underlying these behaviours is a set of rules. Rules provide a compact and elegant way to account for the abstract, productive nature of behaviour. Rules also offer a way to capture system-level properties: That is, there exists the possibility that rules can interact in complex ways. The problem that can arise, however, is that some human behaviours are often only partially general and productive. A good example of this (and one which has been well studied and the topic of considerable debate) is the formation of the past tense in English verbs.
In 1986, David Rumelhart and James McClelland published the results of a connectionist simulation in which they trained a network to produce the past-tense forms of English present-tense verbs. Learning involved gradually changing the weights in the network so that the network's overall performance improved. Rumelhart and McClelland reported that the network was not only able to master the task (though not perfectly), but that it also exhibited the same U-shaped performance found in children. They suggested that this demonstrated that language performance which could be described as rule-governed might in fact not arise from explicit rules.
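The phrase 'gradually changing the weights' can be illustrated with a very small sketch. This is not the Rumelhart and McClelland architecture or their encoding of verbs; it is only a generic delta-rule update, shown to make concrete what a single learning step on one hypothetical verb encoding looks like:

    import numpy as np

    rng = np.random.default_rng(1)

    n_in, n_out = 16, 16                            # placeholder sizes for the input/output codes
    W = rng.normal(scale=0.1, size=(n_out, n_in))   # initial connection weights
    learning_rate = 0.1

    def train_step(W, present_code, past_code):
        # One delta-rule update: nudge the weights so that the network's
        # output for this present-tense code moves toward the past-tense code.
        prediction = W @ present_code
        error = past_code - prediction
        return W + learning_rate * np.outer(error, present_code)

    # One illustrative update on a random (hypothetical) verb encoding.
    present = rng.integers(0, 2, size=n_in).astype(float)
    past = rng.integers(0, 2, size=n_out).astype(float)
    W = train_step(W, present, past)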
The Rumelhart and McClelland paper generated considerable controversy, which continues to the present time. Steven Pinker and Alan Prince (1988) wrote a detailed and highly critical response, in which they questioned many of the methodological assumptions made by Rumelhart and McClelland in their simulation and challenged their conclusions. In fact, many of Pinker and Prince's criticisms are probably correct, and they were addressed in subsequent connectionist models of past-tense formation.
A great many subsequent simulations have been carried out which correct problems in the Rumelhart and McClelland model. These simulations have in turn generated an ongoing debate about new issues. In recent writings, Pinker and Prince have suggested that although a connectionist-like system might be responsible for producing the irregulars, there are qualitative differences in the way in which regular morphology is processed which can only be explained in terms of rules. This has become known as the dual mechanism account. Proponents of a single mechanism approach argue that a network can in fact produce the full range of behaviours which characterize regular and irregular verbs.
It has been pointed out that the digital framework - mind as computer - has permeated work in cognition until recent times, and that connectionism can be understood, at least in part, as an alternative which views mind as brain. The digital framework has also had a profound impact on the way we think about computation and information processing. Much of the formal work in learning theory, for example, draws heavily on results from computer science.
Interestingly, there is one subset of researchers who have not adopted the digital framework: These are people who study motor activity. Researchers such as Michael Turvey and Scott Kelso (to name only two prominent scientists from a large, active community) have instead used the tools of dynamical systems to try to understand how motor activity is planned and executed. This seems natural, given that motor activity has a dynamical quality which is difficult to ignore. For example, when we walk or run or ski, our limbs move in a rhythmic but complex manner and involve behaviours which change over time (and hence are dynamic). More recently, scientists from various other domains in cognitive science have also begun to explore the dynamical systems framework as an alternative to thinking about cognition in terms of digital computers.
What is a dynamical system? Most simply, it is a system which changes over time according to some lawful rule. Put this way, there is very little which is not a dynamical system, including digital computers. In practice, dynamical systems must also be characterized in terms of some set of components which have states, and the components must somehow belong together. (In other words, my left foot and the Coliseum in Rome do not constitute a natural system - unless, perhaps, my left foot happens to be kicking stones in the Coliseum.) The goal of dynamical systems theory is to provide a mathematical formalism which can usefully characterize the kinds of changes which occur in such systems.
There are a number of constructs which are important in dynamical systems theory. For instance, having identified the parts of a system which are of interest to us (e.g., the positions of the jaw, tongue, and lower lip), we can assign numerical values to these entities' current state. We can then use (in this example) a three-dimensional graph (one axis each for jaw, tongue, and lower lip position) to visualize the way in which all these components change their state over time. This three-dimensional representation is often called the state space. If we are interested in a formal characterization of how the system changes over time, then this leads us to use differential equations. These capture the way in which the variables' values evolve over time and in relation to one another.
A final example of a construct used by dynamical systems theory is the notion of an attractor. An attractor is a state toward which, under normal conditions, a dynamical system will tend to move (although it may not actually get there). A child on a playground swing constitutes a dynamical system with an attractor that has the child and the swing at rest in the bottom vertical position. The swing may oscillate back and forth if the child is pushed or pumps her legs, but there is an attracting force which draws the child back toward the rest position. The goal of a dynamical systems analysis of this situation would be to describe the behaviour of the system using mathematical equations which tell us how the state of the system (e.g., the position of the child at any given moment) changes over time.
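The swing example can be written down directly as a small dynamical system. The sketch below (a damped pendulum with made-up constants, integrated with a simple Euler step) tracks the state (angle, angular velocity) over time; whatever the starting state, the trajectory settles toward the rest position, which is the attractor described above:

    import numpy as np

    def simulate_swing(theta0=1.0, omega0=0.0, damping=0.3, gravity=9.8,
                       length=2.0, dt=0.01, steps=5000):
        # State of the system: angle (theta) and angular velocity (omega).
        theta, omega = theta0, omega0
        trajectory = []
        for _ in range(steps):
            # Differential equations for a damped pendulum, advanced by one
            # small Euler step per iteration.
            alpha = -(gravity / length) * np.sin(theta) - damping * omega
            omega += alpha * dt
            theta += omega * dt
            trajectory.append((theta, omega))
        return trajectory

    trajectory = simulate_swing()
    print(trajectory[-1])   # very close to (0.0, 0.0): the state has settled at the attractor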
Take a moment to use mental imagery to perform the following tasks: (1) decide whether an apple is more similar in shape to a banana or to an orange; (2) determine how to rearrange the furniture in your bedroom to make room for a new dresser; and (3) plan how to drive home during rush hour. Although we take our ability to perform tasks such as these for granted, they raise a host of interesting questions about imagery. For instance, what accounts for our ability to generate, maintain, transform, and inspect images? How do we characterize individual differences in imagery ability? What is the relation between imagery and spatial representation? What can we do with mental imagery? At this point our focus will be on visual imagery, because we know more about visual imagery than about auditory, gustatory, or olfactory imagery. (Reisberg, 1992, focusses on auditory imagery.)
Mental images are the ultimate in the subjective. An image can be directly experienced only by the imager. For instance, the images you generate to perform the above tasks cannot be experienced by other people or compared directly with images generated by different people. Consequently, developing a cognitive understanding of imagery has been fraught with difficulties. Because mental images are neither directly shareable nor directly measurable, researchers have been forced to develop experimental methods that allow them to make inferences from behavioural and neuropsychological data about the nature of the mental representations and processes involved in the generation, maintenance, transformation, and inspection of images.
Imagery has been a central cognitive concept, for good or for bad, since antiquity. Greek orators used imagery-based mnemonic devices to help them remember the sequence of events when reciting long oral traditions and other pieces, just as modern-day speakers rely on notes and multimedia props to guide their presentations. For instance, to use the method of loci to encode a sequence of events, an orator would create a mental image of each event at one of a series of familiar locations. To remember the events, the orator would then mentally revisit each location in sequence. Over the centuries, philosophers from Aristotle and Plato to the British empiricists have examined the role of the mental image in thought. In the 1880's, Sir Francis Galton provided the first psychological documentation that individuals vary greatly in the vividness of their visual imagery. To his surprise, some people could not (or would not) even attempt to form an image of their breakfast table when asked to do so, much less introspect upon the image to answer questions about it, such as the colour of the tablecloth.
Psychologists between 1889 and 1913 focussed on the study of imagery. However, that research stumbled badly, largely because the techniques for studying imagery were seriously flawed. Data obtained from the highly fallible technique of introspection led to increasingly vituperative arguments between those who believed that thinking was based on imagery and those who believed in imageless thought. Ultimately, the lack of satisfactory techniques for studying imagery led to the downfall of the experimental study of the mind and the rise of radical behaviourism by the 1920's. By the early 1970's, cognitive psychologists had developed a variety of reliable behavioural techniques to investigate the role of imagery in cognition. The results of several studies suggested that under certain circumstances imagery was critically involved in various aspects of memory and on-line cognitive processing. Although the results of any single study could be challenged, taken together the evidence from a wide variety of experimental procedures converged to reinstate imagery as a valid cognitive concept.
The modern-day study of imagery originated in studies of memory. In the 1960s Alan Paivio initiated a programmatic study of the facilitating effects of imagery-related variables on memory performance (Paivio, 1986). To account for observations that memory is often enhanced when imagery is involved, Paivio formulated his dual coding theory. This theory, which is fundamentally about the representation of knowledge in (semantic) memory, posits two coding systems. The verbal system is specialized for the processing of linguistic materials, while the nonverbal system is specialized for the processing of nonverbal objects and events, including images. The representational units of the verbal system (logogens) and of the nonverbal system (imagens) are richly interconnected. Hence, seeing a dog activates the dog-imagen, which can activate the relevant dog-logogen, thereby ensuring that objects can be named. Likewise, hearing the word cat activates the cat-logogen, which can activate the cat-imagen. Dual coding theory explains the facilitating effect of imagery on retention with reference to the beneficial effects of coding in two systems. That is, if verbal materials are concrete, or the conditions are conducive to the use of imaginal strategies, they activate processing in the nonverbal system as well as the verbal system. Because nonverbal codes are assumed to be more memorable than verbal codes, conditions that increase nonverbal coding improve retention. Dual coding theory has been challenged on some fronts (de Vega et al., 1996), but its legacy endures, as some version of dual coding theory is evoked, implicitly or explicitly, whenever the nature of the representation of knowledge is discussed.
Mental imagery appears to be closely linked to the relevant perceptual system. Sydney Segal and Vincent Fusella found that people have more difficulty perceiving something while simultaneously imaging something else in the same sensory system than they do when a different sensory system is involved. For instance, in one of their studies, detecting the presence of a faint visual stimulus (e.g., a small blue arrow) was impaired by concurrent visual imagery (e.g., a tree) but not by concurrent auditory imagery (e.g., a telephone ringing), and vice versa.
Within the visual system, Martha Farah demonstrated that visual imagery can facilitate processing of particular content. In her study, detecting a letter (e.g., H) was enhanced when people imaged that letter rather than an alternative letter (e.g., T) during the test interval. In yet other studies, perceptual and imaginal versions of the same task yield the same pattern of results. For instance, Roger Shepard and Peter Podgorny found that the pattern of response times for determining whether a presented dot fell on a target block letter was the same whether the letter was presented visually or was imaged.
Neuropsychological evidence is also consistent with the presumed links between visual perception and visual imagery. For instance, Edoardo Bisiach and his colleagues have shown that patients with unilateral visual neglect in visual perception also suffer from comparable neglect in imagery. The advent of neuroimaging techniques, such as positron emission tomography (PET) to measure regional cerebral blood flow, has allowed researchers to demonstrate that visual imagery activates those portions of the brain used in visual perception. For instance, using PET scans, Stephen Kosslyn and his colleagues have shown that the primary visual cortex is activated when people image objects with their eyes closed.
Making the link between visual imagery and visual perception allows us to use our knowledge of visual perception to guide our thinking about the why and the how of visual imagery. That is, we can use our knowledge of the evolutionary history and adaptive significance of perception to think about the functions of visual imagery in cognition. In addition, we can use our knowledge of how visual perception operates to formulate effective models of visual imagery.
An evolutionary perspective suggests that the prevalence of visual imagery relative to other forms of imagery may reflect the way in which ecological conditions have shaped the abilities of our visual system through evolutionary time. Among other things, visual perception allows us to inspect, reach for, and manipulate objects, as well as to navigate in space. Visual imagery presumably allows us to simulate these perceptual-motor activities in the service of solving problems, whether in ongoing behaviour (e.g., anticipating accurately who will be in what lane while driving home during rush hour on a freeway) or in advance of the behaviour (e.g., planning which route to take to avoid traffic while still at home).
Visual perception provides information both about objects and about their spatial relations. Neuropsychological research on visual perception has determined that different brain systems process information about object properties (i.e., what) and spatial relations (i.e., where). The ventral system, involving pathways from the primary visual cortex to the temporal lobe, is dedicated to the processing of object properties, such as shape, colour, and texture, independently of the location of the object. The dorsal system, involving pathways from the primary visual cortex to the parietal lobe, is dedicated to processing the size, location, and orientation of objects.
Although historically there has been controversy as to whether the information in visual images should be characterized as visual or spatial, the evidence today is consistent with the conclusion that visual images represent both types of information. Much of this evidence stemmed from the ingenious efforts of Roger Shepard and his colleagues to develop behavioural tasks that could reveal the nature both of mental images and of the transformations that can be performed on these representations (Shepard and Cooper, 1982). He and Susan Chipman demonstrated that people can use visual images to make decisions about shape. They found that people could use remembered information about visual appearances (e.g., the shapes of the provinces of Canada or of the states of the USA) to make similarity judgments comparable to the judgments they made when they actually viewed the shapes. Hence, visual images carry visual information about shape, and the image is analogous to the percept of that shape. Success in object comparison tasks depends upon how well the object properties are represented in the image, independently of the location of the imaged object.
A vast number of studies have used variations of the mental rotation task, originally developed by Roger Shepard and Jacqueline Metzler in the early 1970's, to examine the mental transformation of spatial information in imagery. They asked people to determine whether a test stimulus was the same as, or a mirror image of, a comparison stimulus. The angular disparity between the two stimuli varied from 0 to 180 degrees. The time taken to make the decision increased with increasing angular disparity, suggesting that people mentally rotate visually presented shapes just as they would rotate a stimulus physically. These and many other studies have demonstrated that a visual image is analogous to a visual percept, in that it can be used to represent and process information about object properties and spatial relations. In addition, the processes used to operate on mental images appear to be functionally analogous to those used to operate on actual objects in space.
Neuropsychological evidence also supports the claim that visual imagery can represent both objects and spatial relations. For instance, Martha Farah and her colleagues have described two patients showing a double-dissociation between the ability to recognize objects and spatial relations in vision and in imagery. One person had difficulty identifying objects in vision and in imagery but could process spatial information in both modes, while the other could recognize objects in vision and in imagery but could not process spatial relations in either.
Although visual imagery and visual perception share some neural subsystems, visual imagery must differ from visual perception in significant ways. After all, we rarely mistake images for percepts. A current, theoretically interesting problem concerns the extent to which mental images can be used to make discoveries (Finke, 1990; Cornoldi et al., 1996; Roskos-Ewoldsen et al., 1993). Despite a rich anecdotal record suggesting the centrality of mental imagery in the creative discoveries of many celebrated individuals (e.g., Kekule, Tesla, Einstein), research shows that it is generally easier to reconstrue percepts or drawings of images than the images themselves. For instance, Stephen Reed showed that people were more likely to find a hidden part of a complex figure (e.g., a triangle in the Star of David) in perception than in imagery. Likewise, Deborah Chambers and Daniel Reisberg demonstrated that people were much more likely to find the alternative interpretation of classic ambiguous figures (e.g., the Jastrow duck-rabbit) in a percept of the figure than in an image.
Despite the apparent limitation on the ability to reinterpret images, most people are able to use imagery to make discoveries in the open-ended mental synthesis task devised by Finke and Slayton. This task requires people to use imagery to synthesize a novel pattern (not one specified by the experimenter) from three randomly selected simple geometric shapes and alphanumeric characters. For instance, given the parts, circle, triangle, and capital letter X, a person might report and then draw a Ferris wheel. Curiously, Anderson and Helstrup found that providing perceptual support does not seem to facilitate discovery performance in the mental synthesis task as much as it does in reconstrual tasks.
The notion that persons and animals often see or perceive things as wholes was a central conviction of Gestalt psychology (rather than a secondary theme, as it was for Wundt). In some cases the whole is spatial, as when one perceives the roundness of an object; in others it is temporal, as when an individual imagines goals and organizes behaviour as a means to those goals. An example of perceiving temporal wholes is Wolfgang Kohler's research from 1913 to 1917 with chimpanzees in the Canary Islands.
The research on sensation and perception had deep historical roots. How deep? In one area, colour vision, a major achievement in 1968 was Jameson and Hurvich's integration of two opposing theories first proposed by Hering and Helmholtz in the nineteenth century. Hering's idea of opposing processes could explain some aspects of colour perception, but the resulting theory was more complicated and less intuitive than the one proposed by Hermann von Helmholtz. Helmholtz carried the day, but in the long run Hering turned out to be right.
Returning to mainstream research on sensation and perception: colour perception takes some explaining, and it helps first to allocate descriptive vocabulary to three distinct levels: the physics of stimuli, the physiology of receptors, and the psychology of post-receptorial processes. The visible spectrum is linearly ordered by wavelength, ranging (in humans) from approximately 400 to 700 nanometres (nm). Newton's well-known experiments with prisms yielded the spectral hues: hues each produced by a particular wavelength of electromagnetic radiation within the spectrum. Sunlight is a mixture of light of all those different hues; hues which, when mixed, yield white are called complements. But the ordering of colours is complicated immediately by the existence of extra-spectral hues - hues not found in the rainbow, such as the purple needed to connect the end points, or colours such as brown. Even the so-called unique red - a red which is not at all yellowish and not at all bluish - is nowhere to be found in the spectrum. Furthermore, we find that a given spectral hue can be matched by light composed of many different combinations of wavelengths, and that there is no simple rule of physics that yields all and only the combinations that match in hue. Those matching combinations are called metamers. The existence and constitution of metamers is an entry-level puzzle that any theory of colour perception must explain.
Details of the physiology of receptors can help. Writers on optics after Newton confirmed that, if one chose carefully, any spectral hue could be matched using just three different lights - three different primaries - in different intensities. One had to take care that none of the primaries was a complement of the others, and that none could be matched by a combination of the others. Thomas Young took cognizance of this trichromatic character of human colour vision, noted that the retina of the eye was limited in surface area, and proposed in 1801 that the retina contained exactly three different types of colour-sensitive elements. The many different hues manifest in visual experience could be produced by suitable combinations of the outputs of those three. Young's deduction was basically correct: there are three classes of cones in the normal human retina, which differ in the parts of the spectrum to which each is optimally sensitive. Short wavelength (S) cones are optimally sensitive to radiation of about 430 nm, middle wavelength (M) cones to about 530 nm, and long wavelength (L) cones to about 560 nm. Each photopigment will absorb photons of other wavelengths, but with lower probability.
The properties of retinal receptors can explain many of the facts of colour mixing and matching. Knowing the absorption spectrum for each of the three classes of cones and the energy spectrum of light entering the eye, one can calculate the likely number of absorptions in each of the three systems S, M, and L. This yields a point in a three-dimensional wavelength-mixture space, whose axes are numbers of absorptions in the three cone systems. Since the visual system has no inputs other than the absorptions in its receptors, stimuli that yield the same point in wavelength-mixture space will match. Complex combinations of wavelengths can be treated as vector sums; if two combinations, no matter how complex, arrive at the same point, you have a metamer. The various laws of colour mixing - Grassmann's laws - have an algebraic flavour, with '+' standing for mixing together and '=' for matching. For example, if A = B and C = D, then A + C = B + D. With retinal physiology better understood, all these laws can be interpreted literally, with '=' now meaning equal numbers of absorptions in the S, M, and L cone systems. Sums become vector sums. Grassmann, who was a mathematician, would be pleased. Where physics fails us, the physiology of the retina allows us to write simple rules for the constitution of metamers.
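The calculation described here is easy to carry out. In the sketch below the three cone sensitivity curves are rough Gaussian placeholders peaked near 430, 530, and 560 nm (not measured cone fundamentals); the point is only to show how a light's energy spectrum is projected onto a point in the three-dimensional wavelength-mixture space, and how metamers fall out as physically different lights that land on the same point:

    import numpy as np

    wavelengths = np.arange(400, 701)                # visible spectrum in nm

    def cone_sensitivity(peak, width=40.0):
        # Placeholder Gaussian sensitivity curve, not a measured cone fundamental.
        return np.exp(-((wavelengths - peak) ** 2) / (2 * width ** 2))

    S, M, L = (cone_sensitivity(p) for p in (430.0, 530.0, 560.0))

    def cone_responses(light_spectrum):
        # Project a light's energy spectrum onto the three cone classes,
        # giving a point in wavelength-mixture space.
        return np.array([light_spectrum @ S, light_spectrum @ M, light_spectrum @ L])

    def are_metamers(light_a, light_b, tolerance=1e-6):
        # Two physically different lights match (are metamers) when they
        # yield the same point in (S, M, L) space.
        return np.allclose(cone_responses(light_a), cone_responses(light_b), atol=tolerance)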
Retinal details do not constitute a theory of perception, but they do suggest a simple, intuitive model. From the retina there proceed three channels of chromatic information (three 'fibres'), one corresponding to each class of cone. These three channels are combined centrally to yield sensations of colour. Such was the proposal of the Young-Helmholtz theory. References in the older literature to S cones as 'blue cones' are probably holdovers from this long-dominant theory.
By contrast, Hering's opponent process theory is complex and counterintuitive. Hering thought that there were four fundamental colours, organized in two pairs: red versus green and blue versus yellow. Hue information could be carried in just two channels, one for each such pair. Each channel takes inputs from at least two of the classes of cones, excited by some and inhibited by others. In this model, no cone is a 'blue cone', since blue arises only in a more central process, requiring inputs from at least two classes of cones. In addition, the model proposes a third, achromatic channel, which sums inputs from all three classes of cones.
It is important to recognize that Helmholtz and Hering could agree that the retina contains three distinct classes of cones, and could agree on all the facts about colour mixing and matching. Any similarities between colours that could be explained by similarities of retinal processes would also fail to distinguish between the theories. They agree on what matches what. Their dispute concerns only processes that commence beyond the retina, as they propose differing organizations for post-receptorial processes. How might one distinguish between such theories?
The answer lies in other aspects of the qualitative similarities among colours. If, for convenience, one shifts to coloured paint chips and tries to arrange them so that their relative distances correspond to their relative similarities, one finds that hues are not ordered linearly, as along the spectrum, but rather form a circle (a hue circle), with the extra-spectral purples connecting the spectral blues to the long-wavelength reds. The centre of the circle will be achromatic - some point on the gray scale which matches the lightness of all the chips in the circle but is neither red nor green nor yellow nor blue. The distance of a chip from the centre reflects the saturation of that hue - roughly, the extent to which the hue of the chip is mixed with white. Colours at the end points of a diameter are complements: their hues cancel to yield the achromatic centre. Each hue circle is two-dimensional, with hue as the angular coordinate and saturation as the radial coordinate. To capture the entire gamut of colours that humans can perceive, one must construct hue circles at different lightness levels, from white to black, and stack them one on top of the other. The entire order is hence three-dimensional, with dimensions of hue, saturation, and lightness.
Hering hypothesized that hue cancellation was due to opposing physiological processes: processes set in motion by some stimuli could be inhibited by others. This requires that each opponent process receive inputs from more than one class of cone, and that some of those inputs be excitatory, others inhibitory. Instead of using angular coordinates, the organization of the hue circle could be captured by two orthogonal opponent processes: one running from red through the achromatic centre to green, the other from blue to yellow. The neutral point - baseline activation - of the red-green process yields a hue neither red nor green, found at the achromatic centre point, and similarly for the yellow-blue process. If one opponent process is at its neutral point, excitation of the other yields one unique hue, and inhibition yields its complement. So if yellow-blue is quiescent, we get a colour sensation of either unique red or unique green - the hues of the end points of the red-green axis. Yellow and blue are the other unique hues: the remaining hues are binary, produced by combinations of activation and inhibition of the two opponent processes.
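As a minimal sketch of this recoding, the function below converts the three cone responses into two chromatic opponent channels and one achromatic channel. The particular weightings are illustrative assumptions, not Hering's own; the text specifies only that each chromatic channel is excited by some cone inputs and inhibited by others, and that the achromatic channel sums all three:

    def opponent_channels(s, m, l):
        # Illustrative opponent recoding of cone responses (weights are assumptions).
        red_green = l - m                  # positive -> reddish, negative -> greenish
        blue_yellow = s - (l + m) / 2      # positive -> bluish, negative -> yellowish
        achromatic = s + m + l             # lightness channel
        return red_green, blue_yellow, achromatic

    # When one chromatic channel sits at its neutral point (zero), the other
    # alone determines the hue, yielding one of the unique hues described above.
    print(opponent_channels(0.2, 0.5, 0.5))   # red-green at baseline; hue set by blue-yellow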
None of these facts about the qualitative similarities among the colours follows from the facts of colour mixing and matching. From retinal-based explanations we get, at best, receptorial similarity, or proximity within wavelength-mixture space. But the perceptual similarities of colours do not map in any simple way onto such receptor-based similarities. The identities of the unique hues are not determined by colour mixing and matching, or even by the structure of perceptual similarities among colours. Many pairs of colours are complements and, so far as mixing and matching data go, could serve equally well as end points of the opponent axes. Even though the model is a model of colour perception, it proposes principles of organization that lie rather deep within the physiology of the organism, remote from direct empirical test.
In other cognitive studies, Neisser attempted to integrate Gibson's perspective with a more traditional cognitive one, while others influenced by Gibson, such as Michael Turvey, Robert Shaw, and Scott Kelso, have pursued a more radical approach, one that is now leading to links with the dynamical systems perspective.
However, in its earlier period cognitive science tended to limit its focus to events presumed to be taking place within the mind or in the brain. While all researchers would acknowledge that minds exist within the cranial walls and that what lies within those walls has to deal with the external world (both physical and social), most researchers assumed that they could disregard these considerations when studying cognition. Cognition was taken to operate on information that originated outside the head of the person; in order for this to happen, information had to be represented mentally. Cognitive processes could then operate on these representations, and subsequently the represented information had to be translated into commands to the motor system, but this took place after cognitive processing as such was finished. Jerry Fodor (1980) articulated such theoretical justifications for ignoring both the external world and the body in cognitive science, labelling the resulting framework methodological solipsism, but opposition was already gathering in a number of quarters.
One of the major inspirations for challenging methodological solipsism was the work of J.J. Gibson, a psychologist working at Cornell contemporaneously with the early period of cognitive science but whose impact fell elsewhere. Gibson studied visual perception, but instead of concentrating on the information processing going on within individuals as they see, he examined the information that was available to the organism from its environment. His major contention was that there is much more information available in the light than psychologists recognized, and that organisms need only pick up this information (Gibson, 1966). They do not need to construct the visual world through a process of inference or hypothesis formation. He argued, for example, that people do not need to construct a three-dimensional representation of the world: rather, there is information specifying the three-dimensional nature of the visual scene in the gradients of texture density, the changes in the occlusion of objects as the perceiver moves about in the environment, and so forth. One of Gibson's major contentions was that the perceiver must be understood as an active agent using its own motion to sample information about the environment. Gibson also stressed that not all organisms pick up the same information from the environment; rather, each resonates with information that is coordinated with its own potential for action. Accordingly, he introduced the notion of an affordance: different objects afford different actions to different agents (e.g., a baseball affords throwing to us, but not to frogs), and it is these affordances that organisms perceive.
Thus conceived, the major problem in explaining perception is: How do all these resources get coordinated to let the system as a whole perform its function in the relevant circumstances? Flexibility requires that resources can be used in a variety of ways. Just as a hand can be used as a shovel or a hammer depending on the situation, so neural tissues can operate as line detectors and as components of a mental image as well.
The systems approach, which is grounded in the experience of everyday perception, is not the standard approach within psychology. For several decades the predominant approach has had a different experiential basis. This is the constructivist approach, which models perception on reasoning about what is believed to be the case in the world. There are important differences between the systems and constructivist approaches to perception. The constructivist approach assumes that evolution, learning, and development have no direct significance for how the perceptual system works: if the system had been created just a split second ago, it would behave in exactly the same way. If changes in the system beyond the time scale of perception can be ignored, components of the process can be regarded as fixed, and their function can be studied in isolation. According to this view, a first outline of what the components of the perceptual system are can be given from general design considerations. These start from the observation that perceptual processes mediate between the physical world and what we believe to be the case. Perception starts from a pattern of external physical stimulation (e.g., the photons that reach the eye) and is complete when this pattern is matched to an internally kept set of beliefs or representations of the world. A conceptual distinction is therefore needed between a sensory processing stage and an inferential reasoning stage, which could be called perceptual in a narrower sense of the word.
In the constructivist account, sensory processes provide only lines and angles: Perception tells us what objects we are looking at. Perceptual processes operate on the sensory features to construct a perceptual representation. Unlike sensory features, perceptual representations do not depend faithfully on stimulation: Ambiguous patterns illustrate this. The existence of alternative responses to the same pattern of sensory stimulation requires that there be alternative perceptual representations for that stimulation. Different patterns of sensory stimulation may also elicit the same perceptual response; in particular, it is important that the perceiver recognizes an object as the same under different orientations. An elephant is an elephant whether one is looking at the front, back, or side. For this reason, perceptual representations are often assumed to have a viewpoint-independent frame of reference. Even in unstable circumstances, such representations provide a stable basis for further evaluation against the background of what we know about the world.
The major problem from the constructivist point of view is how to get from objects and events in the world to the perception of them. The fact that sensory processes, being indifferent to object structure and meaning, mediate between the world and experience imposes severe restrictions on perceptual models. By contrast, the need for mediation is denied by a systems account: on this view, perceptual systems operate and have evolved in close interaction with the world, so the perceptual system fits like lock and key with the patterns of the environment. A crucial distinction between systems and constructivist approaches to perception therefore concerns the construal of sensory processes.
Our actions are continuously shaped by what we perceive, and the immediacy of these experiences makes it easy to take perception for granted. Yet perception requires the flexible cooperation of complex neuro-anatomical resources: the eye, the optic nerve, and a significant portion of the brain are involved in vision, and we may further consider the eye muscles used for focussing and targeting the gaze to be part of the visual system, as well as the muscles of the neck and shoulders with which postural adjustments are made. Nonetheless, in sensory processing the identification of each such feature will still not be influenced by the overall pattern of which it is a component.
The notion of sensory processes has its historical roots in the concept of sensation. A sensation is the phenomenal awareness of a primary quality (the brightness and hue of a colour, the loudness and pitch of a tone). Phenomenal awareness means that the perceiver experiences what it is like to sense the colour or the tone: Primary refers to the fact that these are the operants presupposed in the notion of constructive operation. The concept of sensation found its original justification in classical a priori conceptions, which may well be based on false assumptions. However, the study of sensation has evolved into a separate domain with its own research methods. Classical psychophysics, which started in nineteenth-century Leipzig with Gustav Theodor Fechner, tries to establish lawful connections between how perceivers judge their experience, on the one hand, and physical quantities, on the other. These connections have been described as logarithmic functions of intensity (Fechner and Weber) or as power functions (S.S. Stevens); Fechner's proposal results from his assumption that just noticeable differences, which are proportional to physical intensity, are the units of sensation. Measuring such relations involves subjects detecting a weak signal (a low-intensity light, a faint sound) or discriminating between two signals. But what if a perceiver is just cautious, for instance, in reporting the observed signal? The next question, therefore, is: How much are sensations a by-product of judgmental factors? Signal Detection Theory (Green and Swets, 1966) has provided a technique for distinguishing sensory sensitivity from judgmental bias.
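To make these relations concrete, here is a minimal sketch (the constants, rates, and exponent are invented for illustration and are not from the text) of Fechner's logarithmic law, Stevens' power law, and the Signal Detection Theory computation that separates sensitivity (d') from response bias: a merely cautious observer shifts both the hit and false-alarm rates, but d' stays roughly the same.

```python
from statistics import NormalDist
from math import log

def fechner(intensity, threshold=1.0, k=1.0):
    # Fechner's law: sensation grows with the logarithm of intensity above threshold.
    return k * log(intensity / threshold)

def stevens(intensity, k=1.0, exponent=0.67):
    # Stevens' power law: sensation grows as a power function of intensity.
    return k * intensity ** exponent

def d_prime(hit_rate, false_alarm_rate):
    # Signal Detection Theory: sensitivity is the difference between the
    # z-transformed hit rate and false-alarm rate, independent of response bias.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

print(fechner(10.0), stevens(10.0))   # two candidate intensity-to-sensation functions
print(d_prime(0.84, 0.16))            # about 2.0: moderate sensitivity
print(d_prime(0.60, 0.05))            # a cautious observer: fewer hits and false alarms, similar d'
```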
The outcome of a sensory process must have a fixed functional significance for the perceptual process. Suppose a perceptual system uses retinal extension as a cue to infer object size. Let us assume that the inference process ascribes a specific degree of validity to this cue, proportional to the correlation between extension and true object size (never mind how this correlation was ever established). It is crucial for the inference that the degree of cue validity does not change during the inference. This is not to deny that the sensory detector could have a different function for a different perceptual process. Even the next time that extension is used as a size cue, its validity may have changed as a result of experience, but within the context of a given perceptual inference, the function of the cue has to be fixed. The classical notion of sensory processing fulfills these conditions, reducing sensory processes to the faithful feed-forward propagation of a set of independent signals.
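The fixed role of a cue during inference can be sketched as follows (a toy illustration; the geometry, numbers, and weighting scheme are invented and are not the text's model): retinal extension contributes to the size estimate with a validity weight that is held constant for the duration of the inference, even though experience might change that weight before the next inference.

```python
def infer_size(retinal_extension, viewing_distance, prior_size, cue_validity=0.8):
    # A crude geometric estimate from the cue: apparent extension scaled by distance.
    cue_estimate = retinal_extension * viewing_distance
    # The cue's validity is treated as fixed within this one inference, weighting
    # the cue-based estimate against prior knowledge of typical object size.
    return cue_validity * cue_estimate + (1.0 - cue_validity) * prior_size

# Hypothetical values: an object subtending 0.02 units of retinal extension, 100 units away.
print(infer_size(retinal_extension=0.02, viewing_distance=100.0, prior_size=1.5))
```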
The neurosciences have provided a classical description of the visual system which is in good agreement with the notion of sensory processing and is therefore frequently discussed as the view of the neurosciences. On this view the visual system is a feed-forward processing hierarchy which exhibits convergence. Rods and cones, the receptor cells involved in the registration of light intensities at neighbouring positions on the retina, combine their signals to generate on-off patterns in ganglion cells. These are projected through relay stations in the thalamus, called the lateral geniculate nucleus, onto cells of the visual cortex. These cortical cells respond most actively to contours or line segments in a specific orientation, resulting from the combined projection of several partially overlapping lateral geniculate cells. Sensory processing, therefore, seems to combine physical signals into features of increasing complexity (Hubel and Wiesel, 1962), but still without global information. The global pattern is not represented in the individual cells of the cortex, but it remains available for further processing, because each retina projects to the visual cortex in a systematic manner that respects the topographical organization of the retina.
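A toy sketch of this feed-forward convergence (the image, kernel, and sizes are invented for illustration): a single downstream unit sums several neighbouring, partially overlapping inputs through an oriented weighting pattern, so it responds most strongly where the stimulus contains a contour in that orientation.

```python
import numpy as np

image = np.zeros((5, 5))
image[:, 3:] = 1.0                              # a vertical luminance edge

vertical_kernel = np.array([[-1.0, 0.0, 1.0]])  # left-dark / right-bright weighting

def oriented_response(img, kernel):
    # Each output unit combines a small neighbourhood of inputs, mimicking the
    # convergence of overlapping earlier-stage cells onto one orientation-tuned cell.
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

print(oriented_response(image, vertical_kernel))  # responses peak at the edge location
```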
Modern neuroscience in general suggests a division of labour in the brain according to different sensory modules, each specialized for a certain modality (colour, contrast, odour, temperature, pitch). Many important attributes of perception, however, are amodal (duration, rhythm, shape, intensity, and spatial extent) or multi-modal (such as being a brush fire, which involves the heat, the smell, and the glow). So the notion of sensory modularity increases the need for perceptual integration.
This is apparently still in agreement with the principles of constructivism, which maintain that integration is achieved by processes of a post-sensory, inferential nature. Unimodal perception will, therefore, precede integration across the modalities in development. According to a systems point of view, it is the other way round. Amodal and multi-modal aspects of perception are primary, precisely because of the importance of these structures in the environment. The child will therefore start by responding to multi-modal structures, and development is aimed at differentiation.
David Lewkowicz and his colleagues have, over several years, collected ample evidence that young infants (4 months old) perceive inputs in different modalities as equivalent if the overall amount of stimulation is the same. These infants, owing to the immaturity of their nervous systems, appear to react to the lowest common denominator of stimulation, which is quantity. Quantity is, therefore, modality-unspecific - that is, not associated with a specific sensory quality or process. Lewkowicz proposes that these early equivalences may form the basis for later, more sophisticated equivalency judgments. For the attributes of time, for instance, infants differentiate according to synchrony first, and this differentiation forms the basis for the subsequent differentiation of responsiveness to duration, rate, and rhythm.
Research in sensory development suggests that perceptual integration is not achieved according to the constructivist picture of sensory processing as feed-forward signal propagation. Rather, the significance of amodal and cross-modal information early in development suggests that integration between the sensory modules occurs early in processing. Such a notion of intersensory processing is in accordance with a systems account of perception, which emphasizes the role of coordination between the components of the system rather than their isolated contributions to perception.
The neurosciences support the notion of intersensory perception at all possible levels of description. At the smallest scale, this is realized through interneurons, which provide individual cells within the visual pathway with lateral, mostly inhibitory connections. Lateral inhibition is useful, for instance, to selectively enhance boundaries in the pattern of sensory stimulation, because identically stimulated neighbours will cancel out each other's activity. This example illustrates that integration of sensory stimulation into a coherent pattern does not wait until sensory processing is completed but begins in the earliest stage. Lateral connections also occur between different sensory modules and may serve to flexibly enhance or reduce the contribution of a sensory module to the process.
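The boundary-enhancing effect of lateral inhibition can be sketched in a few lines (the values and the inhibition weight are invented for illustration): each unit's output is its own input minus a share of its neighbours' input, so uniformly stimulated regions damp themselves down while the step between two regions stands out.

```python
import numpy as np

stimulation = np.array([1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0])  # two uniform regions

def lateral_inhibition(signal, inhibition=0.5):
    padded = np.pad(signal, 1, mode="edge")
    neighbours = padded[:-2] + padded[2:]            # left neighbour + right neighbour
    return signal - inhibition * 0.5 * neighbours    # subtract the averaged neighbour input

print(lateral_inhibition(stimulation))
# Within each uniform region, neighbours cancel much of the response;
# the largest change in output occurs right at the boundary between the regions.
```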
In addition to feed-forward and lateral connections, there are also backward connections, which are likely to play an important role in perception - for instance, from the higher visual areas back to the primary visual cortex, and from there back to the thalamus. This is in accordance with the downward operation of semantic information: a pattern code could be mapped downward in the sensory detection system to correct its output. This would make sensation dependent on background knowledge and meaning. Constructivists would deny that such context has great significance for early sensory processes, but the effects of categorization on the perceived shade of a red colour patch, and of word meaning on perception, are in accordance with the notions of self-organization favoured by the systems approach.
The central problem of constructivism - how to get from isolated sensory features to the representation of integral object structures - appears to be a misconceptualization. Isolated sensory features do not seem to exist. The close interactions observed, both within and between the sensory modules, appear more in accordance with the view that the sensory systems communicate with the world on the level of patterns than with the view that communication occurs on the level of isolated signals. On the other hand, perceptual object structure does not appear to have the abstract characteristics that constructivists attributed to it. It may therefore seem that a systems approach to perception could provide a better explanation for perceptual phenomena. But the systems approach is not without problems of its own. From a systems point of view, it may appear a miracle that perception functions so well in situations where the conditions require us to go beyond the information given, such as conditions of limited vision or conditions in which the goal of the action lies beyond the horizon of visual stimulation. The constructivist approach explains this by the overall tendency of perception to make sense of a situation. Pictures and film exploit this tendency of perception, including that of being misled by expectation. We see a bank robbery when, in fact, there is only a film-set of a bank robbery.
A systems account cannot simply treat this as a separate class of phenomena beyond the domain of perception. As James Cutting stresses, perceptual processes operate in culturally mediated contexts, whose significance obviously goes beyond the here and now. By denying a sensory-perceptual distinction, the systems approach makes perception guided by structure and meaning throughout. This implies that there is no strict separation between the phenomena of perception and cognition. Major figures in Gestalt psychology like Wertheimer and Kanizsa accordingly applied the generic organizational principles of perception to issues of problem solving. For that purpose, insight into the structure of a problem was treated as analogous to the perception of the Gestalt structure of an object. Gibsonian ecological realism likewise insists that cognitive and proto-cognitive forms of behaviour be approached with the same notions and methodology: The Gibsonian principle of affordance, for instance, is applied to everything from tool use to symbolic social interaction, and some progress has been made in attempts to characterize affordances rigorously. Whatever their point of departure, these approaches reject a methodological distinction between perception and other information processes. The domain of perception is the bulwark from which other domains of mental functioning are to be conquered.
In setting up a systems approach to perception, brain processes cannot be neglected. The problem is to find a broad, general characterization of these processes. In accordance with the systems approach, the dynamics of perceptual organization in the brain could be approached from the perspective of self-organization. Van Leeuwen and colleagues proposed hologenesis as a uniform principle of self-organization in the perception of object structure and suggested that this principle is embodied in the chaotic activity of the brain (van Leeuwen et al., 1997). Hologenesis is the nonlinear counterpart of a notion we are all familiar with: the idea that the brain is an instrument for stepwise creative synthesis. This notion forms the basis for the constructivist approach, which requires that inference processes be posited to explain how the perceiver makes sense of a situation. Alternatively, the principle of hologenesis illustrates that a systems account of these phenomena is possible.
Nonetheless, the goals of linguistic theory are to answer such questions as 'What is language?' and 'What properties must something (an organism or a machine) have in order for it to learn and use language?' Different theories provide different answers to these questions, and there is at present no general consensus as to which theory gives the best answers. Moreover, most linguists, when pressed, would say that these questions have not yet been answered satisfactorily by any theory.
However, the functional approach to language holds that the forms of natural languages are created, governed, constrained, acquired, and used in the service of communicative functions. Perhaps no one would deny the importance of functions in human language. We constantly use language to communicate intentions between one person and the next. For example, we can use language to tell another person how to drive a car, where to look for edible mushrooms, and how to avoid falling into crevasses when walking over glaciers. We can also use language to foster social solidarity by greeting and acknowledging people with salutations and standardized phrases. Both inner speech and external written expression allow us to talk to ourselves in ways that help foster creativity, invention, and memory. Additional artistic functions of language include drama, poetry, and song.
Given the importance of these various functions of human language, it may be surprising to learn that there is a major debate in linguistic and psycholinguistic circles regarding the extent to which functions determine the shape of language. To an outside observer, it would seem almost obvious that the shapes and forms of human language are determined by the functions being served. We use nouns to refer to things and verbs to refer to actions. By choosing one word order over another, we distinguish who did what to whom. In this way, the most basic forms of human language are functionally determined. But exactly how does function have its impact on form? Is the impact direct and immediate, or only indirect and delayed? Is there only one basic way in which functions determine forms, or are there various types of form-function relations? Is it possible that the system of forms could become freed from its linkage to function and take on some type of autonomous existence?
The antithesis of functionalism is formalism. The formalist position holds that although language may serve a variety of useful functions, the actual shape of linguistic form is determined by abstract categories that have nothing to do with particular functions or meanings. On this view, language is a special gift to the human species whose formal contours, such as 'verb' or 'subject', are abstract objects that are processed and represented in a separate mental module devoted to grammar. The objects of this module are universal and derive not from functional pressures or ongoing conceptualization of the world but from the innate language-making capacity. The language module is informationally encapsulated. This means that it relies only on its own abstract category and rule information to process and represent language: it does not depend on information from other aspects of cognition. According to this view, it is the liberation of linguistic form from any tight linkage to function, made possible by this modular architecture, that produces the power inherent in the human mind. Because language is being used inside a separate module in the mind, it is not subject to the functional pressures of communication.
A major stumbling block in understanding the extent to which we want to emphasize the functional determination of language has been the existence of a variety of naive functional analyses. Formalists find it easy to dismiss these naive analyses as pre-scientific and empirically flawed. Unfortunately, formalist critiques of functionalism tend to focus exclusively on these naive formulations, while ignoring more complex and powerful versions of functionalism. Perhaps the oldest naive approach to the relation between form and function is the notion of sound symbolism that we find first expressed by Plato in the Cratylus, which asks why the word for table has the sound it does in the Greek language. Socrates replies that this sound is inherent in the nature of the thing itself. The problem with Plato's approach to the relation between sound and meaning is that different languages use radically different sounds to name the same object. If the English word 'table' had some privileged relation to the object being named, we would have to conclude that the Spanish word 'mesa' and the German word 'Tisch' are simply impoverished or degenerate attempts to capture a relation that is best expressed by the English word 'table'.
By contrast, researchers concerned with mediated action and distributed cognition view such assumptions as limiting efforts to examine complex phenomena. While they recognize such levels as moments in a more inclusive picture, they do not believe that their basic unit of analysis can be reduced to these moments, or that these moments can be examined in isolation and subsequently combined into a more complex picture.
An understanding of mediated action involves an analysis of the setting or context that imbues the action. Some settings are loosely structured, as when two people who do not know each other improvise a conversation in a novel environment. At the other extreme would be a thoroughly domesticated, formalized setting that imbues a system of activity with a historically entrenched set of roles and tool-using practices. Examples would include navigational procedures followed on a navy ship (Hutchins, 1995), bureaucratic dynamics in a business office, and experimental practices and theory construction in scientific laboratories.
Nevertheless, in contrast to many contemporary linguistic analyses that focus on the structure of language, Vygotsky's primary concern was with the instrumental role that language plays in social and individual functioning. He was particularly concerned with the form-function relationships that characterize human social and individual functioning, examining such issues as the use of language in complexive and conceptual reasoning and the emergence of inner speech. In outlining his claims about such issues, Vygotsky relied on genetic, or developmental, analyses, arguing that an understanding of mental functioning must be derived from the study of its origins and the transformations it has undergone in various genetic domains (Wertsch, 1991). In this latter regard, Vygotsky investigated changes that occur over sociocultural history, developmental changes, especially as they occur during childhood, and microgenetic transformations that occur during performance.
Going slightly beyond Behaghel's first law, which holds that words that belong together mentally are placed close together syntactically, we can note that, conversely, words that appear next to each other in sentences are usually related conceptually. Virtually any sentence can be used to illustrate this effect. Consider a simple sentence such as 'All my friends like to eat goat cheese'. The word 'goat' is not closely related to 'friends' but is mentally highly related to 'cheese', which is why it appears next to 'cheese' and not 'friends'. In a sense, we can think of sentence structure as arising from the compression of a multidimensional graph structure onto a one-dimensional linear chain. This compression results in a great deal of ambiguity, but the basic impact of conceptual determination is still clearly evident. Languages like classical Latin that maintain a rich set of inflectional markers can transcend Behaghel's law for stylistic effect by separating related words. However, this can be done only when the markings are rich enough to allow the reader to recover the original relations. Less fully inflected languages like English, or even Vulgar Latin, are more strictly governed by Behaghel's first law.
Furthermore, we can look at the serial order in sentences such as 'Travel over the bridge and through the forest' as an indication of the way in which sentences tend to map the order of real-life procedures onto the left-to-right order of words. This sentence provides us with instructions to first go over the bridge and then through the forest, rather than the reverse. In general, language tends to provide instructions for action by putting first things first. These principles of natural ordering and iconicity represent a basic level of functionalism in language that no one would deny. But we cannot push syntactic iconicity too far.
Functional linguists have explored a wide variety of interesting correlations between form and function. Some examples of functional syntactic relations that have been studied include the grounding of relative clauses on deictic elements such as 'that', the development of auxiliaries from verbs such as 'have', 'go', or 'be', and the evolution of temporal conjunctions from analogous spatial prepositions. Among the most intriguing patterns studied by functionalist grammarians are those that give rise to ergative syntactic and inflectional marking, which occurs in languages such as Samoan and Mayan. Ergative syntax arises in a fairly straightforward functional fashion from the fact that people tend to delete subjects when they are well known and topical. The more a given participant has been mentioned in a narrative sequence or a conversational exchange, the more likely we are to delete or pronominalize that participant. If we were to take an English sentence like 'the boy chased the girl' and delete the subject, we would end up with a phrase like 'chased the girl', in which the patient is elevated to the primary unmarked case role. In this way, functional conversational pressures can force a fundamental reorganization of the shape of the grammatical system. It is also interesting to find that many languages that have developed some form of ergativity have confined ergative marking to cases in which the patient is in the third person. These split ergative systems retain nominative marking for first- and second-person subjects but use ergative marking in the third person. Other split ergative systems mark ergativity differentially across tenses and aspects. These complex interactions between ergativity, tense, evidentiality, and person are excellent grist for the mill of functionalist analysis.
The presence of ergative syntax in some languages but not others raises still other important questions. If the functional pressures arising from conversation and narration are similar in different cultures, why do languages have such widely varying grammatical systems? Perhaps the formalists are correct in saying that grammar takes on an autonomous life of its own inside the syntactic module, without any direct linkage to functional pressures. The functionalist answer to this is much like the answer to similar questions in biology. One can argue that all species of birds instantiate particular adaptations to the functional pressures of food sources, territorial competition, predation, and reproduction. The fact that all species do not look alike does not mean that these functional pressures are not operative in all cases. It simply means that the exact form of the functional pressures varies from one ecological niche to the next. The same must be true of human languages and human cultures. Although all languages are functionally determined, the exact form of the complex interacting pressures varies in detail from culture to culture.
Although many linguists would agree in assigning a major role to communicative function in determining forms such as lexical extensions, word-order patterns, syntactic constructions, or case-role marking, they would assign a much more peripheral role to functional determination of complex grammatical paradigms. It would be difficult to find an area of language that involves more nonfunctional arbitrariness than the marking of declensional paradigms in languages like Latin, Russian, or German. As Mark Twain complained in his essay 'The Awful German Language', it seems unfair for the German language to decide that the sun (die Sonne) should be feminine and the moon (der Mond) masculine, while relegating a beautiful young girl (das Mädchen) to the neuter gender. However, even in this hotbed of anti-functionalism, we find a rich set of cues or determinants at work to assign nouns to one of the three genders of German. Some of these cues are semantic in nature: for example, alcoholic beverages are masculine, as are rocks and minerals. But the major determinants of gender assignment are not semantic but phonological. Words ending in -e are typically feminine, whereas words ending in -er or containing umlauts pattern differently; closer study shows that the system is a complex but predictable lattice of interlocking cues.
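As a toy illustration of such a lattice of cues (the cue inventory, weights, and fallback below are invented; this is not an accurate model of German), a few interlocking phonological and semantic cues can be made to vote on a noun's gender:

```python
SEMANTIC_MASCULINE = {"alcoholic beverage", "rock or mineral"}  # semantic cues from the text

def guess_gender(noun, semantic_class=None):
    votes = {"masculine": 0.0, "feminine": 0.0, "neuter": 0.0}
    if noun.endswith("e"):                      # phonological cue: -e endings are typically feminine
        votes["feminine"] += 2.0
    if semantic_class in SEMANTIC_MASCULINE:    # semantic cues can outweigh phonology
        votes["masculine"] += 3.0
    if not any(votes.values()):                 # fallback when no cue applies
        votes["neuter"] += 1.0
    return max(votes, key=votes.get)

print(guess_gender("Sonne"))                                         # feminine (die Sonne)
print(guess_gender("Schnaps", semantic_class="alcoholic beverage"))  # masculine
```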
But why should such complexity exist at all, if the goal of language is to express communicative functions? Although it is true that the gender contrasts in German often provide useful cues for grammatical role assignment and sentence interpretation, the same effect could easily be achieved through a simpler gender system. For example, Spanish marks many masculine nouns with -o and many feminine nouns with -a. Spanish achieves the same functional effect using a smaller set of cues than does German. Perhaps we should view the German system as an example of formal determination run amok. However, we need to bear in mind the fact that the linkage of nouns to gender class is bought at minimal processing cost. Although these systems are difficult for foreigners to learn, they cause little trouble to German children. What this means is that the acquisition of meaningless form classes is a basic part of our language-making capacity, as long as the assignment of words to form classes can be achieved on the basis of superficial features such as phonological structure or minor semantic features. Thus, although grammatical gender is largely predictable, we would certainly not want to say that it is fully functionally motivated.
Questions about the nature of word meaning have drawn attention across the cognitive science disciplines. Because words are one of the basic units of language, linguists working to describe the design of human language have naturally been concerned with word meaning. Perhaps less obvious, though, is the importance of word meaning to other disciplines. Philosophers seeking to identify the nature of knowledge and its relation to the world, psychologists trying to understand the mental representations and processes that underlie language use, and computer scientists wanting to develop machines that can talk to people in a natural language have all worked to describe what individual words mean and, more generally, what kind of thing a word meaning is.
At first glance, one may wonder why there is enough mystery to this topic to have held the attention of scholars in all these fields over the years. After all, dictionaries are filled with definitions of words. Aren't these definitions word meanings, and couldn't the nature of word meaning be determined just by examining these definitions? If the answer to these questions were yes, the job of cognitive scientists in this domain would be much simpler. Yet, as we consider a number of issues that have been raised in the study of word meaning, it will become clearer why dictionaries don't tell cognitive scientists all they need to know about word meaning.
The two major questions for theories of meaning - How can the meaning of individual words be described? And what kind of thing, in general, is a meaning? - are difficult to discuss independently, because ideas about how to describe individual meanings overlap across the different views of the nature of meaning in which they are embedded. Not surprisingly, people intuitively think of word meanings as something that they have in their heads. Because psychologists are much interested in how knowledge is represented and used by humans, this view of meaning is consistent with how most psychologists treat word meaning. That is, they consider a word meaning to be a mental representation, part of each individual's knowledge of the language he or she speaks. In fact, psychologists typically have not distinguished between the meaning of a word and a concept: they treat the meaning of bachelor as equivalent to a person's concept of bachelorhood. This approach is also shared by linguists in the cognitive linguistics camp, who view knowledge of language as embedded in social and general conceptual knowledge.
Given this view of word meanings, the central questions become 'What is the nature of the meaning representation?' and 'What kinds of information do word meanings (or concepts) consist of?' An answer adopted by many psychologists in the 1970's (Smith and Medin, 1981), one dating back to Plato's quest to define concepts like piety, justice, and courage, came into psychology by way of a linguistic theory that we will touch on a bit later. This answer is that what a person knows when he or she knows the meaning of a word is a set of defining (or necessary and sufficient) features: that is, features that are true of all and only the things called by that name. For instance, defining features for the word bachelor might be adult, male, and unmarried. If someone's representation of the meaning of bachelor consisted of this set of features, then he or she would consider all and only people with those features to be bachelors. Although this sort of analysis was most often applied to nouns, the psychologists George Miller and Philip Johnson-Laird, in their 1976 book, applied a similar kind of analysis to a large number of verbs.
A problem for this possibility, though, is raised by an earlier analysis by the philosopher Ludwig Wittgenstein, who argued in 1953 that for many words there is no single set of features shared by all and only the things that the word refers to. His famous example is the word 'game'. Some games involve boards and movable markers, others involve balls and hoops or bats, still others involve singing; furthermore, some involve a winner and some don't, some are purely for fun and others are for monetary reward, and so on. The psychologists Eleanor Rosch and Carolyn Mervis, drawing on Wittgenstein's analysis, suggested in 1975 that what people know about many common nouns is a set of features having varying strengths of association to the category named by the word. For instance, most fruits are juicy, but a few (like bananas) are not; many fruits are sweet, but some (like lemons and limes) are not; some fruits have a single large pit, while others have many small seeds. The most common features, like sweet and juicy, are true of prototypical examples but do not constitute necessary and sufficient conditions for using the word. In support of their suggestion, they found that a sample of college students could not list features shared by all the members of several categories, but the students' judgments of how typical the objects were as members of a category were strongly correlated with how many of the more common category features each had. The linguists Linda Coleman and Paul Kay argued in 1981 that verbs such as 'lie' may work in a similar way. They found that the lies considered most typical by their subject sample involved deliberate falsehoods with the intent to deceive, but some acts that subjects verified as lies lacked one or more of these features.
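The family-resemblance idea can be sketched as a simple typicality score (the feature lists and weights below are invented for illustration): an item's typicality is the summed weight of the category's common features that it actually has, with no single feature being necessary or sufficient.

```python
FRUIT_FEATURE_WEIGHTS = {"sweet": 0.8, "juicy": 0.9, "has_seeds": 0.7, "grows_on_trees": 0.6}

def typicality(item_features, category_weights=FRUIT_FEATURE_WEIGHTS):
    # Sum the weights of the category's common features that the item possesses.
    return sum(weight for feature, weight in category_weights.items() if feature in item_features)

apple = {"sweet", "juicy", "has_seeds", "grows_on_trees"}
lemon = {"juicy", "has_seeds", "grows_on_trees"}     # not sweet
banana = {"sweet", "grows_on_trees"}                 # not juicy, no obvious seeds

for name, features in [("apple", apple), ("lemon", lemon), ("banana", banana)]:
    print(name, typicality(features))   # the apple scores highest, i.e., is most typical
```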
This prototype view, although capturing more of the apparent complexity associated with many common words, shares with the defining-features view an assumption that the meaning of a word is a relatively constant thing, unvarying from situation to situation. Yet it has long been noted that the same word can have more than one meaning. For instance, 'foot' can refer to a human body part, the end of a bed, or the base of a mountain, uses distinct enough to warrant thinking of them as involving different, albeit related, meanings. Further, it is clear that the context in which a word occurs may help to determine how it is interpreted. In the 1980's, Herbert Clark argued that context does more than just select among a fixed set of senses for a word: it contributes to the meaning of a word on a particular occasion of use in a deeper way.
Specifically, Clark argued that many words can take on an infinite number of different senses. For instance, most people have the knowledge associated with the word 'porch', but in the context of the sentence 'Joey porched the newspaper', a new meaning is constructed: namely, 'threw onto the porch'. And in 'After the main living area was complete, the builders porched the house', the meaning 'built a porch onto' is constructed. Because there is no limit to the number of contexts that can be generated for a word, there can be no predetermined list of meanings for a word. Other authors have made related points for less unusual cases of context, arguing, for instance, that the meaning of the word 'line' is subtly different in each of many different contexts (e.g., 'standing in line', 'crossing the line', 'typing a line of text') (Caramazza and Grober, 1976), and that the particular interpretation is constructed from the word in combination with the context in which it occurs.
Although this view differs from the defining-features and prototype views in that it doesn't treat word meanings as things that are stored in their entirety in someone's head, all three approaches share the basic assumption that some critical knowledge of meaning is held by individuals, and several issues arise from this assumption. One is how people understand each other, since meaning must somehow be shared among people in order for communication to take place. The defining-features view can easily account for how meanings are shared by assuming that everyone represents the same small set of defining features. The prototype view, in proposing that meaning is a much broader set of features with varying strengths of association to the word, opens the possibility that individuals will differ from one another in the features that they represent and in the strength of their association to the word, since each person's experience with bachelors will be slightly different. One person may think of them as the bachelor-farmers of Lake Ontario. Similarly, this version of meaning opens the possibility that each person's meaning will change over time as his or her experiences change. The third view of meaning, by taking meaning to be context-dependent, likewise implies that a word meaning may differ from person to person and, notably, from situation to situation. And if meaning is person- and situation-dependent, then it is difficult to know what, if anything, should be called the meaning of a word and what the mental representation of a word consists of. The idea that there is some core part of meaning that is invariant across all contexts or instances of a category offers a solution in principle, but in practice, cores for many words may be difficult or impossible to identify, just as defining features were. Thus the assumption that meaning is something that belongs to individuals, while having intuitive appeal, at the same time raises a number of difficult issues which must be resolved.
In contrast, most linguists and many philosophers view word meaning not as something inside individual people's heads, but as part of a language in a more abstract sense. Many computer scientists likewise seem to take this view of meaning, though they are typically less explicit about such assumptions. Meanings, on this view, are treated as attached to words, regardless of the individuals who use them or what those individuals know about them. The most extreme way of formulating this position is to consider meanings to be part of a system that can be characterized in terms of its properties without reference to language-users at all, just as the properties of the solar system might be described without reference to its relation to humans (a view expressed, for instance, in the title of linguist Jerrold Katz's 1981 book, 'Language and Other Abstract Objects'). A more moderate formulation is to think of meanings as things fixed by convention within a language community. A word can then be characterized as having some particular meaning within the linguistic community even if some, or even many, members of the community do not know that meaning or have incomplete knowledge of it. For example, the word 'turbid' might be characterized as meaning muddy, cloudy, or dense in English, even if not all people who speak English know its meaning.
In the 1960's and 1970's, substantial effort was made by linguists (and also anthropologists) to describe meaning in terms of features that define the conditions for a word's use. This work, by linguists such as Jerrold Katz, Jerry Fodor, and others, is in fact the source of the example of defining bachelor as male, adult, and unmarried used by psychologists (adapted there to a more psychological perspective). Although primarily applied to nouns, this sort of defining-features analysis was also applied to verbs by a number of linguists such as James McCawley and Ray Jackendoff.
A major benefit of this approach is its usefulness in attempting to specify how words are related to other words. Within linguistics, doing so has often been taken to be a major goal for a theory of meaning. Thus, linguists have wanted to capture meaning in a way that would allow them to identify which words are synonymous with other words, which are antonyms (opposites), which name things with part-whole relations (as, for example, arm and body), which name things with inclusion relations (as, for example, dog and animal), and so forth. Characterizing meaning in terms of defining features provides a way of doing this: two words are synonymous if they have the same defining features, two words have an inclusion relation if the defining features of one are included in the defining features of the other, and so on. The defining-features approach has also provided a convenient way of representing meanings and their relations to each other for use in computer programs that attempt to deal with natural language input, and featural approaches along these lines have been widely used within artificial intelligence.
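A minimal sketch of this use of defining features (the feature sets below are invented for illustration): if meanings are sets of defining features, then synonymy and inclusion fall out as simple set comparisons.

```python
DEFINING_FEATURES = {
    "bachelor": {"human", "adult", "male", "unmarried"},
    "spinster": {"human", "adult", "female", "unmarried"},
    "dog":      {"animal", "canine", "domesticated"},
    "animal":   {"animal"},
}

def synonymous(word_a, word_b, lexicon=DEFINING_FEATURES):
    # Two words are synonymous if they have exactly the same defining features.
    return lexicon[word_a] == lexicon[word_b]

def includes(general, specific, lexicon=DEFINING_FEATURES):
    # 'dog' falls under 'animal' if animal's defining features are a subset of dog's.
    return lexicon[general] <= lexicon[specific]

print(synonymous("bachelor", "spinster"))  # False: the feature sets differ on male/female
print(includes("animal", "dog"))           # True: every defining feature of animal holds of dog
```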
Another benefit of this approach is that we can then treat some of the individual differences in knowledge about word meanings by saying that a person might not fully grasp whatever the meaning of the word actually is. So, someone who understands bachelor to mean not adult, male, and unmarried, but only adult and unmarried, doesn't fully grasp the meaning of bachelor. To the extent that successful communication and consistency in individual representations of meaning occur, they are presumably achieved because people aim to acquire the meaning given to the word by linguistic convention.
Nevertheless, several potentially serious problems arise for the defining-features version of meanings as public entities. A major one is that it seems impossible to provide an analysis of many words (such as 'game') in terms of defining features. Another is that, along similar lines, we might want to include other features, such as 'likes to party' and 'drives a sporty car', as part of the meaning of bachelor. One solution to these problems is to expand the notion of meaning to encompass a broader range of features; such solutions have sometimes been proposed and have been incorporated in some artificial intelligence systems, but they make it difficult to say where word meaning ends and general knowledge begins, and they also undermine the attempt to provide an account of relations like synonymy and antonymy between words. Another solution, adopted in the 1980's by the linguist George Lakoff and others, is to view a word as having a set of distinct but specifiable meanings that may have a variety of relations, including metaphorical relations, to one another. This solution likewise makes it more difficult to see how relations like synonymy can be specified, and it requires enumerating a potentially very large number of meanings for each word.
So far, we have talked about meaning in the way in which it is used in everyday language: as something that can be described in conceptual terms. Whether we want to say that the meaning of bachelor resides in individual heads or belongs to a language in some more abstract sense, we can describe that meaning in terms of concepts like 'adult' and 'male'. However, scholars of meaning since the philosopher Gottlob Frege in the late 1800's have distinguished between two components or aspects of meaning. One, the sense or intension of a word, is the conceptual aspect of meaning that we have been discussing. The other is the reference or extension of a word, the set of things in the world that the word refers to. For the word bachelor, for instance, the reference of the word is the set of all (real or possible) bachelors in the world. In other words, the reference aspect of meaning is a relation between a word and the world.
Psychologists, linguists, and computer scientists holding any of the views of meaning discussed so far would generally consider the sense of a word to be the primary concern for a theory of meaning, although they would also agree that the theory should account for what entities the word is used to refer to. A view of meaning quite distinct from this perspective, though, has recently been influential, and that is the view that says, essentially, that the meaning of a word is its relation to things in the world: that is, meaning is reference.
An important argument for this view, derived primarily from analyses of meaning by the philosophers Hilary Putnam and Saul Kripke, is based on the observation that the features one thinks of as constituting the meaning of a word could turn out not to be true. For example, a person (or a language) might specify features like 'sour' and 'yellow' as the meaning of the word 'lemon', but it could turn out that these features don't accurately reflect the truth about lemons. Research could reveal that pollution makes lemons yellow and sour, and that normally they would be green and sweet. The word lemon would still refer to the same set of things in the world that it did before everyone revised their knowledge of the properties of lemons. Similarly, new scientific discoveries could add to or alter beliefs about the properties of many objects, but those changes in the properties associated with words would not change the set of things correctly named by the words. Putnam suggested, on the basis of these and other arguments, that words function simply to pick out sets of things in the world. On this referential view, the associated properties constitute a stereotype of what the object is like (or seems to be like), but they do not constitute the meaning of the word. As Putnam wrote in advocating this view in 1973: 'Cut the pie any way you like, "meaning" just ain't in the head' (and likewise, according to this view, meanings 'just ain't' definitions held by a linguistic community).
A benefit of this referential view of meaning is that it provides an account of stability in meaning and communication: a word refers to the same set of things in the world regardless of variations in knowledge among people, and the use of a word to refer to a particular set of things can be passed from generation to generation regardless of changes in beliefs about the properties of the objects. However, the view also has weaknesses, and one prominent weakness is that the analysis does not seem to apply to many words. 'Bachelor', for instance, seems to intrinsically involve the property of being unmarried. Although we can imagine researchers discovering that lemons really are green, it just isn't possible for researchers to discover that bachelors really are married people. Even if all men previously thought to be unmarried turned out to be married, we wouldn't change the properties associated with bachelor; we would say that these men weren't bachelors after all. Likewise, 'island' seems to intrinsically refer to things with the property of being surrounded by water; researchers can't change that property. And although discussion of this view is usually restricted to nouns, the same point would apply to verbs: 'run', for instance, seems to intrinsically refer to a certain kind of motion, and any activity not involving that motion just wouldn't be running. In such cases, having the associated properties does seem to be critical to whether or not the word can be applied to the object. If the referential view is correct for some words but not others, this observation raises the interesting possibility that the nature of meaning may differ for different words, and that one analysis of meaning may not be appropriate for all words.
It should now be clear why dictionary definitions don't tell cognitive scientists all that they need to know. At the broadest level, dictionaries don't address what kind of thing a meaning is: a mental representation in someone's head, something that is part of a language in some more abstract sense, or a relation between words and the world. At a more detailed level, dictionaries don't reveal the status of the pieces of information they offer about a word. Is a given property truly defining? Or is it part of a stereotype that is not the actual word meaning? Nor do they address the role of context in meaning and the extent to which words may take on new meanings in new situations, or the full extent to which words may have many subtly different uses. Finally, they don't reveal whether some words, like 'lemon', may differ fundamentally in the nature of their meaning from other words, like 'bachelor' or 'island'.
One historical stumbling block to a full account of meaning has been that scholars in each discipline often were not aware of issues raised by the other disciplines and so were satisfied with proposals that were relatively narrow in scope. However, the emergence of the multidisciplinary cognitive science effort has already increased shared awareness of some of the complexities to be dealt with. In doing so, it has provided a push toward broader perspectives on how to tackle these issues. Psychologists have begun to incorporate aspects of philosophical theories into their views of mental representation and processing; linguists have begun to make use of the information provided by laboratory experiments on meaning; and so on. Although it is not yet clear what form a more integrative theory of meaning will take, progress may be on the horizon.