Gardenfors, Peter [Gärdenfors];
Conceptual Spaces: The Geometry of Thought
MIT Press, March 20, 2000, 317 pages (Cognet) [gbook]
ISBN 0262071991
topics: | cognitive-psychology | language | computer | categories | semantics | grammar
In the beginning there was the Word. And the Word was with God, and Logic was God. God was hard and merciless. God was "scientific". And God created Meaning as the thing referred to by Word. But then there was Frege, and Evening Star and Morning Star, though they referred to the same object, were not the same. And then there was Russell, and then God-el, and the end of innocence. But we must not forget early Wittgenstein, who was with Logic and God. And then late Wittgenstein, who turned apostate: soft and mushy. And then there was Chomsky's Logical Form, and Montague's Quantifiers. Again, Predicate was the Word, and Logic was God. But when Chomsky turned away from meaning, God parted the waters, and he led the flock into the land of Syntax. And there came to be cognitive linguists, who desecrated the Word. Meaning was all the associations in the mind, they said. Unhappy God sent Gardenfors to set things right. And Gardenfors sought to repel the evil forces by building a new temple so the God of Logic could survive the tremors of prototype theory and connectionism. In his temple, Logic was God, but Similarity was Head Priest, the divine Path was strewn along many Dimensions, and Truth was diffused. Such were the Symbols of the Lord. Amen. - amit mukerjee mar 2009
A tour de force of the literature on the mapping of concepts, symbols, words, and the like to the senses, experience, and the like. The main hypothesis is that the semantic pole of a symbol does not admit of a single-attribute description (where an attribute is a sensory input, spatial coordinate, or the like), but is multi-attribute and can be expressed only in multiple dimensions. Thus, a colour consists of hue, saturation, and intensity. A spatial preposition involves a complex set of relations.
In the language of psychology, these concept maps are often called image schemas. Image schemas form when the brain is exposed to a number of salient, similar situations (salient = affecting outcome, or attracting attention). For instance, if one is often sitting in the garden, and leaves are fluttering in a light breeze, one might eventually form a pattern or image-schema for this sensory experience. Image schemas reflect the underlying patterns and variabilities of the input, and keep updating themselves as new input arrives. Also, while such image schemas may affect behaviour, one may not be aware of them. Awareness arises only when they are communicated or at least dealt with consciously; this is when they "reify", or come to awareness. If a schema is reified through communication, it also becomes associated with a bit of language (a "phonological pole"), which may be an utterance such as "wind in the trees". The combination of such a label and the image schema becomes a symbol.
But the same image schema may map in different ways in different languages. Consider spatial prepositions:
Language and Image-schemas: Languages are idiosyncratic in the way they map image-schemas into lexical units. English on combines situations that in German might be called auf or an. In the native American language Mixtec, the difference may reflect a variation of relative sizes which English speakers are not even aware of. The main claim here is that meaning, or image schemas if you will, are defined along several dimensions, and that the region mapping a schema is mostly "convex".
One question not addressed is the nature of the dimensions. In the examples, they are mostly accessible, i.e. attributes that we commonly cognize. Yet most patterns, even for appearances (say "cat"), would not exist in the high-dimensional space of images, but in some lower-dimensional patch embedded in this high-dimensional domain (a manifold). These manifolds constitute implicit dimensions; thus not only are the patterns implicit, but even their dimensions may be implicit. This negates a lot of the power of the dimensional model, because the problem can no longer be accessed in terms of a finite set of parameters associated with accessible dimensions. Consider, for example, the class "numeral", or the concept of "slot". Things get more complex for temporal predicates - most verbs and other lexical units that deal with events (e.g. "jump", "bungee jump"). The temporal dimension is not one of the "dimensions" in the sense of Gardenfors; rather, attributes abstracted over the temporal dimension may be (e.g. "gobble" vs. "eat"). These parameters are also likely to be implicit. Since most abstract concepts are constructed by abstracting processes (e.g. "democracy" from "vote"), abstract concepts would also ultimately rely on some implicit dimensions.
Convexity claim: one of the original claims made in the text - that the mappings or image-schemas of a symbol onto these multi-dimensional spaces are constituted of convex regions - may be true to some approximation in these more abstract cases as well. One argument for this is from learnability: convex sets are far easier to describe than non-convex ones.
Within cognitive science, two approaches currently dominate the problem of modeling representations. The symbolic approach views cognition as computation involving symbolic manipulation. Connectionism, a special case of associationism, models associations using artificial neuron networks. Peter Gardenfors offers his theory of conceptual representations as a bridge between the symbolic and connectionist approaches. Symbolic representation is particularly weak at modeling concept learning, which is paramount for understanding many cognitive phenomena. Concept learning is closely tied to the notion of similarity, which is also poorly served by the symbolic approach. Gardenfors's theory of conceptual spaces presents a framework for representing information on the conceptual level. A conceptual space is built up from geometrical structures based on a number of quality dimensions. The main applications of the theory are on the constructive side of cognitive science: as a constructive model the theory can be applied to the development of artificial systems capable of solving cognitive tasks. Gardenfors also shows how conceptual spaces can serve as an explanatory framework for a number of empirical theories, in particular those concerning concept formation, induction, and semantics. His aim is to present a coherent research program that can be used as a basis for more detailed investigations.
TWO GOALS OF COGNITIVE SCIENCE:
- explanatory - formulate theories that explain empirical aspects,
tested by experiments or simulations
- constructive - construct systems that demonstrate cognitive tasks
TWO REPRESENTATIONS:
- symbolic - formal - Turing machines
- associationism - connectionist
Neither is appropriate to CONCEPT ACQUISITION ==>
requires notion of similarity - Conceptual Spaces
Desiderata for an analysis of abstraction in concepts:
- members of each concept have something in common (they are all
shapes, or all colours)
- but the elements also differ in that very respect (they are diff
shapes)
- resemblance order based on intrinsic nature (triangularity is
like circularity, redness is like orangeness etc)
- form incompatibles - the same particular cannot be both
triangular and circular, or red and blue.
In symbolic representations the role of similarity has been severely
downplayed
(role of Aristotle's necess and suff conditions - law of
excluded middle)
QUALITY DIMENSIONS:
==> explanatory - phenomenal (psychological - e.g. colour circle)
==> constructive - scientific dimensions (e.g. weight vs mass)
vertical distances look bigger than horizontal ones - (gravity?) - the moon
looks bigger near the horizon - though it is the same objective size
e.g. temperature, weight, brightness, pitch
geometrical or topological structures - e.g. time
---- Past ---- now ---- future ---->
may be culturally specific - in some cultures time is circular.
[Aymara - time's arrow is backward]
COLOUR:
circular : hue - orange - yellow - green - blue - violet - red - orange...
radial : chromaticity
vertical : brightness -- shrinking up/down as a double-cone
visible range : 420-700nm, violet ==> red
HUMAN COLOUR VISION - TRICHROMATIC - cones - maximally sensitive at 455nm
(blue-violet), 535nm (green), 570nm (yellow-red). [see Robert Pollack's
Missing Moment (about the unconscious brain), chapter 2, for why the
red-green gap is about 40% of the green-blue gap - red-green is a recent
(duplicate gene) mutation]
dichromatic - many mammals; tetrachromatic - turtles / goldfish;
pentachromatic - pigeons / ducks
[Shepard 62: analysis of proximities - multidimensional scaling]: took 14
hues, asked subjects to rate the similarity of each pair of hues on a scale
from 1 to 5. Results analyzed using multidimensional scaling (e.g. Kruskal
1964, KYST algo) with two dimensions ==> the colours lie along a circle,
based on two opponent axes - red-green and blue-yellow - a very good fit to
the colour circle.
SOUND: pitch - 1-D from low to high; timbre - a function of the higher
harmonics of the fundamental frequency.
MISSING FUNDAMENTAL: even if the fundamental tone is removed by artificial
means, the pitch of the tone is still perceived as that corresponding to the
missing fundamental ==> harmonics are essential to how a tone is perceived.
TASTE: four types of receptors - salt, sweet, bitter, sour ==> do these form
a 4-D space? A tetrahedron [Henning 1916]? But there may be more than 4
fundamental tastes.
BETWEEN-NESS and EQUIDISTANCE:
between(a,b,c) - b is between a and c; B(a,b,c) ^ B(b,c,d) ==> B(a,b,d)
line L_ab : the set of points x s.t. B(a,x,b)
density : for all a,c there exists b s.t. B(a,b,c)
equidist(a,b,c,d) ==> dist(a,b) = dist(c,d);
B(a,b,c) ^ B(d,e,f) ^ E(a,b,d,e) ^ E(b,c,e,f) ==> E(a,c,d,f)
==> a model of the "sum" operation
Metric spaces: Minkowski distance = (dx^k + dy^k)^(1/k)
similarity(i,j) = e^(-c*dist(i,j)); claim that a gaussian is a better model
[Nosofsky 86]: similarity(i,j) = e^(-c*dist(i,j)²)
MULTIDIMENSIONAL SCALING (MDS): input - ranks of similarity judgments for k
points. Given dimension n, guess an initial set of coordinates, then adjust
the coordinates based on a "stress function" (degree of mismatch);
convergence when the stress no longer changes. Theorem: the higher the n,
the lower the resulting minimal stress. The dimensions may have no
psychological explanation.
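[NOTE: a minimal Python sketch of the distance and similarity formulas above; the function names and the constant c are my own illustrative choices, not from the book.]

    import numpy as np

    def minkowski(x, y, k=2):
        # Minkowski distance (dx^k + dy^k + ...)^(1/k):
        # k=2 gives the Euclidean metric, k=1 the city-block metric.
        x, y = np.asarray(x, float), np.asarray(y, float)
        return float(np.sum(np.abs(x - y) ** k) ** (1.0 / k))

    def similarity(x, y, c=1.0, k=2, gaussian=False):
        # Shepard's exponential decay: sim = e^(-c*d);
        # Nosofsky's gaussian variant squares the distance: e^(-c*d^2).
        d = minkowski(x, y, k)
        return float(np.exp(-c * (d ** 2 if gaussian else d)))

    print(minkowski((0, 0), (3, 4), k=2))          # 5.0 (Euclidean)
    print(minkowski((0, 0), (3, 4), k=1))          # 7.0 (city-block)
    print(round(similarity((0, 0), (1, 1)), 3))    # 0.243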
integral dimensionality - high cross-dimensional similarity - distance
is euclidean
separable : the dimensions are independent - distance is city-block (manhattan)
experiments:
- Speeded Classification: categorize squares on shape while colours
may or may not (control) vary. Filtering condition - categorize on
colour while shape changes. Q. if speed is affected then one dim
interferes with other ==> high cross-dim correlation ==> integral.
- redundancy task - redundancy gain in categorizing single variation
over multiple.
- direct dissimilarity scaling: subjects must judge the dissimilarity
(on a scale from 1 to 10, say) of pairs of points from a 2D space.
construct MDS space;
if Euclidean dist fits data best ==> INTEGRAL
if city-block (manhattan) dist ==> SEPARABLE
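[NOTE: a small worked example (with hypothetical judgment numbers) of this diagnostic: if stimulus B differs from A only on one dimension and from C only on the other, a city-block space predicts d(A,C) = d(A,B) + d(B,C), while an integral (Euclidean) space predicts the Pythagorean value.]

    import math

    # hypothetical judged dissimilarities: A and B differ only in size,
    # B and C differ only in brightness, A and C differ in both
    d_AB, d_BC, d_AC = 2.0, 2.0, 4.0

    print("city-block prediction for d(A,C):", d_AB + d_BC)                       # 4.0
    print("Euclidean  prediction for d(A,C):", round(math.hypot(d_AB, d_BC), 2))  # 2.83

    # a judged d_AC near 4.0 fits the city-block sum ==> dimensions SEPARABLE;
    # a judgment near 2.83 would have fit the Euclidean value ==> INTEGRAL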
DOMAIN - spaces with integral dimensions. e.g.
- colour - hue / sat / brightness - integral ==> single domain
- tone - pitch / loudness
domains may have similarity, but these may not be expressible as
metric - e.g. no single scale on the entire space.
cross-correlations between domains - e.g. in space of FRUITS, colour
and ripeness are correlated ==> LEARNING
INNATE: Are quality dimensions innate?
some sensory dimensions - colour sound taste -
- structuring principles of topographic mappings for sensory modalities
are basically innate across species
- reprs of ordinary space also may be innate
"lower" animals - may not be cognitively weaker in their formal
characteristics - but are more impoverished - a diff in quantity rather
than quality. Experimental evidence that even insect maps are
metric maps. 27
Children's cognitive development - [L.B. Smith 89: From global similarities -
construction of dimensions in development]: working out a system of perceptual
dimensions, a system of kinds of similarities, may be one of the major
intellectual achievements of early childhood... The basic developmental notion
is one of differentiation, from global syncretic classes of perceptual
resemblance and magnitude to dimensionally specific kinds of sameness and
magnitude. [This view, of global to more detailed, is challenged in Rosch 1978]
[Goldstone/Barsalou 98]: 2-year-old children have difficulty identifying
whether two objects differ in their brightness or in their size, though they
can easily see that they differ in some way. Both differentiation and
dimensionalization occur throughout one's lifetime.
[Shepp 83]: younger children [perceive] objects as unitary wholes and fail to
attend selectively [characteristic of perception along integral dimensions
for adults]; older children's perception is more characterized by specific
dimensions ==> selective attention. ==> dimensions that are separable to
adults and older children are perceived as integral by the young child. 28
Some dimensions may be culture-dependent - e.g. time as circular [or its
directionality - Aymara]. The Western notion of time is comparatively recent
[Toulmin and Goodfield, The Discovery of Time, 65]. Some dimensions are
introduced by science [theory?] - e.g. the distinction between weight and
mass can only be learned by adopting Newtonian theory. 30
Cog Sci has two predominant goals - to explain cognitive phenomena, and to
construct artificial systems that can solve various cognitive tasks.
Conceptual spaces are static - they operate together with processes that are
dynamic - and can then generate falsifiable predictions. [Port/vanGelder 95,
Kelso 95, vanGelder BBS-98: Dynamical hypothesis in Cog Sci] 31
COMPUTATIONALISM - symbolic paradigm:
info processing - essentially the manipulation of symbols according to
explicit rules. Symbol concatenations = the language of thought, or Mentalese
[Fodor 75].
Pylyshyn 84: "If a person believes (wants, fears) P, then that person's
behaviour depends on the form the expression of P takes rather than the
state of affairs P refers to..." Thus, the manipulations of symbols are
performed without considering the semantic content of the symbols. 36
Fodor 81:
Insofar as we think of mental processes as computational (hence as formal
operations defined on representations), it will be natural to take the
mind to be, inter alia, a kind of computer. That is, we will think of the
mind as carrying out whatever symbol manipulations are constitutive of the
hypothesized computational processes. To a first approximation, we may
thus construe mental operations as pretty directly analogous to those of a
Turing machine. [Fodor 81: 230]
The material basis for the symbolic processes, be it logical, linguistic, or
of a more general psychological nature, is irrelevant to the description of
their results - the same mental state with all its sentential attitudes can
be realized in a brain as well as in a computer. This assumes a FUNCTIONALIST
philosophical position - once the input is given to the agent, the logical
machinery produces the output.
ANTI-PHYSICALIST: cannot be reducible to neurobiological or other physicalist
categories, because the functional role of the symbols and the inference
rules can be instantiated in many ways - neurophysiological or electronic,
say - causal relations involving the physical substrate cannot be diff for
for different realizations of the same logical relations. hence, not causally
connected with physical processes. [elaborated in Churchland 86:
Neurophilosophy, sec 9.5] Pylyshyn 84 Claim: My brain states are not, as we
have noted, causally connected in appropriate ways to walking and to
mountains. The relationship must be one of content: a semantic, not a causal
relation. 37
FRAME PROBLEM - [McCarthy/Hayes 69; Dennett 87, in Pylyshyn (ed.): The Frame
Problem in AI, Ablex, which also has Janlert 87]:
Problem of specifying what stays the same and what changes when an action
is performed. Some of these changes are "relevant" - others are not -
how to distinguish these? Various escape routes were tried but the frame
problem persisted in one form or another. The entire program of building
planning agents based on purely symbolic representations more or less
came to a standstill. 37
Predicates are "theoretical primitives", so they cannot emerge - there are no categories available for specifying the situation prevailing before the symbols came about. A successful system must learn radically new properties from its interactions with the world, and not merely form new combinations of the given predicates. 38 Despairing of circumventing the problem of the genesis of predicates, Fodor (75) goes so far as to claim that all predicates an agent may use during its entire cognitive history are innate to the system. 39 But then how did they originate evolutionarily? And what of notions that were not there in our pre-history? PREDICATES ARE DYNAMIC - the meaning of a concept changes - how is this to be captured?
Grounding: how are symbols connected to the world, instead of being defined in terms of other meaningless symbols? [Harnad 90]: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? ... The problem of connecting up with the world in the right way is virtually coextensive with the problem of cognition itself.
[Stewart 96]: Since linguistic symbols emerge from the precursors of the semiotic signals of animal communication, they always already have meaning, even before they acquire the status of symbols. On this view, formal symbols devoid of meaning are derivative, being obtained by positively divesting previously meaningful symbols of their significance. This process occurred historically in the course of axiomatic mathematics from Euclid to Hilbert. From this point of view, the "symbol-grounding problem" of computational cognitive science seems bizarre - why go to all the bother of divesting 'natural symbols' of their meaning, and then desperately try to put it back... 38-9
[Fodor 75]: All predicates that an agent may use during its entire cognitive history are innate to the system. 39
Induction was very important to the logical positivists in order to meet their verificationist aims. It led to famous problems - [Hempel 65]: the paradox of confirmation, and [Goodman 55]: the riddle of induction (chapters 3 and 6). Natural kinds are realist, following Aristotle - they exist in the world independently of human cognition. 39 For induction it is not sufficient that the properties exist out there - they must form in our mind ... how do predicates arise? 39
The symbolic approach to concepts has no place for creative inductions, no genuinely new knowledge, no conceptual discoveries. What is being denied [is that] one can learn a language whose predicates express extensions not expressible by those of a previously available representational system. [Fodor 75, p. 86] ["extensions" here are the contents of the predicates, i.e. the set of objects that belong to the concept]. 40
Rather than computers, cognitive systems may be dynamical systems; rather than computation, cognitive processes may be state-space evolution within these very different kinds of systems. [van Gelder 95] 42
There is a tension between Philosophy of Mind and Cog Psych [macro] and Neuroscience [micro]. What is needed is a medium-scale theory, [between] the small-scale theory of neurology and the large-scale theory of psychology. 50 For many cognitive processes, for example concept formation or word recognition, the neuronal or connectionist scale is too fine-grained to be of explanatory or constructive value. 57
Mandler [92], on language learning and concept formation in infants, contends that "image-schemas provide a level of representation intermediate between perception and language that facilitates the process of language acquisition." (p. 587 in [Jean Mandler, How to build a baby II: Conceptual primitives, Psychological Review v. 99, 1992, p. 587-604])
[Mandler 92]: human infants represent information from an early age at more than one level of description. The first level is the result of a perceptual system that parses and categorizes objects and object movements (events). I assume that this level of representation is roughly similar to that found in many animal species. In addition, human infants have the capacity to analyze objects and events into another form that, while still somewhat perception-like in character, contains only fragments of the information originally processed. The information on this next level of representation is spatial and is represented in analog form by means of image-schemas.... This level of representation also allows the organism to form a conceptual system that is potentially accessible, that is, it contains the information that is used to form images, to recall, and eventually to plan. A similar level of representation apparently exists in primates as well. ... Humans, of course, add still another level of representation, namely, language. Whatever the exact nature of the step required to go from image-schemas to language, it may not be a large one, at any rate, not as large as would be required to move directly from a conceptless organism to a speaking one. p. 602
To have a concept is, among other things, to have a capacity to find an invariance
across a range of contexts, and to reify that invariance so that it can be
combined with other appropriate invariances. 59
In many semantic theories no distinction is made between properties and concepts.
[But] properties should be seen only as a special case of concepts.
- A property is defined with a single dimension or a small number of integral
dimensions forming a domain (e.g. colour space)
- A concept may be based on several separable subspaces. 60
For many properties, one can have empirical tests to decide whether it
is present or not in an object.
In particular, we often perceive that an object has a specific property. 61
EXTENSIONAL SEMANTICS: Tarski's model theory: property is a set of objects with
that property. (Mapping from language L to model structure M, and each predicate
in L is mapped to some subset of objects in M, where M represents "the world").
Problem: INTENSIONAL properties. E.g. "small" is not just a set of small
objects. An emu is a bird, [and if properties were intersections, then just
as a "red emu" is a "red bird",] a "small emu" would be a "small bird" - but
it is not. 61
INTENSIONAL SEMANTICS: L is mapped to a set of possible worlds instead
of a single world. [David Lewis, Montague 74, Hintikka 61 Kripke 59]
- PROPOSITION: function from possible worlds to truth values
determines all pw's where some P is true.
PROPERTY: function from possible worlds to sets of objects - but can also
define it as a function from sets of objects to pw's 62
The latter is the extensional set; the former the intensional notion
of property.
[The terminology function from X to Y - does not? tally with figure]
1. Completely formal - Has no perceptual basis -
Cognitively unhelpful - what happens when a person notices that two
objects have the same property in common (e.g. why colours look
similar).
2. More serious problem: Induction
How does one relate two objects that share some property - how does a person
perceive the similarity in this case - implies definitions may not be
[Goodman 55] Riddle of induction:
emeralds - all found so far (recognized as emeralds based on some other
property) are green. Now consider the predicate grue: green until
2006 and blue thereafter. Why do we call emeralds green and not "grue"?
Properties that function in induction are called "projectible".
Properties like grue are not projectible - but (since we don't know what
might happen in other possible worlds) green and grue are not
distinguishable in the intensional model.
3. Can't express the ANTI-ESSENTIALISTIC doctrine - that things have
none of their properties necessarily. Stalnaker 81: can't even
express this aspect in an intensional model.
Essential property - class of being self-identical - e.g. "being the
same age as Ingmar Bergman" - is essential to Ingmar Bergman.
Because there is no diff between world-indexed and world-independent
properties - cannot define independent distinctions corresp to the
intuitive ones needed to state a coherent version of the
anti-essentialist thesis. ==> Needs an account of properties without
using PWs and individuals.
4. [Putnam 81]: the definition of property does not work as a theory of
the "meaning" of properties.
Assumes: a. Meaning of a sentence is a function that assigns it a truth
value in each possible world, and b. Meaning of parts of a sentence
cannot be changed w.o. changing the meaning of the whole sentence.
1. A cat is on a mat
2. A cat* IS on a mat*
Three classes of possible worlds:
a. Some cat is on some mat and some cherry is on some tree
b. Some cat is on some mat and no cherry is on any tree
c. Neither (a) nor (b) holds
3. x is a cat* iff (a) holds and x is a cherry, or (b) holds and x
is a cat, or (c) holds and x is a cherry.
4. x is a mat* iff (a) holds and x is a tree or (b) holds and x is a
mat; or (c) holds and x is a quark.
Given these definitions, it turns out that (1) is true in
exactly those possible worlds where (2) is true. Thus, according to
the received view of meaning, these sentences will have the same meaning.
There are always "infinitely many interpretations that assign the
'correct' truth-values to the sentences in all possible worlds, no
matter how these 'correct' truth values are singled out" Thus
"... truth-conditions for whole sentences underdetermine reference."
Reason: there are too many potential properties if they are defined as
functions from objects to propositions.
Let conceptual space S = D1..Dn of quality dimensions. A point in the space
is v = (d1, ..., dn). A property = a region in conceptual space.
STAR-SHAPED: a subset C of conceptual space S is star-shaped w.r.t. a point p
if, for all x in C, all points between x and p are also in C.
CONVEX: a subset C of conceptual space S is convex if, for any x, y in C, all
points between x and y are also in C.
Criterion P: a natural property is a convex region of a domain in a
conceptual space. 70
Shepard: [natural kinds] in the individual's psychological space ... although
variously shaped, are not consistently elongated or flattened in particular
directions. Why? Cognitive economy - handling convex sets puts less stress on
learning.
Note: "between x and y" requires a mapping. E.g. in colour space, which is
circular in hue, such a mapping for between-ness must be along curved lines
(constant radius in the space) rather than straight lines. 72
Colour properties - iso-semantic lines (constant chromaticity / blackness) -
are convex. 74 Most linguistic concepts follow criterion P. 76
Re: the problems with the intensional-semantics model of property, a Natural
Property is: 78
1. perceptually grounded
2. helpful for induction - since only convex properties can be induced, it
eliminates many unlikely candidates for generalization
3. rich enough to state the anti-essentialism thesis, since the property
definition is embedded in a conceptual space (maps properties into a logical
space) 79
4. the Putnam and Goodman functions are non-convex, so those problems go
away. 80
Evolutionarily, this is needed in order to make the right inductions
(wrong-inducing organisms go extinct). 82
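[NOTE: a Monte-Carlo sketch (mine, not the book's) of checking Criterion P for a region given as a membership predicate over a box of a conceptual space; the "reddish" example shows how a hue dimension treated as a straight line fails the test, echoing the remark above that between-ness on the colour circle must follow curved paths.]

    import numpy as np

    def is_convex(member, lo, hi, pairs=500, samples=20, seed=0):
        # Criterion P check: for random pairs of points inside the region,
        # every sampled point on the connecting segment must also be inside.
        # A failure proves non-convexity; success only suggests convexity.
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        cloud = lo + rng.random((20000, lo.size)) * (hi - lo)
        inside = np.array([p for p in cloud if member(p)])
        if len(inside) < 2:
            return True                      # too few points to test
        for _ in range(pairs):
            x, y = inside[rng.integers(len(inside), size=2)]
            for t in np.linspace(0.0, 1.0, samples):
                if not member((1 - t) * x + t * y):
                    return False
        return True

    # a "reddish" region on a (hue, brightness) square, with hue wrapping
    # around at 0/1: non-convex if hue is treated as a straight dimension
    reddish = lambda p: (p[0] < 0.1 or p[0] > 0.9) and p[1] > 0.3
    print(is_convex(reddish, [0, 0], [1, 1]))    # False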
Prototype theory: graded properties ==> convex Voronoi regions.
Aristotelian concepts - necessary and sufficient conditions - hence all
members are equal. Graded properties: e.g. "red" or "bald" are well known to
be graded, but so are "chair" or "bird" - a robin is more prototypical than,
e.g., a penguin or an emu.
E.g. it is well known that vowels are characterizable by their first two
formant frequencies (F1/F2). Plots of the vowel distributions acceptable in
American English show them to be convex regions in F1-F2 space. Criterion P
makes a falsifiable prediction: that other languages will also have convex
vowel regions.
[IDEA: Even without knowing the word, I can say that Aristotle ==>
Aristotelian and not Aristotle-ian. How do I know this rule?]
If we consider prototypes as the "centers" of different clusters in
conceptual space, then, based on a Euclidean metric for any domain and a
similarity measure of the type e^(-dist^k) (or any monotone f(dist)), the
resulting regions become convex VORONOI tessellations. [88]
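[NOTE: a minimal sketch of prototype-based categorization; the decision regions of the nearest-prototype rule are exactly the Voronoi cells - convex under the Euclidean metric (k=2), only star-shaped under the city-block metric (k=1). The F1/F2 prototype values below are rough illustrative numbers, not data from the book.]

    import numpy as np

    def nearest_prototype(x, prototypes, weights=None, k=2):
        # classify x by the nearest prototype under a (weighted) Minkowski
        # metric; any monotone similarity function of distance yields the
        # same partition, i.e. the same Voronoi tessellation
        x = np.asarray(x, float)
        w = np.ones(x.size) if weights is None else np.asarray(weights, float)
        dists = [np.sum(w * np.abs(x - np.asarray(p, float)) ** k) ** (1.0 / k)
                 for p in prototypes]
        return int(np.argmin(dists))

    # rough F1/F2 (Hz) prototypes for three vowels, for illustration only
    protos = {"i": (280, 2250), "a": (730, 1090), "u": (310, 870)}
    labels = list(protos)
    print(labels[nearest_prototype((700, 1200), list(protos.values()))])   # a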
Stop Consonant Phonology: dimensions = voiced/unvoiced and
labial-dental-velar ==> boundaries analyzed between /b/, /d/, /g/, /p/, /t/,
/k/. The map [Petitot 1989] indicates that /p/ and /d/ are adjacent, whereas
/b/ and /t/ are separated (i.e. have a higher contrast).
.
/|\
voiced | b | d | g
| __/ _/
| --' \_.-' \__,---
| | /
un | p | t | k
voiced | |
|_________________________\
labial dent velar /
[petitot 1989]
For the city-block metric, the boundaries are at 0, 45, or 90 degrees - and
the tessellations are STAR-SHAPED (star-convex). [91]
[NOTE: However, considering that Manhattan metrics arise in non-integral
(separable) domains, between-ness should be taken along city-block paths, and
in this sense these tessellations are also convex.]
HEIR: complex concepts in kinship, such as "heir", are also convex - i.e. if
x is an heir to z, then any y on the path from z to x is also a legal heir.
["Shape bias" in new words - Jones and Smith Cog Dev v.8 1993]
DYNAMICS [given the Marr and Nishihara 3D human model] - Marr and Vaina 82 -
extends to actions - differential equations for movements of the body parts -
related to FORCES applied to body parts.
Forces ==> underlies much of our understanding of actions and verbs [Talmy]
Can extend to "Social Force" - [Kelso 95 book]
FUNCTION - e.g. chairs - mostly defined by function and not shape. Can
relate functions to SET of ACTIONS that the object can afford. May be
reducible to force dynamics [98]
[NOTE: OR may be reduced to usage CONVENTION / CREATIVE usage w similarity to
sitting posture. ]
Properties - based on one domain (may be 1-D or have integral dimensions).
Concepts - involve multiple (separable) domains.
Similarity: weight wi for dimension i: d(x,y) = sqrt( SUM( wi * (xi-yi)² ) ).
The wi are context-dependent - they can vary based on PERSPECTIVE - taking a
particular perspective means giving some domain more attention. For example,
whether something is perceived as a cup or a bowl may depend on the context
in which it is experienced.
SPATIAL SCALING: subjects, when sensitized to certain areas of a dimension,
find lengths in this area to be longer ... say objects of 1-2 cm are in one
category and objects of 3-4 cm are in another. By attending to the gap
between 2 and 3 cm, subjects will selectively highlight this difference, so
that the perceived distance between 2 and 3 cm objects is greater than that
between 1-2 cm or 3-4 cm objects. Also, there is competition between
dimensions - the x axis is more sensitized when categorization depends more
on x; this effect is larger for separable dimensions than for integral ones.
[Goldstone 94: Influences of categorization on perceptual discrimination,
J Exptl Psych 123:178-200; also 94: role of similarity in categorization,
Cognition v.52]
Aristotle's theory of essences: essential properties of "human" - "rational"
and "animal"; peripheral properties - "featherless", "bipedal". May not be
very germane.
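[NOTE: a toy sketch of the context-weighted distance above; the cup/bowl prototypes, the test item, and the weights are all made-up numbers, used only to show that re-weighting the dimensions can flip which prototype is nearest.]

    import numpy as np

    def weighted_dist(x, y, w):
        # d(x,y) = sqrt( SUM( wi * (xi - yi)^2 ) )
        x, y, w = (np.asarray(a, float) for a in (x, y, w))
        return float(np.sqrt(np.sum(w * (x - y) ** 2)))

    # hypothetical (width, depth) prototypes and test item
    cup, bowl, item = (1.0, 1.0), (2.0, 0.6), (1.8, 0.9)

    for context, w in [("equal attention", (1.0, 1.0)),
                       ("depth attended ", (0.1, 3.0))]:
        d_cup, d_bowl = weighted_dist(item, cup, w), weighted_dist(item, bowl, w)
        print(context, "->", "cup" if d_cup < d_bowl else "bowl")
    # equal attention -> bowl ; depth attended -> cup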
Theory-theory models of similarity - base similarity on causal interactions.
Representation = representation of similarities [S. Edelman BBS 96].
Is similarity distance-based? SYMMETRY? Tversky: similarity is asymmetric
[see also Langacker] ==> Tel Aviv is judged to be more similar to New York
than New York is to Tel Aviv. [112] This may be because the salience of
dimensions changes - when Tel Aviv is compared to NY, other dimensions will
be more prominent than when NY is being compared to TA. [In Langacker's
terms - when NY is the trajector, it profiles different aspects of the
domain. 113]
CONTRAST CLASSES [classes against which an element has to be contrasted]:
"white wine" is not white - role of CONVENTION 115 [the colour just
distinguishes it from darker (red) wines, so it does not have to adhere to
larger notions of "red" or "white"; similar to "small elephant"].
iron cast vs cast iron 116 [Murphy 88, 90; Wisniewski 96]; "porcelain cat"
(like "stone lion" or "fake gun")
- "brown apple" ==> induces wrinkles [Smith /Osherson et al 88: Cognition: combining prototypes] - "wooden spoon" ==> is bigger [Medin / Shoben 88 Context and structure in conceptual combination Cog Psych] [Kashmiri has a word: chonche ==> used in cooking] (similar to "big dog" "small dog" ==> induces coloured exemplars...] (see [Medin/Shoben:1988,] eric.ed.gov): Three experiments evaluated modifications of conceptual knowledge associated with judgments of adjective-noun conceptual combinations. The subjects included 109 students at the University of Illinois (Champaign). Results indicate that models that attempt to explain combined categories by adding or changing a single feature are not successful.
PET BIRD [Hampton 97]: habitat domain - "domestic" or "cage" - inherited
from "pet"; for the prototypical bird, the habitat is the wild. Skin domain -
"feathered", taken from bird, as opposed to "furry", which is the prototype
for pet. The general principle appears to be that in cases of conflict, one
selects the region compatible with the CONTEXT.
Bathwater - its temperature range is narrower, and towards the hotter end of
the "tapwater" range - hence "hot bathwater" is a hotter range than "hot
tapwater".
RED: [121]
red book : purely compositional
red wine : more purple
red hair : copper
red skin : tawny
red soil : ochre
redwood : pinkish brown
[Analyzes this by positing a subspace (a smaller colour spindle embedded in
the larger one) for skin colours inside the colour cone for all colours.
Within this skin spindle, the side towards the colour being talked about is
selected - thus WHITE skin is beige, BLACK skin is the darkest skin colour
(still brown), etc.]
[REDCOAT - neither red nor coat - the class of bahuvrihi compounds.]
[IDEA: why must we assume productivity? Langacker: those of these that are
units are fixed by CONVENTION. Convention requires one to model diachronic
processes. Why was a name chosen? That requires a model of the user group;
it is not sufficient to address synchronic data alone. However, the
productive mechanisms may still provide constraints on the structure - there
should be some relation, though even this is sometimes absent, e.g. "paTal
tolA" or "kick the bucket"...]
PROTOTYPE UPDATE - center = mean - updated with each new instance x by (xi-pi)/(n+1)
NONMONOTONIC UPDATE - when shifting from "basic" to subordinate, can remove
some properties - i.e. specifics overrule the general
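[NOTE: a sketch of the incremental (monotonic) update rule above - keeping the prototype at the running mean of the exemplars seen so far; the exemplar coordinates are arbitrary.]

    def update_prototype(p, n, x):
        # with p the mean of the first n exemplars, the new exemplar x
        # moves each coordinate by (xi - pi)/(n + 1), so p stays the mean
        return [pi + (xi - pi) / (n + 1) for pi, xi in zip(p, x)], n + 1

    p, n = [0.0, 0.0], 0
    for x in [(1.0, 2.0), (3.0, 2.0), (2.0, 5.0)]:
        p, n = update_prototype(p, n, x)
    print(p, n)    # [2.0, 3.0] 3 - the mean of the three exemplars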
CONTEXTUAL prototypes - from [Barsalou 87] When ANIMALS are talked of in the
context of MILKING, cow and goat are more typical, whereas when animals are
considered for RIDING, horse or mule are more typical.
Handles this using "contrast classes" - a partitioning of the set of animals
into "riding" vs "milking" etc. [130]
[Labov 73] images of cups of diff breadths and heights - subjects asked to
name it in "neutral" context,
food context ("imagine it filled with mashed potatoes"), and "flowers"
context ("it has cut flowers in it"). For the same width, food context
showed BOWL much more likely, whereas for the flower context, for the same
depth, VASE was much more likely.

Variations in cup and bowl; The degree of BOWL-ness depends on whether
food is in it or not.
Similarly, degree of VASE-ness depends on whether it contains flowers [Labov 73].
This is handled by changing the weights of the similarity measure based on
context. Thus points widely sep in y but closer in x may be viewed as similar
if x-distance has higher weight than y-dist. [133-4]
A problem exists however, with the mechanism for handling new information
... Let robins be the most prototypical birds. If I first learn that Gonzo is
a bird, and then that it is a robin, I have indeed received new information.
However, according to [the Dim Space] proposal I will locate Gonzo at the
same point in space both before and after the information that it is a
robin. [132]
[This results in a change in CONFIDENCE - needs to account for VARIABILITY of
concepts - e.g. the s.d. of birds is > than that of robins p.140 ]
VARIABILITY: How to measure? use GV instead of PV:
- PV: Prototypical Voronoi - nearest prototype center
- GV: Generalized Voronoi - each prototype has a different frequency ==> larger
and smaller circles ==> V.D. computed w.r.t. circle boundary
- NN: Nearest Neighbour - same cat as nearest example neighbour
(appears to work quite well - expts [148-150])
- AD: Average Distance - belongs to the cat with the smallest avg dist
In Gardenfors' expts with shell shapes - expt 1 - users can see all the
exemplar shells while choosing - here NN has the best predictive power, and
the GVs are not good - but the diffs are minor (5% T-statistic). In expt 2
they have to remember. Agreement with the majority classification of
subjects: PV: 25, GV: 26, NN: 30, AD: 22. Thus NN is better even here.
In both expts, borderline cases were presented; perhaps the results would be
otherwise in other cases. [148-9]
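[NOTE: a small sketch of two of the four decision rules compared in these experiments - nearest neighbour (NN) and smallest average distance (AD); the "shell" exemplar coordinates are invented for illustration.]

    import numpy as np

    def classify(x, exemplars, rule="NN"):
        # exemplars: {category: [points]}.  NN = same category as the single
        # nearest exemplar; AD = category with the smallest average distance
        # over its exemplars.
        x = np.asarray(x, float)
        scores = {}
        for label, pts in exemplars.items():
            d = [np.linalg.norm(x - np.asarray(p, float)) for p in pts]
            scores[label] = min(d) if rule == "NN" else float(np.mean(d))
        return min(scores, key=scores.get)

    shells = {"A": [(1.0, 1.0), (1.2, 0.8)], "B": [(3.0, 3.0), (2.4, 3.1)]}
    print(classify((2.0, 2.4), shells, "NN"),
          classify((2.0, 2.4), shells, "AD"))    # B B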
Wittgenstein's Philosophical Investigations (1953) defended a view that is
often summarized by the slogan "meaning is use".
SEMANTIC MODELS:
Frege/Tarski:
                   truth-value
    Language ------------------==> World
Intensional Semantics:
                   truth        ,------==> possible world 1
    Language ------------------==> possible world 2
                                `------==> possible world 3
Situation Semantics [Barwise and Perry 83]: language is mapped to situations
- partial descriptions of the world; facts are true or false in each
situation.
Aristotle: Spoken words are the symbols of mental experience and written
words are the symbols of spoken words. Just as all men have not the same
writing, so all men have not the same speech sounds, but the mental
experiences, which these directly symbolize, are the same for all, as also
are those things of which our experiences are the images.
[De Interpretatione, tr. E.M. Edghill, opening paragraph]
Note - use of the word "image" -
De Saussure: A linguistic sign is not a link between a thing and a name,
but between a concept and a sound pattern.
[i.e. names are sounds, and things directly are not what is referred to]
[** Thompson 1995: Colour vision, evolution, perceptual content, Synthese 95]
[T]he biological function of colour vision is not to detect surface
reflectance, but rather to generate a set of colour categories that have
significance for the perceptual guidance of activity. In my view, the
categories that give structure to colour perception are indeed modes of
presentation in visual perception, but they are not modes of
representation, at least not in the typical computationalist [realist]
sense, because colour perception does not represent something that is
already present in the world apart from perceivers; rather, it presents
the world in a manner that satisfies the perceiver's adaptive ecological
needs...
Assuming a traditional truth-functional account of semantics, Quine writes
(1979):
What were observational were not terms but observation sentences.
Sentences, in their truth or falsity, are what run deep, ontology is by
the way.
The point gains vividness when we reflect on the multiplicity of possible
interpretations of any consistent formal system. For consider again our
standard regimented notation, with a lexicon of interpreted predicates
and some fixed range of values for the variables of quantification. The
sentences of this language that are true remain true under countless
reinterpretations of the predicates and the range of values of the
variables. [158]
[This is the same point that was made by [Putnam], and highlighted by the
Goodman "grue" Puzzle of Induction. ]
Neither Lakoff nor Langacker, who use the notion extensively, gives a very
precise description of what constitutes an image schema. 163
SYNTAX from semantics 164-5
**Petitot 95: Morphodynamics and attractor syntax: constituency in visual
perception and cognitive grammar, in [Port/vanGelder:Mind as Motion 95]
syntactic structures linking participant roles in verbal actions are
organized by universals and invariants of a topological, geometric, and
morphological nature... We show how constituent structures can be
retrieved from the morphological analysis of perceptual scenes. ... The
formal universals, which are not characterizable within the theory of
formal grammars, need not necessarily be conceived of as innate. They
can be explained by cognitive universal structures. ... In so far as
these structures are not symbolic but of a topological and dynamical
nature, there exists in syntax a deep iconicity. At this level, syntax
and semantics are inseparable. [Petitot 95 p.256]
ANALYTIC-IN-S: certain statements become analytically true in a conceptual
space S - e.g. if something is green it cannot be red, as these are disjoint
regions in S. Thus "analytic-in-S" can be defined in terms of the topological
and geometric structure of the conceptual space S, including its partitioning
into regions for concepts. Since different conceptual spaces do not have the
same underlying geometrical or topological structure, they will yield
differing notions of analyticity.
Cognitive semantics ==> computational: implementing the diagrammatic
representations of Langacker or Lakoff's image schemas is difficult.
[Holmqvist 93, 94, 99] develops implementable representations of image
schemas, based on superimpositions of image schemas. [Regier 96] and
[Zlatev 97] are also steps in this direction.
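[NOTE: a toy illustration (mine, not the book's) of "analytic-in-S" incompatibility as region geometry: two property regions are incompatible in S iff they share no point of the space - checked here on a sampled one-dimensional toy hue axis with hypothetical GREEN and RED intervals.]

    def overlaps(region1, region2, grid):
        # two property regions are compatible iff some sampled point of
        # the space belongs to both
        return any(region1(p) and region2(p) for p in grid)

    grid = [i / 100 for i in range(101)]     # toy 1-D hue axis, 0..1
    green = lambda h: 0.25 <= h <= 0.45      # hypothetical GREEN region
    red = lambda h: 0.90 <= h <= 1.00        # hypothetical RED region
    print(overlaps(green, red, grid))        # False ==> "green" excludes "red" in S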
criterion L: lexical expressions are represented semantically as natural
concepts.
LA: basic adjectives are natural properties
LV: basic verbs are dynamic natural concepts - e.g. "leave" -
trajector follows path from inside to outside. Verbs like "sit"
or "support" have no trajectory, but involve force dynamics.
LN: basic nouns are multi-domain, non-dynamic, natural concepts
prepositions -
[Landau and Jackendoff 93] - two distinct cognitive systems - the "what"
system for objects, and the "where" system for places.
"LEAVE" - image schema of tr leaving bounds of lm and over time
(horiz-axis), separating in distance (vert axis). Different physical
paths may instantiate it. However, it is reasonable to imagine that
if path p and q are LEAVE, then a path between them would also be
LEAVE.
[NOTE: does not hold if there are two exit points from the lm:
**IDEA: Componential convexity - can be decomposed into disjoint
components, each of them convex.
In a larger sense, convexity fails for all disjunctions, and all we can claim
may be piecewise convexity, with "higher" concepts being composed out of such
pieces. ]
[Bailey-etal 98]: "Executing Schema" rather than image-schema. Labels
(hand) actions and a simulated agent carries out similar actions.
[Does not attempt to show convexity in this space]
LN: "apple", "thunder", "family", "language" : entities that nee not
constitute a region in some space, but show "correlations" in a num of
domains ==> clusters. Those with potential pragmatic significance -
ie. whether they are helpful in choosing actions - are those that get
named.
[IDEA**: Paper - "It's only Pragmatics" - no role for indep semantics]
LOCATIVE PREPOSITIONS: "in-front-of-castle" - maps the RO (castle) to a
region - modeled as a set of vectors [Zwarts 95]. It turns out that "simple"
prepositions ["far behind" and "3m behind" are not simple] still hold even if
the vector is shortened. "Between" for vectors can be taken angularly or
translationally ==> define translational and radial convexity. All simple
locative prepositions are convex in both; all locative prepositions are
convex radially.
HERSKOVITS (pers. comm.) on problems of region view of prepositions:
supporters of the region view generally assume, without
justification, that the meaning of a prepositional phrase is fully
"reducible" to a region; i.e. this region depends strictly on the
prep, the lm, and sometimes an observer; and a uniform relation of
inclusion relates the target to this region. In other words, the
spatial prep is true iff the object is in such a region.
1. Many spatial preps such as "on", "against", "upon" and "on top of"
require contiguity. Not reducible to a region [Why? may be a
lower-dim manifold]
2. The region is CONTEXT-DEPENDENT. may involve environmental
characteristics beyond the frame of reference [see Hersk 86]
3. Even when such a region is definable, inclusion may be necessary but not sufficient:
- on requires support also
- throughout, about, over (covering): target must be distributed or
extended all over the region, not at point
- "alongside" e.g. a flowerbed along a fence - must have its length
parallel to the fence.
- static senses of motion preps - e.g. "cable over the yard" (must
extend beyond the yard's edges); "path along the ocean" (must
be approximately parallel to the ocean)
- "among": target must be commensurable to other objects in lm
4. Applicability is not uniform within such a region: context-dependence
involving more than a frame of ref. [172-3]
[Bickerton 90, p.44-6] spatial contiguity constraints:
words used to express concepts e.g. "existence", "location", "possession",
"ownership" - only the last involves spatial contiguity.
English be for first two and ownership, have for possession. In no lang
is the same word used for loc and poss but diff verb(s) for existence and
ownership, and vice versa.
[Broschart 96] p.174
can be adapted to:
CLOSENESS: horiz dimension
identity - contact - possession - separation - comparably-located
TRANSFER: vert dimension
agent (source,controlling) - neutral (non-control) - patient (goal)
Both English and German prepositions appear to be convex in this space. [Fig. 5.10]
Preposition image-schemas: The same relation parameters of the
constituents may have different phonological terms in different languages.
For example, German von covers several English terms.
[Bowerman and Pedersen 92] - cross-linguistic study - 38 langs - of six
spatial situations:
1. support from below (cup on table)
2. clingy attachment (band-aid on leg)
3. hanging over/against (picture on wall)
4. fixed attachment (handle on door)
5. point to point attachment (apple-on-twig)
6. full inclusion (apple in bowl) (see fig 2 in Bowerman and Choi 95?)
In no language was there a term that covered 1 and 5 but not 3. ==> terms in
all languages constitute convex (contiguous) sets over the single dim.
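[NOTE: a minimal sketch of the convexity (contiguity) claim on Bowerman and Pedersen's one-dimensional scale of situations 1-6; the coverage sets are rough illustrations (English "on" roughly covers 1-5, and no attested term covers 1 and 5 while skipping 3).]

    def contiguous(covered, scale):
        # a term is convex on a 1-D ordered scale iff the situations it
        # covers form one unbroken block of the ordering
        idx = sorted(scale.index(s) for s in covered)
        return idx == list(range(idx[0], idx[-1] + 1))

    scale = [1, 2, 3, 4, 5, 6]                 # support ... full inclusion
    print(contiguous({1, 2, 3, 4, 5}, scale))  # True  - e.g. English "on"
    print(contiguous({1, 5}, scale))           # False - the unattested pattern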
Lakoff 94 p.203: "The generalizations governing metaphorical expressions are
not in language, but in thought: they are general mappings across
conceptual domains... In short, the locus of metaphor is not in
language at all, but in the way we conceptualize one mental domain in
terms of another...
[p.215]: metaphorical mappings preserve the cognitive topology (i.e. the
image-schema structure) of the source domain, in a [manner]
consistent with the inherent structure of the target domain."
etymology: metaphor: meta (over, across) + pherein (carry) ==> literally,
to carry across
"peak of a career" - maps peak in height to peak in social (/corporate)
status - and space to time. Extends to "higher" rank, "climbing" the
hierarchy, etc. [L&J 1990]
Space to time - many metaphors - "longer" and "shorter" intervals;
"distant", has a great future "in front of him", etc. future ==> front.
Length dimension [space] is more fundamental than time.
Two conceptions of time:
Speaker movement: tasks are "in front of us", some events are "behind us",
etc.
[Shyan Munshi, co-bartender with Jessica Lal : it "happened a long time ago"
and he had "moved on". ]
SPATIALIZATION OF FORM HYPOTHESIS [Lakoff 87, p.283]: a variation of the
invariance principle: "strictly speaking the spatializn of f hyp requires a
metaphorical mapping from phys space into a "conc space". Under this
mapping, spat structure is mapped into conc structure. More spec, image
schemas (which structure space) are mapped into the corresp abstract
configurations (which structure concepts)."
Also draws the more radical conclusion: "Abstract reasoning is a special case
of image-based reasoning..." [1994, 229]
METAPHOR: Ling expression, originally applicable in domain D1 (source or
"vehicle"), is used in domain D2 (target, subject or "tenor" of the
metaphor). If it is a "creative" metaphor, then hearer adds the corresp
structure to her knowledge of D2. e.g. "viruses" applied to computers ==>
acceptance, and then extension - "disinfect", "vaccinate" etc. Metaphorical
meanings become entrenched when the speakers view E as a natural expression
in D2. Often the original meaning is lost altogether - e.g. "touchstone",
"scapegoat".
TOUCHSTONE (1481)
was black quartz, used for testing the quality of gold and silver
alloys by the color of the streak made by rubbing them on
it. Figurative sense is from 1533.
SCAPEGOAT :: 1530, "goat sent into the wilderness on the Day of Atonement,
symbolic bearer of the sins of the people," coined by Tyndale from scape
(~escape) (n.) + goat.
[Verbrugge 80] Metaphoric processes are not solely language driven -
perceptual experiences can also be metaphoric -- e.g. recognizing a familiar
object in the guise of a cloud, or seeing the undulation of ocean waves in a
field of grain.
RR Verbrugge - Transformations in knowing: A realist view of metaphor
In R. P. Honeck & R. R. Hoffman (Eds.), Cognition and figurative language,
1980
[IDEA: ocean waves as prototypical of wavy motion. Pendulum as prototypical
of oscillating motion ==> perceptual prototypes drive linguistic metaphors.
Note: Ramachandran: may be hardwired - synesthesia. ]
red book - compositional
red wine / hair / skin / soil / wood ==> ALL METAPHORICAL?
==> used merely to distinguish the contrast classes. Note also that mostly
"basic" colour terms (in the sense of [Berlin and Kay]) are used - "lilac"
will be used only if "blue" is already there.
[Brostr\"om 94]:
If we regard the reference and meaning of colour terms as relative rather
than absolute, we avoid the conclusion that we are dealing with
metaphor. There is no understanding in the prototypical metaphorical sense
involved. We do not understand caucasian skin as though it were paint
white, we call it "white" to distinguish it from other skin colours, such
as "yellow", or "black", or "red".
Similarly for adjectives like "hot" and "cold" - "hot water" is very diff when
it is for tea vs for a bath.
[Berlin/Kay ordering: see notes in [Taylor 2003].]
[Harder 96]: The point is that over and above the conceptual dimension which linguistic meanings tend to have, there is another dimension of the meaning of a word which in certain cases is the only one: the meaning of the event of using it. Words like hello fit directly into patterns of life (including experiential qualia) without requiring conceptual mediation (like alarm calls in monkeys)... If it were not for the mental skills, meaning could not exist, but neither could other intelligent coordinated activities: put meaning inside the head, if you like, but then football goes with it.
SEMANTICS/PRAGMATICS interplay: the basic semantic meaning of red is given as a region in the colour spindle. The contrast class, which is pragmatically given, then determines a restricted colour spindle (e.g. "black skin", p.121 above), which determines the contextual meaning of "red". The contrast class may be determined by the immediate noun head, or by a more general context present in the speech act.
1. Mainstream linguistics view: syntax is the primary object of study; semantics is added when grammar is not enough, and pragmatics is what is left over.
2. Disciplines such as anthropology, psychology, situated cognition etc.: actions are basic; pragmatics are rules for linguistic actions; semantics is conventionalized pragmatics [Langacker 87, Sec 4.2]; and finally syntax adds markers to help disambiguate when context is not enough.
[Brostr\"om 94]: If when looking at a dog, I think of his "face" is that the result of metaphorical categorization? When looking at a caterpillar? Or, to recast the question in domain terms, does the concept of face belong to the domain of the human body, to the more general domain of animate bodie, or to a domain of intermediate scope, say mammalian bodies?... Even if the concept of a domain may be a useful approximation, as in the term "metaphor" itself, it does not provide us with an explanation of the difference between litera [literal meaning] and metaphor. The reason why is that it is not possible to individuate domains, to tell one domain from another, independently of establishing which expressions are literal and which metaphorical. T- when is a categorand sufficiently different from its category to warrant the term "metaphor" is merely recast in other terms: when is a categorand sufficiently different from its category to warrant the positing of a domain boundary? That there is a "qualitative difference between litera and metaphor" is a "traditional misconception." DOMAINS ARE GRADED: Pedersen 95: When a child extends the meaning of "leg" from its own leg, to its father's leg, and then to human legs, animal legs, table legs, the domain of application of the word is constantly changing. Similarly for "face" - caterpillar face, face of a clock or a mountain - it is impossible to draw a line between the domain of literal meaning and the metaphor domain.
Language is conventional: connection between ling expression and meaning is
arbitrary - and has to be learned. 187
Realists tend to separate the learnability and communicative questions:
[Lewis 70]: I distinguish two topics: first, the description of possible
languages or grammars as abstract semantic systems whereby symbols are
associated with aspects of the world; and second, the descriptions of the
psychological and sociological facts whereby a particular one of these
abstract systems is the one used by a person or population. Only confusion
comes out of mixing these two topics.
The meanings of visual representations are mentally represented in the same
conceptual form as the meanings of words. If we can understand how the
semantic links are learned, we can translate back and forth between the
visual form of representation and the linguistic code.
[Freyd 83: Shareability: The social psychology of epistemology, Cog Sci
7:191-210] Knowledge, by the fact that it is shared in a language community,
imposes constraints on individual cognitive representations. Structural
properties of individuals' knowledge domains have evolved because "they
provide for the most efficient sharing of concepts," and she proposes that a
dimensional structure with a small num of values in each dim will be
particularly shareable.
An object A (e.g. a car) described in terms of two other objects B and C
known to both (e.g. a tomato and another car) - can result in a change
(distortion) of the hearer's original rep compared to the speaker's. Thus, A
may be sharing the colour of B and the shape of C - this would bring the
hearer's representation closer to the speaker's. Over time, a lang community
would come to stabilize concepts by forming a grid - [NOTE: dims often have
polarity pairs 195 - opposite words "hot/cold", "small/big" etc.] 192
We are evolutionarily predisposed to detect correlations among objects to
form clusters [and therefore, hierarchies]. [Smith and Heise 92]:
If we imagine multiples of local and dimension-wide distortions of the
similarity space -- distortions resulting from real-world correlations
between specific properties, from co-relations between material kind and
kind of motion, eyes and texture, eyes and kind of motion, shape and motion,
and so on -- then what emerges is a bumpy and irregular similarity space
that also organizes itself into multiple categories at multiple levels in
context dependent ways.
STABILITY: While individual concepts may move around, clusters are more
stable - though members may change, appear, or disappear, clusters are more
reliable as references for words. 193
- using Nouns ==> speakers are acquainted with the same cluster - a much less
severe assumption than that they know the same individual 194
[IDEA: even proper nouns are transient - our cells change over every so many
days]
[IDEA: UNITS - those that have prototypes: and are associated with a list of
properties - though no specific instance may have it - e.g. "bird" - small,
sings, flies, and builds nests in trees. These properties form the
expectations generated by the word "bird". Similarly for "small dog" etc.]
ADJECTIVES: when class of objects fall in same nominal category -
need to distinguish one element - refer to some domain such as colour ("red
block") or size ("big block")
[Givon 84] - adjectives serve a contrastive role.
Grid of domains ==> class of communicable references.
representational availability of such domains - precedes explicit
awareness of the domain - e.g. children may learn contrasts in one domain,
but may confuse them - "high" with "tall", or "big" with "bright" etc. [Carey
85] ==> [IDEA: Simulation studies? ]
Humpty Dumpty: "When I use a word, it means just what I choose it to mean -- neither more nor less." But language is like a game - we win when the other person assigns the same meaning; otherwise we lose. 199
Linguistic power concerns who is the master of meaning in a group:
- oligarchic or dictatorial - a small group (e.g. experts writing dictionaries) determines language usage.
- democratic - linguistic usage determined by "common usage" - e.g. slang - has emergent properties - like prices in a free market.
EMERGENT FEATURE of a collective system - Wiener's 1961 example of the "virtual governor": a system of AC generators - each is unsteady at 60Hz, but networked together they behave much more stably. [NOTE: is this why, when they fail, the whole "grid" fails?] The "virtual governor" or "mutual entrainment" is a self-organized property - the entire system has causal effects on the individual generators in the system. [IDEA: DYNAMIC SYSTEMS studies - how does this occur?]
Language consists of sublanguages - e.g. professional groups - which may be mostly oligarchic - e.g. entomologists' or lawyers' languages. TEST: if the word "technically" can be used ==> oligarchic - e.g. "Technically, a spider is not an insect"; but this does not hold for slang: "*Technically, a hooker is a prostitute". 198
Zlatev/Gardenfors: SOCIO-COGNITIVE SEMANTICS: GAME-THEORETIC perspective - linguistic conventions are a Nash equilibrium: an individual user's strategy can't be improved given the strategies of the others. 200
There is no linguistic meaning that cannot be described by cognitive structures together with sociolinguistic power structures. Semantics per se does not need external objects - the meanings of natural-kind terms can also change - e.g. before Copernicus, the Earth was "unmoving"; before Einstein, "mass" was a fixed property of an object. Or in Orwellian "Newspeak". [Or for children, or with inadequate exposure.] 201
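[NOTE: a toy Lewis-style signalling game (meanings, words and mappings all invented) illustrating the claim that a linguistic convention is a Nash equilibrium: neither the speaker's nor the hearer's mapping can be changed unilaterally to improve communicative success.]

    import itertools

    MEANINGS = ["cup", "bowl"]
    WORDS = ["kop", "bol"]

    def payoff(speaker, hearer):
        # expected communicative success: 1 when the hearer recovers the
        # speaker's intended meaning, averaged over meanings
        return sum(hearer[speaker[m]] == m for m in MEANINGS) / len(MEANINGS)

    def is_nash(speaker, hearer):
        # a convention is a Nash equilibrium if neither side can raise the
        # success rate by unilaterally changing its own mapping
        base = payoff(speaker, hearer)
        if any(payoff(dict(zip(MEANINGS, ws)), hearer) > base
               for ws in itertools.product(WORDS, repeat=len(MEANINGS))):
            return False
        if any(payoff(speaker, dict(zip(WORDS, ms))) > base
               for ms in itertools.product(MEANINGS, repeat=len(WORDS))):
            return False
        return True

    print(is_nash({"cup": "kop", "bowl": "bol"},
                  {"kop": "cup", "bol": "bowl"}))   # True  - a stable convention
    print(is_nash({"cup": "kop", "bowl": "kop"},
                  {"kop": "cup", "bol": "bowl"}))   # False - speaker can improve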