Monday, July 27, 2009

2. LEARNING FROM EXPERIENCE, LEARNING FROM EACH OTHER.


2A. MAN IN THE MOON: processing the world.

Making sense of what we see and hear.
• We sort new information into categories and patterns.
We use schemas to focus on what’s important and filter out what’s not.
• Schemas help us fill gaps in the information we get from the world.
“Can you read this?”
• We match patterns to solve problems.
We’re not aware of all the work our brains are doing.

There are big advantages to thinking the way we do.
• Reducing the world to stories and schemas helps us make decisions efficiently.
• Don’t spit into the wind: Our worldly knowledge keeps us on track.
• Our skill at seeing and matching even incomplete patterns lets us develop new combinations.

Gremlins in the data stream.
• WMD, anyone? It’s easier to see what we expect than what we don’t.
Like selective perception, social roles simplify our lives by limiting information and options.
• How many angels can dance on the head of a pin? Drawing conclusions from poor or incomplete information.
It’s hard to see or anticipate massive “threshold effects”.

We’ve got to trust our personal experience while stretching beyond it.
• Sometimes we exaggerate the likelihood of what comes easiest to mind.
"Risk Assessment"
We tend to remember the instances which support a claim and overlook contrary or missing data.
Sometimes we see meaningful patterns where there is only coincidence.
• It’s hard to imagine very large-scale risks.
Sometimes we apply the wrong patterns or analogies to a problem.
We need imagination; but don’t be handing over the keys to con men.
The made environment short-circuits the worldly reality check.
“Some information environments.”
• In a world without automated media, a signal’s frequency and intensity are a fair approximation of its importance.
• In a world without automated media, repetition usually signifies multiple independent sources, implying a greater chance of accuracy.
• As institutions standardize communications, we lose the richness of meaning.
• By overwhelming us with information, the media machines distract us from key issues.
• Repetition can make lies more acceptable as well as more believable.


* * * * * * * * * * * *

MAN IN THE MOON: processing the world.

We are pretty smart creatures. Our ancestors knew how to find water in deserts, make tools from rocks, get high from plants. Thousands of generations survived in conditions that would kill me in a few days. But the brain that evolved to meet these natural challenges acquired some quirky ways of operating that affect how we learn and decide-- and make political choices.

With our limited brains and senses, we cannot fully see the world, and sometimes we see things that aren’t there. There’s no Man in the Moon, canals on Mars, dragons in clouds, or Jesus in the soap suds at the car wash. But we see these things because we are, above all, pattern-seeking animals, and for the most part, that has had a high survival value. Our minds are structured not so much for seeking truth for its own sake, as for solving problems.

Because I’m interested in how we learn, I looked up what the psychologists have to say. They’ve got all sorts of interesting ideas, but there’s one model in particular that explains a lot and dovetails with much of my own experience. It helps me understand how we develop the stories that power political action.

I draw much of this discussion from a diffuse body of psych research called schema theory. A fellow by the name of Frederic Bartlett was writing about how our minds organize memory in 1932, and a little later Jean Piaget applied the idea of schemas to how kids learn. Many others have explored it since then. The work owes a lot, too, to theorists of decision-making ranging from Herb Simon to Thomas Kuhn, and focuses on the way we process information to solve problems. The way these researchers see it, the fundamental challenge is that there’s too much information, much of it not very good, while we have too little of the kind we need to make decisions.

Think of the sort of problem that might have faced the earliest people. You’re out there on the savannah. Sweat trickles down your neck and your knees hurt and you’re hungry. You smell and look and listen for lunch. That spotted ripple in the tall grass-- is that the wind, or something good to eat, or something that will eat you? Do you chase it with a spear, or sneak away and hope it doesn’t pounce?

Or how about longer-term problems? That pain in your side, should you just wait for it to go away by itself, or do you need to trade in your best hunting kit for a cure ceremony? When are the antelope going to be back? Should you set out now for the place with the fruit trees, hoping you’ll get there just in time for the season’s peak, but before the birds have eaten everything?

If the world were simply a collection of items, if we had perfect information and vast amounts of processing power, we could retrieve exactly the relevant fact or probability. We could crunch the numbers like a computer. We could use logic and proven facts to induce best solutions to clearly defined problems. But the world is full of uncertainties, and everything in it is related to everything else. So our minds, for the most part, and for most problems, work in quite a different manner.

Making sense of what we see and hear.
Our senses constantly stream input from the world. At any one moment, we are simultaneously engaged in many related information processing tasks.

• We sort new information into categories and patterns.

. . . a fundamental characteristic of our brain tissue . . . is its capacity to learn associations among events. . . . it strongly shapes the quality of our learning processes. Because brain tissue can learn which events tend to occur together, we are capable of dividing up the world into categories. Things that tend to occur together get grouped together in the mind. In fact, much of our cognition can be simulated rather well using computer systems called connection machines that do nothing but form categories by strengthening the links between events that occur together frequently. As simple as this process sounds, it leads to remarkably powerful learning (Cummins 174).
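The co-occurrence learning Cummins describes can be sketched in a few lines of code. This toy associator (my illustration, not drawn from any of the cited texts) strengthens a link every time two events occur together, so frequently paired events end up grouped in the machine’s “mind”:

```python
from collections import defaultdict
from itertools import combinations

class ToyAssociator:
    """Hebbian-style learner: links between co-occurring events get stronger."""
    def __init__(self):
        self.strength = defaultdict(int)  # (event_a, event_b) -> link strength

    def observe(self, events):
        """Strengthen the link between every pair of events seen together."""
        for a, b in combinations(sorted(events), 2):
            self.strength[(a, b)] += 1

    def association(self, a, b):
        """Current link strength between two events."""
        return self.strength[tuple(sorted((a, b)))]

# Events that tend to occur together get grouped together:
brain = ToyAssociator()
for _ in range(5):
    brain.observe({"thunder", "lightning"})
brain.observe({"thunder", "picnic"})
# "thunder" is now far more strongly tied to "lightning" than to "picnic".
```

Simple as it is, this is the core of the “connection machines” the quote mentions: no rules, no definitions, just categories emerging from repeated co-occurrence.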

From our accumulating experience of, say, what’s good to eat, we abstract prototypes from the common features, which we can use later to recognize more examples. We assemble action routines for finding food-- places to look in the wild or at the grocery store, words and behavior suitable for restaurants. We acquire stereotypes by which we judge complete strangers-- from their clothes, the way they walk, we can tell the kind of person we can hit up for a free meal, and the kind who probably won’t give us the time of day. We develop analogies between old problems and new, to find shortcuts to solutions.

Paradigms, prototypes, perceptual sets, roles, rules of thumb, stereotypes, scripts -- different writers use different terms and focus on somewhat different sets of thought and behavior. All involve sets of information that some researchers refer to generically as schemas or schemata. The main idea is that these lasting knowledge structures, not random collections of data, are the true units of our understanding. To what extent these are separate and fixed sets, or overlapping, fluid, and continually evolving, is still being debated.

Up to now, schemas have been depicted as hard structures in the symbolic domain, suitable for running on a von Neumann machine [a standard digital computer]. Increasingly, however, the notion of a schema is being ‘connectionized,’ so that schemas are not seen as Tinkertoy assemblies, but as emergent properties of the soft, subsymbolic realm. A schema is not a ‘thing,’ in this view, but a manifestation, a sort of oversimplified summary of the activities of a network which settles into a stable interpretation of the world by satisfying as many as possible of a vast number of different constraints . . . . Only in the most superficial sense can a schema of this kind be described as a mental object, a ready-made interpretation that is stacked in memory like a book on a shelf, always the same no matter how often it is taken down from the shelf and read. In fact, it is more like the pattern of waves on the surface of an ocean, reflecting the countless influences and forces at work beneath the surface of the water, and in the shifting, restless depths (Campbell 1989 p. 197)
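The “settling into a stable interpretation by satisfying constraints” that Campbell describes is exactly the behavior of attractor networks. A minimal sketch (a one-pattern Hopfield-style net of my own, assumed for illustration): a pattern is stored in the connection weights, and when the net is handed a corrupted version, it settles back to the stored interpretation:

```python
def settle(weights, state):
    """One synchronous update: each unit takes the sign that satisfies
    the most weighted constraints from the other units."""
    n = len(state)
    new = []
    for i in range(n):
        total = sum(weights[i][j] * state[j] for j in range(n) if j != i)
        new.append(1 if total >= 0 else -1)
    return new

# Store one pattern in Hebbian weights: w[i][j] = p[i] * p[j].
pattern = [1, 1, -1, 1, -1, -1, 1, -1]
n = len(pattern)
weights = [[pattern[i] * pattern[j] for j in range(n)] for i in range(n)]

# Corrupt two units; the net still settles into the stored interpretation.
corrupted = pattern[:]
corrupted[0] = -corrupted[0]
corrupted[3] = -corrupted[3]
restored = settle(weights, corrupted)  # restored == pattern
```

The schema here is not stored anywhere as an object; it is the stable state the whole network falls into, which is Campbell’s point about waves on the surface of an ocean.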

Schemas can be as trivial as knowing what clothes to wear at a party (though, now that I think about it, it’s clear that such knowledge must arise from a fairly profound understanding of the social structure) or as elaborate as a mechanic’s procedures for repairing a car. We can get very anxious when we’re in territory our schemas don’t cover-- walking into a bar in a different neighborhood or a different country, for instance. I remember in one highly chemicalized environment being virtually unable to speak, because I didn’t know how to interpret and respond to what my friends said to me; their words had been stripped of all the familiar markers of tone and context.

Stories, in the sense that I use the term, are a particular kind of schema-- our conscious, visible explanations for the icebergs of implicit true and false knowledge. We tell stories to guide and justify our choices.

Why I care is that our political decisions are based at least as much on our schemas for authority and trust and agency in the world as on any economic event or political maneuver. These are the underlying structures that propaganda and “spin” try to tap into. We know very well that voting records or crime statistics or assassinations make no sense except in the context of what folks already understand about the world. For example, what I know about President Halliburton draws as much on my childhood reverence for authority, my ideas about oil tycoons, the image of Dr. Strangelove in his wheelchair, and my knowledge of the revolving door between government and business, as on recent news articles about Halliburton contracts.

Jeffrey Victor describes how reformers develop “perceptual frames” that both draw from and shape our existing schemas:

When moral crusaders strive to arouse public awareness about a newly recognized social evil, they must be able to offer explanations of the causes of that evil and propose credible ways of getting rid of it. They must cut through the inevitable complexity and ambiguity by framing the problem in a way that can be widely comprehended. Framing the problem sets the evil within a much broader scope of moral concerns. It provides the basic interpretive assumptions through which the evil can be redefined and linked to other social evils in society. . . .
A frame functions like a cultural model or paradigm which organizes people’s shared preconceptions. . . . Most importantly, the framing of the problem in a convincing way encourages volunteers to spend some of their time, energy, and money in the joint effort of a social movement (217-18 ).

Here Victor is talking about the Christian campaign against “Satanism”. But it’s a useful way to see other kinds of political work as well. It was only when I looked up Adam Smith for my GED class that I understood how closely related from the git-go were the emerging Enlightenment theories of democracy and capitalism-- relationships between individual self-interest and social order, between economic contracts and social contracts. Wasn’t it Locke and them who talked about human beings as infinitely malleable, that is, as directable to virtue as to vice?


We use schemas to focus on what’s important and filter out what’s not. Cummins (72) offers the example of conversation at a cocktail party, the background buzz that we mostly ignore, until, amidst the hubbub, we hear our name and zoom in on the speaker: "The ability to ignore all information except that which is high priority is vital to our ability to function." If we couldn’t screen out the “combinatorial explosion” of irrelevant information (Jeremy Campbell 108)-- the itch behind the knee, the mote cruising the surface of the eye, the noise of the computer fan, our body odor, the lawnmower across the street-- we would be overwhelmed by the “blooming, buzzing confusion” of the world (Jeremy Campbell 98), unable to understand or act. Apparently autistics can’t perform this screening process very well.

At the same time, these schemas help us know what to look for. Contrary to common understanding, science is not primarily inductive, does not develop theory from comprehensive surveys of accumulated fact; and neither do we. Ideally scientists, we’re told, start with a problem (necessarily defined by prior understanding and values), observe related events, and develop clearly defined hypotheses to guide their research. Our day-to-day learning is similarly grounded in and shaped by practical concerns. We don’t go off any which way. “Our conjectures and opinions, expectations and beliefs, are, in Popper's words, 'nets in which we try to catch the real world'” (Jeremy Campbell 133). In a restaurant we expect food, not circular saws. We listen to the boss differently than we listen to our friends, or the folks who come to fix the roof. As a teacher, I ask different questions of the teens and the older single moms. “Free trade” and “fair trade” are very different stories about the same phenomenon, and the New York Times asks very different questions about it than does a textile workers’ union.


• Once we have these schemas and the contexts they inhabit, they help us fill gaps in the information we get from the world. You might have had the experience of hearing the loudspeaker in an airport. Often the noise is barely recognizable as human speech. But usually we can make out the words, because we know what to expect: departure times, flight numbers, destination cities.

Visually, we do this all the time. Each one of us has a blind spot in each eye, where the retina attaches to the optic nerve, but we’re rarely aware of it because our brain automatically fills in the missing information (Franzen 83). The lenses of our eyes turn the images of the world upside-down, but again we have no trouble operating because the brain adjusts. Few Americans misinterpret the two dots and a curved line that signify the smiley-face, :), annoying as it is. Remember the movement in the tall grass? From the patterns in our minds we can recognize animals from a comparatively few photons of information. The art of camouflage is based on breaking up these patterns.

*~*~*~*~*~*~*~*~*~*~*~* Can you read this? *~*~*~*~*~*~*~*~*~*~*~*~*
Msot ppleeo cna fgruire otu waht tihs stnneece syas, buaeecs our brinas are gdoo at mcthnaig dirtotsed inoamrftion ot prtteans we knwo, and cetorrnicg it.*
*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
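The trick in the box above is easy to reproduce: as long as each word’s first and last letters stay put, the interior letters can be shuffled and the overall word shape still matches a pattern we know. A small generator (my sketch, not from the text):

```python
import random

def scramble_word(word, rng):
    """Shuffle a word's interior letters, keeping first and last in place."""
    if len(word) <= 3:
        return word  # nothing interior to shuffle
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_sentence(sentence, seed=0):
    """Scramble every word in a sentence, reproducibly via a seed."""
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in sentence.split())

# e.g. scramble_sentence("Most people can figure out what this sentence says")
# keeps each word's length, first letter, and last letter intact.
```

Because length and the anchor letters survive, the distorted words still activate the right patterns, and our brains quietly correct the rest.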

On a grander scale, we make up stories all the time, often without being aware of it. Perhaps the most impressive everyday examples are the stories we weave from our dreams. While we sleep our brains do something like a system check, riffing through the circuits, triggering neurons like lights on a Christmas tree. Nightly we assemble complex and vivid narratives from unsorted fragments of memory, dimly perceived sensations, and minute changes in the chemical balance of our bodies. Occasionally I wake suddenly, remembering only the images; immediately, automatically, I start ordering them into a story. Sometimes I wake from a bad dream, and go back to sleep, and change the ending to one I like better. Our dreams are often so compelling we see them as allegories of the real world --as well they might be, since of course we use our knowledge of the world to impose order on these nearly random sensations-- or even as prophecy. But dreams are not messages from the world, they are the order our minds impose on unrelated signals.


• We match patterns to solve problems.
Wolfgang Köhler pioneered food-reaching experiments with apes.

The smartest of the chimps was named Sultan. One of the tasks Sultan was given required him to retrieve some bananas that were outside his cage. He readily used a long pole that was made available to him to pull the bananas toward him. Then he was given two poles, each of which was too short to reach the bananas but which could be fitted together to make one pole long enough to reach them. Sultan spent a long frustrating time trying to reach the bananas with one pole and then the other, culminating in his stomping off in frustration and sulking. His keeper later observed him playing with the poles, during which he managed, quite by accident, to put the two together. Sultan immediately recognized the enormity of his discovery and took off for the edge of his cage, where he used the newly elongated pole to fetch the much desired bananas (Cummins 186).

People aren’t so different. We’re tinkerers, a try-it-and-see kind of critter. But we don’t try just anything, because we start with a fund of knowledge about what’s likely to work. "Most of our cognition consists of primitive yet extremely powerful and fast pattern recognition and classification processes" (Cummins 186).

At one level, we recognize that that ripple in the grass is a leopard, and not a tasty bird thrashing around. Or, drawing on previous experience, we see the kind of ground and vegetation that signifies tasty tubers in the neighborhood. We recognize complex social patterns as well, and use them to navigate the social landscape.

We compare problems, to see what features are the same, and if solutions can be transferred or adapted from one to another. As I argue below, we need more Stories; and part of my thinking about this comes from my understanding of the massive redundancy of natural systems, from autoimmune mechanisms to sperm counts, to compensate for all the things that could go wrong. I’m applying a pattern I’ve seen in one context to quite a different realm. For that matter, schema theory itself grew in part from earlier resource-based models of nature and society; but here, instead of sunlight and ecosystems or power and institutions we talk of information flows and neural networks. “One very powerful mechanism for integrating new knowledge with old knowledge is analogical reasoning. Reasoning analogically basically means working out how 'this is like that'” (Cummins 180).


We’re not aware of all the work our brains are doing.
When you tie your shoelaces you probably don’t think too much about it, but suppose you did? How would you write the rules for tying shoelaces? In fact it’s an extraordinarily involved process, which most of us perform daily without paying it the least bit of attention. A large part of our thinking and action goes on automatically, below the level of our awareness. Considered in terms of functions and levels or simply in terms of the physical parts of the brain, our minds are very complicated systems, of which our consciousness is only a small part. Our minds are active every moment, processing information from our memories, our bodies, and the rest of the world, but only a small proportion of those operations makes it to our conscious consideration. And a good thing, too-- so we’re not bogged down in decisions about breathing, or tying shoelaces, or the way to drive to work.

“The mind, considered simply as a very fancy computer, hides many of its operations behind an opaque screen. Far from being otherworldly, many of these backstairs operations are ‘this-worldly,’ in the sense that they are part of everyday intelligence, enabling us to deal with the complexities of ordinary existence in a partly automatic way, as if life were a bicycle and we have learned to ride it. . . . Unconscious thinking seems to be different in kind from conscious thinking, but that does not mean it is irrational, in Freud’s sense of the word” (Jeremy Campbell 206).


* Most people can figure out what this sentence says, because our brains are good at matching distorted information to patterns we know, and correcting it.
