Artificial Unintelligence: Computer Uses in Language Learning

JOHN HIGGINS
University of Bristol

Many approaches to language teaching seem to assume a teacher who is both proficient in the subject matter and intelligent about deciding how to present it, while also assuming a learner with no proficiency and no intelligence. Under such a model, nothing is learned unless it is explicitly taught; learners have to be given, since they cannot take. Paradoxically, if one adopts an approach which respects the learner's intelligence, it may turn out that the learner wants and needs an unintelligent partner, a partner who will behave in a totally predictable and rule-governed way.

GOD AND NATURE

Some evidence in support of this was offered to me by Mario Rinvolucri while discussing a recent article of his describing experiments in adapting computer exercises to ordinary blackboard classrooms, in other words, playing computer games without a computer (Rinvolucri, 1984). The exercises involved reconstructing a masked-out text by trying to guess the words in it or trying to guess the title of a text from seeing a tiny fragment of it displayed and then gradually increasing the size of the fragment. What Rinvolucri's teachers discovered was that they could not face the class; in a period of sustained guessing, the involuntary signals of approval and disapproval which the teachers transmitted were destroying the exploratory instinct and actually reducing motivation and success. The teachers had to break eye contact and turn their backs on the students (personal communication, April 1984).

One way of describing this phenomenon is to say that teachers facing the class play the role of GOD, loving, caring, guiding, knowing, chiding. Teachers with their backs to the class are in the role of NATURE, cruel and unforgiving, sometimes appearing arbitrary but in fact governed rigidly by laws of cause and effect. Computers can readily be given the role of NATURE if we use them as "unintelligent partners" rather than as pseudo-intelligent tutors. Machines give us the power to dabble, experiment, waste time, but not to bend rules, get shortcut solutions, or get away with errors. It worries me that people who clamour for artificial intelligence solutions to language learning problems may not fully understand the nature of this power. Computers are responsive devices, with no desire to talk to us, to initiate conversation. Such devices can be of enormous use, provided that we take on the responsibility of initiating and that we get into the habit of asking lots of questions.

COMPUTER AS DEMONSTRATOR

Perhaps the most neglected role of the ones we can readily give to computers is that of demonstrator. This first occurred to me five years ago when I was writing my very first BASIC program, a little exercise on word order. The screen displayed four animals, elephant, crocodile, cat, and mouse, crudely drawn in character graphics, and assembled a question in the form of "What has the cat eaten?" or "What has eaten the cat?" The user should answer "The mouse" to the first question or "The crocodile" to the second question. The answers to "What has eaten the elephant?" and "What has the mouse eaten?" would be "Nothing." As I was putting this together, I said to myself, let's have 10 items in the exercise, and we had better have 2 examples. Then I stopped to ask myself, why 2 examples? why not 3? or 300? Since the machine could generate examples randomly forever, why should I decide in advance when the user should switch from observing to doing?
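The generating logic such a program needs is tiny. Here is a minimal sketch in Python (the original was in BASIC), assuming a simple linear food chain in which each animal eats the one below it; the chain itself is an inference from the examples above, not the original code:

```python
import random

# Assumed linear food chain: each animal eats the one after it in the list.
CHAIN = ["elephant", "crocodile", "cat", "mouse"]

def predator_of(animal):
    """Answer to 'What has eaten the <animal>?' (None if nothing eats it)."""
    i = CHAIN.index(animal)
    return CHAIN[i - 1] if i > 0 else None

def prey_of(animal):
    """Answer to 'What has the <animal> eaten?' (None if it eats nothing here)."""
    i = CHAIN.index(animal)
    return CHAIN[i + 1] if i < len(CHAIN) - 1 else None

def next_item():
    """Pick an animal and a question form at random; return (question, answer)."""
    animal = random.choice(CHAIN)
    if random.random() < 0.5:
        question = f"What has the {animal} eaten?"
        target = prey_of(animal)
    else:
        question = f"What has eaten the {animal}?"
        target = predator_of(animal)
    answer = "Nothing" if target is None else f"The {target}"
    return question, answer
```

Since `next_item` can be called forever, the decision of when to stop watching worked examples and start answering need not be built in at all; it can be left to the learner.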

For learners often want to be receptive, want relief from being challenged and potentially humiliated. They may need drill and practice but with an important reservation. Drill as a service, drill on demand but only on demand, is a very different entity from drill which is structured and imposed by an outside agency. Drill itself can become a form of linguistic play, and play is the missing element in much organized learning, an element the computer can give back to us. And to a great extent, what I am proposing is not that the machine should drill the learner but that the learner should drill the machine.

This can be illustrated with a program I designed several years ago, during a programming course in Yugoslavia, as a memory aid for learning the days of the week in Serbo-Croatian. How nice it would be to have a slave to tell me what I wanted to know in context on demand and then to test me if, but only if, I wanted to be tested. So I asked one of my Serbian students to help me write the program I call SERBDAYS, and this is what we produced.

 Mon  Tue  Wed  Thu  Fri  Sat  Sun
                      ^
 +---------------------------+
 | Danas je petak.           |
 | Sutra je subota.          |
 | Juce je bio cetvrtak.     |
 +---------------------------+

 L = left     M = menu     R = right

The user points the arrow at a day, and the three relevant sentences ("Today is … Tomorrow is … Yesterday was …") appear in the box. One can spend as long as one likes just shifting the arrow and looking at what comes up. The program also incorporates a testing phase in which the word for the relevant day in the first sentence is blanked out and has to be typed in. The unusual thing about this phase is that the input is controlled so that only the correct letter is accepted, although it will supply the next letter if you are baffled and press the space bar. If you press something else, nothing happens at all: No bells ring; no little messages appear telling you what a bad girl or boy you are. Similarly, no congratulations appear when you get it right. If I employ a slave, I do not want the slave to criticize me or to congratulate me. I just want the slave to obey orders and report facts. I do not want my computer to be user-friendly, or to put it another way, I do not want my slave to be master-friendly.
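The testing phase's keystroke handling can be sketched as a small loop. The sketch below is an assumption reconstructed from the behaviour just described, not the original code:

```python
def coach_word(target, keystrokes):
    """SERBDAYS-style controlled input: only the correct next letter is
    accepted, the space bar reveals it, and any other key is silently
    ignored -- no bells, no messages, no congratulations."""
    typed = ""
    for key in keystrokes:
        if len(typed) == len(target):
            break                      # word complete
        expected = target[len(typed)]
        if key == expected or key == " ":
            typed += expected          # accept the letter, or reveal it on space
        # anything else: nothing happens at all
    return typed
```

For example, typing `p`, `x`, `e`, space, `a`, `k` against the target "petak" yields "petak": the stray `x` is ignored without comment, and the space bar supplies the `t`.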

The demonstrator role extends to that of playmate or stooge in the kind of program in which we ask the machine to produce language randomly. We then look at what it produces, laughing at what is ridiculous, pondering over what is accidentally profound. The best-known form for such programs to take is the poetry generator, but I always feel disappointed that so much attention goes to free-form poetry when the machine can very readily produce simpler and more everyday language. All it is doing, after all, is making selections from a substitution table, and these can take the form of conversation, of zany narratives as in the game of MADLIB, or of anything else.

Muriel Higgins and I have been working on a suite of such programs called NONSEQUENCES. The machine assembles new proverbs, original advice for tourists in London, pithy proverbial Scottish wisdom, or samples of reported conversation. With an editor, it can handle a great variety of sentence or discourse types, anything indeed that is on a structural syllabus, but the mode of use reverses the roles, turning the learner into the drillmaster. The machine simply churns out samples, while the learners look at the output, decide what they like and want to preserve on the printer, and reject anything which is dull or simply inconsequential. With the PROVERBS component, I have run competitive sessions in which each group of learners tries to assemble the 10 best new proverbs and submits its collection to the vote of the whole class. Here is a sample of the sort of printout the program gives.
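The core of such a generator is nothing more than a random selection from each column of a substitution table. The proverb fragments below are invented stand-ins, since the article does not reproduce the NONSEQUENCES data lists:

```python
import random

# Invented stand-in fragments; the real data lists are longer and editable.
OPENINGS = ["A rolling stone", "A watched pot", "The early bird",
            "An empty vessel", "Every cloud"]
ENDINGS = ["gathers no moss", "never boils", "catches the worm",
           "makes the most noise", "has a silver lining"]

def new_proverb():
    """Assemble a 'new proverb' by crossing the two halves at random."""
    return f"{random.choice(OPENINGS)} {random.choice(ENDINGS)}."
```

The machine's only job is to churn; judging whether "A watched pot gathers no moss" is dull or accidentally profound is left entirely to the learners.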

The next stage, of course, is to ask learners to supply new components to extend the data list. When they do this, they may well find that what they have put in leads to ungrammatical language (the group will quickly spot this even if an individual does not). This may give them insights into the grammar or semantics of the piece of language, which they might not have got so easily from a teacher's correction of their own mistakes.

PARSING

Even though the programs I am talking about synthesize language, I doubt whether anyone would describe them as artificially intelligent. The process of synthesis is entirely rule-bound, and the machine has no means of "understanding" what it is "saying." Yet there is intelligence present during the interaction—the learners' intelligence in assessing, responding to, criticizing, or enjoying what the machine sets up—and it seems that the learners' recognition of the machine's stupidity is a factor in releasing their own intelligence and zest for experiment.

One gets a little closer to the domain of artificial intelligence if one designs programs in which the communication is more genuinely two-way, programs in which the learner has to get messages across to the machine as well as receive messages from it. This means equipping the machine with a parser. The parser, however, does not need to be perfect, provided that we have not endowed the machine with an aura of omniscience. Users will be perfectly ready to modify their language and try other formulations to find ones which work as long as they see the machine as basically stupid. This is already an observed factor in the educational use of adventure games; Daniel Chandler, for instance, has reported real learning benefits among young first language learners who have to learn to reduce their input to two-word commands when they play with adventures (Chandler, 1982). In the process they start thinking about the nature of communication in a new way; they realise that they have to take the listener or reader into account when they speak or write.
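A two-word parser of the kind these adventure games used is easy to sketch; the vocabulary here is illustrative, not drawn from any particular game:

```python
# Illustrative vocabulary for a two-word adventure-game parser.
VERBS = {"take", "drop", "open", "eat", "go"}
NOUNS = {"key", "lamp", "door", "bread", "north"}

def parse_command(text):
    """Accept only a verb-noun pair; reject anything else, forcing the
    player to reduce their language to the machine's level."""
    words = text.lower().split()
    if len(words) != 2:
        return None
    verb, noun = words
    if verb in VERBS and noun in NOUNS:
        return (verb, noun)
    return None
```

"Take key" succeeds; "Please take the key for me" fails, and it is precisely in reformulating the failure that the learner starts thinking about the listener.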

My current work is concerned with devising very simple parsers to cope with user input to logic games and other kinds of exploratory programs. One of these is for a program called TIGLET, in which a tiger cub asks for food and the user must offer it different things to eat. The program works with a classified vocabulary in which every kind of food belongs to one major type, such as meat or dairy product; is associated with one or more adjectives, such as sour or expensive; and is also associated with its typical colours. At the beginning of the game, TIGLET chooses a category, an adjective, and a colour. He will answer "I quite like that" if the food matches on one count, "I like that" if it matches on two, "I love that" if it matches on all three, "I don't like that" if he cannot find a match, and "I have never tasted that" if the user enters a food outside the known vocabulary. The user is trying to guess the three categories; it is interesting that the most useful information comes from what TIGLET refuses rather than from what TIGLET likes. The learner is also trying to communicate in a subset of natural English which TIGLET has been "taught" through the parser. It is a very limited, brute-force parser, needing only to cope with about 300 words and expecting nearly every sentence to be an offer, which greatly simplifies the semantics. The limitations of the parser, the fact that it does not cope with every possible way of making an offer, need not be a drawback, provided that it can handle notions of quantity and count versus noncount.
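TIGLET's scoring of an offer against his three secret choices can be sketched as follows; the vocabulary entries are hypothetical examples, not the program's actual 300-word list:

```python
# Hypothetical slice of TIGLET's classified vocabulary: each food has one
# major type, one or more adjectives, and its typical colours.
FOODS = {
    "cheese":  {"type": "dairy", "adjectives": {"expensive"}, "colours": {"yellow", "white"}},
    "beef":    {"type": "meat",  "adjectives": {"expensive"}, "colours": {"red", "brown"}},
    "lemon":   {"type": "fruit", "adjectives": {"sour"},      "colours": {"yellow"}},
    "yoghurt": {"type": "dairy", "adjectives": {"sour"},      "colours": {"white"}},
}

def tiglet_reply(offer, secret_type, secret_adjective, secret_colour):
    """Grade an offered food against TIGLET's three secret preferences."""
    if offer not in FOODS:
        return "I have never tasted that"
    food = FOODS[offer]
    score = (int(food["type"] == secret_type)
             + int(secret_adjective in food["adjectives"])
             + int(secret_colour in food["colours"]))
    return {0: "I don't like that",
            1: "I quite like that",
            2: "I like that",
            3: "I love that"}[score]
```

If TIGLET has secretly chosen fruit, sour, and yellow, a lemon earns "I love that", while cheese, which matches on no count, earns only "I don't like that" -- and it is the refusals that eliminate categories fastest.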

There is more to TIGLET, however, than just the logic game; one can also teach him new vocabulary and therefore new ways of classifying food. This, from the evidence of early trials, is the most engrossing part of the activity, since it forces learners to think about the consequences of description. Is a tomato a fruit or a vegetable or both? What colours can an apple have? Should one include brown, since rotten apples go brown? TIGLET won't help with answers; he just waits to be taught. Whatever the learners learn has to be learned from experience or induced from the output they eventually get from the program.

OVERT INDUCTION

If one wants to classify acts of reasoning and learning into deduction and induction, that is, as moving from general to particular or from particular to general, then there is little doubt that TIGLET and his kind belong to the inductive approach. TIGLET supplies examples and data, but the user has to create the larger classifications and principles that will make sense of them. Learning can, and usually does, involve both deductive and inductive procedures, but styles of teaching usually show a preference for one or the other. I think there can be little doubt that a strongly inductive style of teaching (notice I do not say "learning") is rather rare, though one could perhaps find it in the Silent Way, in the Total Physical Response method, or in a whole-hearted application of Krashen's Natural Approach. Such inductive approaches, however, share the characteristic of being carried out in an unaware fashion; the learner is being encouraged to think about the meanings to be conveyed and not about the means of conveying them. The kind of exploratory and problem-solving approach which I have been describing differs by encouraging reflection about the means, an overt analysis of language itself. This in some ways parallels the difference between the two major deductive approaches: structural, or drill-based, which promotes habit formation while suppressing conscious analysis of language, and cognitive code, which allows attention to be given to the means as well as to the end. We could display this classification as a grid:

            Deductive           Inductive

Unaware     Pattern practice    Immersion
Aware       Cognitive code      Exploratory

The explicit and aware use of induction is not common in language teaching, except perhaps in the Silent Way. Perhaps we could relate this to the other element which is missing from much formal tuition but which is very common in infant learning—conscious linguistic play. Play, however, needs a playmate, a mother, say, or a sibling or a friend. Teachers are not really fitted for this role; they are too intelligent and take too many initiatives. They bend the rules of the game, often in a well-meaning effort to help.

But that is something the computer will not do. It follows the rules; it is too stupid to do anything else. What we have to realize is that its very stupidity can be turned into an asset, since it releases the learner's intelligence, the learner's hunger for self-knowledge, and the instinct to explore. Until we know that we can make demands on a slave, however unreasonable, we will refrain from demanding enough. What the machine can do for us is turn language learning into an experimental subject, a subject in which the learner tries things out, measures the effect of linguistic choices, and derives perceptions and insights by making sense of authentic data, data which the machine can organize. Just as chemistry students have their chemistry laboratory, so the computer can provide us with a language laboratory, namely an experimental environment for language learning. What we currently call a language laboratory is nothing of the kind, since no experiments occur in it; regardless of what you say into the microphone, the tape will give you its prerecorded response. Computers, in contrast, facilitate and encourage experiment. I would hesitate to say that they are going to make language learning easier, but I am sure that they will make it more exciting.

ACKNOWLEDGMENT

This is a revised version of a paper presented at the 20th Annual TESOL Convention, Anaheim, CA, March 3-8, 1986.

REFERENCES

Chandler, D. (1982). The potential of the microcomputer in the English classroom. In A. Adams (Ed.), New directions in English teaching. London: Falmer Press.
Rinvolucri, M. (1984). Computer ideas for the chalkboard classroom. Practical English Teaching, 5(4), 19-20.