Imagine a game played by two players who sit back to back or, in some way, out of sight but within earshot of each other. Each has a game board consisting of a sixteen by sixteen grid of squares. On player A’s board, and only on that one, some of the lines have been thickened to create a maze. Each thickened line represents an impassable barrier.

Player B’s task is to move a token from, say, the bottom left square to the top right in as few turns as possible. The method of moving is to call out a number of squares and a direction, e.g. ‘5 squares right’. Player A now announces if that move is legal, i.e. if it does not cross a thickened line. If it is, and only then, player B makes the move. (Player A must also mirror the move on his own board so that he knows where the next move will start.) If the move is not legal the tokens stay where they are, but the turn is counted against B’s score.
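For readers who like to see rules pinned down, player A’s part can be sketched in a few lines of code. The representation of the barriers as a set of blocked edges between adjacent squares is my own choice, not part of the game as described:

```python
# A minimal sketch of player A as referee. Storing barriers as a set of
# blocked edges between adjacent squares is an assumption of mine.

SIZE = 16
STEPS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def legal_move(pos, direction, distance, barriers):
    """Return the finishing square if the move is legal, otherwise None.

    A move is legal only if every one-square step stays on the board
    and does not cross a thickened line (a blocked edge in `barriers`).
    """
    dx, dy = STEPS[direction]
    x, y = pos
    for _ in range(distance):
        nx, ny = x + dx, y + dy
        if not (0 <= nx < SIZE and 0 <= ny < SIZE):
            return None                           # would leave the board
        if frozenset([(x, y), (nx, ny)]) in barriers:
            return None                           # crosses a barrier
        x, y = nx, ny
    return (x, y)

# Player B, on the bottom left square, calls '5 squares right':
barriers = {frozenset([(3, 0), (4, 0)])}          # one barrier, for illustration
print(legal_move((0, 0), "right", 5, barriers))   # None: the move is illegal
print(legal_move((0, 0), "right", 3, barriers))   # (3, 0): the move is legal
```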
Let us further assume that player B may not mark his board in any way. He has to rely on inference and memory to work out the route. Player A acts purely as informant and arbiter and cannot influence the outcome of the game. In this form of the game there is very little interest or fun for A but quite a lot for B. What is more, by playing repeatedly with the same maze B can improve his performance as he gradually memorises the locations of the barriers and, if he plays with different mazes, he will probably find himself evolving search strategies which lead to better overall success rates. He will develop a feel for the kind of routes that are likely to work. However, working by trial and error exclusively, this may take a long time.
Roles
We could allow A a rather more active role, say as adviser. He could now say things like ‘There are a lot of obstructions towards the middle of the board; better keep to the edges.’ The trouble with this is that the boundaries of such general advice are fuzzy, and A would not know when such advice was necessary or wanted, or whether it had been helpful. However, it would serve to equalise the contributions, and therefore the enjoyment, of the two players. We could go even further and let A give specific instructions, ‘Move two squares up, then four right …’, until the optimum route had been announced. B’s role is now confined to executing these instructions. This maximises A’s part in the game, and therefore his enjoyment, but, since he has full sight of the board, his task is still much less interesting than the task originally set to B. Meanwhile B’s activity has been turned into something very mechanical and boring. One consequence is that there will be no point in playing the game more than once for each form of maze, and that B will end each round without any mental visualisation of the board that A has been using.
Time
Another consequence is that the game will be over much more quickly. This could be presented as an advantage, or at least as a necessity imposed by time constraints. ‘We’ve got twenty of these mazes to do in the next hour. I haven’t got time to listen to your wrong guesses.’ The game has now been turned into a duty or a chore, but if player B is obliged by social convention to play it, he may well accept this explanation and settle to the task of moving his counter over twenty prescribed routes.
Teaching and learning
By now it should be obvious, from the context in which you are reading this, that I intend the game to be an analogy of some parts of the learning/teaching process. Like many analogies it over-simplifies a complex process, and nothing can be proved with it. It leaves out a great deal. I have, for instance, said nothing about how B comes to know the rules of the game, whether A has taught him, or both have been taught by C, or whether the rules are part of their shared cultural background. I have not said who draws the mazes, though evidently this cannot be B. I have not said why A and B are playing, although I have suggested that it is for enjoyment. Languages, though, are not learned just for enjoyment but for practical use in communication. The process of learning to move a counter across a grid under the constraints of arbitrary rules is clearly not an accurate model of the process of mastering anything as complex in structure, in origins and in application as a natural language.
Dichotomies
What the game analogy does suggest, however, is the way in which we tend to think about teaching and learning in simple black-and-white oppositions. In recent years we have been presented with an abundance of dichotomies: control versus freedom, accuracy versus fluency, teacher-centred versus learner-centred approaches, conscious learning versus unconscious acquisition, and formal versus informal learning (see, for instance, Brumfit 1983, Krashen 1982, Ellis 1985). In many of these discussions evidence is presented that something important is lost when the teacher says ‘This is the way to do it’; learners get bored or fail to transfer learning to life. So why do teachers go on saying it?
Mass instruction
In a classroom situation they have to, at least for a part of the time. Imagine player A’s problem if he has to respond to thirty different player Bs, all simultaneously clamouring for rulings. He is almost bound to say at some point, ‘Shut up all of you and do what I tell you.’ Could we get round the problem by having learners work in groups or pairs? To continue the analogy, we could have the various player Bs taking turns to go up to player A and then collaborating by pooling the knowledge they have acquired. But, even if we grant that the mazes they are trying to explore are identical, this is still a makeshift. One may say to another ‘This is how I got here’ or ‘I tried that and it didn’t work’, but these pieces of help depend on a very partial view of the maze and on possibly imperfect recollection. However accurate they may be, the other member of the partnership cannot rely on them as he or she can on player A’s sight of the board.
Exploratory learning
In spite of the over-simplification, the game analogy and the exploration metaphor underlying it describe a process which is very familiar to the parents of a toddler learning its first language. The caretaker, i.e. the parent, playgroup teacher or baby-sitter, is sometimes treated like a puzzle to be solved. New utterances are tried out to see if they work and, if they do not work the first time, are repeated louder and more reproachfully while the caretaker casts around trying to understand what the child wants. Both sides try to modify what they say until there is a breakthrough and the message is understood. Once a child has found out how to formulate questions, it often goes through a phase of asking questions continually. There is no coherent line to the questioning. Anything which enters the field of consciousness is tackled as it is perceived and discarded the moment something looms larger. The child does not seem particularly grateful for the answers or even very interested; it just asks more questions.
When you learn a foreign language you do not normally let yourself behave like this, partly because you have already learned rules of conversational behaviour, one of which is ‘be relevant’. However, a pattern of activity rather like that of the toddler is used in some of the so-called fringe methodologies, notably Community Language Learning (CLL) and The Silent Way. In CLL any member of the group of learners can start an exchange in any way they please, calling on the teacher to help them express the utterance in the foreign language. In Silent Way the learners have the task of communicating with a teacher who, apart from a tiny initial demonstration, refuses to speak at all, responding only with nods and shakes or by moving coloured blocks around. This puts the learner in a position similar to that of the toddler trying to make its parent understand what it wants.
The machine as teacher
The conventional labels for Player A and Player B are ‘teacher’ and ‘learner’. These labels, however, are overlaid with associations gathered from centuries of mass education in institutions, places where the teacher is a salaried and trained authority figure. Inevitably, when computers began to be developed with enough flexibility and sophistication to be used for education, it was the role of ‘teacher’ that was assigned to the machine. This was in spite of considerable and understandable emotional resistance to the idea of a machine replacing a human being. Much was written in the sixties and seventies claiming that the machine was only an auxiliary whose main task would be to take over tasks that human teachers (namely the trained professionals) could not or would not handle, in particular individual work with students who had special goals or severe remedial needs. It was also claimed that the machine would handle the tedious tasks of drilling, releasing the teacher for creative work in free forms of activity such as discussion.
None of this defence could conceal the reality that what those early computer programs were doing was presenting facts, setting exercise tasks, and evaluating responses. They were definitely cast as Player A in his most active and interfering role.
Since I would like to reserve the word ‘teacher’ for a wider use, I have had to coin a special term for Player A when he takes over this directive function, when he tells player B where to move. I call him magister. He (the classical nomenclature is my excuse for the masculine pronoun) wears an academic gown to show that he is qualified in subject knowledge. Visible in his top pocket is his salary cheque, symbolising the security of tenured appointment. In one hand he holds a handkerchief, symbol of the care and concern which (we hope) he feels for individual learners. In the other he carries a cane, symbolising his authority to evaluate, praise and censure. In front of him is the book, the symbol of the order of events, the structure which is imposed on him by the syllabus makers and which he will impose on the learners by means of the lesson plan.

The machine as magister
The computer is very good at performing some of the typical activities of a magister, and very poor at others. It is good at presenting statements and illustrating them with examples, since it can continue drawing on a store of ready-made examples or assembling random examples from a substitution table for a period which would outlast the patience of a human. Similarly it can continue with a questioning sequence or repeat explanations beyond the point where humans would lose their tempers. But it is poor at assessing answers, since it cannot store every wrong answer that might be given and make an appropriate response to each, nor can it judge the reason for a particular wrong answer, whether carelessness, faulty grasp of a principle, sheer perversity or, perhaps, experiment. It is very poor indeed at conveying enthusiasm or showing love, whether love of the learner or love of the subject matter.
The human as magister
At this point the simplicity of my original maze-game metaphor breaks down, since it suggests that magisterial teaching is always wrong. After all, in the game it destroys the point of the activity. In practice, however, magisterial teaching can often be what is demanded and needed. It is as if Player B asks to be ‘talked through’ a few mazes as preparation for the time when he starts to solve them without help. In the same way learners will often demand a structured order of events or a magisterial exposition of new matter. They may rightly scorn the teacher who says ‘What would you like to do now?’ and respond ‘You tell us; you’re the expert’ or ‘Why haven’t you prepared a lesson?’
Human teachers, therefore, assume the magisterial role, not only because it is forced on us by the circumstances of mass teaching, but also because it is sometimes right. Several millennia of experience with it have made us fairly good at it. The best teachers have enthusiasm and can communicate a love of their subject. They are highly sensitive to student needs and flexible in their responses to need. But even this flexibility may not redeem entirely the loss of autonomy that player B, the learner, suffers when the teacher tells him what to do next. If I wanted to learn a game, say chess, I would probably read a book or attend lessons which explicitly taught the rules rather than try to deduce the rules by watching or joining in a game in progress. I would be ready, at that stage, to accept authority and to adopt a purely responsive role, answering questions rather than asking them. I would need a magister and would accept a person, a book, or even a computer in that role. Sooner or later, however, I would become discontented with the magister and would want to take on a more active role for myself. In the case of learning to play chess it would not be hard for me to do this, but in many curricular subjects the pleasure of ‘playing the game’ may be deferred for an unreasonable length of time.
Pedagogue
Let us think of a different kind of teacher, the assistant, often young and only partly trained, a native speaker of the language his students are learning, visiting their country mainly to improve his own grasp of their language. His classes are often labelled ‘conversation’ and are not part of the examined syllabus. Lesson time with him is unstructured, since he has not had the training to conduct magisterial teaching. He may well function just as informant: ‘tell us about food, marriage, pop music, the police … in your country.’ Or he may entertain with songs, with talks and reading aloud, or with games.
The obvious way in which we can computerise the functions of an assistant is in language games. But again it is worth looking at the underlying image we have of the games-playing computer and of the assistant. The name I have for this kind of teacher is pedagogue, a word which originally meant ‘the slave who escorts the children to school’. So think of a man in sandals and a cheap cotton robe, walking five paces behind the young master. He carries the young master’s books for him, but no cane. The young master snaps his fingers and the pedagogue approaches. He answers the young master’s questions, recites a poem, translates words, plays a game, or even, if that is what the young master demands, gives a test. The young master snaps his fingers again, and the pedagogue goes back to his place. He hopes he has given satisfaction, since otherwise he may starve.

The language assistant and the au pair girl are the last survivors of a pedagogue tradition that stretches from classical times, through the private tutors employed by wealthy families and the ballet and fencing masters attached to aristocratic courts, to their much diminished role nowadays. Pedagogue is no longer a career. We cannot afford pedagogues, and we have a bad conscience about slavery and the exploitation of people in menial roles. Instead we use libraries or, increasingly, computerised information systems. One of the clearest ways in which we have given machines the slave role is in word-processing; the computer becomes the scribe, copying out the master’s words neatly but not daring to alter them, though it may raise a tentative question about spelling or even grammar. We seem, however, to have lost the knack that we once possessed of exploiting pedagogues, of asking questions and making demands on a slave without stopping to ask if the demands were reasonable. Such behaviour may be despicable towards another human being. It is surely reasonable when your slave is made of transistors and plastic.
Curiously enough, one of the consequences of employing a slave is that one has to do the slave’s thinking. When a slave is trained to give unquestioning obedience, it becomes the master’s responsibility to think out the task in advance and to give effective orders. Many people who have employed servants or supervised junior office staff have tales to tell of instructions which have been obeyed literally, often with comic or disastrous results. Typical is the account of the junior clerk, sent to cash a large cheque and asked, on his way out, to ‘bring back some parking meter change’. He returns several hours later with sacks of coins. The computer behaves to a great extent like this clerk, and it is the user who must develop the skill of anticipation and exact description of a task.
Programmed Learning
The pioneers of computer-assisted learning never considered giving the machine any role other than magister. They saw teaching as a process of dialogue between teachers and learners, and looked only for ways of enabling their limited and basically stupid machines to conduct the teacher’s side. They found the solution in the system known as Programmed Learning (PL). PL fitted well into two prevailing orthodoxies of the fifties, behaviourist learning psychology and structural linguistics. Underlying PL is a belief that a body of knowledge can be reduced to a set of very small steps, each of which can be expressed in a brief verbal message and is easily learnable. Each step is turned into a frame which contains a small amount of exposition, followed by a question or task. The learner attempts the task, checks the answer and proceeds to the next frame (or, in the case of branching or Crowderian PL, is directed via a multiple choice format towards the next appropriate frame). PL does not in fact need a computer or any other machinery; it can be used just as effectively in paper forms, and computers used exclusively for PL are sometimes known disparagingly as ‘page-turners’. The real magister is not the machine but the person who wrote the material and imagined the kind of conversation he or she might have with an imaginary student. It is this displaced magister who is doing the initiating, task setting and evaluating; the machine is merely the medium for transmitting the script.
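As a concrete illustration, a branching PL sequence can be represented as nothing more than a table of frames, each naming its successors. The sketch below is my own, and the frame contents are invented, not taken from any published course:

```python
# A minimal sketch of branching ('Crowderian') Programmed Learning. Each
# frame holds a little exposition, a task, and a map from answers to the
# next frame. The frame contents are invented for illustration.

frames = {
    "start": {
        "text": "Most English nouns form the plural with -s.\n"
                "What is the plural of 'book'? (a) books (b) bookes",
        "next": {"a": "end", "b": "remedial"},
    },
    "remedial": {
        "text": "No: add -s alone, with no extra e.\n"
                "Try again. The plural of 'book' is: (a) books (b) bookes",
        "next": {"a": "end", "b": "remedial"},
    },
    "end": {"text": "Correct. End of sequence.", "next": {}},
}

def run(frame_id="start"):
    while True:
        frame = frames[frame_id]
        print(frame["text"])
        if not frame["next"]:
            break
        answer = input("> ").strip().lower()
        frame_id = frame["next"].get(answer, frame_id)  # re-show on bad input

if __name__ == "__main__":
    run()
```

Nothing here needs a computer at all, which is precisely the point: the same table could be printed on paper, and the machine is merely turning the pages.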
Machine worship
In our general awe of modern science and of computers, it is easy to forget that a computer is essentially a responsive device. It does not naturally initiate; it does not want to talk to you. Leaving aside the complex logging-on sequences which one must go through with a time-shared mainframe or mini, the word the machine often displays when you switch on is READY. From that point it is waiting for you to use whatever may be in its memory. It will only pose questions or demand responses if a programmer has created a program which makes it do so. Responding to and obeying the machine’s commands ought to be a deliberate and temporary act of will on our part, something we agree to do because we anticipate some satisfaction from it. There are dangers if we forget this and fall into an inappropriate pattern of respect towards the machine.
The man who has appreciated these dangers and has written on them most cogently is Joseph Weizenbaum. In his book Computer Power and Human Reason (1976) he takes as his starting point the reactions on the part of colleagues and laymen towards his famous program ELIZA.
ELIZA, written in 1966, demonstrated that it was far easier than had been supposed hitherto to mimic natural language dialogue on a computer. The user could type in sentences and would then see, either on a screen or a teletypewriter, plausible answers. A series of such inputs and their answers resembled a conversation in which the machine appeared to have understood the human.
DOCTOR
The program actually consisted of two elements: a general algorithm which identified phrase boundaries and carried out a limited range of grammatical transformations (such as replacing ‘I’ with ‘you’), and a database which Weizenbaum called the script. The script consisted of a list of keywords or phrases which the program would look for when the user typed in a sentence, and a set of responses or response frames which would be printed out. Most of the responses were associated with particular keywords, but there were also some non-committal responses, such as ‘Please go on’, which the program would print if it had not found any of its keywords. Weizenbaum drew an analogy between ELIZA and an actress, capable of conversation but with nothing to say until she had learned her lines (1976:3). The best known script, and the one which was most often used for demonstration, was DOCTOR, in which the computer takes over the role of the psychiatrist in an interview conducted according to the principles of the Carl Rogers school of psychotherapy. This was particularly appropriate, since it is characteristic of such interviews that the psychiatrist directs as little as possible and echoes the patient’s words frequently.
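The mechanism is simple enough to sketch in a few lines. The keywords, response frames and pronoun swaps below are invented for illustration; they are not Weizenbaum’s own script:

```python
# A minimal sketch of the ELIZA pattern: hunt for a keyword, apply a few
# grammatical transformations to what follows it, and slot the result
# into a response frame. The script contents are invented.

import random

SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

SCRIPT = {
    "i am": ["Why do you say you are {rest}?",
             "How long have you been {rest}?"],
    "mother": ["Tell me more about your family."],
}
DEFAULTS = ["Please go on.", "I see."]       # used when no keyword matches

def reflect(text):
    """The limited grammatical transformations: 'I' -> 'you', and so on."""
    return " ".join(SWAPS.get(w, w) for w in text.split())

def respond(sentence):
    s = sentence.lower().strip(".!?")
    for keyword, frames in SCRIPT.items():
        idx = s.find(keyword)
        if idx >= 0:
            rest = reflect(s[idx + len(keyword):].strip())
            return random.choice(frames).format(rest=rest)
    return random.choice(DEFAULTS)

print(respond("I am unhappy about my exams."))
# e.g. 'Why do you say you are unhappy about your exams?'
```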
The pathetic fallacy
Weizenbaum found three reactions to the program which disturbed him, two of which concern the present discussion. The first, perhaps less important, was the tendency to anthropomorphise and then to become emotionally involved with the program. He reports, for instance, how his secretary once asked him to leave the room because she wanted a private conversation with ELIZA. She clearly expected some kind of comfort and intimacy from the experience, in spite of the fact that she had been present while the program was being developed and therefore knew in outline how its effects were achieved (1976:6).
There is, of course, a universal tendency for us to anthropomorphise and form emotional bonds with machines and tools, and to treat any unpredictability in their behaviour as if it showed will or understanding. This does not usually reach the level of delusional thinking, but, Weizenbaum writes:
If [man’s] reliance on such machines is to be based on something other than unmitigated despair or blind faith, he must explain to himself what these machines do and even how they do it. … Yet most men don’t understand computers to even the slightest degree. So unless they are capable of very great skepticism (the kind we bring to bear while watching a stage magician), they can explain the computer’s intellectual feats only by bringing to bear the single analogy available to them, that is their model of their own capacity to think. No wonder, then, that they overshoot the mark. (1976: 9-10).
The magisterial fallacy
Even more disturbing, though, was a reaction that Weizenbaum encountered among some professional colleagues, namely that the DOCTOR script or a development of it up to some more powerful level could be used as a serious therapeutic aid, providing mechanised psychiatric interviewing at a price far below what a human interviewer would charge. This involved not only an overly optimistic view of the development of which ELIZA-type programs were capable, but a hideously pessimistic and mechanistic view of the human interviewer’s role in therapy and counselling. If the machine could capture all the significant elements of the therapist’s contribution, then the human therapist was capturable, i.e. was contributing nothing but the type of data-processing which was understood and represented in the machine’s program. The same fallacy (I hope it is a fallacy) underlies the efforts that have been made to create bigger and better teaching computers which will solve all educational problems with elaborations of tutorial dialogue, described as ‘intelligent tutoring systems’.
The components of an intelligent machine tutor
To be characterised as intelligent, an educational system needs five components. The first is a representation of the subject knowledge to be taught, about which O’Shea and Self say:
The large majority of the thousands of computer-assisted learning programs in existence ‘do not know what they are doing’ when they teach. The idea of computers ‘knowing’ anything is a difficult one, but for the moment let us simply say that most programs do not know the subject under discussion in the sense of being able to answer unanticipated questions, and do not know enough about the individual student to be able to adapt the teaching session to his needs. If we want to build computer-assisted learning programs to answer unanticipated questions and to individualise teaching--and we assume we do--then we must try to make the necessary knowledge available to the computer. (O’Shea and Self, 1983: 3-4)
The second component, also touched on by O’Shea and Self, is a model of the learner, i.e. an account of strategies that learners may adopt in solving problems together with a means of storing the history of how the current learner has dealt with the problems so far presented. The third component is a means of self-adjustment; the machine, too, has to be able to ‘learn from experience’. The fourth is a channel by which the machine can explain its own decisions and procedures in a way which corresponds to human ways of thinking; this is what Donald Michie describes as the ‘human window’ (Michie and Johnston, 1984:71). The fifth, and in some ways the most important, is a language understanding system, or parser, so that the machine can make sense of the learner’s inputs in natural language. Obviously there can be no ‘unanticipated questions’ if the machine has no grammar with which to understand them.
All of these components are gradually being developed, though only with considerable effort. The effort itself is worth making; as a result of the research, we are beginning to understand a little better how knowledge can be represented and stored, what learning strategies learners use, what kind of explanations they demand, and how language itself works. As I have suggested elsewhere, we gain insights by using the computer as a mirror. But the products, the intelligent tutoring systems created by this work, are still feeble imitations of the human teacher. They are superior, certainly, in one respect: they do not forget things. But they are miserably deficient in three ways. In the first place they have no breadth of knowledge, no ability to find illuminating comparisons in the everyday experience shared by learner and teacher. Secondly, they are insensitive, with very few channels by which they can get messages from the learner. At the moment [1987] the available channels are the keyboard, pointing devices like light pens and mice, and embryonic voice recognition devices which do little more than identify words from a list. None of these provides a way of reading the learner’s immediate feelings. The third of the computer’s deficiencies is that it has no love or enthusiasm to share.
The customers
I do not know when, if ever, we will get mechanical teachers which can outperform human beings. However, there is no reason to wait for machines to become intelligent; the machines already have intelligent customers. A good deal of the effort devoted to intelligent tutoring systems seems to be based on what I call the ‘magisterial fallacy’, the belief that nothing is learned unless it is explicitly taught. Textbooks, machines and, all too often, human teachers tend to treat learners as ignorant idiots who need every problem solved for them. On the contrary, learning requires only a stimulating environment which provides feedback, one in which the laws of cause and effect work normally.
Role reversal
A computer which supplies such an environment can be treated as something to experiment with. One example would be a grammatical parser, i.e. a program which analyses sentences and shows how each word is functioning (its part of speech) and how the words are grouped into phrases and clauses. I have used simple devices of this kind with advanced learners (overseas students at a British university) purely as a means of getting them interested in grammar, but I am sure they could be used at lower levels. One of our programs which uses a simple parser, DODOES, asks the user to type in a statement, and then adds a question tag to it. If the input sentence is ‘HE LOVES CREAM’ it would print out ‘HE LOVES CREAM, DOESN’T HE?’
As it analyses the sentence, it may come back to the user with questions; for instance, if the input statement was ‘BABY SWALLOWS FLY’, it would ask ‘IS SWALLOWS A NOUN OR A VERB?’ If you answer ‘VERB’ it will then ask ‘IS THE PRONOUN FOR BABY HE, SHE OR IT?’ and then produce ‘BABY SWALLOWS FLY, DOESN’T HE?’ according to your answer. If, on the other hand, you had told it that SWALLOWS was a noun, it would ask if it was singular or plural and then produce ‘BABY SWALLOWS FLY, DON’T THEY?’ It has no recognition vocabulary apart from pronouns, auxiliary verbs and negative particles (NO, NOT, NEVER), so it depends on the answers it gets.
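A rough sketch may make the idea concrete. The dialogue, the tiny pronoun table and the crude heuristics below are my own simplification of what the program is described as doing, not its actual code:

```python
# A minimal sketch of the DODOES idea: append a question tag, asking the
# user whenever the grammatical analysis is ambiguous. The word lists
# and the shortcuts (e.g. going straight to DON'T THEY for a plural
# noun subject) are my simplification, not the original program.

TAGS = {"HE": "DOESN'T HE", "SHE": "DOESN'T SHE", "IT": "DOESN'T IT",
        "THEY": "DON'T THEY", "YOU": "DON'T YOU", "WE": "DON'T WE"}

def tag_question(sentence):
    words = sentence.upper().split()
    if words[0] in TAGS:                  # pronoun subject: no ambiguity
        return f"{sentence.upper()}, {TAGS[words[0]]}?"
    if words[1].endswith("S"):            # second word could be noun or verb
        kind = input(f"IS {words[1]} A NOUN OR A VERB? ").strip().upper()
        if kind == "VERB":                # e.g. BABY / SWALLOWS / FLY
            pron = input(f"IS THE PRONOUN FOR {words[0]} HE, SHE OR IT? ")
            return f"{sentence.upper()}, DOESN'T {pron.strip().upper()}?"
        # plural noun: BABY SWALLOWS is the subject, FLY the verb
        return f"{sentence.upper()}, DON'T THEY?"
    return f"{sentence.upper()}, DOESN'T IT?"   # crude fallback

print(tag_question("He loves cream"))     # HE LOVES CREAM, DOESN'T HE?
print(tag_question("Baby swallows fly"))  # asks, then tags accordingly
```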
The point about this is that learners can actually see the effects of different answers; they can readily be prompted into trying out ambiguous sentences, and they are keen to discuss how the program achieves its effects. One by-product of the work is that learners make use of reference grammars to check the machine in doubtful cases. This is a very different use of a parser from those foreseen by most designers of parsing utilities. When asked, they may tell you that their parsers can be used for essay correction. I believe it makes more sense to ask learners to correct what the machine does than vice versa, since in the process they are likely to learn a good deal about the underlying grammatical realities and the rules which the machine is using. In ordinary classrooms opportunities for experiment may be limited; what the machine can do for us is to put the ‘trial’ back into trial-and-error.
The BOOH factor
Too much respect for the machine will inhibit this kind of experimental approach, but fortunately there is in most of us a very healthy counter-instinct, which leads us to want to insult the machine when it is being magisterial. We see this most clearly when the computer prompts us to type in our name. If we can perceive a proper reason for this, then we will enter the name accurately. Otherwise, particularly when we are playing with demonstration programs, most users will enter something playful, such as Brigitte Bardot, Adolf Hitler or Kiss-me-quick. Later we can relish being addressed as ‘Adolf’, particularly if the computer’s messages sound patronising or cosy. ELIZA in particular lends itself to this form of exploitation; one can enjoy oneself trying to enter the most grotesque exaggerations and watch the machine respond in a deadpan or pseudo-caring fashion. Or one can enter nonsense, and laugh as it reflects these inputs as if they made sense. One may learn a certain amount about ELIZA’s language analyser in the process, which may in turn lead to observations about the grammatical structure of language itself; used this way ELIZA can be an interesting piece of language learning material. The instinct towards deliberate irreverence has been christened the BOOH factor by Muriel Higgins (1982), and there are numerous ways in which it can be harnessed to language learning ends.
Magisterial thinking and CLOZE procedure
The habit of thinking magisterially is difficult to shed, particularly among trained teachers. This became very apparent to me during a recent public discussion when Christopher Jones was demonstrating and describing his CLOZEMASTER, a program which creates cloze exercises out of texts in a very flexible way.
Cloze is now a very familiar technique with numerous variations of detail. Originally it was intended to measure the readability of text. A prose extract was printed with the first sentence intact but with words deleted at a fixed interval (usually every sixth, seventh or eighth word) thereafter. The passage was then given to a target group of readers. If they failed to restore 38% of the missing words, the book from which the passage was taken was judged too difficult. If they could restore more than 53%, the book was thought to be one which the group could read independently. Between these figures, the book could be read with support from the teacher.
Cloze quickly moved on to being used as a testing technique, evaluating individual learners’ performance rather than materials. Much of the appeal of the technique lies in the ease with which cloze tests can be constructed, but this was reinforced by experiments which showed that cloze scores correlated extremely well with tests of global ability. This is no doubt due to the fact that in order to restore arbitrarily deleted words the learner must call on a wide range of knowledge: sentence grammar, discourse connectedness, and common sense and general knowledge. However, the individual scoring immediately put into dispute the nature of a right answer. Should the marker accept only the original word, or an acceptable synonym? Several research projects showed that the practical difference between the two styles of marking was negligible, but some notion of fairness and authority has led many teachers to prefer the latter. Meanwhile varieties of cloze exercise proliferated. These included multiple choice cloze; rational cloze, in which words of a given grammatical class would be deleted (bringing cloze much closer to discrete-point grammar testing); a form of cloze without gaps in which the location of the deleted words had to be identified as well as the words themselves; and CLOZENTROPY, an interesting variation developed at Moray House on a computer, in which all student answers are stored, and the individual’s score for an item is based only on his or her closeness to the majority of answers, not on comparison with a nominally correct answer or a table of acceptable answers (Cousin 1983).
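The CLOZENTROPY idea in particular lends itself to a very small sketch. The scoring below, which simply rewards agreement with the other stored answers, is a crude simplification of the published technique, and the data are invented:

```python
# A crude sketch of CLOZENTROPY-style scoring: an answer to an item is
# scored by its closeness to the majority of stored answers, not by
# comparison with the original word. Data and scaling are invented,
# and the real technique is more sophisticated than simple frequency.

from collections import Counter

def item_score(answer, stored_answers):
    counts = Counter(a.lower() for a in stored_answers)
    return counts[answer.lower()] / len(stored_answers)

stored = ["big", "big", "large", "big", "huge"]   # earlier students' answers
for guess in ["big", "large", "enormous"]:
    print(guess, item_score(guess, stored))       # 0.6, 0.2, 0.0
```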
Computer CLOZE
A cloze exercise on a computer makes an already easy process even easier; the machine can read in text from a disc or tape and can insert gaps at whatever interval is asked for, thus making different exercises from the same text. The gaps can be suppressed, shown in a standard form, or shown with dashes for each deleted letter, thus giving a clue to the length of the missing word. The exercise can be printed out for completion away from the machine, or tackled by a student at the screen and keyboard. In the latter case students can have more than one attempt at each answer, and can be offered help (e.g. word length or first letter) after a failure. The computer can process the texts so rapidly that many of the decisions, e.g. about the deletion interval, the kind of help wanted, or about the scoring, can be made by the students themselves; the machine will produce a tailor-made exercise to the student’s specification within a few seconds. But the one thing that a small micro-computer cannot provide is assessment of the acceptability of alternative answers. There is not the storage space to equip the machine with the knowledge that blonde may replace fair if the next word is hair but not if the next word is play.
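A fixed-interval gapping routine of this kind takes only a few lines. The sketch below shows the dash-per-letter option; the names are mine, and it omits the refinement of leaving the first sentence intact:

```python
# A minimal sketch of fixed-interval cloze generation: delete every nth
# word and, optionally, show one dash per deleted letter as a length
# clue. Function and parameter names are mine; unlike the classic
# procedure, it does not keep the first sentence intact.

def make_cloze(text, interval=7, dash_clues=True):
    exercise, answers = [], []
    for i, word in enumerate(text.split(), start=1):
        if i % interval == 0:
            answers.append(word)
            exercise.append("-" * len(word) if dash_clues else "____")
        else:
            exercise.append(word)
    return " ".join(exercise), answers

passage = ("The quick brown fox jumps over the lazy dog and runs far "
           "away into the quiet green woods beyond the river")
gapped, key = make_cloze(passage, interval=6)
print(gapped)   # every sixth word replaced by dashes
print(key)      # the deleted words, for checking answers
```

Because the routine is so cheap to run, the student can ask for a fresh exercise from the same text, with a different interval or a different kind of help, in seconds.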
When I attended Christopher Jones’s demonstration, I was astonished at the extent to which this shortcoming, if it is one, was resented by the teachers present. The machine was inadequate, they felt, if it could not give authoritative rulings on acceptability, if it appeared to mark a ‘right’ answer as ‘wrong’. Many of them could not bring themselves to accept Jones’s counter-argument that the machine’s challenge did not involve notions of rightness or wrongness in language. The program was inviting the learner to restore a piece of written text which had been created by a particular writer on a particular occasion. The learner would win the game by guessing correctly what that writer had written, not by creating an acceptable piece of English with the same meaning. Indeed the effort of guessing often makes students aware of stylistic variation and paraphrases which they might not notice otherwise. None of this carried any weight with some members of the audience, who clearly expected the computer to mirror what they would have done in class, namely give an absolute judgement on each proposed answer.
STORYBOARD / ECLIPSE
I have encountered similar reactions from teachers who have seen or used my own STORYBOARD program (most recently issued as ECLIPSE), or variants of it such as TELLTALE in the Longman QUARTEXT package. This is a development of CLOZE to its logical extreme, where every word of a text is deleted, leaving only indications of word length and punctuation, together with a title to indicate the general semantic area. The learners (this is usually a group task) have to enter words which they think might occur, either content words suggested by the title or function words, some of which are bound to be present. Each correctly guessed word is entered on screen in all the right locations, and the text is built up gradually in jigsaw fashion. The puzzle is challenging and engrossing, and much lively discussion is generated.
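The core of such a program is a masking routine of the kind sketched below. The display convention (one dash per letter) matches the description above, but the names and details are mine, not the published program’s:

```python
# A minimal sketch of the STORYBOARD idea: blank every word but keep
# punctuation and word lengths, then reveal a correctly guessed word in
# all the places it occurs. Names and details are mine.

import re

def display(text, revealed):
    def show(match):
        word = match.group(0)
        return word if word.lower() in revealed else "-" * len(word)
    return re.sub(r"[A-Za-z]+", show, text)

text = "The cat sat on the mat. The mat was flat."
revealed = set()
for guess in ["the", "mat", "banana"]:
    if re.search(r"\b%s\b" % re.escape(guess), text, re.IGNORECASE):
        revealed.add(guess.lower())           # a hit: reveal everywhere
    print(f"after guessing '{guess}':", display(text, revealed))
```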
A common question from teachers, however, is ‘What happens when a learner puts in a spelling mistake?’ The answer is that the machine obeys its instructions. It hunts for, say, the word FREIND and reports that it failed to find it. It has no means of knowing that the learner should have typed in FRIEND. There are some teachers who find this unacceptable. A machine which fails to administer a metaphorical rap over the knuckles when a language error occurs is one they see no point in using.
Spelling
The topic of spelling is one which has strong magisterial overtones, particularly for English. This is due to the arbitrariness of some of our spelling conventions and the difficulty of deriving the correct spelling of a word purely from sound clues. The teacher and the dictionary, therefore, have great authority, and, apart from a few derivation rules, teachers offer few learning procedures other than memorisation. One of the hangovers from the behaviourist era is a fear of mistakes, since behaviourist theory maintained that correct behaviour had to be reinforced and any uncorrected error would be a counter-reinforcement.
Another program of mine, called PRINTER’S DEVIL, takes a set of words assumed to be familiar (e.g. the vocabulary of a coursebook) and systematically mutilates them. The learner’s task is to identify the principle behind the mutilation. Given the words FARMER, APPLE, LOVING, BEFORE and YELLOW, the program might print each word backwards:
REMRAF ELPPA GNIVOL EROFEB WOLLEY
or it might switch the first and last letters:
RARMEF EPPLA GOVINL EEFORB WELLOY
or it might remove all the vowels:
FRMR PPL LVNG BFR YLLW
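Each of these mutilations is a one-line transformation. The sketch below reproduces the three rules just illustrated; the function names are mine, not the original program’s:

```python
# A minimal sketch of the three mutilation rules shown above. The
# function names are mine, not PRINTER'S DEVIL's own.

def backwards(word):
    return word[::-1]                                    # FARMER -> REMRAF

def swap_ends(word):
    return word[-1] + word[1:-1] + word[0]               # FARMER -> RARMEF

def strip_vowels(word):
    return "".join(c for c in word if c not in "AEIOU")  # FARMER -> FRMR

WORDS = ["FARMER", "APPLE", "LOVING", "BEFORE", "YELLOW"]
for rule in (backwards, swap_ends, strip_vowels):
    print(" ".join(rule(w) for w in WORDS))
```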
At more advanced levels the rules may be applied conditionally; the word is left unmutilated unless it has, say, an even number of letters, or ends in a vowel, or begins with a letter in the first half of the alphabet. The activity is closely related to a more generalised game known as Kolodny’s Game or ‘Find the Rule’. The process one asks the learners to go through is:
- recognise some of the words in their mutilated forms;
- work out a process for restoring the correct form from the mutilated form;
- apply this process to other mutilated forms to see if it yields familiar words;
- test the hypothesis on sufficient further examples to confirm or refute it.
The activity is fun, and its educational value may be as much in the thinking and discussion it generates as in any direct learning from the program. Whenever I use or demonstrate this program, however, I can be sure that some teachers will be indignant with me for showing spelling mistakes on screen, thus encouraging students to learn and remember the mutilated forms. The computer seems to have inherited some of the respect that many of us still feel for print: we expect everything in print to be correct and true. Here again is a symptom of our giving the machine a magisterial role, and of crediting learners with too little common sense.
Exploratory programs
Perhaps the area of greatest misunderstanding is that of exploratory programs, to use the term coined by Johns (1982). These are typically very short programs which will execute a morphological change, e.g. add a third-person or plural S-ending or an ING-ending to an input word, or select A or AN in front of a noun or noun phrase. The DODOES program discussed above is an example of an exploratory program. They embody a minute fragment of a grammar, a small set of rules. The learners’ task is to explore the adequacy of the program by giving it a variety of inputs; ultimately they are trying to find the exceptional cases that the program cannot handle, to force the program to make a mistake.
The machine’s role is obviously non-magisterial, since it is responding to any input in any order and makes no evaluation. What worries teachers is the magisterial vacuum. In the A/AN program, for instance, if a student enters FURNITURE, the machine will respond A FURNITURE, meaning, roughly, given that there is a count-noun FURNITURE, the correct form of article to use with it is A. The machine makes no judgement about whether FURNITURE is a count noun. Indeed it will respond just as readily to nonsense words as to English words, and much can be learned about the underlying algorithm by feeding it with nonsense.
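A sketch shows how little such a program needs to contain, and why nonsense inputs are so revealing. The particular rule below, based on the initial letter alone, is my own simplification; its gaps are exactly the exceptional cases learners are meant to hunt for:

```python
# A minimal sketch of an A/AN exploratory program. It applies its one
# small rule to any input, sense or nonsense, and makes no judgement
# about whether the word is really a count noun. The rule (initial
# letter only) is my simplification, and it is deliberately fallible.

def with_article(phrase):
    first = phrase.strip()[0].lower()
    return ("AN " if first in "aeiou" else "A ") + phrase.upper()

for word in ["APPLE", "FURNITURE", "HOUR", "UNIVERSITY", "GLORP"]:
    print(with_article(word))
# It prints 'A HOUR' and 'AN UNIVERSITY': the exceptional cases that
# learners can force it into, since it sees only the spelling.
```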
So, if the magister does not reside in the machine, where can he be found? Who can prevent the activity from becoming a chaos of uncertainty? One obvious answer is a hovering human teacher supervising the group’s activity and intervening when necessary. Another answer is the collective knowledge of the group, who, unless the activity is wildly wrong for their level, will usually make a relevant ruling or summon the teacher in cases of doubt. A third answer is reference books, and one merit of exploratory programs is that they often drive learners to reference books in the search for exceptional cases which the program may not be able to handle. A fourth answer is the dormant knowledge of the individual; given time to remember and an unthreatening atmosphere in which to think, learners can quite often answer their own questions.
Intervention
Intervention, however, is always magisterial. It is an initiative: player A is doing something which player B has not asked him to. Even a magister’s presence can amount to tacit intervention. The magister’s silence is then an assurance to the learner that no major error has yet occurred. Sometimes this is reassuring and valuable. At other times it can destroy the whole point of an activity.
On several occasions I have used computer programs with learners or demonstrated them to teachers and have been overtaken by a power cut or mechanical failure. When this happens I continue by pretending to be the computer. I ask for inputs and respond exactly as the computer would. Since in many cases I have written the algorithms myself, I know them well enough to do this. I have found, though, that the activity begins to liven up only when my class or audience become sure that I will offer no help other than what the computer would have given. Until they can treat me like the machine, they will not join in the activity with zest.
Mario Rinvolucri has designed class activities derived from computer programs written by me or my colleague Tim Johns (Rinvolucri 1985). These include: a simplified form of the STORYBOARD program described above; a version of a program called PINPOINT in which the title of a text has to be guessed from a minute fragment of the text itself; and variations on the exploratory programs A/AN and S-ENDING. Rinvolucri reported to me that teachers who use these activities have found that they do not work well while the teacher is facing the class. Eye contact, smiles and frowns, normally so valuable in most forms of teaching, seem to inhibit guesswork and suppress experiment; the class expect guidance and take it from the facial messages. The solution has been for teachers to turn their backs on the class, so that their responses become impersonal.
The teacher facing the class is a magister, and what he does is in some sense to play God, loving, knowing, guiding, and judging. The teacher with his back to the class, the pedagogue, is playing Nature, impersonal, governed by laws of cause and effect, sometimes appearing cruel, mysterious or stupid. The natural role for the computer is the latter. To exploit a pedagogue or a computer in order to learn requires a willingness to initiate and a readiness to experiment, and this may entail changes of attitude on the part of the learner. Formal education trains learners to become responders, answering questions but rarely asking them. It may require guidance from a magister to begin the process of de-training, so that exploratory learning can begin again.
References
Brumfit, C.J. “Classrooms as language communities.” In Holden, S. (ed) Focus on the Learner. Modern English Publications, 1983.
Cousin, W.D. Computer Clozentropy; Phase One Report. Scottish Centre for Education Overseas, 1983.
Ellis, R. Understanding Second Language Acquisition. Oxford University Press, 1985.
Higgins, John. Language, Learners and Computers. Longman, 1988.
Higgins, Muriel. Student power and the BOOH factor. Winning entry in the 1982 Duke of Edinburgh ESU Prize, 1982.
Hubbard, Phil. Computers in Language Learning. Routledge, 2009.
Johns, Tim. “Exploratory CAI; an alternative use of the computer in teaching foreign languages.” In Higgins, J. (ed) CALL; British Council Inputs, 1982.
Krashen, Stephen. Principles and Practice in Second Language Acquisition. Pergamon, 1982.
Michie, D. and Johnston, R. The Creative Computer. Pelican Books, 1984.
O’Shea, T. and Self, J. Learning and Teaching with Computers; Artificial Intelligence in Education. Harvester Press, 1983.
Rinvolucri, Mario. “Computer ideas for the chalkboard classroom.” Practical English Teacher, 5, 8, 1985.
Weizenbaum, J. Computer Power and Human Reason. W. H. Freeman, 1976.