Prof. Michael G. Dyer: Home Page


Overview of Current Research Interests:

I am interested in symbolic, probabilistic, neural, and animat-based approaches applied to a wide range of human natural language (NL) semantic tasks. The animat-based approach involves the use of learning and evolution in populations of simple, communicating, neurally controlled, animal- or insect-like agents embedded in virtual environments.

The tasks are wide-ranging, including both basic NL-related tasks (such as word/phrase acquisition and word-meaning extraction from dictionaries) and more complex NL-related tasks (such as story comprehension and question answering, story invention, argument and editorial comprehension/generation, and humor/irony comprehension); all are described in the sections below.

I restrict myself to processing natural language written text (as opposed to research in auditory speech processing). Written texts include: narratives, editorials, dialogs, jokes, dictionary definitions, and other forms of expository text.

I am also interested in the emergence of cooperation (via learning and/or evolution) among populations of simple, mobile, neurally controlled artificial animals (animats), in order to perform interesting tasks. Animat-based NL-related tasks include the evolution of communication and language and the modeling of cultural transmission (see below).

Non-NL animat-based tasks of current interest are the evolution/learning of construction tasks and the evolution/learning of the exchange of goods and services (see below).

NL Research Interests (Symbolic):

Story Comprehension and Question Answering: Consider the following very short story: "John picked up the bat and hit Fred. There was blood everywhere. The police are looking for him." What did John hit Fred with? [not explicitly stated, must be inferred] Is "bat" a vampire bat? [no, a baseball bat] Is "picked up" here a social action, as in: "John picked up a pretty girl at the bar"? [no, it is physically grasping something] Whose blood is it? [not stated, must also be inferred] Who are the police looking for? [not stated, must be inferred] What meaning representations, inferential mechanisms, memory constructs, etc. are needed to comprehend stories at a semantic level similar to humans?
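
As a concrete (and purely illustrative) sketch of the kind of inference involved, the Python fragment below represents the story as a list of event frames and fills the unstated slots with simple commonsense rules. The frames, slot names, and rules are hypothetical; this is not an actual comprehension system.

```python
# A minimal, hypothetical sketch of frame-based story representation and inference.
story = [
    {"act": "grasp", "agent": "John", "object": "bat-1", "isa": "baseball-bat"},
    {"act": "hit",   "agent": "John", "victim": "Fred"},   # instrument unstated
    {"act": "bleed", "victim": None},                      # whose blood?
    {"act": "seek",  "agent": "police", "target": None},   # looking for whom?
]

def infer(story):
    """Fill unstated slots using simple commonsense rules."""
    last_grasped = {}
    for ev in story:
        if ev["act"] == "grasp":
            last_grasped[ev["agent"]] = ev["object"]
        elif ev["act"] == "hit":
            # Rule: the instrument of a hit is the object the agent last grasped.
            ev.setdefault("instrument", last_grasped.get(ev["agent"]))
        elif ev["act"] == "bleed":
            # Rule: spilled blood belongs to the most recent victim of violence.
            victims = [e["victim"] for e in story if e["act"] == "hit"]
            ev["victim"] = victims[-1] if victims else None
        elif ev["act"] == "seek" and ev["agent"] == "police":
            # Rule: the police seek the agent of the most recent violent act.
            agents = [e["agent"] for e in story if e["act"] == "hit"]
            ev["target"] = agents[-1] if agents else None
    return story

for event in infer(story):
    print(event)  # the 'hit' frame gains instrument='bat-1'; the blood is Fred's; the police seek John
```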

Story Invention: Consider an Aesop fable, such as The Boy Who Cried Wolf. The story involves a complex interaction among multiple planners, contains a goal failure, and illustrates a moral. How might such stories be generated automatically? What are the rules/strategies of story invention in a given domain of knowledge (e.g., the domain of knights and damsels in distress)?
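
A toy sketch of the generation side, under the (hypothetical) assumption that a fable-like plot can be assembled from a protagonist, a goal, a flawed plan with its resulting goal failure, and a matching moral:

```python
import random

# A toy, purely illustrative sketch of fable assembly; not an actual invention system.
characters = ["a shepherd boy", "a young knight", "a court jester"]
goals      = ["be taken seriously", "win the princess's favor", "gain riches"]
plans = [
    ("crying wolf when there was none", "no one believed him when the wolf came",
     "Liars are not believed even when they tell the truth."),
    ("boasting of deeds he had not done", "his bluff was called before the court",
     "Pride goes before a fall."),
    ("hoarding his gold in secret", "a thief carried it all away",
     "Wealth unused might as well not exist."),
]

def invent_fable():
    who = random.choice(characters)
    goal = random.choice(goals)
    plan, failure, moral = random.choice(plans)
    return (f"{who.capitalize()} wished to {goal}. He tried {plan}, "
            f"but {failure}. Moral: {moral}")

print(invent_fable())
```

A real invention system would, of course, need to reason about the interacting planners so that the chosen plan actually serves the goal and the failure genuinely motivates the moral, rather than sampling from canned fragments.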

Argument and Editorial Comprehension/Generation: Consider the following argument fragment: [Fred: All Finns stink at music.] [Risto: What about Sibelius?] Here, Risto has responded with a question, but it is actually a statement attacking Fred's belief, since Sibelius is a famous Finnish composer. Semantically, the attack is accomplished by finding a counterexample. To comprehend and/or engage in the argument, a computer must understand the meaning of "music" and what "stink" means in this context. It must infer Fred's belief and decide whether it holds the same belief; if not, it must decide how best to attack the other's belief.
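
A minimal sketch of the counterexample-finding step, assuming a toy memory of facts about individuals (all names and slots here are illustrative):

```python
# A hypothetical sketch of attacking a universal belief by retrieving a counterexample.
memory = {
    "Sibelius": {"nationality": "Finnish", "profession": "composer",
                 "musical-skill": "excellent"},
    "Fred":     {"nationality": "American", "musical-skill": "poor"},
}

# Fred's belief: all Finns have poor musical skill ("stink at music").
belief = {"quantifier": "all", "class": {"nationality": "Finnish"},
          "property": ("musical-skill", "poor")}

def find_counterexample(belief, memory):
    attr, claimed = belief["property"]
    for name, facts in memory.items():
        in_class = all(facts.get(k) == v for k, v in belief["class"].items())
        if in_class and facts.get(attr) not in (claimed, None):
            return name  # a class member that violates the claimed property
    return None

cx = find_counterexample(belief, memory)
if cx:
    print(f"What about {cx}?")  # the question that functions as an attack
```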

Word/Phrase Acquisition: Consider the sentence: "John was driving a Corvette and hit an abutment going 60 miles an hour." From this context a computer should be able to infer much about the meanings of the unknown words "Corvette" and "abutment". Consider "David took on Goliath." Suppose a computer already knows the meaning of "took" as to-take-possession-of and "on" as on-top-of. How can it use its knowledge of David and Goliath to learn the meaning of the novel phrase "took on" as referring to a semantic construct involving meeting-a-challenge?
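
One (hypothetical) way to sketch this contextual inference is to propagate the selectional constraints of known verbs onto their unknown arguments:

```python
# A hypothetical sketch of inferring properties of unknown words from the
# selectional constraints of the verbs around them.

# Constraints the system is assumed to already know.
selectional = {
    ("drive", "object"): {"isa": "vehicle", "mobile": True},
    ("hit",   "object"): {"isa": "physical-object", "solid": True},
}

# Parsed verb/role/argument triples from the example sentence.
parsed = [("drive", "object", "Corvette"),
          ("hit",   "object", "abutment")]

lexicon = {}
for verb, role, word in parsed:
    if word not in lexicon:  # unknown word: adopt the verb's constraints as a first guess
        lexicon[word] = dict(selectional.get((verb, role), {}))

print(lexicon)
# {'Corvette': {'isa': 'vehicle', 'mobile': True},
#  'abutment': {'isa': 'physical-object', 'solid': True}}
```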

Humor/Irony Comprehension: Many jokes involve errors in reasoning. Consider the following joke: [Fat Ethel was ordering a pizza to go. The clerk asked if she wanted the pizza cut into 8 or 12 slices. "Cut it into 8. I'm on a diet," she said.] To "get" this joke requires both understanding the actions, dialog, and goals of the characters and recognizing/understanding the error in reasoning.

Smart Environments with Personality: A smart environment must understand the NL statements and queries made by a user situated within that environment. An environment with a personality must have its own goals and beliefs, its own emotional states, its own episodic memories, and its own planning strategies. It is not passive but attempts to achieve its own personal goals by engaging users of the environment, discovering their goals, and determining how those goals might interact with its own.

NL Comprehension in Distinct Knowledge/Task Domains: Domains include: (1) legal domain - knowledge constructs for law (rights, obligations), legal case comprehension, legal argumentation and analogical, case-based reasoning, (2) simple mechanical device domain - representation, reasoning, and comprehension of text describing such devices (e.g., scissors, doors, can openers), their structure, statics, and dynamics.

NL Research Interests (Neural/Connectionist):

Mind on Brain: How might high-level cognitive tasks (especially those required for human language) be realized in neural-like architectures? How might the meanings of words, phrases, and sentences be encoded as patterns of activation over ensembles of firing neurons? Current artificial neural network models excel at classification tasks but are very weak at addressing the combinatorial/logical challenges posed by natural language processing. For instance, if in a story Joe buys a car, then the reader infers that Joe now owns the car. How are rules and bindings (that the buyer of x is the owner of x, for any x) implemented in neurons? To tackle such issues, a variety of neural architectures are considered, including: recurrent neural networks, recursive auto-associative memories (RAAMs), self-organizing maps, tensor networks, localist networks, and Katamic networks (whose dendrites act as shift-delay lines). Other high-level issues include: goals and planning via neural networks, modeling question answering and episodic memory via neural networks, machine translation, and acquisition of commonsense and stereotypic knowledge via neural networks.
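
As one concrete illustration of the binding problem, the sketch below uses tensor-product role/filler binding (in the spirit of the tensor networks mentioned above) to encode "Joe buys a car" and to apply the buyer-becomes-owner rule; the vector sizes and names are arbitrary choices for the example.

```python
import numpy as np

# A small sketch of tensor-product variable binding; illustrative only.
rng = np.random.default_rng(0)

def rand_vec(n=64):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

# Role and filler vectors (distributed patterns of activation).
roles   = {"buyer": rand_vec(), "object": rand_vec(), "owner": rand_vec()}
fillers = {"Joe": rand_vec(), "car": rand_vec()}

# Encode "Joe buys car" as a sum of role (x) filler outer products.
buy_event = (np.outer(roles["buyer"], fillers["Joe"]) +
             np.outer(roles["object"], fillers["car"]))

# Rule "the buyer of x becomes the owner of x": copy the buyer binding
# into an owner binding of an inferred ownership event.
buyer_filler = roles["buyer"] @ buy_event          # unbind the buyer role
own_event = (np.outer(roles["owner"], buyer_filler) +
             np.outer(roles["object"], roles["object"] @ buy_event))

# Query: who is the owner?
owner_filler = roles["owner"] @ own_event
best = max(fillers, key=lambda name: owner_filler @ fillers[name])
print("owner =", best)  # expected: Joe
```

Unbinding works here because random high-dimensional role vectors are nearly orthogonal, so the cross terms introduced by the outer-product sum stay small.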

Language Acquisition via Symbol Grounding: How might the meanings of words, phrases, and sentences be acquired via grounding in other sensory modalities (such as vision)? Consider action words such as "passes" and "grows". A child might learn what "passes" means by seeing a variety of moving objects (such as wheeled toys) overtake and pass one another while concurrently receiving verbal descriptions of those events that include the word "passes". Likewise, "grows" can be learned by encountering this word while observing objects that are increasing in size. How might the meanings of such words be represented neurally? How might they be acquired through visual/verbal associations?
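
A small, hypothetical sketch of the visual/verbal association idea: a toy perceptual front end produces feature vectors for observed events, and a Hebbian weight matrix associates those features with co-occurring words.

```python
import numpy as np

# A toy sketch of grounding action words in perception via Hebbian association.
rng = np.random.default_rng(1)

WORDS = ["passes", "grows"]
N_VISUAL = 8  # size of the (toy) visual feature vector

weights = np.zeros((len(WORDS), N_VISUAL))

def visual_features(event):
    """Toy perceptual front end: features for relative motion and size change."""
    v = rng.normal(0, 0.05, N_VISUAL)   # perceptual noise
    if event == "overtake":
        v[0] += 1.0                      # feature: object A moves past object B
    if event == "enlarge":
        v[1] += 1.0                      # feature: object size increases
    return v

# Training: the child observes events while hearing a verbal description.
for _ in range(50):
    for event, word in [("overtake", "passes"), ("enlarge", "grows")]:
        weights[WORDS.index(word)] += 0.1 * visual_features(event)  # Hebbian update

# Test: which word best describes a newly observed overtaking event?
scores = weights @ visual_features("overtake")
print("best word:", WORDS[int(np.argmax(scores))])  # expected: "passes"
```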

NL Research Interests (Symbolic/Probabilistic):

Word Meaning Extraction from Dictionaries: To what extent can the word-definitions and examples of word-use, available in dictionaries, be comprehended so as to create a self-extending semantic lexicon and thus bootstrap the process of word-meaning acquisition?
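
A rough, hypothetical sketch of one bootstrapping step: from a simple "a/an [modifiers] GENUS [rest]" definition, treat the head noun before the first function word as the genus (an isa link) and the preceding words as attributes.

```python
# A hypothetical sketch of building lexicon entries from simple dictionary glosses.
FUNCTION_WORDS = {"used", "with", "that", "for", "which", "of"}

def parse_definition(gloss):
    words = gloss.split()
    if words and words[0] in ("a", "an", "the"):
        words = words[1:]
    head = []
    for i, w in enumerate(words):
        if w in FUNCTION_WORDS:
            rest = " ".join(words[i:])
            break
        head.append(w)
    else:
        rest = ""
    return {"isa": head[-1], "attributes": head[:-1], "notes": rest}

definitions = {
    "corvette": "a small fast warship used for escort duty",
    "scissors": "a cutting instrument with two pivoted blades",
}
lexicon = {word: parse_definition(gloss) for word, gloss in definitions.items()}
print(lexicon["corvette"])
# {'isa': 'warship', 'attributes': ['small', 'fast'], 'notes': 'used for escort duty'}
```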

NL Research Interests (Animat-Based):

Evolution of Communication and Language: Consider an evolving population of neurally controlled artificial animals (animats), each capable of sending out simple signals. By a chance mutation/recombination, an animat (animat-1) is born into the environment (as a result of mating by its parents) that is neurally wired to emit a signal (call it "foo") upon coming upon a cache of food. By another chance mutation/recombination, a second animat (animat-2) is wired to approach the source of the "foo" sound. By following such "foo" signals, animat-2 might find food more easily and so have a higher chance of surviving longer and thus producing more offspring. With the proper selectional pressure, over time a signalling system will evolve in which animats guide one another to food, warn one another of predators, signal receptivity for mating, etc. Under what circumstances will different types of signalling systems evolve? What neural architectures can support the evolution of more complex signalling systems, ones involving syntactic/semantic features of human languages, such as constituent structure, negation, reference to events removed in time and space, indexicals, and so on?
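
The toy simulation below (not any specific published model) illustrates the selectional-pressure argument: each animat carries two binary genes, one for emitting a call at food and one for approaching calls, and each gene pays off in proportion to how common the complementary gene is, so once both arise by mutation they can spread together.

```python
import random

# Toy model: 'emit' = call out when food is found; 'approach' = move toward calls.
random.seed(0)
POP, GENERATIONS, MUTATION = 100, 200, 0.01

pop = [{"emit": 0, "approach": 0} for _ in range(POP)]

def fitness(a, emit_frac, approach_frac):
    f = 1.0
    if a["approach"]:
        f += 2.0 * emit_frac      # followers find caches flagged by callers
    if a["emit"]:
        f += 2.0 * approach_frac  # callers get help exploiting/defending the cache
        f -= 0.1                  # small cost of signalling
    return f

for _ in range(GENERATIONS):
    emit_frac = sum(a["emit"] for a in pop) / POP
    approach_frac = sum(a["approach"] for a in pop) / POP
    weights = [fitness(a, emit_frac, approach_frac) for a in pop]
    parents = random.choices(pop, weights=weights, k=POP)
    pop = [{g: (1 - p[g] if random.random() < MUTATION else p[g])
            for g in ("emit", "approach")} for p in parents]

print("emitters:", sum(a["emit"] for a in pop),
      "approachers:", sum(a["approach"] for a in pop))
# Typically both traits spread once each becomes common enough to reward the other.
```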

Modeling Transmission of Culture: What neural architectures support the ability of a child animat to learn via imitation of its parents and to learn vicariously, through observation of what happens to its peers as they act within the environment? A child animat must attend to the signals and actions of its parents in order to learn from them. The parents must attend to the child, so that the child has an increased chance of surviving (avoiding predators, finding food, etc.). How might parenting evolve? How might a child learn, over time, to behave as an adult and thus be capable of parenting its own offspring?

Non-NL Research Interests (Animat-Based):

Evolution/Learning of Construction Tasks: Many animals and insects build and repair nests, burrows, and hives to protect themselves and their young from predators. A spider is capable of spot-repairing a broken web (without having to rebuild the entire web from scratch). Repair of structures (such as the walls of an enclosure) requires a mental model of what the structure should look like. How are such mental "blueprints" encoded as neural firing patterns, and how do they direct both initial construction and subsequent repair?
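
As a purely illustrative sketch of blueprint-driven spot repair, compare a stored target structure to the currently perceived one and emit build actions only where material is missing:

```python
# A hypothetical sketch of spot repair guided by a stored "blueprint".
blueprint = [
    "#####",
    "#...#",
    "#####",
]
perceived = [
    "#####",
    "#...#",
    "##.##",   # a breach in the lower wall
]

def repair_actions(blueprint, perceived):
    """Return build actions only for cells where material is missing."""
    actions = []
    for r, (want, have) in enumerate(zip(blueprint, perceived)):
        for c, (w, h) in enumerate(zip(want, have)):
            if w == "#" and h != "#":
                actions.append(("deposit-material", r, c))
    return actions

print(repair_actions(blueprint, perceived))  # [('deposit-material', 2, 2)]
```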

Evolution/Learning of Exchange of Goods/Services: Consider a population of simple agents among whom there is a division of labor. Different agents produce and consume different products within different regions of an environment. Assume that these agents are capable of carrying products to other locations and of agreeing upon the exchange of products when encountering one another. Under what circumstances will special trading locations form in space (and recur periodically in time)? That is, what kinds of markets will evolve under different selectional pressures? Under what circumstances will a particular product evolve to serve as a medium of exchange (i.e., as a currency)? Assume that agents are capable of contractual agreements. Can an economy of interlocking contracts evolve to solve complex tasks?
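
The following toy simulation (illustrative only; all parameters are arbitrary) shows one reason a medium of exchange can emerge: pure barter requires a double coincidence of wants, whereas a good that everyone accepts turns far more chance encounters into trades.

```python
import random

# A toy comparison of barter versus trade with a universally accepted good ("good 0").
random.seed(0)
GOODS, AGENTS, ENCOUNTERS = 5, 100, 10_000

agents = [{"produces": random.randrange(GOODS),
           "consumes": random.randrange(GOODS)} for _ in range(AGENTS)]

def trades(use_currency):
    count = 0
    for _ in range(ENCOUNTERS):
        a, b = random.sample(agents, 2)
        # Barter requires a double coincidence of wants.
        barter = (a["produces"] == b["consumes"] and b["produces"] == a["consumes"])
        # With a currency, a sale happens whenever either side produces what the
        # other wants, since the seller will accept good 0 in payment.
        currency = use_currency and (a["produces"] == b["consumes"] or
                                     b["produces"] == a["consumes"])
        count += barter or currency
    return count

print("barter only:", trades(False), "with currency:", trades(True))
```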