Research
What if we could teach robots using language?
This question is being tackled by the field of Interactive Task Learning.
My focus is on creating robotic agents that can learn new tasks
in one shot from natural language instruction.
I take a cognitive architecture approach, investigating issues
of knowledge representation, reasoning, and explanation-based learning.
Rosie
Rosie is the Interactive Task Learning agent I have helped develop in the Soar Lab
at the University of Michigan.
It is implemented in the Soar Cognitive Architecture.
It can learn many different tasks, games, and puzzles, and has been deployed in a
variety of real and simulated environments.
Here is a
demo video
of it learning tasks in a mobile environment.
Domains
Publications
2021 |
Aaron Mininger
My doctoral thesis, describing how I extended Rosie's task learning capabilities along three dimensions: learning tasks with communicative and mental actions; developing a hybrid representation that supports goal-based formulations, procedural formulations, and blends of the two; and learning tasks with temporal, conditional, and repetitious modifiers.
2019 |
Aaron Mininger, John E. Laird
Demonstrates how an agent in a symbolic cognitive architecture with access to a spatial short-term memory can use spatial reasoning and domain knowledge to participate in the anchoring process and to detect and correct bottom-up anchoring errors in its input.
2019 |
John E. Laird, Shiwali Mohan, James Kirk, Aaron Mininger
Identifies and describes the key characteristics of the learning problem inherent to Situated Interactive Task Learning.
2018 |
Aaron Mininger, John E. Laird
Explains how Rosie's task learning was extended to also acquire simple procedural tasks in an integrated manner.
2017 |
Peter Lindes, Aaron Mininger, James Kirk, John E. Laird
Describes Lucia, a new language comprehension system based on Embodied Construction Grammar that is integrated with Rosie.
2016 |
Aaron Mininger, John E. Laird
Describes work done to extend Rosie to a mobile environment and how it handles references to objects that are not visible. Rosie learns different strategies for finding objects through instruction.
2016 |
James Kirk, Aaron Mininger, John E. Laird
Describes work that allows Rosie to learn task goals through visual demonstrations instead of linguistic descriptions, demonstrated on games, puzzles, and everyday tasks with the tabletop robot.
2015 |
Shiwali Mohan, James Kirk, Aaron Mininger, John E. Laird
Identifies and describes nine system-level requirements for collaborative robotic agents that help them support more effective, efficient, and task-oriented dialog.
2014 |
Shiwali Mohan, Aaron Mininger, John E. Laird
Describes a computational model of situated language comprehension that incorporates perceptual knowledge, domain knowledge, and short- and long-term experiences when generating semantic representations.
2012 |
Shiwali Mohan, Aaron Mininger, James Kirk, John E. Laird
Describes an early version of the Rosie project with the tabletop blocks-world domain that can learn nouns (shapes), adjectives (colors), prepositions (spatial relations), and verbs (goal-based actions).