Subsumption architecture


Subsumption architecture is a reactive robotic architecture, heavily associated with behavior-based robotics, that was very popular in the 1980s and 1990s. The term was introduced by Rodney Brooks and colleagues in 1986.[1][2][3] Subsumption has been widely influential in autonomous robotics and elsewhere in real-time AI.

Overview

Subsumption architecture is a control architecture that was proposed in opposition to traditional symbolic AI. Instead of guiding behavior by symbolic mental representations of the world, subsumption architecture couples sensory information to action selection in an intimate and bottom-up fashion.[4]: 130 

It does this by decomposing the complete behavior into sub-behaviors. These sub-behaviors are organized into a hierarchy of layers. Each layer implements a particular level of behavioral competence, and higher levels are able to subsume lower levels (that is, integrate and build on them to form a more comprehensive whole) in order to create viable behavior. For example, a robot's lowest layer could be "avoid an object". The second layer would be "wander around", which runs beneath the third layer "explore the world". Because a robot must have the ability to "avoid objects" in order to "wander around" effectively, the subsumption architecture creates a system in which the higher layers utilize the lower-level competencies. The layers, which all receive sensor information, work in parallel and generate outputs. These outputs can be commands to actuators, or signals that suppress or inhibit other layers.[5]: 8–12, 15–16 
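
To make the layering concrete, the following Python sketch is a hypothetical, highly simplified rendering of the "avoid an object" / "wander around" / "explore the world" example. The class names, thresholds, and arbitration here are illustrative assumptions, not Brooks's original wiring; real layers are networks of augmented finite-state machines (described below) rather than objects:

  import random

  class AvoidLayer:
      """Layer 0: steer away from nearby obstacles, otherwise follow a heading."""
      def command(self, sonar_distance, heading):
          if sonar_distance < 0.5:             # obstacle close: reflexive turn
              return {"turn": 90.0, "speed": 0.0}
          return {"turn": heading, "speed": 0.3}

  class WanderLayer:
      """Layer 1: occasionally pick a random heading; otherwise stay silent."""
      def heading(self):
          if random.random() < 0.1:            # every so often, wander somewhere new
              return random.uniform(-45.0, 45.0)
          return None                          # silent: the lower layer keeps its default

  class ExploreLayer:
      """Layer 2: head toward a goal, replacing (suppressing) the wander heading."""
      def heading(self, goal_direction):
          return goal_direction

  avoid, wander, explore = AvoidLayer(), WanderLayer(), ExploreLayer()

  def control_step(sonar_distance, goal_direction=None):
      """One control cycle: higher layers inject headings into the avoid layer."""
      heading = 0.0                            # layer 0's default: go straight
      wander_heading = wander.heading()
      if wander_heading is not None:           # layer 1 suppresses the default heading
          heading = wander_heading
      if goal_direction is not None:           # layer 2 suppresses the wander heading
          heading = explore.heading(goal_direction)
      return avoid.command(sonar_distance, heading)

  print(control_step(sonar_distance=2.0, goal_direction=30.0))
  print(control_step(sonar_distance=0.2))      # obstacle avoidance still takes over

Note that in this sketch the explore layer never reimplements obstacle avoidance; it only replaces the heading that the lower layers would otherwise use, so the lower-level competence remains in force.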

Goal

Subsumption architecture attacks the problem of intelligence from a significantly different perspective than traditional AI. Disappointed with the performance of Shakey the robot and similar projects inspired by representations of a conscious mind, Rodney Brooks started creating robots based on a different notion of intelligence, one resembling unconscious mental processes. Instead of modelling aspects of human intelligence via symbol manipulation, this approach aims at real-time interaction and viable responses to a dynamic lab or office environment.[4]: 130–131 

The goal was informed by four key ideas:

  • Situatedness – A major idea of situated AI is that a robot should be able to react to its environment within a human-like time-frame. Brooks argues that a situated mobile robot should not represent the world via an internal set of symbols and then act on this model. Instead, he claims that "the world is its own best model", which means that proper perception-to-action setups can be used to directly interact with the world as opposed to modelling it. Yet each module/behavior still models the world, but on a very low level, close to the sensorimotor signals. These simple models necessarily use hardcoded assumptions about the world encoded in the algorithms themselves, but avoid the use of memory to predict the world's behavior, instead relying on direct sensory feedback as much as possible.
  • Embodiment – Brooks argues that building an embodied agent accomplishes two things. The first is that it forces the designer to create and test an integrated physical control system, not theoretical models or simulated robots that might not work in the physical world. The second is that it can solve the symbol grounding problem, a philosophical issue many traditional AIs encounter, by directly coupling sense data to meaningful actions. "The world grounds regress," and the internal relations of the behavioral layers are directly grounded in the world the robot perceives.
  • Intelligence – Looking at evolutionary progress, Brooks argues that developing perceptual and mobility skills is a necessary foundation for human-like intelligence. Also, by rejecting top-down representations as a viable starting point for AI, it seems that "intelligence is determined by the dynamics of interaction with the world."
  • Emergence – Conventionally, individual modules are not considered intelligent by themselves. It is the interaction of such modules, evaluated by observing the agent and its environment, that is usually deemed intelligent (or not). "Intelligence," therefore, "is in the eye of the observer."[5]: 165–170 

The ideas outlined above are still a part of an ongoing debate regarding the nature of intelligence and how the progress of robotics and AI should be fostered.

Layers and augmented finite-state machines

Each layer is made up of a set of processors that are augmented finite-state machines (AFSMs), the augmentation being the addition of instance variables that hold programmable data structures. A layer is a module and is responsible for a single behavioral goal, such as "wander around." There is no central control within or between these behavioral modules. All AFSMs continuously and asynchronously receive input from the relevant sensors and send output to actuators (or other AFSMs). Input signals that have not been read by the time a new one arrives are discarded. Such discarded signals are common, and this is useful for performance because it allows the system to work in real time by acting on the most immediate information.
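
As a rough illustration, an AFSM can be pictured as a small state machine plus a few instance variables, driven by whichever input arrived most recently. The Python sketch below is an assumption-laden rendering: the state names, message values, and output format are invented for illustration, and only the discard-stale-input behavior described above is modeled:

  class WanderAFSM:
      """A hypothetical augmented finite-state machine for a "wander" behavior."""

      def __init__(self):
          self.state = "idle"          # the finite-state part
          self.heading = 0.0           # the augmentation: an instance variable
          self.latest_input = None     # only the newest, unread message is kept

      def deliver(self, message):
          # A new message overwrites any unread one, so stale inputs are discarded.
          self.latest_input = message

      def step(self):
          # Each AFSM is stepped continuously and asynchronously from the others.
          message, self.latest_input = self.latest_input, None
          if self.state == "idle" and message == "start":
              self.state = "choosing"
          elif self.state == "choosing":
              self.heading = 30.0      # choose a new heading (stubbed out here)
              self.state = "moving"
          elif self.state == "moving":
              return ("heading", self.heading)   # output to an actuator or another AFSM
          return None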

Because there is no central control, AFSMs communicate with each other via inhibition and suppression signals. Inhibition signals block signals from reaching actuators or AFSMs, and suppression signals block or replace the inputs to layers or their AFSMs. This system of AFSM communication is how higher layers subsume lower ones (see figure 1), as well as how the architecture deals with priority and action selection arbitration in general.[5]: 12–16 
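
The two wiring primitives can be sketched in a few lines. In the hypothetical helpers below, a suppression node replaces a lower signal with a higher layer's signal whenever the latter is present, and an inhibition node blocks a signal outright; the fixed time window during which real subsumption wires keep suppressing or inhibiting after a signal arrives is omitted for brevity:

  def suppress(higher_signal, lower_signal):
      """Suppression: the higher layer's signal replaces the lower layer's signal."""
      return higher_signal if higher_signal is not None else lower_signal

  def inhibit(inhibiting_signal, signal):
      """Inhibition: the signal is blocked while the inhibiting signal is present."""
      return None if inhibiting_signal is not None else signal

  # Example: an "explore" heading suppresses a "wander" heading, and a separate
  # signal (say, while grasping an object) inhibits the drive command entirely.
  heading = suppress(higher_signal=30.0, lower_signal=-10.0)    # -> 30.0
  drive = inhibit(inhibiting_signal=None, signal=heading)       # -> 30.0 (not blocked)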

 
Figure 1: Abstract representation of subsumption architecture, with the higher-level layers subsuming the roles of lower-level layers when the sensory information determines it.[5]: 11 

The development of layers follows an intuitive progression. First, the lowest layer is created, tested, and debugged. Once that lowest level is running, one creates and attaches the second layer with the proper suppression and inhibition connections to the first layer. After testing and debugging the combined behavior, this process can be repeated for (theoretically) any number of behavioral modules.[5]: 16–20 

Robots

The following is a small list of robots that utilize the subsumption architecture.

  • Allen (robot)
  • Herbert, a soda can collecting robot (see external links for a video)
  • Genghis, a robust hexapodal walker (see external links for a video)

The above are described in detail along with other robots in Elephants Don't Play Chess.[6]

Strengths and weaknesses

The main advantages of the architecture are:

  • the emphasis on iterative development and testing of real-time systems in their target domain;
  • the emphasis on connecting limited, task-specific perception directly to the expressed actions that require it; and
  • the emphasis on distributed and parallel control, thereby integrating the perception, control, and action systems in a manner similar to animals.[5]: 172–173 [6]

The main disadvantages of the architecture are:

  • the difficulty of designing adaptable action selection through a highly distributed system of inhibition and suppression;[4]: 139–140  and
  • the lack of large memory and symbolic representations, which seems to restrict the architecture from understanding language.

When it was developed, the novel setup and approach of subsumption architecture allowed it to succeed in many important domains where traditional AI had failed, namely real-time interaction with a dynamic environment. The lack of large memory storage, symbolic representations, and central control, however, places it at a disadvantage when learning complex actions, performing in-depth mapping, and understanding language.

See also

Notes

  1. ^ Brooks, R. (1986). "A robust layered control system for a mobile robot". IEEE Journal of Robotics and Automation. 2 (1): 14–23. doi:10.1109/JRA.1986.1087032. hdl:1721.1/6432. S2CID 10542804.
  2. ^ Brooks, R. (1986). "Asynchronous distributed control system for a mobile robot". SPIE Conference on Mobile Robots. pp. 77–84.
  3. ^ Brooks, R. A., "A Robust Programming Scheme for a Mobile Robot", Proceedings of NATO Advanced Research Workshop on Languages for Sensor-Based Control in Robotics, Castelvecchio Pascoli, Italy, September 1986.
  4. ^ a b c Arkin, Ronald (1998). Behavior-Based Robotics. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-01165-5.
  5. ^ a b c d e f Brooks, Rodney (1999). Cambrian Intelligence: The Early History of the New AI. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-02468-6.
  6. ^ a b Brooks, R.A. (1990). Elephants Don't Play Chess. MIT Press. ISBN 978-0-262-63135-8. Retrieved 2013-11-23.

References

Key papers include:

  • R. A. Brooks (1986), "A Robust Layered Control System for a Mobile Robot", IEEE Journal of Robotics and Automation RA-2, 14–23.
  • R. A. Brooks (1987), "Planning is just a way of avoiding figuring out what to do next", Technical report, MIT Artificial Intelligence Laboratory.
  • R. Brooks and A. Flynn (Anita M. Flynn) (1989), "Fast, cheap, and out of control: A robot invasion of the solar system," J. Brit. Interplanetary Soc., vol. 42, no. 10, pp. 478–485, 1989. (The paper later gave rise to the title of the film Fast, Cheap and Out of Control, and the paper's concepts arguably have been seen in practice in the 1997 Mars Pathfinder and then 2004 Mars Exploration Rover Mission.)
  • R. A. Brooks (1991b), "Intelligence Without Reason", in Proceedings of the 1991 International Joint Conference on Artificial Intelligence, pp. 569–595.
  • R. A. Brooks (1991c), "Intelligence Without Representation", Artificial Intelligence 47 (1991), 139–159. (Paper introduces the concepts of Merkwelt and the subsumption architecture.)

External links

  • SB-MASE is a subsumption-based multi-agent simulator.
  • Subsumption for the SR04 and jBot Robots, DPRG website
  • Develop LeJOS programs step by step, Juan Antonio Breña Moral website
  • Video of Herbert, the soda can collecting robot, YouTube.
  • Video of Genghis, a robust hexapodal walker, YouTube.