Why is it hard for autonomous systems to do “simple” things?
Humans do a lot of cognitive processing without realizing it. Tasks that seem simple to humans—recognizing the difference between a table and a chair, for example—are done intuitively and unconsciously. This is an ability developed not only over one’s lifetime, but over hundreds of thousands of years through the evolution of the human species. For an autonomous system to perform the same action requires extensive programming.
Recent advances in machine learning have enabled autonomous systems to begin to “understand” and “learn” from data and experience with less need to program specific rules. For example, a system now can be taught to distinguish between a table and a chair by showing it many examples of each. These machine learning algorithms allow autonomous systems to view the world more like human cognition does, recognizing patterns in the things they observe and enabling them to operate more robustly in our unpredictable real world.
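The idea of learning from labeled examples can be sketched with one of the simplest machine learning algorithms, a nearest-neighbor classifier. The feature values below (height and surface area) are purely illustrative; a real system would learn from image data.

```python
# A minimal sketch of learning from examples: classify a new object by
# finding the most similar labeled example. Features are hypothetical
# [height_m, surface_area_m2] values chosen for illustration only.

def nearest_neighbor(examples, query):
    """Return the label of the training example closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: sq_dist(ex[0], query))[1]

# (features, label) pairs: tables tend to be taller with larger surfaces.
examples = [
    ([0.75, 1.20], "table"),
    ([0.78, 1.50], "table"),
    ([0.45, 0.20], "chair"),
    ([0.48, 0.25], "chair"),
]

print(nearest_neighbor(examples, [0.76, 1.10]))  # a tall, large-surface object
```

Showing the system more examples refines the decision boundary, rather than requiring a programmer to hand-write rules for what makes something a table.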
The past two decades also have brought exponential growth in computing power and memory, but even with those improvements, it is impractical to process all possible actions a robot could take and select one to implement—processors would be overwhelmed. To cope with the nearly infinite number of possible actions and outcomes that arise from complexity in environments, autonomous systems need artificial intelligence algorithms that are smarter about picking a course of action quickly and efficiently.
What is the process of decision-making?
Autonomy can be thought of as an iterative process that cycles through four functions: observe, orient, decide and act (OODA). In the OODA loop, each stage informs the next. The speed at which an autonomous system can get through this cycle accurately (the time from sensing to reaction) determines whether it can operate effectively in real time; thus, efficient algorithms are needed for a system to reason about uncertain, complex situations.
Observe: Autonomous systems are designed for particular applications and equipped with domain-specific sensors to observe the world around them. For instance, underwater vehicles are equipped with sonar to be able to “see” underwater, and autonomous cars are equipped with LiDAR and cameras to detect objects around them. These sensors collect vast amounts of raw data that then must be processed in real time to obtain a cohesive understanding of the world.
Orient: The raw data collected by the sensors must be processed efficiently to create a representation of the world that the autonomous vehicle can use to make decisions. Multiple sources of data can be combined through sensor fusion techniques, yielding important information, such as the location of the vehicle in the world (localization), the structure of the environment (mapping) and the existence of objects of interest in the world (perception).
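One core idea behind sensor fusion can be sketched as inverse-variance weighting, which underlies the update step of the Kalman filter: two noisy estimates of the vehicle's position are combined so that the more certain sensor gets more weight. The sensor names and noise values below are illustrative assumptions.

```python
# A minimal sketch of sensor fusion for localization: combine two noisy
# scalar position estimates by inverse-variance weighting. More-certain
# sensors (smaller variance) contribute more to the fused estimate.

def fuse(est_a, var_a, est_b, var_b):
    """Fuse two estimates; returns the fused value and its variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is more certain than either
    return fused, fused_var

# Hypothetical readings: GPS says x = 10.2 m (noisy, variance 4.0),
# wheel odometry says x = 9.8 m (more precise, variance 1.0).
pos, var = fuse(10.2, 4.0, 9.8, 1.0)
print(round(pos, 2), round(var, 2))
```

Note that the fused variance is smaller than either input variance: combining sensors yields a more confident estimate than any single source, which is exactly why fusion matters for localization, mapping and perception.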
Decide: To select a course of action, an autonomous system relies on artificial intelligence algorithms. It is important to have algorithms that can reason about the complexities of the real world while new information streams to sensors in real time. As a programmed system, an autonomous platform has a fixed number of possible actions from which to choose. These can vary in complexity, from planning paths, tasks and missions to dynamic re-planning in response to a changing situation.
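The deciding step, choosing among a fixed set of possible actions, can be sketched as a search problem. The example below plans a shortest path on a small grid map using breadth-first search, where the fixed action set is the four moves available from each cell; the map itself is an illustrative assumption.

```python
# A minimal sketch of "decide" as search over a fixed action set:
# breadth-first search for a shortest path on a grid map (1 = obstacle).
from collections import deque

def plan_path(grid, start, goal):
    """Return a shortest obstacle-free path of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}           # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        # The fixed action set: move up, down, left or right one cell.
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                         # no feasible path exists

grid = [
    [0, 0, 0],
    [1, 1, 0],   # a wall forces a detour around the right side
    [0, 0, 0],
]
print(plan_path(grid, (0, 0), (2, 0)))
```

Real planners replace exhaustive search like this with heuristics (e.g., A*) precisely because, as noted earlier, enumerating every possible action quickly overwhelms available computing power.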
Act: Taking action for an autonomous system means executing a selected behavior. For an autonomous vehicle, this likely includes traveling a planned course. As the system executes the selected behavior, new sensor data become available to be processed in the next cycle of the OODA loop. If the situation’s complexity increases beyond the ability of the system’s programming to cope, it may abort its activity, perhaps returning to a safe/standby mode or to its starting point, or powering down and awaiting instructions.
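The four stages above can be sketched as one pass through the loop, including the safe-mode fallback when a situation exceeds what the system was programmed to handle. All class and function names here are hypothetical placeholders, not a real autonomy API.

```python
# A minimal skeleton of one OODA cycle with a safe-mode fallback.
# The sensor and world-model classes are illustrative stubs.

def ooda_step(sensors, world_model, max_complexity=10):
    observation = sensors.read()                  # Observe: raw sensor data
    world_model.update(observation)               # Orient: build world model
    if world_model.complexity() > max_complexity:
        return "enter_safe_mode"                  # beyond programmed ability
    return world_model.best_action()              # Decide, then Act: execute

class StubSensors:
    def __init__(self, readings):
        self._readings = readings
    def read(self):
        return self._readings.pop(0)

class StubWorldModel:
    def __init__(self):
        self._obstacles = 0
    def update(self, observation):
        self._obstacles = observation["obstacles"]
    def complexity(self):
        return self._obstacles       # toy measure: number of nearby obstacles
    def best_action(self):
        return "follow_planned_course"

sensors = StubSensors([{"obstacles": 2}, {"obstacles": 50}])
model = StubWorldModel()
print(ooda_step(sensors, model))  # a manageable situation
print(ooda_step(sensors, model))  # an overwhelming situation
```

Each call represents one trip around the loop: acting produces new observations, which feed the next cycle.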