Come on, think about it! Cleaning, cooking, and doing all of our chores are just a few of the wondrous possibilities. Unfortunately, for now you will have to keep dreaming. While some amazing robots do exist, robots are not yet adaptable enough to carry out such a wide range of activities effectively. And although speech recognition technology has advanced by leaps and bounds, it is still not reliable enough for robots. Your best bet for getting a hypothetical robot butler to follow your instructions would be to type out the instruction set.
Spoken Commands
Imagine telling your robot, “Pick up that box over there.” This seems simple enough, but there is an issue: your robot has to break the request down into a number of steps before it can act. A possible breakdown of this command looks like this:
1. Turn on tracking system
2. Turn on walking motors
3. Change direction
4. Take necessary steps
5. Rotate limbs
6. Clench box
7. Lift box
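To make that gap concrete, here is a minimal sketch of how a single spoken command might expand into a list of primitive robot actions. Every name here is hypothetical; a real robot would map commands onto whatever action API its hardware actually exposes.

```python
# Hypothetical mapping from spoken commands to primitive robot actions.
# The action names mirror the steps listed above and are for illustration only.
PRIMITIVE_PLANS = {
    "pick up the box": [
        "turn_on_tracking_system",
        "turn_on_walking_motors",
        "change_direction",
        "take_necessary_steps",
        "rotate_limbs",
        "clench_box",
        "lift_box",
    ],
    "turn on your tracking system": [
        "turn_on_tracking_system",
    ],
}


def expand(command: str) -> list[str]:
    """Return the primitive actions hidden behind a spoken command."""
    return PRIMITIVE_PLANS.get(command.lower().strip(". "), [])


if __name__ == "__main__":
    for cmd in ("Pick up the box", "Turn on your tracking system"):
        steps = expand(cmd)
        print(f"{cmd!r} -> {len(steps)} primitive step(s): {steps}")
```

Two short sentences, very different amounts of work under the hood, which is exactly the problem the next paragraph describes.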
As you can see, the command is more complex than it first appears. Now compare it with something like, “Turn on your tracking system.” Although the two commands use a similar number of words, their levels of complexity are worlds apart. How can we solve this? As things stand, robots have trouble figuring out how complex a spoken command really is. Fear not: a team at Brown University has developed a system that improves the way robots handle spoken commands.
How to Make Your Robot Obey Your Orders: A System for Carrying Out Spoken Commands Effectively
Cleanup World is a virtual task domain. It consists of color-coded rooms, a virtual robot, and an object for the robot to manipulate. Volunteers recruited through Mechanical Turk helped the researchers link instructions to particular actions in Cleanup World. First, they observed the robot as it carried out a variety of tasks. They were then asked to write the instructions they would give to make the robot perform those actions. The volunteers produced high-level, mid-level, and low-level commands. High-level commands were instructions such as telling the robot to carry a chair to a room of a particular color. Low-level commands broke the task down into several individual steps. Mid-level commands sat in between, combining features of both. The researchers at Brown used the data they obtained to train their system to understand commands of varying complexity. The trained system could then work out both what action needed to be carried out and the level of complexity associated with different sentence structures.
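As a rough illustration of the idea, and not of Brown's actual model, a system like this first has to guess how abstract a command is before it can plan at the right level. The toy sketch below uses invented keyword rules and level names purely to show the distinction between high-level and low-level phrasing.

```python
# Toy granularity guesser: NOT the Brown system, just an illustration of the
# idea that different sentence structures imply different levels of abstraction.
HIGH_LEVEL_HINTS = ("take", "bring", "carry", "put")                    # whole tasks
LOW_LEVEL_HINTS = ("step", "steps", "turn", "rotate", "north", "south", "east", "west")


def guess_granularity(command: str) -> str:
    """Return a rough abstraction level ("low", "mid", or "high") for a command."""
    words = command.lower().replace(",", "").split()
    if any(w in LOW_LEVEL_HINTS for w in words):
        return "low"    # e.g. "take three steps north"
    if any(w in HIGH_LEVEL_HINTS for w in words):
        return "high"   # e.g. "carry the chair to the blue room"
    return "mid"


for cmd in ("Carry the chair to the blue room",
            "Take three steps north, then turn east"):
    print(cmd, "->", guess_granularity(cmd))
```

The real system learned these distinctions from the Mechanical Turk data rather than from hand-written rules, but the end goal is the same: match the plan to the granularity of the instruction.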
Putting The System To The Test
With that understanding in place, the system could devise an appropriate plan from the spoken commands it was given. After training the system, it was time to test the fruits of their labor. The researchers used Cleanup World once again, as well as a real robot operating in a physical space set up to mirror the virtual one. When the robot was able to figure out both the desired end result and the level of complexity of the task, it completed the task in just one second 90 percent of the time. When there was a breakdown in understanding the level of complexity, however, things took longer: the robot required 20 or more seconds of planning to complete a task. The researchers will need to find ways of minimizing these breakdowns to create a more efficient system.
Final Thoughts
Robots still have quite a way to go before they are mainstream. However, this work brings us closer to having robots which can easily understand the commands we dish out to them. Until then, go wash your own dishes.