I was contacted by Newsweek reporter Drake Bennett and later quoted in his entertaining (and grumpily titled) article on the definition of a robot. You can read the story here: Everything Is Not a Damn Robot. Though I am ignorant of the process of officially determining the definition of a word, OED-style, I nonetheless ventured to describe the usage of the word “robot” as I’ve experienced it.
Here is the core of the original email I sent to Drake Bennett, from which he quoted. I’ve edited it slightly.
I’ll do my best to answer. Skip to the very end for a short summary.
Traditionally, a robot is any programmable machine that can manipulate its environment, even if it doesn’t sense that environment or conduct reasoning to determine its behavior. Such robots include certain manufacturing robots and drones.
However, in many circles, including those of most artificial intelligence researchers, the word “robot” usually implies an “autonomous robot” or an “intelligent robot”.
Both of these categories of robots employ a sense-process-act loop, in which a robot senses its environment (e.g., through cameras or IR sensors), processes or thinks about how to act based on its current sensory information and maybe some memory of past sensory information, and then takes some action that affects its environment (e.g., moving its joints or talking). A robot that communicates with a human or moves its location in space would be considered to be affecting its environment.
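To make the loop concrete, here is a minimal sketch of a sense-process-act loop in Python. Everything here is illustrative: the `sense`, `process`, and `act` functions and the toy one-dimensional world are stand-ins, not any real robot's API.

```python
# Minimal sketch of a sense-process-act loop for a hypothetical robot.
# The "world" is a toy model: just a distance to the nearest obstacle.

def sense(world):
    """Read the robot's sensors (e.g., a camera or IR rangefinder)."""
    return {"obstacle_distance": world["obstacle_distance"]}

def process(observation, memory):
    """Decide on an action from current sensing plus remembered state."""
    memory.append(observation)  # keep a memory of past sensory information
    # A simple reactive policy: back away when too close to an obstacle.
    if observation["obstacle_distance"] < 0.5:
        return "reverse"
    return "forward"

def act(action, world):
    """Affect the environment by moving the robot."""
    step = 0.1 if action == "forward" else -0.1
    world["obstacle_distance"] -= step
    return world

def run_loop(world, steps=5):
    """Repeat sense -> process -> act for a fixed number of steps."""
    memory = []
    for _ in range(steps):
        observation = sense(world)
        action = process(observation, memory)
        world = act(action, world)
    return world, memory

world, memory = run_loop({"obstacle_distance": 1.0})
```

Even this trivial loop has the three ingredients the definition asks for: it senses, it decides (with memory available to the decision step), and its actions change the state it will sense next.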
More specifically, an autonomous robot acts on its own without human control. Semi-autonomous robots share control with a human. Semi-autonomy can involve trading off full control, such as when the robot gets stuck and needs a human operator. It can also include a robot and human each providing control signals that together create the full control signal.
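The second form of semi-autonomy, where human and robot each contribute a control signal, is often realized by linearly blending the two commands. A small sketch, where the blending weight `alpha` is an assumed design parameter rather than anything from a specific system:

```python
# Sketch of shared control via linear blending of two control signals.
# alpha is a hypothetical design parameter chosen by the system builder.

def blend_control(human_cmd, robot_cmd, alpha=0.5):
    """Combine human and robot commands into one control signal.

    alpha = 1.0 gives full human control (pure teleoperation);
    alpha = 0.0 gives full robot autonomy.
    """
    return alpha * human_cmd + (1.0 - alpha) * robot_cmd

# Example: the human steers hard left (-1.0) while the robot's
# planner prefers slight right (0.2), with the human weighted at 0.7.
combined = blend_control(human_cmd=-1.0, robot_cmd=0.2, alpha=0.7)
```

Sliding `alpha` between 0 and 1 traces out the spectrum from full autonomy to full human control, which is one way to see why the autonomous/semi-autonomous boundary is a matter of degree.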
An intelligent robot’s processing step is complex enough to be called intelligent.
That likely sounds circular or slippery; it is. Intelligence, especially in AI, is largely judged by an I-know-it-when-I-see-it test. Even autonomy isn’t clear-cut: if a user tells a robot to build her a house and that robot builds a house on its own, is it acting with full autonomy or with shared autonomy, working with the human’s build-a-house control signal?
The label “robot” similarly lacks a black-and-white border. Worse, “robot” lacks even a grayscale border that’s consistent across different people’s usages.
Some people seek a technical definition of the word robot and go with the traditional definition above or the more restrictive definition that also requires a sense-process-act loop. Under such technical definitions, a smart appliance that senses its environment (e.g., the wetness of the clothes in a dryer) and acts based on that sensing would be considered a robot. A computer could be considered a robot too, since it “senses” the world through a keyboard, a mouse, etc., and acts through a screen and speakers. Smartphones are more easily considered robots with their more complex sensing (GPS, accelerometers, etc.).
However, some would say that to be a robot, a machine must take actions that manipulate its physical environment at a more macro level than emitting light and sound waves, and more directly than by communicating with a human; for them, a computer wouldn’t be a robot.
For many others, robots are defined—like intelligence—through an I-know-it-when-I-see-it test. In my experience, people are more likely to call a machine a robot if it resembles a human or other animal in appearance or behavior, if it has mechanical joints it can actuate, if it exhibits complex behavior, or if it operates in a complex environment (not one that is manicured for the machine).
A slightly cynical note: I suspect that it’s smart marketing of a product to take the most expansive definition of “robot” if the product can then be called a robot.
In short, I don’t know precisely what a robot is, and I would be suspicious of anyone who gives an absolute answer.