Personal Robots Group at MIT is recruiting kids and parents for a study!

The Personal Robots Group at the MIT Media Lab is looking for participants for a research study and is offering an opportunity for you and your children to visit with one of our robots in our lab!

[Photo: one of our robots, a blue DragonBot.]

If you are interested in participating, please select a time slot at this link:

mitrobots.youcanbook.me

or simply contact David Nunez (dnunez@media.mit.edu, 512-366-3330) or Dr. Brad Knox (bradknox@mit.edu).

We are offering a $10 Amazon gift card as a small token of appreciation for your time, and we hope you and your child will learn about our research as a result of taking part in the study.

To take part in this study, you must be over the age of 18, and your child must be 4–7 years old. You must also be able to meet for 30 minutes at the MIT Media Lab, 20 Ames St, Cambridge MA.

Furthermore, we are offering an additional Amazon gift card for each parent you refer to our study!

Simply direct your referral to the sign-up link at mitrobots.youcanbook.me and make sure they put your name in the form.

More information about the study:

We are trying to understand how parents and children interact with tablet computers during early literacy development, and we are testing how a robot might likewise interact with a child to build the child’s language and literacy skills.

If you take part in this study, you and your child will be asked to sit and work with tablet computer software; your child will be asked to interact with the tablet software and a robot under your supervision; and you and/or your child will be asked to complete short questionnaires at various points of the study and to participate in a short interview. Your faces and voices will be videotaped during the session for research purposes.


What is a robot?

I was contacted by Newsweek reporter Drake Bennett and later quoted in his entertaining (and grumpily titled) article on the definition of a robot. You can read the story here: Everything Is Not a Damn Robot. Though I am ignorant of the process of officially determining the definition of a word, OED-style, I nonetheless ventured to describe the usage of the word “robot” as I’ve experienced it.

Here is the core of the original email I sent to Drake Bennett, from which he quoted. I’ve edited it slightly.

I’ll do my best to answer. Skip to the very end for a short summary.

 

Traditionally, a robot is any programmable machine that can manipulate its environment, even if it doesn’t sense that environment or conduct reasoning to determine its behavior. Such robots include certain manufacturing robots and drones.

 

However, in many circles, including those of most artificial intelligence researchers, the word “robot” usually implies an “autonomous robot” or an “intelligent robot”.

Both of these categories of robots employ a sense-process-act loop, in which a robot senses its environment (e.g., through cameras or IR sensors), processes or thinks about how to act based on its current sensory information and maybe some memory of past sensory information, and then takes some action that affects its environment (e.g., moving its joints or talking). A robot that communicates to a human or moves its location in space would be considered to be affecting its environment.
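
As a concrete (if toy) illustration of that loop, a minimal C# sketch might look like the following; the interfaces and names are invented for the example and aren’t tied to any real robot or framework.

using System.Threading;

// Illustrative only: these interfaces and names are invented for the sketch and
// aren't tied to any particular robot or framework.
interface ISensor { double[] Read(); }              // sense (e.g., a camera or IR sensor)
interface IActuator { void Apply(double command); } // act (e.g., a joint motor or speech)

class SenseProcessActLoop
{
    private readonly ISensor sensor;
    private readonly IActuator actuator;
    private double memory; // simple memory of past sensory information

    public SenseProcessActLoop(ISensor sensor, IActuator actuator)
    {
        this.sensor = sensor;
        this.actuator = actuator;
    }

    public void Run(CancellationToken stop)
    {
        while (!stop.IsCancellationRequested)
        {
            double[] reading = sensor.Read();  // 1. sense the environment
            double command = Process(reading); // 2. process: decide how to act
            actuator.Apply(command);           // 3. act, affecting the environment
            Thread.Sleep(50);                  // then repeat, ~20 times per second
        }
    }

    // Stand-in "processing": blend the current reading with a memory of past readings.
    private double Process(double[] reading)
    {
        double current = reading.Length > 0 ? reading[0] : 0.0;
        memory = 0.9 * memory + 0.1 * current;
        return memory;
    }
}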

More specifically, an autonomous robot acts on its own without human control. Semi-autonomous robots share control with a human. Semi-autonomy can involve trading off full control, such as when the robot gets stuck and needs a human operator. It can also include a robot and human each providing control signals that together create the full control signal.

An intelligent robot’s processing step is complex enough to be called intelligent.

That likely sounds circular or slippery; it is. Intelligence, especially in AI, is largely judged by an I-know-it-when-I-see-it test. Even autonomy isn’t clear-cut: if a user tells a robot to build her a house and that robot builds a house on its own, is it acting with full autonomy or with shared autonomy, working with the human’s build-a-house control signal?

 

The label “robot” similarly lacks a black-and-white border. Worse, “robot” lacks even a grayscale border that’s consistent across different people’s usages.

Some people seek a technical definition of the word “robot” and go with the traditional definition above or the more restrictive definition that also requires a sense-process-act loop. Under such technical definitions, a smart appliance that senses its environment (e.g., the wetness of the clothes in a dryer) and acts based on that sensing would be considered a robot. A computer could be considered a robot too, since it “senses” the world through a keyboard, a mouse, etc. and acts through a screen and speakers. Smartphones are even more easily considered robots, with their more complex sensing (GPS, accelerometers, etc.).

However, some would say that to be a robot, a machine must take actions that manipulate its physical environment at a more macro level than emitting light and sound waves and more directly than by communicating to a human; for them, a computer wouldn’t be a robot.

For many others, robots are defined—like intelligence—through an I-know-it-when-I-see-it test. In my experience, people are more likely to call a machine a robot if it resembles a human or other animal in appearance or behavior, if it has mechanical joints it can actuate, if it exhibits complex behavior, or if it operates in a complex environment (not one that is manicured for the machine).

A slightly cynical note: I suspect that it’s smart marketing to adopt the most expansive definition of “robot” whenever doing so lets a product be called a robot.

 

In short, I don’t know precisely what a robot is, and I would be suspicious of anyone who gives an absolute answer.


Highlighting facial features in the 3D mesh from Kinect SDK 1.7

One of my current projects involves visualizing people’s facial expressions as detected through Kinect data. I’m using Kinect SDK 1.7 and the corresponding toolkit for face tracking (Microsoft.Kinect.Toolkit.FaceTracking). The documentation for face tracking is somewhat light and lacks full information about the indices of the 3D mesh returned by calling Get3DShape() on a FaceTrackFrame instance (in C#). I iterated through the lines on the face—over 400!—and identified which corresponded to the mouth, eyes, and eyebrows. I also identified which ones made horror-movie-like lines across the opening of the mouth or over the eyes. The end result is shown below, with the scary lines removed and the desired lines in blue. (The big whitish spheres and their connections are from skeletal data.)

[Screenshot: the face mesh with the mouth, eye, and eyebrow connections drawn in blue, alongside the skeletal data.]
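
For anyone setting up something similar, pulling the 3D shape out of a FaceTrackFrame looks roughly like the sketch below. The class and variable names are placeholders, and it assumes you have already copied the color bytes, depth shorts, and a tracked Skeleton out of the sensor’s current frame.

using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.FaceTracking;

class FaceShapeGrabber
{
    private readonly KinectSensor sensor;
    private readonly FaceTracker faceTracker;

    public FaceShapeGrabber(KinectSensor sensor)
    {
        this.sensor = sensor;
        this.faceTracker = new FaceTracker(sensor);
    }

    // colorPixels, depthPixels, and skeleton are assumed to have been copied out
    // of the current frame (ColorImageFrame.CopyPixelDataTo, DepthImageFrame.CopyPixelDataTo,
    // and SkeletonFrame.CopySkeletonDataTo, respectively).
    public EnumIndexableCollection<FeaturePoint, Vector3DF> GetShape(
        byte[] colorPixels, short[] depthPixels, Skeleton skeleton)
    {
        FaceTrackFrame frame = faceTracker.Track(
            sensor.ColorStream.Format, colorPixels,
            sensor.DepthStream.Format, depthPixels,
            skeleton);

        // Only trust the mesh when tracking succeeded for this frame.
        return frame.TrackSuccessful ? frame.Get3DShape() : null;
    }
}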

 

In case anyone else is interested in which lines I identified, here they are. Each line is represented as a pair of indices, where each index corresponds to the FeaturePoint value of an item in the EnumIndexableCollection returned by Get3DShape().

 

leftEyeConnections = {{21, 95}, {24, 101}, {22, 101}, {23, 103}, {23, 109}, {19, 103}, {21, 103}, {21, 105}, {107, 22}, {22, 109}, {24, 109}, {71, 23}, {71, 23}, {68, 20}, {67, 20}, {67, 97}, {67, 71}, {67, 21}, {71, 21}, {71, 105}, {23, 105}, {99, 68}, {99, 22}, {72, 22}, {68, 22}, {23, 72}, {23, 107}, {72, 107}, {68, 22}, {72, 68}, {97, 21}, {19, 95}, {99, 20}, {101, 20}, {97, 20}, {95, 20}};

rightEyeConnections = {{106, 56}, {110, 56}, {55, 110}, {102, 53}, {100, 53}, {96, 53}, {53, 96}, {102, 53}, {52, 96}, {54, 98}, {54, 96}, {100, 55}, {55, 102}, {74, 70}, {69, 73}, {74, 56}, {69, 53}, {53, 98}, {69, 54}, {106, 54}, {73, 54}, {56, 73}, {108, 56}, {73, 106}, {74, 108}, {108, 55}, {70, 55}, {100, 70}, {53, 70}, {54, 104}, {104, 56}, {52, 104}, {110, 57}, {102, 57}, {55, 74}, {69, 98}};

leftEyeBrowConnections = {{18, 17}, {16, 15}, {15, 18}, {16, 17}};

rightEyeBrowConnections =  {{50, 49}, {50, 51}, {49, 48}, {51, 48}};

mouthConnections = {{86, 8}, {89, 86}, {84, 40}, {85, 8}, {88, 83}, {88, 85}, {88, 81}, {84, 89}, {87, 82}, {83, 40}, {82, 89}, {81, 87}, {89, 80}, {88, 79}, {79, 33}, {80, 66}, {7, 66}, {7, 33}};

 

scaryMouthLines = {{81, 83}, {40, 87}, {87, 84}, {81, 40}, {84, 82}};

scaryEyeLines = {{72, 67}, {68, 67}, {71, 72}, {70, 69}, {69, 73}, {73, 70}, {74, 73}};
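
To turn these lists into something drawable, a sketch along the following lines should work; FaceLines, ToSegments, and the array layout are mine, and only the first few mouth pairs are shown. Each index is cast to a FeaturePoint and used to look up the corresponding 3D point, which matches how the lists above are meant to be read.

using System;
using System.Collections.Generic;
using Microsoft.Kinect.Toolkit.FaceTracking;

static class FaceLines
{
    // The first few pairs of mouthConnections as a C# array; the remaining
    // pairs (and the other lists above) follow the same pattern.
    public static readonly int[][] MouthConnections =
    {
        new[] { 86, 8 }, new[] { 89, 86 }, new[] { 84, 40 }, new[] { 85, 8 },
    };

    // Turns index pairs into pairs of 3D points, ready to be drawn as line segments.
    public static List<Tuple<Vector3DF, Vector3DF>> ToSegments(
        EnumIndexableCollection<FeaturePoint, Vector3DF> shape, int[][] connections)
    {
        var segments = new List<Tuple<Vector3DF, Vector3DF>>();
        foreach (int[] pair in connections)
        {
            // Each index is the integer value of a FeaturePoint enum member.
            Vector3DF start = shape[(FeaturePoint)pair[0]];
            Vector3DF end = shape[(FeaturePoint)pair[1]];
            segments.Add(Tuple.Create(start, end));
        }
        return segments;
    }
}

For example, ToSegments(shape, FaceLines.MouthConnections) gives the mouth segments for the shape returned by Get3DShape(); from there, each segment can be handed to whatever is doing the rendering.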
