How babies learn – and why robots can’t compete


If we could understand how the infant mind develops, it might help every child reach their full potential. But seeing babies as learning machines is not the answer.

Deb Roy and Rupal Patel pulled into their driveway on a fine July day in 2005 with the beaming smiles and sleep-deprived glow common to all first-time parents. Pausing in the hallway of their Boston home for Grandpa to snap a photo, they chattered happily over the precious newborn son swaddled between them.

This normal-looking suburban couple weren’t exactly like other parents. Roy was an AI and robotics expert at MIT, Patel an eminent speech and language specialist at nearby Northeastern University. For years, they had been planning to amass the most extensive home-video collection ever.

From the ceiling in the hallway blinked two discreet black dots, each the size of a coin. Further dots were located over the open-plan living area and the dining room. There were 25 in total throughout the house – 14 microphones and 11 fish-eye cameras, part of a system primed to launch on their return from hospital, intended to record the newborn’s every move.

It had begun a decade earlier in Canada – but in fact Roy had built his first robots when he was just six years old, back in Winnipeg in the 1970s, and he’d never really stopped. As his interest turned into a career, he wondered about android brains. What would it take for the machines he made to think and talk? “I thought I could just read the literature on how kids do it, and that would give me a blueprint for building my language and learning robots,” Roy told me.

Over dinner one night, he boasted to Patel, who was then completing her PhD in human speech pathology, that he had already created a robot that was learning the same way kids learn. He was convinced that if it got the sort of input children get, the robot could learn from it.

Toco was little more than a camera and microphone mounted on a Meccano frame, and given character with ping-pong-ball eyes, a red feather quiff and crooked yellow bill. But it was smart. Using voice recognition and pattern-analysing algorithms, Roy had painstakingly taught Toco to distinguish words and concepts within the maelstrom of everyday speech. Where previous computers had learned language statistically, understanding words only in relation to other words, Roy’s breakthrough was to create a machine that understood their relationship to physical objects. Asked to pick out the red ball among a range of physical items, Toco could do it.
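The idea of grounding words in objects rather than in other words can be sketched in a few lines. The following is a toy illustration only, not Roy’s actual system: the scene, the attribute names and the `pick_out` function are all invented for this example, and the word–attribute pairings are simply assumed to have been learned from paired speech and vision.

```python
# A toy sketch of grounded word learning (hypothetical, not Toco's real code).
# Each word is linked to an observed perceptual attribute, so an utterance
# can pick out a physical object rather than just relate to other words.

# Hypothetical scene: objects described by simple perceptual attributes.
objects = [
    {"name": "object1", "color": "red", "shape": "ball"},
    {"name": "object2", "color": "blue", "shape": "ball"},
    {"name": "object3", "color": "red", "shape": "block"},
]

# Assumed word groundings, as if learned from paired speech and vision.
groundings = {
    "red": ("color", "red"),
    "blue": ("color", "blue"),
    "ball": ("shape", "ball"),
    "block": ("shape", "block"),
}

def pick_out(utterance, scene):
    """Return the objects whose attributes satisfy every grounded word heard."""
    constraints = [groundings[w] for w in utterance.split() if w in groundings]
    return [obj for obj in scene
            if all(obj[attr] == value for attr, value in constraints)]

print([obj["name"] for obj in pick_out("the red ball", objects)])
```

Words the system has no grounding for (“the”) are simply ignored, while the grounded ones act as perceptual constraints – a crude stand-in for what Toco did with real camera input.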

Continued at The Guardian