Kid-mounted cameras help A.I. learn to view the world through eyes of a child

Talk to any artificial intelligence researcher and they’ll tell you that, while A.I. may be capable of complex feats like driving cars and spotting tiny details on X-ray scans, it still lags far behind the generalized abilities of even a 3-year-old kid. This is sometimes called Moravec’s paradox: the seemingly hard stuff is easy for an A.I., while the seemingly easy stuff is hard.

But what if you could teach an A.I. to learn like a kid? And what kind of training data would you need to feed into a neural network to carry out the experiment? Researchers from New York University recently set out to explore these questions using a dataset of video footage captured by head-mounted cameras that kids wore regularly during their first three years of life.

This SAYcam data was collected by psychologist Jess Sullivan and colleagues, who described it in a paper published earlier this year. The kids recorded their GoPro-style experiences for one to two hours per week as they went about their daily lives. The researchers gathered the footage to create a “large, naturalistic, longitudinal dataset of infant and child-perspective videos” for use by psychologists, linguists, and computer scientists.

Training an A.I. to view the world like a kid

The New York University researchers then took this video data and used it to train a neural network.

“The goal was to address a nature vs. nurture-type question,” Emin Orhan, lead researcher on the project, told Digital Trends in an email. “Given this visual experience that children receive in their early development, can we learn high-level visual categories — such as table, chair, cat, car, etc. — using generic learning algorithms, or does this ability require some kind of innate knowledge in children that cannot be learned by applying generic learning methods to the early visual experience that children receive?”

The A.I. did show some learning — for example, by recognizing a cat that appeared frequently in the videos. While the researchers didn’t create anything close to a kid version of artificial general intelligence, the research nonetheless highlights how certain visual features can be learned simply by watching naturalistic data. There’s still more work to be done, though.

“We found that, by and large, it is possible to learn pretty sophisticated high-level visual concepts in this way without assuming any innate knowledge,” Orhan explained. “But understanding precisely what these machine learning models trained with the headcam data are capable of doing, and what exactly is still missing in these models compared to the visual abilities of children, is still [a] work in progress.”

A paper describing the research is available to read online.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…