
Full transcript: Anki CEO Boris Sofman on Recode Decode

On this episode of Recode Decode, hosted by Kara Swisher, it’s all about robots. Anki co-founder and CEO Boris Sofman talked about his company’s cute robot “toy” that is actually a possible precursor to a more useful home robot. Swisher and Sofman discuss why robots need to be approachable but not humanoid, how to avoid the trope of the “evil” robot and what the swift progress of self-driving cars could mean for the AI industry as well as consumers.

You can read some of the highlights from the interview at that link, or listen to it in the audio player below. We’ve also provided a lightly edited complete transcript of their conversation.

If you like this, be sure to subscribe to Recode Decode on Apple Podcasts, Google Play Music, TuneIn or Stitcher.


Kara Swisher: Today in the red chair is Boris Sofman, the CEO and co-founder of Anki. Boris holds a PhD in robotics from Carnegie Mellon, and previously worked at Neato Robotics. Anki brings robotics and artificial intelligence into the home through products like the robot cars Anki DRIVE, and Cozmo, a robot companion you control from your smartphone. Boris, welcome to Recode Decode.

Boris Sofman: Thank you, Kara.

You have my favorite name of all time, Boris. I just don’t know why I like it so much. Anyway, so we’re going to talk about a lot of things, we’re talking about robotics, AI and things like that, but let’s talk about your background. A lot of entrepreneurs listen in here when we talk to entrepreneurs about how they got started, so give me your background. Because you obviously have studied robotics for many years, and sort of your journey; why don’t we go into that first?

Yeah. Absolutely. So, I actually ran into robotics while I was an undergrad. I went to Carnegie Mellon for undergrad as well, studied computer science.

Were you interested as a kid, in robotics? Or did you build those ...

I was interested in computer science and engineering in general, but I didn’t even know what robotics was. The nice thing about ending up at Carnegie Mellon is that there’s so much robotics you can’t help but trip over a robot while you’re there.

Right.

And so, in the middle of grad school, I really got excited about this idea of intelligence for physical objects, and not just software inside of a computer. And that kind of led me to do some research with some professors in the department.

Was there a reason? Was it like “Star Wars” or something? What did you ...

You know, I think it was more ... Yeah, science fiction in general, I think. You see all these examples of incredible intelligence in physical things, and they’ve stayed science fiction for far longer than other areas of technology have taken to evolve.

Sure. Right.

And there’s also something that’s much easier to connect with when you see the output of your work being something physical, doing what it’s supposed to do, versus just software output that ...

It has been underwhelming, robots in the home, compared to the amount of stuff in science fiction.

Exactly right. And I think that was one of the realizations that, especially, going from undergrad and into grad school, you see these incredible technologies, but they’re all focused on government applications, defense-based, pure research, industrial applications, there’s almost nothing that made it into consumer products of substance.

Right. Except in Hollywood movies. So you went to Carnegie Mellon, you studied this, and what was your goal? Where was robotics at that point?

So, especially back then, it was still very much locked into kind of non-consumer applications.

Right, so a factory or a bomb sniffer, or something like that.

Factories, or research, bomb sniffers; that’s right. Or just pure research, where there are advances in machine learning and computer vision, but not really tangible, actual applications. And so, my passion and my focus in my PhD at that point was autonomous driving. And so my research ...

Let’s just back up. Carnegie Mellon’s famous for that. Uber hired everyone there and is having trouble with it, obviously.

Yeah. The original Google team came out of the previous Grand Challenge and Urban Challenge teams from there.

Right. So, why was that your interest?

It’s one of the best examples when you have something that’s very familiar that can be completely reinvented with these technologies.

Cars.

Cars. And it ties together elements of perception, machine learning, path planning, all these different elements. You see these vehicles do something intelligent, it’s just like, that is science fiction. That is the thing you grew up dreaming of. So that’s one of the holy grails.

What did you build there? What did you work on?

My particular project was an off-road version of autonomous driving. Imagine you have this gigantic robot, like, it was a $600,000 robot; you pop it down into a forest, you give it a waypoint, like 10 kilometers away, and it’s got to figure out how to get there. And so, you’re trying to figure out, what type of vegetation can you drive over? What’s safe? What’s not safe? How do you classify between rocks, and branches, and things that might damage your sensor pod? And so, it was kind of a conduit for a lot of research across perception, machine learning, path planning, and interfaces for training these sorts of systems, and it actually became a precursor and a training ground for a lot of people that ended up going into other areas of autonomous driving.
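The loop Sofman describes, scoring patches of terrain for traversability so a planner can route around the risky ones, can be sketched in a few lines. The following is a toy illustration with invented features and labels, not the actual research code:

```python
# Minimal sketch of the "is this terrain safe to drive over?" step.
# Feature names, labels and the labeling rule are all assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled terrain patches: each row is
# [slope_deg, vegetation_height_m, roughness, lidar_density]
X = rng.random((500, 4)) * [45.0, 2.0, 1.0, 1.0]
# Toy rule: a patch is "unsafe" when it is steep or has tall vegetation.
y = ((X[:, 0] > 30) | (X[:, 1] > 1.2)).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A planner would query patches along candidate paths and penalize
# those the classifier flags as unsafe.
patch = np.array([[12.0, 0.3, 0.2, 0.9]])
print("unsafe probability:", clf.predict_proba(patch)[0, 1])
```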

Sure. So you study here, and then you have your PhD from ... You stayed there in Pittsburgh.

Yeah.

And what did you hope to do? Because you could go to the military or ...

The interesting thing is that there’s all this controversy about the military funding robotics, but it was one of the best examples of where this funding from DARPA, or the Navy and whatnot ...

Yeah. It started the internet. I don’t have a problem with ...

Yeah. And it was pretty fantastic, because it yanked all these technologies to a point, for the first time, where it became plausible that there were commercial applications. And so, you know, my project ... We were focused on ... My personal focus was on using machine learning techniques to allow these vehicles to train themselves, and learn to improve their performance over time. Other people were working on pathfinding, other people were working on other areas, and then all of that technology went into projects for construction through Caterpillar, through the Urban Challenge, and so forth.

And so, very soon after that, there was kind of this spark from the Grand Challenge where ...

Explain the Grand Challenge for people that don’t know.

Yeah, so the Grand Challenge is one of the coolest projects that DARPA ever sponsored. Basically it was a race of autonomous vehicles through a desert, with no human involvement other than on the training and programming front.

The programming.

So there was a 140-mile course through the desert. It was a somewhat known track, but you still had to avoid all the obstacles along the way, and it was a race to get to the finish line and be the first one to get there.

Right. A lot of the robotics challenges are like that, you do this many tasks, or you do ...

That’s right. What DARPA did was, they put a $2 million prize on it ... I believe it was $2 million at the time. But what happened was, that became the catalyst for a huge amount of funding from Intel, Caterpillar, private donations, people’s pet projects, and all these different things, and probably led to hundreds of millions of dollars of total investment by 50-something teams to actually try to get to the finish line.

And I distinctly remember ... This was, like, the mid-2000s. 2005, 2006, that’s the kind of time period throughout all these races, where it was the first of three. And what happened was that, at that point, everybody thought that — and this was like one of the holy grails of robotics, we want to push this forward in the future at some point, this is going to be this great thing. And then, one of our friends ended up working on this, but nobody anticipated how quickly it would evolve from that point.

And so, what happened was, even driving without human involvement, with no moving obstacles, and a known course, that was a state-of-the-art, challenging problem at that time. And these were incredibly talented people on the Stanford team, the Carnegie Mellon team, other teams. And, the first year, I think Carnegie Mellon got the furthest in the race; they got seven and a half miles in before the car flipped over and busted all the really expensive sensors that were on top.

Right. Right, right. So they couldn’t do it.

Yeah.

Right, exactly. So things moved quickly.

Yeah, very quickly. And then, just the next year, there were four different vehicles that actually finished the entire race: two from Carnegie Mellon, the Stanford one, and then there was another one.

Mm-hmm.

And, two years after that, there was an Urban Challenge equivalent, where now all of a sudden the challenge was not just a known environment, but now you have moving obstacles in a suburban environment with ...

Right. Which is really the point.

Which is, exactly. Now, you’re actually starting to get to a point where you don’t have to squint too hard to see how the results of that could actually lead to something really meaningful.

Oh, absolutely. 100 percent.

Because, the funny thing is, back in 2005, all these professors who were trying to get funding for this ... I mean, you’d barely get any attention from the car companies.

Less than 10 years ago.

GM funded a little bit, and they were kind of into it, but not ... It would be unheard of for one of them to say, “You know what? We’re going to put $50 million behind this project.”

Right. Electric cars is where they were at.

They were, yeah. But I mean, it felt like it was still too far away. And so, with the Urban Challenge, you had cars obeying traffic lights, parking themselves, avoiding moving obstacles. They actually had stunt drivers driving cars that were reinforced so nobody would get hurt. And it was starting to move pretty quickly.

So, we’re going to talk about where that’s going. So then, you moved on to where? Why didn’t you start a self-driving car company like everybody else?

Well, at that point you couldn’t. You just couldn’t. The interesting thing is, you’ve got to give Google a huge amount of credit, because coming out of those challenges ... As compelling as that was, it sounds obvious in hindsight, but it wasn’t obvious at all that this was within a reasonable amount of time of being usable or commercializable, and even now there’s a lot of challenges remaining.

They basically took a lot of the best people from the Carnegie Mellon team, the Stanford team, and other places. Chris Urmson was on my thesis committee, and ... In fact, my entire thesis committee is scattered between Google, Uber teams, kind of other startup companies. So, at that point, you couldn’t start a ... Well, I think it would be incredibly difficult to raise money of the sort of volume you would need to actually put a dent in this problem. And so, you needed somebody like Google that had enough creativity and foresight and resources to put resources into this problem.

Right. And create something.

And a little bit crazy, right? And be hands-off enough to really start making progress, and then, you know, it kind of hit this point where all of a sudden, everybody started getting interested, because you could see exactly where that was headed, yeah.

But, you weren’t able to do that?

No.

Explain your trajectory then.

Yeah. I think if we didn’t end up going the Anki route, I very likely would have ended up working with Chris and some of those folks, because that was always kind of a passion. What I ended up doing with two friends that became my co-founders at Anki, starting at around 2008, when we were right in the middle of grad school, was starting to work on the foundations of what became the company, where what we wanted to do was take a lot of these technologies that we got really excited about and actually apply them to consumer applications.

To a toy.

And, for us, it was never meant to be a toy company, or even an entertainment company. It’s a robotics and AI company.

It’s a big seller at the Toys ‘R’ Us, but, go ahead.

Yeah, well, and this is the thing. I mean, it’s one thing for Toys ‘R’ Us, but for us, from the very beginning, these are really great proving grounds for a lot of these technologies, and a place where we can have a huge amount of impact very quickly, especially looking at a category where there hasn’t been much innovation at all in physical entertainment. Like, the toys kids grow up with today are largely the same ones kids had 30, 40 years ago; it’s largely the same sort of landscape.

Yeah. They are.

And so, that became an opportunity where, for a very reasonable amount of capital, you could truly disrupt that category and reinvent the sort of experiences and interactivity you could have. And, in the process, it becomes almost a Trojan horse for a lot of the technologies that are under the hood, that carry over into spaces that become more functional outside of entertainment.

So, you raised money ... Tell me.

Yep, so first, we were moonlighting, and avoiding our theses, and just working on the side back at Carnegie Mellon. You have to have a lot of pain tolerance as a grad student to work on these things on the side. We raised some seed money, ended up moving out to San Francisco in 2011, and since then, we’ve raised a number of rounds of funding. Andreessen Horowitz was our first round, back in the beginning of 2012.

How much have you raised?

So we’ve raised a little bit over $200 million at this point.

Now, what are you doing with that money?

A lot of things. Early on, our very first round was Andreessen Horowitz, so we met Marc and Ben and the group back in the day. Even from that point, it was clear that this company was never about a battle racing game, or a ... You know, we already had a vision for the next product, even back in 2011. And we had the early form of a path on what could take it from some of these early applications, into something beyond.

All right, but the early application was battle racing?

Yeah. It was a toy. But one that was bordering the lines with video games. But, in a lot of ways, it was a way to more broadly start to use software to redefine physical entertainment experiences.

Mm-hmm.

And so, it was almost like a game engine for physical play, where these cars ... They’re robots; they have 50 megahertz computers inside of them, they sense the environment 500 times a second, all these sorts of things are happening.

In these small cars.

They’re small.

Yeah.

And you could use mobile devices to control them. But, what’s happening is that the game knows everything that’s happening in real time. And so, there’s a video game inside of the mobile device that’s synchronized to the physical world, which lets you do things like augment the experience with weapons, special abilities. The cars you don’t control are AI-controlled cars; they’ll compete against you, and there are commanders driving them and trash-talking you. And so, it brings to life an experience that you would usually only see in a video game.

In a video game, right. So you’re trying to combine hardware and ...

That’s right. And it turns it into an 80 percent software problem, because the moment you have the platform where you know what’s happening, you turn it into a software problem. The way to look at robotics more broadly is that it’s like the extension of computer science into the real world. The moment you can know what’s going on around you and have some way to interface with it, you turn it into a software problem where you can start building more and more intelligence into products.
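To make that “software problem” framing concrete: once the cars report their positions hundreds of times a second, the game logic itself is ordinary code. A minimal, hypothetical sketch follows; none of these names are Anki’s actual API:

```python
# Illustrative sketch of a "video game synchronized to the physical
# world": sensed car state flows in, game logic runs in software,
# and motor commands flow back out. All names here are invented.
from dataclasses import dataclass

@dataclass
class CarState:
    car_id: int
    track_pos: float   # distance along track, from the car's sensors
    lane: int
    health: int = 100

def game_tick(cars: list[CarState]) -> dict[int, float]:
    """One tick of game logic: apply a toy 'weapon' rule, then return
    target speeds keyed by car id."""
    commands = {}
    for car in cars:
        for other in cars:
            # Toy rule: a car close behind another one "hits" it.
            if other is not car and 0 < other.track_pos - car.track_pos < 0.1:
                other.health -= 5
        # The game engine slows down damaged cars.
        commands[car.car_id] = 1.0 if car.health > 50 else 0.6
    return commands

cars = [CarState(0, 0.42, lane=1), CarState(1, 0.47, lane=2)]
print(game_tick(cars))  # {0: 1.0, 1: 1.0}, with car 1's health reduced
```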

But, initially, as you say, the Trojan horse was a game.

That’s right. Initially, it was a game. And it was a way to work on an application that could be high impact, where the quality bar wasn’t so high that we’d take eight years to ship something.

Right.

Like, an autonomous car has to be almost 100 percent reliable.

Right.

If one of our cars goes off a track, it’s okay. There’s not gigantic regulatory challenges that it needs to overcome, and so, we could release this product and use that as a proving ground. And, in the process, make some of the most successful toy and game products, but use that as a stepping stone.

And they’re also constantly learning.

That’s right.

Which, I think people don’t get around cars. One of the things I was ... They were talking about, “Why do we need self-driving cars? Why do we need this, it’s so hard.” I’m like, “Because if one car gets in an accident, a million cars learn.” And one person gets in an accident, that’s it. That’s where it goes.

It’s funny you say this. One of the hardest things about autonomous driving ... I mean, a lot of these teams, when you talked to them, said the biggest challenge is that you can get to a 95 percent solution very quickly, and that could be a solution you have to restart from scratch to get the remaining 5 percent. And one of the hardest problems is that you can train and iterate on things where you have full control of the environment, but something really spontaneous, like a car swerving in front of you in a really aggressive way, it’s hard to get examples of that. To get experience with those sorts of situations.

Right.

And those end up being the ... The hundredth-of-a-percent situations end up being some of the most difficult cases for autonomous cars.

Right. And all you need is one. It always fascinates me, about all this ... Even we do it, but I try not to do it as much as ... These cars turning over, I’m like, “Cars turn over every day.” Like, by people.

There’s a psychological challenge.

Yeah, computers shouldn’t. Computers shouldn’t.

That’s right. And, people will die because of autonomous cars, it’s just inevitable, but the numbers will be so vastly smaller, proportional to usage. That’s something, psychologically, people and the regulatory environment have to get around.

We’ll get to that in a second. So, Anki, what are you doing now with it? And then we’re going to ... The next segment, we’re going to talk about robotics.

We launched Anki DRIVE in 2013, and it was a really neat partnership with Apple where, from the beginning, we were very conscious that we didn’t want this to feel like just a toy.

Right, right.

So we had the good fortune of launching, in a really unique way, at WWDC with Tim Cook during his ...

I was there.

Oh, yeah. That was a stressful time. It was in the Apple dungeon.

At least it wasn’t Steve Jobs. Then you’d have to like ...

That would’ve been tougher. That would’ve been even tougher, yeah.

Oh, yeah.

But, they were wonderful to us. And so, we got to launch with Apple, and then continued to branch out. We had a new generation, called OVERDRIVE, that came out in 2015. It did super well. And last fall we launched our next product, which actually starts to point much more to the direction we want to take the company.

Which is called ... We’re going to talk about that, and then we’ll get into the next section.

It’s called Cozmo. He’s a physical little robot, little character, and the goal was to make him feel like he’s truly alive. Like, to make a character with a lovable personality and an emotional depth that you would never see outside of a screen. So, literally think of your favorite, most beautiful characters in Pixar movies or DreamWorks movies where you have this richness on the emotional front; we wanted to make that possible in the real world. And not just to have a level of emotional depth that was beyond anything that was there before, but to couple it with robotics and AI to where all those emotions could be contextually relevant to what was going on, and to have a character that actually interacts with you, understands what’s happening around him, makes eye contact.

I do want to talk about that a lot. I don’t know why we need to make sweet robots, but I want to talk about why they have to be sweet. I like malevolent robots. But, before we get to that, we’re here with Boris Sofman from Anki, which is a robotics company ... I think I’ll call it a robotics, not just a toy company.

[ad]

I’m here with Boris Sofman of Anki, a robotics company, which started off by selling cars; competing race cars, essentially, for kids. Or adults who are like kids. But now, you’ve moved onto other areas. And you were talking about, just before this, this new product called Cozmo, which is an adorable robot — people like to create adorable robots. What’s with that? What’s the point ... You said you’re pointing in a new direction, what does that mean?

The idea is to bring this little character to life. When you combine the emotional aspects with the kind of AI and the kind of understanding to allow him to evolve and understand his environment, you get into a very magical experience that you can’t replicate on a screen. And, one of the things that’s special about this is, it took us a while to think of the right way to approach this problem, because it’s very hard, technically, but it’s even harder in some ways, creatively. And, what we realized is that we had to think of a character coming to life like this as if it was a character from a film.

And so, we literally have an animation studio within a robotics company, with people from backgrounds like Pixar, DreamWorks and other places, animating a physical character, using very, very similar techniques and processes that you would see in an animated film.

So, explain. What does that result in?

What that results in is this little robot character, Cozmo.

How big is it?

He’ll fit in your hand. He has a facial display for his eyes and his emotions, he has a speaker inside his head, and all sorts of sensors in order to interact with his environment. But the biggest part, his brain, is inside of a mobile device, which does the computer vision and all these different things to allow him to interact.

What we’re doing is, we’re using a program called Maya, which is often used for rendering digital characters ... It’s actually used by a lot of movie studios to make their animated films. And so, what you would do is rig up a character with capabilities, and map animations to what you want it to do. We did the exact same thing with Cozmo to match his physical capabilities, except the output is his physical motion and not just a rendered version.
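The retargeting step he describes amounts to sampling authored animation curves at the robot’s control rate and clamping them to what the hardware can physically do. A minimal sketch, with invented joint names and limits:

```python
# Hedged sketch of retargeting an authored animation curve to a
# physical robot. The joint, its limits and the keyframes are made up.
import numpy as np

keyframes = {       # time (s) -> head-tilt angle in degrees,
    0.0: 0.0,       # as an animator might author in Maya
    0.5: 30.0,
    1.0: -10.0,
}

HEAD_MIN, HEAD_MAX = -20.0, 25.0   # hypothetical physical joint limits

def sample(t: float) -> float:
    """Linearly interpolate the authored curve at time t, then clamp
    to the joint's physical range."""
    times = sorted(keyframes)
    angle = np.interp(t, times, [keyframes[k] for k in times])
    return float(np.clip(angle, HEAD_MIN, HEAD_MAX))

# Sample at a 30 Hz control rate; each value becomes a motor setpoint.
setpoints = [sample(t) for t in np.arange(0.0, 1.0, 1 / 30)]
print(setpoints[:5])
```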

Sure.

So now you have all these incredibly rich animations to be able to show his emotional expressions from happy, sad, surprised, bored, angry, curious; all these different spectrums of emotions. And then, that’s when you tie it in to the game design and AI and robotics side, where, if you can imagine this big black box of an emotional and behavioral engine, it takes the inputs from the world and the stimuli and context of what’s happening, and maps that to the right emotions at the right times.

Right.

So if he loses a game, he gets grumpy and goes and sulks in the corner.

Mm-hmm.

Or, if you pick him up and put him on his head, he gets frustrated and kind of flips over. If he sees the edge of a table, he gets scared. And so, these are just examples, but when you tie these in in the right way, it feels like he’s alive. So, he sees you for the first time, he remembers all the people he’s seen, and he’s super happy to see you. And he really wants to play.
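Those examples are exactly the mapping the “black box” engine performs: sensed events in, emotions and response animations out. A toy version, with invented event and animation names, might look like this:

```python
# Toy version of the "emotional and behavioral engine" described above:
# world events map to emotions, which select a response animation.
# The event, emotion and animation names are illustrative only.
from enum import Enum, auto

class Emotion(Enum):
    HAPPY = auto()
    GRUMPY = auto()
    FRUSTRATED = auto()
    SCARED = auto()

EVENT_TO_EMOTION = {
    "lost_game": Emotion.GRUMPY,
    "picked_up_upside_down": Emotion.FRUSTRATED,
    "saw_table_edge": Emotion.SCARED,
    "recognized_known_face": Emotion.HAPPY,
}

ANIMATIONS = {
    Emotion.GRUMPY: "sulk_in_corner",
    Emotion.FRUSTRATED: "flip_over",
    Emotion.SCARED: "back_away",
    Emotion.HAPPY: "greet_excitedly",
}

def react(event: str) -> str:
    """Map a sensed event to an emotion, then to a playable animation."""
    emotion = EVENT_TO_EMOTION.get(event, Emotion.HAPPY)
    return ANIMATIONS[emotion]

print(react("saw_table_edge"))  # back_away
```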

Let me get to ... Because I’m fascinated with robotics. Why do they have to be like that? Why do they have to be like humans? Because humans do humans really well.

Well, he’s emotionally ... Like, he’ll try to approach human ...

Why?

Because there’s an emotional connection that you can form with machines that you wouldn’t have otherwise. Now, I should tell you an interesting example. All of this is kind of alluding to what we’ll chat about: all of this leads to a broader platform that allows robots to function in the home with capability that is actually welcomed. If you look back at some of the research happening at Carnegie Mellon and other places, you have these incredible ...

MIT.

MIT, everywhere. So, amazing research. There’s Intel Labs, there’s all these places that are doing gigantic robotic arms for operating in a kitchen, or in the home.

Sure.

So, you have these huge arms that unload dishwashers and do all these really complicated tasks, but they’re big, they’re menacing, they look like they need to be perfect. And so you can have something that can do the most complicated task 48 out of 50 times, and what you do is remember the two times it failed, because you think it should be perfect, and you’re like, “Why did the robot fail?” And that’s what happens when you have zero personality.

When you have something that has character, it naturally makes you more forgiving. And so, even in the case of entertainment here, if Cozmo’s trying to do something with his environment and then he fails, the fact that he failed, but knows it, and he acts disappointed in himself; it makes him more endearing, and you actually become more forgiving of it, and even more attached. It’s why, when you talk to like an Alexa, for example, even when Alexa doesn’t know the answer, oftentimes, the fact that you think of Alexa as the beginnings of a character makes you more forgiving.

You sort of do. Yeah. You absolutely do. I think you feel more like it’s an assistant, or you start to get angry at it. Like, Siri, I’m always angry with. Siri’s just ...

Yeah, because the closer it is to a machine, the more unforgiving you are, and you want it to be perfect. And so, in a lot of ways, what Cozmo is is the beginnings of an interface with characters, where you have a character- and personality-driven little being that’s going to evolve a huge amount through both software and hardware.

But, I think, one of the reasons we want ... I want to get later into the idea. There’s so many shows out right now with robots and robot rights. I want to get to that idea, whether we should tax robots and how they’re going to hurt the U.S. economy. But, when you think about where robot development is going, ultimately, as nice as they might be ... They might be companion robots, and all kinds of things. They’ve got to be helpful. So what’s the point?

They have to be helpful.

Couldn’t I just hire a grumpy human being who does a bad job? You know what I mean? What’s the point?

In the end, robotics is about function. Entertainment and companionship is a function, and it’s a good proving ground to develop some of these technologies, because ... And of course entertainment can be more forgiving. But, yes. In the end, there’s a need for function, and that’s basically where robotics has broken down, in terms of making it into mass-market consumer form. Because either the price point doesn’t make sense or the capability doesn’t make sense. You know, there’s all these things that have to work. And, one of the enablers of more modern forms of robotics at the sort of price points that we’re at, is that the smart ...

How much is a Cozmo?

So, Cozmo is $179.

So it’s cheap.

It’s cheap, and from a platform standpoint, it’s really capable compared to anything that you would typically have access to at those price points.

Why would I buy it? Your robot, for instance. What would be the reason? Just for amusement?

Right now, it’s entertainment. Kids are certainly a big audience, although 40 percent of our users are adults across both products, so I think just augmenting the capability widens the demographic on its own. But, yes, it’s entertainment. It’s a character that has an appealing emotional breadth, you can play games with him, and then over time, this actually kind of evolves on the software side, and gets more and more capable. And, in fact, it’s also a great platform where we’re using it. The SDK is so robust, it’s being used in universities for education, and by researchers for all sorts of purposes, as well as some work we’re doing down the road for stuff.

But, right now, it’s entertainment.

Right now it’s entertainment.

But, to me, function is really doing stuff.

Right. And this is one of the places where function and practicality kind of need to cross. One of the mistakes that the robotics industry traditionally has made is that they aim for a 15-year goal, and they drill down this cave thinking they can pop out the other side and have this magical solution like Rosie from “The Jetsons,” and it just doesn’t work that way.

That was a good robot.

That was a good robot. Not there yet though.

No. It’s the outfit. The outfit was the best part.

It was.

What was an outfit doing on a robot?

It’s humanizing.

Do you remember ... What was the one that they had, the Will Robinson ... “Warning, warning. Will Robinson.” With that robot.

Oh, gosh. I forgot the name of it.

What show was that?

Eric Johnson: Robbie. “Lost in Space.”

“Lost in Space.”

“Lost in Space.” Yeah. Thank you. I have my geek here. Right here. That had a weird-looking robot also; that was a strange robot. Anyway, so where does it evolve to then? Because entertainment, I don’t think, is enough at all. I mean, it can be, but ...

No. But it’s a really great starting point.

Right.

Because, you can truly release product ...

So you’re comfortable within a robotics environment.

Yeah. And I’ll give you an example, right? So, the sort of computer vision algorithms that you need to be able to understand the environment around you, the pathfinding algorithms to be able to navigate and interact with it, the human interface. All of those things become tools that, if you open up an application that more generally requires you to navigate around a house, give you a much better platform to work off of than starting from scratch. And that sort of leapfrog approach is far better than just going toward, like, a 10-year goal, because, especially in something like robotics, which has such huge dependencies on hardware, and sensing ecosystems, and technological ecosystems, the inability to adapt and quickly iterate and learn is a huge handicap if you aren’t able to adjust along the way. It’s almost like, you know, you think about the way Apple evolved their products; there’s no way a company like Apple could’ve released an iPhone as their first product.

No.

But the learnings along the way kind of end up being reused in some fashion, even if it’s just intuition around the products. And in a lot of the same ways, entertainment becomes an early proving ground with elements that don’t leave entertainment in a binary fashion, but as you start to create a core ... For example, companionship. Something that becomes a fun companion, maybe for a kid, with enough evolution starts to become deeper, almost pet-like in terms of companionship and the functions it can hold, and starts to become a welcome member of the home or the family, on top of which you can then program functional applications.
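The “pathfinding algorithms” Sofman mentions are the kind of reusable tool in question: the same textbook routine that steers a toy car around a track can route a robot around a house map. A generic grid A* sketch, not Anki’s implementation:

```python
# Textbook A* on a small occupancy grid (1 = obstacle). Generic
# illustration of a reusable pathfinding tool, nothing product-specific.
import heapq

def astar(grid, start, goal):
    """Find a shortest 4-connected path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(frontier, (cost + 1 + heuristic((r, c)),
                                          cost + 1, (r, c), path + [(r, c)]))
    return None  # no route around the obstacles

room = [[0, 0, 0],
        [1, 1, 0],   # a wall with a gap on the right
        [0, 0, 0]]
print(astar(room, (0, 0), (2, 0)))
```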

I see. Do you remember [Rags] from “Sleeper”?

[Rags] from “Sleeper”? No.

Oh, you’ve got to see it. It’s a Woody Allen movie called “Sleeper,” and they had a [robot] dog named [Rags].

Yeah.

He was a terrible dog. He was, “Ra! Ra!” Go see it, it’s funny. It’s his version of a ...

Mm-hmm.

And it was completely un-fun. But I think that was his point. So, where does it go from there, though? What happens then? So you’re making, again, a game. It’s still a game, rather than a thing.

Yeah. It’s still a game, but it becomes kind of a core of a lot of these things that will transition. And so, at some point, you will end up ... Imagine the functionality in the home many years down the road.

Yeah.

You have all of the kind of fixed technologies, like your Nest thermostat, and all these other things, right?

Mm-hmm.

You have your voice interfaces, which are important, but right now you’re talking to static boxes. At some point, there are deep functions that require mobility around the house, and even further interface with objects around the house in some physical way.

All right, say that in English. What does that mean?

So, being able to manipulate things. Being able to pick things up, being able to start functionally doing things that require motion and interaction with the environment.

Right. Right. And they’ve had that with Roomba ... Started with that.

Yeah, the Roomba.

That was a military application.

That’s a great ... So, a lot of early military funding, but it’s a great example where you have a well-defined enough problem that you can approach it and put a real good dent in it, in that focused a space.

Mm-hmm.

It’s probably one of the few mass-market kind of examples where that’s worked.

How long does it take to get that? Because that’s the idea of unloading the dishwasher, putting stuff away, doing the wash.

Yeah. We’re not close to the general-purpose kind of cyborg humanoid in the home. There’s reasons why that’s just impractical in terms of both cost and algorithmic complexity at this point. But, you will start to get to things that ... If you have a baseline ... Something that can actually interact and exist in the home and navigate, you can start adding things like security, or monitoring, other kinds of elements like this that actually ... Really logical evolutions of some of ...

Well, that does exist in Nests. You have monitoring, wireless ...

In a more mobile fashion as well as ...

Meaning something wandering around the house.

Yeah, so being able to wander around the house, being able to ... Kind of companionship if you have somebody that needs care, like, you know, the elderly. Or, down the road, other functions. I mean, the key thing is, it’s almost like ... I mean, one way to think of it is almost like your iPhone. The only reason those apps have been so successful is because there’s the foundation of a platform that you’re comfortable having around you at all times.

Mm-hmm.

You probably wouldn’t pay $50 or $100 for a piece of hardware to unlock any of the apps that are currently on your phone.

Right.

But, the fact that you have that baseline opens it up. At some point, robotics will be the same, and part of the problem is that the actual cost of robots is so high that ...

So, talk about that briefly. What do you mean? It’s just, they’re expensive.

They’re expensive. So, sensing is expensive. You know, you have to ... There’s only so many things you can do with just a pure camera. If you want to do laser-based sensing, all these other types, it becomes more expensive. You have to have very precise positioning. So you have accelerometers, and gyroscopes, IMUs and things like that. You have to have a lot of computation to be able to process all this information. If you want to do anything that’s actually more articulated, motors and joints are expensive.
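To make the sensing-cost point concrete: even a basic tilt estimate needs the gyroscope and accelerometer fused together, which is part of why the parts list grows so quickly. A textbook complementary-filter sketch, with made-up numbers:

```python
# Why robots carry both gyroscopes and accelerometers: a complementary
# filter fuses the gyro's fast-but-drifting rate with the
# accelerometer's noisy-but-absolute tilt estimate. Values are made up.
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z,
                         dt=0.01, alpha=0.98):
    """One fusion step: integrate the gyro, then gently correct its
    drift using the tilt implied by gravity on the accelerometer."""
    gyro_angle = angle + gyro_rate * dt           # fast, drifts over time
    accel_angle = math.atan2(accel_x, accel_z)    # absolute, but noisy
    return alpha * gyro_angle + (1 - alpha) * accel_angle

angle = 0.0
for _ in range(100):  # simulated samples from a robot tilted ~0.1 rad
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_x=0.0998, accel_z=0.995)
print(round(angle, 3))  # estimate approaching ~0.1 rad
```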

And it breaks easily.
