written by JoseQ on April 18, 2020
Tesla Gets Smarter


While browsing through the interwebs, I came across this article from the electric transport website Electrek discussing how Tesla's acquisition of a company called DeepScale is starting to pay off. DeepScale's focus has been developing power-efficient deep learning neural networks, or in simple terms: portable artificial brains. This got me thinking about a huge divide still very much present in the future of self-driving, and the many misconceptions I often hear when talking about it with friends, or even when reading about it. So let's take a look!

I'd like to alleviate some of the fears that the tech "unsavvy" may have about self-driving vehicles. While there are, of course, technical classifications for the levels of car autonomy, the only question the average person keeps asking is: "Is it safe?" But before we consider that, I propose that the actual question should be this: "Is it safer?"

If humans require a 100% guarantee of safety when getting into an autonomous vehicle, why do we not require it when getting into a non-autonomous one? We know for a fact that there is no 100% guarantee of safety every single time we get into a car, whether we are driving or simply riding as a passenger. We know this when we get onto a bus, a train or a plane. As aircraft passengers, we were never really asked: "Hey, is it OK if we use autopilot during this flight?" That transition happened under our noses. Of course there are a lot fewer pilots than there are car drivers, but they had to get over that hump at some point. Are airplane autopilot systems 100% safe? No, they're not. But we learned to live with that, because it is safer than the alternative. So how do we do the same with cars?

Consider the things you know about computers. Computers are fast. They can make calculations in a fraction of a second that the average person is entirely unable to do, and even a mathematician would need pen and paper... and hours. Everyone is fully aware of this incredible capacity. Was this always true? No. At first they were slow, and just like with automated vehicles, people were hesitant to trust them. In some cases, the problems computers were trying to solve were deemed "impossible". Where we are today with self-driving technology is a lot like one famous problem that was labeled "impossible" to solve back in the 1940s.

Much like with the problem the Allies faced in World War II, there are a lot of naysayers who think self-driving can never be solved. Their main argument: "there is a nearly infinite number of circumstances in which a driver can find themselves." While I firmly believe we can crack even that code, I propose it's not even necessary to get there. If we, as humans, needed to be able to react perfectly to all of these different situations before being issued a driver's license, how many licenses do you think would have been issued by now? I'd say that number is close to zero. So what does a self-driving computer actually need to be able to do?

Going back to my earlier premise, it just needs to be "safer" than the average human. When you consider what we humans have to work with, this is a relatively easy bar to clear. We manage to drive with two cameras (eyes, both of which face the same direction) that black out momentarily every few seconds (blinking). The two cameras can rotate slowly and use mirrors to see a minimized representation of what's behind the vehicle. The two cameras are also tied to a computer (your brain) that is easily distracted, sometimes insists on not looking at the road, and has varying levels of performance depending on how much rest it has had recently, its age and its experience. At most, it will have 40-50 years' worth of experience before its performance inevitably starts to degrade.

Conversely, a driving computer pays attention to the road (front, back AND sides!) 100% of the time, and it is able to detect AND react to surprise changes in the vehicle's path with incredible speed. This computer cannot be distracted and has no interest in texting or using its cell phone while driving (or even while stopped). This computer is never sleep deprived, drunk, too old or too young. It will have many lifetimes' worth of driving experience, and it will only get better with time. Much like in the math example above, we will eventually be no match for the driving computer.

Will this yield 100% safety? No. We will probably never get there. Computers and programs do fail. Yet they fail a lot less often than we make mistakes. While some of us pride ourselves on being great drivers, most of us have been in at least one fender bender. Let's say computers get to the level of the "average" driver. Would you say the average driver has zero accidents? No one would say that. Yet even at that relatively modest level of artificial intelligence, we would simply maintain the same number of accidents we have now. So why do it then?

Imagine never having to drive. Never ever. You get in your car, tell it to drive you to work, the store, the airport, and you can take a nap while you get there. You can eat a meal, check your e-mail, do your make-up, or anything else you want, with no more worry than you have today. And that's just at the "average" driving level, which I would say is pretty terrible! In this scenario we haven't saved any lives, but we have already gained a lot of free time. According to this study, Americans spend an average of 18 days a year driving. Can you imagine having an extra 18 days of vacation each and every year? Is it worth it now?
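For a rough sense of scale, here's a minimal back-of-envelope sketch of what 18 days of driving per year works out to per calendar day. The study itself isn't reproduced here, and whether its "day" means a full 24-hour day or a 16-hour waking day is my assumption, so both readings are shown:

```python
# Back-of-envelope: what "18 days a year behind the wheel" means per calendar day.
# Assumption: the study's "day" could be a 24-hour day or a 16-hour waking day,
# so both interpretations are computed.

DAYS_DRIVING_PER_YEAR = 18
CALENDAR_DAYS = 365

for label, hours_per_day in [("24-hour days", 24), ("16-hour waking days", 16)]:
    total_hours = DAYS_DRIVING_PER_YEAR * hours_per_day
    minutes_per_calendar_day = total_hours * 60 / CALENDAR_DAYS
    print(f"{label}: {total_hours} hours/year, "
          f"about {minutes_per_calendar_day:.0f} minutes of driving per calendar day")
```

Either way, it lands somewhere between roughly 45 and 70 minutes of driving every single day, which is exactly the chunk of time the paragraph above is talking about getting back.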

Then let's say we are able to get to the 80th percentile in driving greatness. Now we are not only adding a couple of weeks of free time to our yearly calendar, but also saving lots of stress, lots of money and lots of lives. That fender bender you were going to have this year doesn't happen. That kid who suddenly jumps in front of your vehicle now gets to live. Grandma can get in a car to go to the store without endangering everyone around her. You can feel safer letting your teenager "drive" themselves to school. After all, statistically, the computer drives better than four out of five of you. If this made sense at the "average driver" level, at the 80th percentile it's a no-brainer.

And what if they rank very close to the top percentile? It's definitely possible. The first generations won't be there yet, but they will get there. We just have to get over this fear and allow them to be developed. There will be accidents. There will be fatalities. But we have plenty of those with regular drivers every day. All we need to do is make sure there are fewer of them.

Will Tesla be the first one to get to "safer"? With the largest fleet able to feed its neural net, it sure seems like it. Rest assured there are a lot of companies driving toward the same goal, and there's a lot of money to be made from it. Elon pitches robotaxis as a given this year; I'm not so convinced it'll happen that quickly, especially with certain microscopic distractions. Even then, it's just a matter of time.

tags:  AI   Auto   Intel   Self Driving   Tesla 


last update April 18, 2020
© 2020 JoseQ.Com LLC