Autonomous driving: why startup Tesla beats Goliath Mercedes
In the slew of articles about the self-driving features of the Tesla Model S and the Mercedes S-Class (I have not compared them myself), most agree they make your journey safer, less stressful and more enjoyable. Testers almost universally love the feature, even though they still cannot take a nap and tweeting while driving is not encouraged. But which company has the best autopilot? Three comparisons that go beyond fluffy marketing appeared in Autofil.no, The Drive and Wired.
They all agree that the Tesla Model S is better. It’s better at staying in its lane. It communicates better when it’s having trouble. And it’s constantly improving by learning from you and other drivers. In an otherwise gushing review of the Mercedes, a reviewer from Mashable stated: “Mercedes … simply didn’t feel as robust as Tesla’s Autopilot”, and noted the system could disengage with so little notice that you could “suddenly and rapidly find the car drifting out of its lane”.
How can a start-up like Tesla beat a behemoth like Mercedes?
Actually, startups beating incumbents is a story we see over and over again; when innovations are disruptive it is the rule rather than the exception. In research on the transition to renewable energy there is an entire theory devoted to this, called the Multi-Level Perspective, and DRIFT in the Netherlands is a prominent research institute in the field.
But instead of going into boring theory, this blog post will illustrate how innovative start-ups beat incumbents by looking at two gorgeous cars and the exciting new feature of autonomous driving.
Tesla: “morally reprehensible” to delay self-driving cars after first lethal accident
Incumbents are conservative because they want to do what they know and hold on to what they’ve got. They are risk-averse, and their size makes it impossible for them to change in a hurry. They’ve largely written the rules and often like to follow them. The large companies dedicated to fossil fuels and cars are prime examples. I once heard a Shell PR executive say to the audience (after my pitch for renewable innovation): “Remember that the pioneers take the arrows, and the settlers take the land.” He was trying to scare people out of being pioneers.
In contrast, the motto of innovators is to boldly go where no man has gone before, regardless of the arrows. Elon Musk of Tesla would be a prime example even if he weren’t planning missions to Mars. He claims he invested in Tesla knowing that his chance of failure was over 90%. He simply felt that the electrification of transportation needed to be done, and that he could at least show the world that EVs could be sexy. With Autopilot his mission is saving lives and making electric cars cheaper by turning them into taxis that can earn you money.
But recently the first lethal car accident with a car using autonomous driving was officially registered and is currently under investigation. So what did Tesla do?
Tesla was not coy and immediately admitted the Autopilot had malfunctioned: “a high white side of a box truck [that was hard to interpret by the camera as a car] combined with a radar signature that would have looked very similar to an overhead sign, caused automatic braking not to fire.” The malfunction and the driver’s inattention (he was apparently watching a Harry Potter movie and did not activate the brakes either) resulted in the first autonomous-driving death. It was a milestone that many had been fearing for decades.
Tesla got no end of grief. Pundits discouraged investors. Tesla’s supplier of autopilot chips, MobilEye, “filed for divorce” because it feared for its sales to regular car makers. Famous machine learning guru Andrew Ng called it irresponsible to give users a system that works a thousand times and then BAM. Maarten Steinbuch pointed me to an analysis of the accident by robocar guru Brad Templeton. Brad thought the system worked as advertised but was still worse than a human driver. He suggested that the user should be monitored and that Autopilot should be permanently disabled for careless drivers.
Like Google and Volvo, Brad even advocates skipping autonomy level 3 (where the driver is needed some of the time) and going directly to level 4, because level 3 gives drivers a false sense of safety. But he admits that the system needs to be trained by humans to get to level 4, and he has no clear solution to that problem.
I think Brad’s cautiousness is misguided because it leads to more victims. This is a classic trolley problem. Yes, some people will get hurt training the system because they trust it too much. But all the Tesla drivers training the system are in effect volunteers who improve the Autopilot at an accelerated pace. Do the math and I predict you will see that every victim in the coming ten years saves thousands of lives in the ten years after that. And the economic upside amounts to trillions of dollars.
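To make that “do the math” concrete, here is a toy calculation with purely illustrative numbers — none of them come from Tesla or any study. It assumes a flat fatality rate for human drivers and an Autopilot that starts at the same rate but halves it every two years thanks to fleet training:

```python
# Toy model: cumulative fatalities over 20 years, normalized so that
# human driving causes 1.0 fatality per year. Purely illustrative.
human_rate = 1.0          # flat human fatality rate per year
ap_rate = 1.0             # Autopilot starts no better than a human
human_total = ap_total = 0.0
for year in range(20):
    human_total += human_rate
    ap_total += ap_rate
    ap_rate *= 0.5 ** 0.5  # rate halves every two years

print(human_total, round(ap_total, 1))  # → 20.0 3.4
```

Under these (assumed) parameters, early deployment accepts some victims up front but ends up with roughly a sixth of the cumulative fatalities — that is the shape of the argument, whatever the true numbers turn out to be.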
So Elon (predictable as he is) was undaunted by the “divorce” announced by MobilEye and said: “This was expected and will not have any material effect on our plans. MobilEye’s ability to evolve its technology is unfortunately negatively affected by having to support hundreds of models from legacy auto companies, resulting in a very high engineering drag coefficient. Tesla is laser-focused on achieving full self-driving capability on one integrated platform with an order of magnitude greater safety than the average manually driven car.”
Tesla’s head of Autopilot development, Sterling Anderson, amplified the message when he announced that in-house development had been the plan all along, because that way you can move faster. And in case you were wondering whether all this is just talk: Tesla had already hired veteran chip designer Jim Keller (he made Apple’s A4 and A5 chips) and had poached a team from AMD. So they really seem to think they can build MobilEye-class chips and software faster in-house.
Contrary to all the handwringing, the sky did not fall. In fact, Tesla stock hardly dipped and Tesla traders are already back in love. Public opinion was hardly affected. Most telling to me was the reaction of the head of the US National Highway Traffic Safety Administration, Mark Rosekind. Mr. Rosekind is like a surgeon in the sense that the trolley problem is a daily reality for him. His job is to save as many lives as possible, knowing he cannot possibly save them all. And he said: “If we wait for perfect … how many lives might be lost while we wait?”
Let’s wrap up this story about the sense of mission and urgency of innovators with the relevant paragraph from Mr. Musk’s recent “Master Plan, Part Deux”, in which he has fighting words for incumbent robocar developers and car makers urging “caution”: “I should add a note here to explain why Tesla is deploying partial autonomy now, rather than waiting until some point in the future. The most important reason is that, when used correctly, it is already significantly safer than a person driving by themselves and it would therefore be morally reprehensible to delay release simply for fear of bad press or some mercantile calculation of legal liability.”
Are you beginning to see why Tesla is moving faster than Mercedes with autopilot?
“Man vs. machine” or “man-machine synergy”?
The discussion on autonomous driving has long been dominated by “macho men” throwing a tantrum at the thought of the computer taking the glorious task of driving away from them. I call this man-versus-machine thinking. But do you remember car navigation? There were tantrums there too…
I’m no typical macho man in this respect. I have never met someone with less sense of direction than yours truly, and visiting clients by car was one of the more nightmarish parts of my early career. To make matters worse, my ex-wife was incredibly good at it (frequently shaming even the proudest male specimen in this respect), which only increased my sense of insecurity and failure. So when the first car navigation system came along, I heaved a sigh of relief. And when I made a wrong turn it would automatically find me a new route without bitching! Of course it was buggy and incredibly expensive, but hey.
The whole point of being human is that we create synergy with our technology to do things we cannot do as naked apes. It is our relationship with machines that makes us human. We have been cyborgs from the very start. Driving a car is a prime example: in essence a car is a kind of Iron Man suit. And not only has it been more powerful than our muscles from the start, it has also been getting smarter for a while now. We already have suspension that protects us from over- and understeering. We have ABS that protects us from braking too hard. We get warnings about speed traps and about cars in our blind spot. Cameras help us park, and self-parking cars have been among us since 2003. Where do we draw the line? Adaptive cruise control? Lane assist? If computers can beat us at chess and are able to help pilots and surgeons, why would they not be able to help ordinary drivers?
Most readers of this blog know how Maarten Steinbuch claims the car is becoming an iPad on wheels (with the Dutch firm NXP a leading chip supplier). High-tech companies get this much better than traditional car companies. That is why Google presented the first self-driving car and why Apple apparently has a thousand engineers working on one. This video by Singularity University explains how that turns the automotive industry upside down. So Tesla has an advantage in that it is more a Silicon Valley computer company than a car company.
What also helps is that — as an innovator — Tesla has innovative customers who are more willing to engage in innovative ways. Mercedes is afraid to make self-driving too prominent. Maybe because they don’t want to challenge male pride, or maybe because they don’t want to show you when the autopilot is confused. Tesla puts the Autopilot front and center on the display, so the user can see exactly what is going on and when the Autopilot is confused. Just look at the following stills from a Tesla Model S and a Mercedes E-Class while self-driving is engaged.
First the Tesla dashboard. It shows a road with lines on both sides and a car in front. The color of the road and lines, and the representation of the car in front, mean that Autopilot has a firm grip on the situation and will likely react faster than the driver to unforeseen circumstances, like a sudden braking maneuver by the car in front. It is implicitly also an admission of fallibility and a question to the driver: “Do you want to work together with this semi-smart but far from perfect Autopilot system?” This is amplified by the very explicit way in which Autopilot is engaged and disengaged. Tesla is emphasizing “man-machine synergy”.
The Mercedes dashboard hardly shows the autopilot at all. It just shows a tiny steering-wheel icon (which can take on different colors). To me that indicates Mercedes is stuck in “man versus machine” thinking, and/or is less willing to give feedback when the self-driving feature is making mistakes. As Wired puts it: “the interface design doesn’t rise to the challenge of letting its human passengers make the most of that capability”. Some Tesla drivers even claim that this is dangerous because you cannot monitor the autopilot. And all testers have experienced the Mercedes suddenly wanting to change lanes towards oncoming traffic.
Tesla: wireless learning with customers as expert trainers
The final point I would like to make might be the most important difference between Tesla and Mercedes regarding autopilot: Mercedes does not deliver over-the-air updates. The reasoning is probably that Mercedes wants to do extensive software testing before incorporating software into a car. It might also be that they feel updating such basic functionality over the air makes the car more vulnerable to hacking. Whatever the cause, the result is disastrous: a buggy autopilot that will be part of your car for more than ten years. As one indignant tester (who already had experience with Tesla’s over-the-air updates) put it after an unexpected lane change towards oncoming traffic: this cannot stand, it has to be updated!
Tesla goes about it very differently. They see their customers as experts who are training the Autopilot, able to invest more time than Tesla employees ever could. They tell their customers that the Autopilot is still in beta (a label they will remove when it is ten times safer than the average human driver) and that customers are the beta testers who must train it to perfection. Every time the Autopilot is overruled by a driver (e.g. because the driver feels it drives too slowly or too fast, or brakes too late), the information is uploaded to Tesla. Automated machine learning algorithms determine how the Autopilot software could be improved. When improvements look promising they are tested and transmitted back to the cars. It is as if all Tesla customers are giving the Autopilot driving lessons.
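The feedback cycle described above can be sketched in a few lines of code. Everything here — the event format, the batch size, the update rule — is illustrative, not Tesla’s actual pipeline; the only point is the shape of the loop: override, upload, retrain, push.

```python
from dataclasses import dataclass, field

@dataclass
class FleetLearner:
    """Toy sketch of a fleet-learning server (hypothetical names)."""
    model_version: int = 1
    override_log: list = field(default_factory=list)

    def record_override(self, situation: str, driver_action: str) -> None:
        # Step 1: a driver overrules Autopilot; the event is uploaded.
        self.override_log.append((situation, driver_action))

    def retrain_and_push(self, batch_size: int = 100) -> bool:
        # Steps 2-3: once enough corrections accumulate, retrain on
        # them, validate, and ship the new software back over the air.
        if len(self.override_log) < batch_size:
            return False
        self.override_log.clear()
        self.model_version += 1
        return True

fleet = FleetLearner()
for _ in range(150):
    fleet.record_override("high white truck side", "driver braked early")
fleet.retrain_and_push()
print(fleet.model_version)  # → 2: a new model version goes out
```

The real system would of course involve actual machine learning and validation, but the economics are in this loop: every driver override is free training data.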
The metric Elon Musk uses with regard to regulatory approval reflects this idea of fleet learning: “We expect worldwide regulatory approval will require something on the order of 10 billion km [of autonomous driving]. Current fleet learning is happening at just over 5 million km per day.” In less nerdy terms: our customers teach the Autopilot system every day, and at the current pace it will take just over five years before every regulator has to admit that it is safer. And even if it took a trillion km of supervised driving to make Autopilot perfect, a fleet growing 50% a year would get there in less than 15 years.
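A quick back-of-envelope check of this arithmetic, using only the figures from the quote plus the 50%-per-year growth assumption above:

```python
# Figures from the Musk quote; 50%/year fleet growth is assumed.
target_km = 10e9        # km of autonomous driving regulators may require
daily_km = 5e6          # current fleet learning per day
years_needed = target_km / daily_km / 365
print(round(years_needed, 1))  # → 5.5 years at the current, flat pace

# Even a trillion km is within reach if the fleet grows 50% a year:
yearly_km = daily_km * 365     # ~1.8 billion km in the first year
total, years = 0.0, 0
while total < 1e12:
    total += yearly_km
    yearly_km *= 1.5
    years += 1
print(years)  # → 14, i.e. less than 15 years
```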
The metric implies learning by doing and takes you straight into learning-curve territory: one of the most important mechanisms driving the disruptive transition to renewable energy. Learning curves have brought the price of solar cells down from $77/W in 1977 (easy to remember, right?) to $0.36/W in 2014. Continued learning might take the price of solar electricity down to 2 cents/kWh by 2050.
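From just those two data points one can back out the implied compound decline rate — a sketch only, since real learning curves are usually stated per doubling of cumulative production rather than per calendar year:

```python
# Implied average annual price decline of solar cells,
# from the two figures quoted above.
p_1977, p_2014 = 77.0, 0.36   # $/W
years = 2014 - 1977
annual_decline = 1 - (p_2014 / p_1977) ** (1 / years)
print(f"{annual_decline:.1%}")  # → 13.5% cheaper per year, on average
```

Sustained over decades, a compounding decline of that magnitude is what makes forecasts like 2 cents/kWh plausible.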
Admitting that your car is buggy but will improve if your customers are willing to train it seems too bold a move for a company like Mercedes. But for Tesla customers it is already paying dividends. As one driver describes: “I observed [Tesla Autopilot] navigate the same road months apart, and the improvements weren’t subtle. It slowed to below the speed limit, and I trusted it, because the volume of Teslas in the area suggested Fleet Learning had taught it a lesson this New Yorker didn’t know, and didn’t want to learn the hard way.” And another writes: “I just wanted to let you know that I think my car probably saved the life of a pedestrian last night … Thanks and thanks for letting customers use auto pilot even though it is in beta.”