https://www.wsj.com/video/experts-break-down-the-self-driving-uber-crash/1E24A9B7-0B7B-4FA6-96BD-AD1889B921C5.html

If you read my blogs on this site, you know I am an enthusiastic supporter of self-driving technology. My timeline is 5-10 years for the first self-driving cars and 15-20 years for most cars to have the technology. But I would not use the technology in production today on the streets of major cities at high speed.

Google has been running cars on the streets of Mountain View, CA for quite a few years now. Its technology is approaching the maturity where, at 25 miles per hour or less, it can operate safely on well-marked city streets. It has gotten into a number of accidents with cars, never with a pedestrian, and those accidents were all at slow speed and caused by the other driver. Tesla has shipped advanced features like lane keeping and speed regulation at higher speeds. I have been critical of Tesla for trying to solve the highway problem first.

At slow speed, any accident is typically far less injurious. My belief is that several things have to be done before self-driving becomes ubiquitous and safe:

1) A LOT MORE SAFETY TESTING

2) Limit speed to under 25 miles per hour

3) Zones must be qualified based on signage, street conditions and weather limitations.

Google’s technology uses no driver at all and has proven safe over more than a million miles. Google (Waymo) claims 5,000 miles between incidents that require a driver intervention. GM’s technology has averaged about 1,200 miles between driver interventions. Uber was seeing 13 miles between interventions. I don’t believe GM’s claims: they count only incidents that would have led to a crash, not the times the driver takes over because of difficult road conditions. That’s just silly.

In my opinion AI cars will eventually be fully self-driving, but a lot of work remains to be done, and technology developed, before this tech is capable of going beyond the limitations of points 1-3 above.

Cities and States should designate regions safe for self-driving

Cities and states should make sure that areas where self-driving is occurring have consistent signage, proven to be visible to self-driving cars from two or more sensors. Tesla puts 8 cameras, 12 ultrasonic sensors and 1 radar on every car, with substantial onboard computing power to process the information from those sensors. It may be desirable for roads to have special devices or markings for self-driving cars to improve their ability to detect stop lights, stop signs, road obstructions or changes in lanes. We might want to use some kind of metallic paint for lane markings and edge markings. We could use simple chip technology, like that in credit cards, to embed machine-readable information about the roadway ahead.
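To make that idea concrete, here is a minimal sketch of the kind of compact, machine-readable record an embedded roadway chip might carry. The field names and the JSON encoding are my own illustrative assumptions, not any real standard.

```python
from dataclasses import dataclass
import json

@dataclass
class RoadwayMarker:
    """Hypothetical machine-readable record embedded in the roadway.
    All field names are illustrative assumptions, not a real standard."""
    marker_id: str             # unique id for this marker
    speed_limit_mph: int       # posted limit at this point
    lane_count: int            # lanes in the direction of travel
    stop_ahead_m: float        # distance to next stop sign/light, -1 if none
    lane_edge_offset_m: float  # lateral distance to the lane edge

    def encode(self) -> bytes:
        """Serialize to the small payload a chip reader could return."""
        return json.dumps(self.__dict__).encode("utf-8")

    @staticmethod
    def decode(payload: bytes) -> "RoadwayMarker":
        return RoadwayMarker(**json.loads(payload.decode("utf-8")))

# A car could cross-check what its cameras see against what the
# roadway itself reports:
marker = RoadwayMarker("mv-castro-042", 25, 2, 80.0, 1.6)
assert RoadwayMarker.decode(marker.encode()) == marker
```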

We should also have a way for cities and states in self-driving areas to notify cars of any deviations in conditions: for instance, if construction or repairs are going on, or if weather conditions dictate slower speeds or avoiding certain roads.
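As a sketch of how a car might consume such a city-issued advisory: the message fields and the adjustment policy below are assumptions for illustration, not an existing protocol.

```python
from dataclasses import dataclass

@dataclass
class RoadAdvisory:
    """Hypothetical advisory a city traffic service might broadcast."""
    road_id: str
    condition: str      # e.g. "construction", "ice", "flooding"
    max_speed_mph: int  # reduced limit while the advisory is active
    avoid: bool         # True if the road should be routed around

def apply_advisory(planned_speed_mph: int, advisory: RoadAdvisory) -> int:
    """Cap the planned speed, or signal a reroute, per the advisory."""
    if advisory.avoid:
        raise RuntimeError(
            f"Reroute required: {advisory.road_id} ({advisory.condition})")
    return min(planned_speed_mph, advisory.max_speed_mph)

# e.g. a construction zone drops the car's planned 25 mph to 15 mph:
print(apply_advisory(25, RoadAdvisory("el-camino-7", "construction", 15, False)))
```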

Car companies should work with regulators to come up with enhancements that would become standards. I think this upgrade process would be cheap and easy for almost all of the country, but it should be done before an area is approved for self-driving. Self-driving cars should detect the zone they are in, and in areas not approved for self-driving they should warn the driver to take over.
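A minimal sketch of that zone check, assuming the approved region is published as a polygon of GPS coordinates (the boundary below is hypothetical; a real deployment would use signed, versioned map data):

```python
def point_in_polygon(lat: float, lon: float, polygon: list) -> bool:
    """Ray-casting point-in-polygon test over (lat, lon) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        if (lon1 > lon) != (lon2 > lon):
            if lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
                inside = not inside
    return inside

# Hypothetical approved self-driving zone (a rough box for illustration):
APPROVED_ZONE = [(37.38, -122.09), (37.38, -122.07),
                 (37.40, -122.07), (37.40, -122.09)]

def check_zone(lat: float, lon: float) -> str:
    if point_in_polygon(lat, lon, APPROVED_ZONE):
        return "self-driving permitted"
    return "WARN: leaving approved zone, driver must take over"

print(check_zone(37.39, -122.08))  # inside the box
print(check_zone(37.50, -122.08))  # outside -> warn driver
```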

Way too little experience

We need more data and experience before we let self-driving go above 25 mph. The Uber car was going 40 mph. The accident seemed to indicate some kind of hardware failure. Experts in Lidar believe the car should have picked up the person crossing the street with her bike. It was night, but one advantage of Lidar is that it can see in the dark because it uses lasers to scan. Someone will have to figure out what the failure was, but the Lidar may have picked her up while the software on the car determined that the person crossing the street wasn’t real. This is definitely a problem with having a single sensor make this type of determination.
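To sketch the point about single-sensor determinations: a conservative fusion rule would refuse to dismiss a lidar return in the car’s path unless an independent sensor also rejects it. The thresholds and structure here are my own assumptions, not Uber’s actual pipeline.

```python
def should_brake(lidar_hit: bool, lidar_confidence: float,
                 camera_sees_object: bool) -> bool:
    """Conservative rule: a lidar return in the driving path is treated
    as real unless independent evidence says otherwise.
    The 0.5 threshold is an illustrative assumption."""
    if lidar_hit and lidar_confidence >= 0.5:
        # Never let a single classifier veto a solid lidar return.
        return True
    if lidar_hit and camera_sees_object:
        # Weak lidar return, but a second sensor agrees something is there.
        return True
    return False

# A pedestrian at night: strong lidar return, camera sees little.
print(should_brake(lidar_hit=True, lidar_confidence=0.9,
                   camera_sees_object=False))  # True -> brake
```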

25 mph should be the highest speed allowed until we achieve near-perfect driving at that speed.

AI needs a lot of data to learn, much more than humans do. It is typical to use millions of images to teach a computer how to read letters or recognize a face. Computers learn through a mathematical technique that doesn’t capture the abstractions that would give them an “awareness.” Computer learning is very rote and literal. Subtle contextual things are not something they can learn easily, if at all.
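A toy illustration of that data hunger, using scikit-learn’s small handwritten-digits set as a stand-in (real perception stacks train on vastly more): the same model is noticeably worse when trained on 50 images instead of 1,000.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load ~1,800 labeled 8x8 digit images and hold out a test set.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=500, random_state=0)

# Train the identical model on small vs. larger slices of the data.
for n in (50, 1000):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:5d} images -> "
          f"test accuracy {model.score(X_test, y_test):.2f}")
```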

We know we can train cars to be extremely reliable if the conditions around them can be made consistent; the more consistent, the better they can be. Therefore, to achieve higher levels of safety we should gather as much data as possible and try to keep the operating experience within a constrained environment.

The cars today assume that conditions on the roads are “standard.” Major changes in road conditions (construction, workers, emergency vehicles, sudden rain, or changes in road surface) are not well understood, and the cars typically “give up” and signal for the driver to take over. The problem with this, as demonstrated in the fatal Uber accident, is that the car may signal this when there is not enough time for even an attentive driver watching every second to assess how to take over safely.

By the time the driver saw the biker, he or she could not have stopped the car in time. It is likely the pedestrian is at fault for this accident. She was crossing outside a crosswalk, in the dark, with no obvious light-reflecting materials and insufficient street lighting. Any pedestrian should have realized it was very dangerous to cross the way she did on a 40 mph road. They were on a straightaway, so the pedestrian should have seen the car as well; but regardless, the car should not have killed the pedestrian, and a human driver might have slammed on the brakes sooner than the AI car did. It’s apparent the car didn’t have its high beams on, because the person crossing appears in the video far too late for anyone to notice, though it is likely the camera recorded the scene darker than it actually was. The car clearly did not think there was an obstruction. This is the same mistake made by the Tesla on the highway when a strange object appeared in front of the car.

The problem with high speed is that the time available to react shrinks and the consequences of any mistake grow steeply with speed: braking distance rises with the square of speed. Whether the car or the other party is in the wrong, higher speed makes adjustment by either much harder. Therefore, it is logical that we restrict speed until we become more confident in the technology. 40 mph is simply too high, especially at night. The problem at night is that visibility is reduced for the pedestrian or other vehicle even if the car itself can see. It also means the cameras on the car cannot provide backup to the Lidar, reducing the car’s ability to double-check the Lidar’s information.
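The numbers behind that: total stopping distance is reaction distance (linear in speed) plus braking distance (quadratic in speed), so going from 25 to 40 mph roughly doubles it. A quick back-of-the-envelope calculation, assuming a 1.5-second reaction time and 7 m/s² of braking deceleration (typical textbook figures, not measurements from the Uber car):

```python
MPH_TO_MS = 0.44704  # miles per hour -> metres per second
REACTION_S = 1.5     # assumed perception-reaction time
DECEL_MS2 = 7.0      # assumed braking deceleration, dry pavement

def stopping_distance_m(speed_mph: float) -> float:
    """Reaction distance (v * t) plus braking distance (v^2 / 2a)."""
    v = speed_mph * MPH_TO_MS
    return v * REACTION_S + v * v / (2 * DECEL_MS2)

for mph in (25, 40):
    print(f"{mph} mph -> {stopping_distance_m(mph):5.1f} m to stop")
# 25 mph -> ~25.7 m; 40 mph -> ~49.7 m (nearly double)
```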

Technology limitations

Tesla doesn’t use Lidar. Lidar is pretty expensive, and I think there is some concern about its ability to track objects’ relative speeds quickly and reliably. The radar technology that Teslas use is faster and not impeded by visibility problems.

https://techcrunch.com/2016/12/28/watch-teslas-autopilot-system-help-avoid-a-crash-with-superhuman-sight/

This video shows how radar is able to see beyond what Lidar could. The Tesla in this video saw beyond the driver’s ability, or a Lidar’s: it calculated that cars ahead were going to collide and took action before its own driver noticed the accident happening. Tesla’s use of 8 cameras means it can see the same thing from different angles, even if one or more cameras has an obstructed view for any reason. We have very limited experience with self-driving under device failures, varied road conditions or vision problems. We don’t know how well image recognition will work with different objects like bikes, or odd unexpected things like a van sideways in the middle of the highway. Image recognition software might think a drawing on the side of a car or on a sign is a rocket on the road and decide to come to a screeching halt for no reason. Or it might treat a strange object as a mistake, fail to recognize it, and ignore it.

Computers are reliable, but we have all talked to voice-recognition self-service support and gotten frustrated because it gets things wrong. The same thing happens all the time with image recognition.

We may need to develop cameras with brushes, like windshield wipers, to clean themselves. We may need to develop cameras with a wider field of vision or the ability to change focus, and cameras with low-light or even IR night vision.

We will have to figure out how to handle multiple devices producing conflicting information and conclusions. NASA had a system for handling conflicting information: when it developed the Space Shuttle with computer navigation, it established a five-computer protocol.

Four identically programmed computers do the same computation. If all of them agree, everything is great. If one computer differs from the others, that computer is assumed to be wrong and is voted out.

If the identically programmed computers split two against two, no majority vote is possible; in that case all four are set aside and the fifth computer takes over. The fifth computer was designed and programmed by an independent group of programmers.

The assumption is that it is almost impossible for two computers to suffer hardware failures simultaneously, so a split points to a programming fault shared by the four identically programmed computers, and the fifth computer’s answer should be used instead. Something like this may have to be engineered for self-driving cars.
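A minimal sketch of that voting scheme, assuming each computer reports one result per cycle (the shuttle’s real implementation was far more involved than this):

```python
from collections import Counter

def vote(primary: list, backup):
    """Majority-vote four identically programmed computers; fall back
    to the independently programmed fifth if no clear majority."""
    result, agreeing = Counter(primary).most_common(1)[0]
    if agreeing >= 3:
        # At most one dissenter: vote it out, keep the majority answer.
        return result
    # A 2-2 split suggests a common programming fault in the primary
    # set, so use the independently developed backup computer.
    return backup

print(vote([42, 42, 42, 41], backup=42))  # one bad computer -> 42
print(vote([42, 42, 7, 7], backup=40))    # split -> backup's answer, 40
```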

Conditions for verifying self-driving at higher speeds

We need a lot more experience to make self-driving cars ubiquitous. Just thinking about this abstractly, I believe the following things have to be tested and verified over millions of miles:

1) Determining objects in the car’s path reliably, from multiple sensors, within the tolerance needed to avoid an accident at the speed the car is rated for.

2) Determining lane markings and street signs, and automatically learning about closures, street work or modifications from external sources.

3) Improving the technology to enable operating safely in different and challenging weather conditions.

Tesla has some good ideas

Tesla has been building self-driving technology into all its cars for over 3 years. For the last year, every car it builds has had enough hardware to do full self-driving, so that all Tesla needs to do is download the software to the car and enable it.

No other manufacturer is as far along. Even more incredible: every Tesla built in the last year has 40 times the compute power of previous Teslas. This means Tesla can do something no other manufacturer can. It has 100,000 cars on which it can run the self-driving software in what I call a “watch mode.”

What this means is that, without any danger to the driver, the Tesla can use its self-driving hardware to watch what’s going on and record when it thought there was a problem and there wasn’t, or when there was one, whether the driver saw it too and what action the driver took. The car can see things it could do and verify the driver did the same thing, or vice versa.

The Tesla could do this without ever modifying the car’s actual operation. It would simply be learning. This information could be streamed back to Tesla, and when situations are especially interesting, extra telemetry could be delivered to Tesla to figure out how to improve the system.
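Conceptually, the loop might look like the sketch below (Tesla has publicly called this idea “shadow mode”). The function names, log format and toy planner are my assumptions, not Tesla’s actual software.

```python
def shadow_mode_step(sensor_frame, planner, driver_action, telemetry_log):
    """Run the self-driving planner passively and compare its decision
    to what the human actually did; never actuate anything.
    All names here are illustrative assumptions."""
    planned = planner(sensor_frame)  # what the software would have done
    if planned != driver_action:     # disagreement = something to learn
        telemetry_log.append({
            "frame": sensor_frame,
            "planned": planned,
            "driver": driver_action,
        })

# Toy run: the planner wants to brake where the driver held speed.
log = []
toy_planner = lambda frame: "brake" if frame.get("obstacle") else "hold"
shadow_mode_step({"obstacle": True}, toy_planner, "hold", log)
print(len(log), "disagreement(s) queued for upload")  # 1
```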

Tesla can do this today using literally 100,000 cars in the field. This is an incredible advantage no other car company comes close to having. I have no doubt Tesla will have the best self-driving, and sooner than anyone else, by 2-3 years or more.

However, even Tesla is not ready to drive a car at 40 miles per hour in self-driving mode like Uber is doing.

Uber is being irresponsible and stupid

I think Uber should suffer a damage assessment of 500 million dollars or more. In my opinion it is incredibly irresponsible to drive a 4,000 lb car down the road at 40 mph given the state of current technology, even with Waymo or Tesla technology, which is vastly superior. The evidence is that Uber’s technology is WAY WAY WAY behind Tesla, Waymo or even GM. There is no way Uber should be testing this technology on public streets, certainly not with real passengers, and it should be punished for doing so.

In my opinion self-driving is inevitable and will be very common in 5-10 years. However, companies trying to jump ahead by testing self-driving before the technology is ready are being irresponsible.
