
Paving the Way towards Safer Roads for All

Originally appeared in CTECH

Mobileye co-founder and CEO Amnon Shashua explains the company’s model for safe driverless cars

We architected the Responsibility-Sensitive Safety (RSS) model as a catalyst for cross-industry discussion among industry groups, car manufacturers and regulatory bodies. Since its publication, my co-authors and I have received much positive feedback, but the model has also raised some very important questions, which has been our goal from the beginning of this project.


One critical line of questioning centers on the idea that human judgment involves legal, safety and cultural considerations, while the RSS model seems focused only on the legal aspect. The notion that RSS is designed to make manufacturers immune to liability is a misunderstanding that demands further explanation.
Math, not morals: how RSS formalizes driving dilemmas

Let’s start out by reaffirming what RSS is. RSS formalizes the common sense of human judgment under a comprehensive set of road situations. It sets clear definitions for what it means to drive safely versus to drive recklessly. With human drivers, the interpretation of responsibility for collisions and other incidents is fluid. Driver error or, quite simply, blame is applied based on imperfect information and other factors interpreted after the fact. With machines, the definitions can be formal and mathematical. Machines have highly accurate information about the environment around them, always know their reaction time and braking power, and are never distracted or impaired. With machines, we do not need to interpret their actions after the fact. Instead, we can program them to follow a determined pattern—as long as we have the means to formalize that pattern.
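To make “formalize that pattern” concrete, consider the minimum safe longitudinal distance between a rear vehicle travelling at speed v_r and a front vehicle travelling at v_f, as defined in the published RSS paper (the notation below follows the paper in spirit; the specific parameter values are policy choices, not fixed by the model):

\[
d_{\min} = \left[\, v_r \rho + \tfrac{1}{2}\, a_{\max,\mathrm{accel}}\, \rho^{2} + \frac{\left(v_r + \rho\, a_{\max,\mathrm{accel}}\right)^{2}}{2\, a_{\min,\mathrm{brake}}} - \frac{v_f^{2}}{2\, a_{\max,\mathrm{brake}}} \right]_{+}
\]

Here \rho is the response time, a_{\max,\mathrm{accel}} is the worst-case acceleration the rear vehicle might apply before it reacts, a_{\min,\mathrm{brake}} is the braking it is guaranteed to deliver afterward, a_{\max,\mathrm{brake}} is the hardest braking the front vehicle might apply, and [x]_+ means max(x, 0). A rear vehicle that always maintains at least d_min can come to a stop without collision even if the front vehicle brakes as hard as physics allows.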

At its core, the RSS model is designed to formalize and contextualize today’s driving dilemmas: notions of safe distance and safe gaps when merging and cutting in; which agent cuts in and thus assumes responsibility for maintaining a safe distance; how the right of way enters into the model; how to define safe driving with limited sensing (for example, when road users are hidden behind buildings or parked cars and might suddenly appear); and more. Clearly, human judgment includes avoiding accidents, not merely avoiding blame. RSS attempts to build a formal foundation that sets all aspects of human judgment in the context of driving, with the goal of establishing a formal “seal of safety” for autonomous vehicles.
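As one illustration of the limited-sensing dilemma, here is a minimal worst-case sketch in Python. Everything in it (the names, the parameter values, the simplified geometry) is an assumption for illustration, not Mobileye’s implementation: the idea is simply that an occluded region is treated as if a pedestrian could step out of it at any moment, and the AV passes only at a speed from which it could still stop in time.

# A minimal sketch of safe driving past an occlusion, under assumed dynamics.
# All names and numbers are illustrative; this is not Mobileye's implementation.

RHO = 0.5         # AV response time in seconds (assumed)
A_BRAKE = 4.0     # braking deceleration the AV is guaranteed to deliver, m/s^2 (assumed)
V_PED_MAX = 2.0   # worst-case speed of a hidden pedestrian, m/s (assumed)

def time_to_stop(v_av: float) -> float:
    """Seconds until the AV is fully stopped: response delay plus braking time."""
    return RHO + v_av / A_BRAKE

def may_pass_occlusion(v_av: float, lateral_gap: float) -> bool:
    """Pass a blind spot at speed v_av only if a worst-case pedestrian stepping
    out right now could not cross the lateral gap to the AV's path before the
    AV, braking on detection, has come to a complete stop."""
    worst_case_incursion = V_PED_MAX * time_to_stop(v_av)
    return worst_case_incursion < lateral_gap

# At 8 m/s with a 3 m gap to the parked cars, the worst-case pedestrian can
# cover 2.0 * (0.5 + 8 / 4) = 5 m before the AV stops, so the AV must slow down.
print(may_pass_occlusion(8.0, 3.0))  # False
print(may_pass_occlusion(2.0, 3.0))  # True: incursion is 2 m, gap is 3 m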

RSS = Fewer accidents on roads

Let’s continue by stating what RSS is not. RSS does not allow the autonomous vehicle (AV) to make judgments, even when the AV has the right of way, that cause a collision. On the other hand, RSS does allow an AV to perform an illegal maneuver, say crossing a solid white line to escape a collision, or to proceed around a double-parked vehicle to avoid danger. What it does not allow is for the AV to take non-cautious actions that would put it at risk of causing a separate collision.
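A minimal sketch of that rule in code, assuming hypothetical predicates that a real planner would have to supply (none of this is Mobileye’s API):

from dataclasses import dataclass

# Sketch of the constraint described above. The three fields are hypothetical
# stand-ins for real perception and planning queries.

@dataclass
class Assessment:
    is_legal: bool            # does the maneuver obey traffic law?
    escapes_collision: bool   # does it avoid a collision another agent is causing?
    creates_new_danger: bool  # could it cause a separate collision?

def is_permitted(a: Assessment) -> bool:
    # Nothing that risks causing a separate collision is ever permitted;
    # an illegal maneuver is permitted only as an escape from a collision.
    if a.creates_new_danger:
        return False
    return a.is_legal or a.escapes_collision

# Crossing a solid white line to escape a collision is permitted...
print(is_permitted(Assessment(False, True, False)))  # True
# ...but not if the crossing itself would endanger traffic in the next lane.
print(is_permitted(Assessment(False, True, True)))   # False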

The RSS model does not allow an AV to mitigate one accident with another, presumably less severe one. In other words, when escaping a collision caused by a human driver, the AV may take any action (including violating traffic laws) as long as that action does not cause a separate accident. This constraint is appropriate because any judgment of accident severity is subjective and might miss hidden, critical variables, such as a baby in the back seat of the seemingly “less severe” accident.

Nevertheless, if society desires to allow mitigating one collision with another under certain conditions, this can be added to the RSS formalism under a notion of “blame transitivity,” where responsibility for the complete chain of incidents would be assigned to the agent that started it. We chose not to include this possibility in our model, but it can be done.

The common-sense notion that the “right of way is given, not taken” is also part of the formalities of RSS. Consider the example of a car crossing an intersection: a green light provides a legal right of way for the vehicle crossing, but there is another vehicle blocking the junction (say the other vehicle ran a red light). In this case, RSS does not give the AV the right to hit the vehicle blocking its way. The AV would be at fault under RSS.
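In code the principle is just a conjunction rather than a single legal flag; a toy sketch with assumed perception inputs:

# "Right of way is given, not taken": legal priority alone never authorizes
# motion. Toy sketch; the two inputs are assumed perception outputs.

def may_enter_intersection(has_legal_priority: bool, path_is_clear: bool) -> bool:
    # A green light is necessary but not sufficient: if another vehicle blocks
    # the junction, even one that ran a red light, the AV must still yield.
    return has_legal_priority and path_is_clear

print(may_enter_intersection(True, False))  # False: green light, blocked junction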

A software state of mind

Logically, to be in a position to criticize the RSS model, you would need to find an accident scenario where the determination of blame under RSS disagrees with “common sense” human judgment. We have not found such a scenario, even after going through the National Highway Traffic Safety Administration’s crash typology study, which groups 6 million crashes into 37 scenarios covering 99.4% of them. They all fit the RSS model, and we will publish the analysis as part of the continued open sourcing of RSS.

Over time, as we collaborate with industry peers, standards bodies and regulators, we will surely discover more scenarios, match them to RSS and, if necessary, update the model—just like human judgment sometimes needs an update.

Bottom line: we must convince the industry that software can always make the safest decision. For a model to be useful, one must show that it is possible to program software that never causes accidents while still maintaining a normal flow of traffic.

This is hardly trivial. One needs to prove that the model does not suffer from the “butterfly effect,” where a seemingly innocent action in the present unfolds through a chain of actions into a catastrophic event. For example, imagine a scenario where an aggressive merge causes the car behind to brake and swerve into another lane, causing a collision.
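One way to rule out such chains is an inductive argument: require every command to leave the AV in a state from which a fixed fallback, such as braking at a guaranteed rate, is still safe. If the invariant holds now and every command preserves it, no sequence of commands can end in a collision the AV causes. A toy sketch of that filter under assumed one-lane dynamics (the names and numbers are mine, not Mobileye’s):

# Toy sketch of an inductive safety filter against "butterfly effect" chains.
# Dynamics, names and numbers are assumptions, not Mobileye's implementation.

RHO = 0.5        # response time, seconds (assumed)
A_BRAKE = 4.0    # guaranteed braking deceleration, m/s^2 (assumed)

def stopping_gap(speed: float) -> float:
    """Distance needed to stop from `speed`: response-time travel plus braking."""
    return speed * RHO + speed ** 2 / (2 * A_BRAKE)

def invariant_holds(gap: float, speed: float) -> bool:
    """Safety invariant: the AV can still stop within the gap ahead of it."""
    return stopping_gap(speed) <= gap

def pick_speed(candidates, gap_after):
    """Issue the fastest candidate speed whose next state still satisfies the
    invariant; `gap_after(v)` predicts the gap if speed v is chosen. If no
    candidate qualifies, fall back to braking (speed 0 stands in for it)."""
    safe = [v for v in candidates if invariant_holds(gap_after(v), v)]
    return max(safe) if safe else 0.0

# With a predicted 20 m gap, 10 m/s needs 5 + 12.5 = 17.5 m to stop, so it passes;
# 15 m/s would need about 35.6 m and is filtered out.
print(pick_speed([5.0, 10.0, 15.0], lambda v: 20.0))  # 10.0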

The devil is in the details

We published the RSS model to evoke debate, discussion and exploration—all vital pathways to the right solution. The sad reality is that there are no alternatives to the RSS model right now. So, in the absence of a clear model, what is the industry to do? Simply resort to a “best practice” position? That would devolve into a “my AV has more sensors than yours” or a “my testing program included more miles than yours” argument. Such quantitative claims may protect AV developers in a world with no clear model for evaluating safety, but they do not guarantee safety. Worse, they will lead to AVs that are over-engineered and too expensive to deliver flexible, affordable, ultra-safe, on-demand transportation to the general population and to underserved communities—the elderly and the disabled, for example—who will benefit the most.

It is not enough for us alone to adopt the RSS model into our own AV technology. For true safety assurance, we will require transparency and society’s acceptance of how human judgment is incorporated into AV decision-making. Our belief is that safety, measured in collisions caused by a properly engineered AV, can be improved 1,000-fold compared with human-driven vehicles.

To prepare the landscape for the successful deployment of AVs, many issues need clarification, issues that go far beyond technological innovation or comparisons of one company’s products with another’s. We are putting a stake in the ground in an attempt to drive the industry to agree that there is a definitive need to formalize the rules of judgment, responsibility and fault in order to realize the massive benefits to society. Far from a system designed to avoid liability, RSS is an innovative model intended to enable AVs to perform to the highest safety standards.

Professor Amnon Shashua is senior vice president at Intel Corporation and the co-founder and CEO of Jerusalem-based Mobileye, an Intel company.
