The goal of this project is to develop an advanced driver-assistance system (ADAS) that can detect road edges and determine travel lanes on roads that don't have lane lines. Furthermore, this project aims to do so in a safe, verifiable manner.
Practically everyone developing autonomous vehicles says they will be safer; however, there have been some notable accidents involving autonomous vehicles that highlight just how far we have yet to go.1 While these accidents are terrible, by some metrics autonomous vehicles already perform better than the average driver. Some experts argue that so long as autonomous vehicles are safer than the average driver, deploying them will result in fewer injuries and deaths overall, and that they should be rolled out quickly rather than letting perfect be the enemy of good.2
As of 2020, one problem is that we do not yet have an industry-wide accepted standard for how to measure safety or what constitutes an acceptable level of safety. There are some early provisional documents, but nothing has yet been finalized.3,4 One of the few existing standards for safety-related systems in vehicles that could apply to autonomous vehicles is ISO 26262. While this standard does not cover many of the possible failure modes, it is a good starting point to develop a system from, as it will likely be more difficult to attain the required ASIL than to meet whatever standard is eventually developed.
For certification, ISO 26262 generally requires showing how, and how frequently, the system will fail for any given input and environment. This usually requires explainable models and a well-defined environment to show how the system will respond or fail in production. The other method for certification is a proven-in-use argument, which shows how the system performs or fails over many years of use. The proven-in-use argument is not meant for certifying new products but for existing products that were developed before the standard was implemented.
While the original idea was to develop this product to the standard, i.e., with explainable and robust models, the further development has progressed and the deeper the standard has been explored, the less likely it seems that any system of this kind can be certified under current standards. The effectively infinite range of inputs makes the standard system qualification process practically impossible. Many of the companies in this space appear to have understood this and have decided not to pursue a safety certification via system qualification.
Perhaps their goal is to later provide a proven-in-use argument to achieve certification for unproven or not-currently-provable techniques like deep learning. For an assessor to accept this path for a system developed after the standard was created, a proven-in-use argument has some requirements that may be difficult to overcome. To present a successful argument, there must be extensive information about the device's failures and the situations in which they occurred. This usually requires precise and accurate tracking of the units in question as well as the data about each incident. This would likely be extremely difficult to execute for a consumer product, especially since, historically, vehicle failures are often repaired outside of first-party channels.
Driver-assistance systems are in a strange space. They may improve driver safety, but they are not safety devices; rather, they are convenience devices. Some manufacturers do not make this distinction clear, and many operators do not understand it, sometimes with deadly results.1 While operator education may reduce system misuse, it will certainly not eliminate it, and the best solution would be to create assistance systems that are safe in their own right. To do this under current regulations, state-of-the-art systems that use AI must have a way to translate or transition from learned networks to something explainable and auditable. This is currently not possible with state-of-the-art vision systems, which are based on deep neural networks.
In order to retain the incredible effectiveness of AI methods like deep learning, the only other solution is to change regulation around what is considered safe. This would give manufacturers of such systems a clear path so that they become willing to shoulder liability for accidents that occur while the systems are active. Without these regulations, ADAS will remain convenience features designed to shift as much legal risk to the operator as possible. These regulations would largely be shaped by how risky the general public perceives the systems to be. Recent research suggests that autonomous vehicles must be 4 to 5 times safer than the average driver to be considered safe by the general public.5 From there, safety requirements could be gradually increased until systems used in practice achieve failure rates similar to those required for certification.
This system is still in very early development and there is no definitive timeline yet. The working principles of the system are demonstrated here. An initial proof of concept has been developed, and work is progressing on a prototype. The hardware for this prototype has been selected, but due to the difficulty of acquiring good ground truth data, development will primarily be done in a simulator. Most simulators are designed for the development of highway and urban systems, so finding one that covers this system's intended niche is tricky. The current plan is to develop maps and modifications for the CARLA Simulator to facilitate this use case.
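As a rough sketch of what that simulator workflow might look like, the snippet below uses CARLA's Python API to load a map, spawn an ego vehicle, and attach a LIDAR sensor. The map name `RuralSnow01`, the sensor parameters, and the output path are placeholders for this project's eventual custom assets, not final values:

```python
import carla

# Connect to a running CARLA server (default host/port assumed).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)

# "RuralSnow01" is a placeholder name for the custom rural/snow maps
# this project plans to build; any stock town map would also work.
world = client.load_world("RuralSnow01")
blueprint_library = world.get_blueprint_library()

# Spawn an ego vehicle at the first recommended spawn point.
vehicle_bp = blueprint_library.find("vehicle.tesla.model3")
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach a LIDAR sensor; these attribute values are illustrative, not the
# prototype hardware's actual specs.
lidar_bp = blueprint_library.find("sensor.lidar.ray_cast")
lidar_bp.set_attribute("range", "50")
lidar_bp.set_attribute("rotation_frequency", "10")
lidar_bp.set_attribute("channels", "32")
lidar_transform = carla.Transform(carla.Location(x=0.0, z=2.0))
lidar = world.spawn_actor(lidar_bp, lidar_transform, attach_to=vehicle)

# Save each sweep to disk for offline experiments against ground truth.
lidar.listen(lambda data: data.save_to_disk(f"_out/{data.frame:06d}.ply"))
```

The same script structure should carry over to the modified maps once they exist, since only the map name and sensor placement would change.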
A lot of work has also gone into determining whether the method could conform to ISO 26262 and to other, more appropriate standards which the federal government and industry working groups are developing. Furthermore, as this industry matures, especially with the development of new sensors, a lot more work will have to be expended on technology assessment.
These are the things considered necessary for an MVP (in roughly the order they'll be handled):
These are some nice to haves:
See the motivation.
The stereo camera has trouble with the lack of texture, but LIDAR works really well. Due to the high albedo (reflectivity) of snow, LIDAR actually performs better in snow. Asphalt, however, has a low albedo, so as conditions improve, the LIDAR becomes less effective. This can be used to discern the road surface from snow banks more effectively. Before the snow is plowed, if you find it difficult to see the edges, this system probably will too. Once the snow is plowed, the problem becomes extremely easy with LIDAR. It's like driving down a road lined with jersey barriers; the borders are obvious and well defined.
An example of the jersey barrier-esque plowed snow:
The color filter matches pretty much all snow surfaces, but the LIDAR would clearly see that the surfaces at the edge sit at extremely steep angles and exclude them. The crumbled snow near the edges of the snowbank may prove to be a problem for the system. Flatness tolerances may have to be speed dependent.
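As a minimal sketch of how these cues might combine, the function below labels LIDAR returns as drivable surface versus snow. The thresholds are made up, and the moving-average flatness test is a crude stand-in for a proper local plane fit; it only illustrates the idea of an intensity (albedo) cue plus a speed-dependent flatness tolerance:

```python
import numpy as np

def drivable_mask(points, intensities, speed_mps,
                  snow_intensity_thresh=0.7, base_height_tol=0.05):
    """Rough labeling of LIDAR returns as drivable surface vs. snow.

    points      -- (N, 3) x, y, z in the vehicle frame, meters
    intensities -- (N,) normalized return intensity (snow reflects strongly)
    speed_mps   -- vehicle speed, used to tighten the flatness tolerance
    """
    # Illustrative speed-dependent tolerance: at higher speeds, demand a
    # flatter surface before calling it road.
    height_tol = base_height_tol / max(1.0, speed_mps / 10.0)

    # Crude flatness test: compare each point's height to a moving average
    # of its neighbors ordered laterally (a stand-in for a local plane fit).
    order = np.argsort(points[:, 1])
    z_sorted = points[order, 2]
    kernel = np.ones(15) / 15.0
    z_local = np.convolve(z_sorted, kernel, mode="same")
    flat = np.empty(len(points), dtype=bool)
    flat[order] = np.abs(z_sorted - z_local) < height_tol

    # High-intensity (high-albedo) returns are treated as snow; the steep,
    # crumbled snowbank edges should instead fail the flatness test.
    snow = intensities > snow_intensity_thresh
    return flat & ~snow
```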
This will mostly be handled by integration with GPS/maps. The system currently only maintains a locally referenced map, so it has no sense of which road is the correct one to take. GPS provides a globally referenced map, but only to a certain precision, at which point the vision system can take over. If turns are blocked by something like another car, this can largely be handled by maintaining the proper distance from the vehicle in front or slowing down as the turn approaches to allow cars on the other side of the road to move out of the way.
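A toy sketch of that handover follows. It assumes a simple flat-earth projection into the local frame and uses the GPS fix's reported horizontal accuracy as the switching criterion; both the projection and the decision rule are illustrative assumptions, not the final design:

```python
import math

# Rough meters per degree of latitude; adequate for a small local map frame.
M_PER_DEG_LAT = 111_320.0

def gps_to_local(lat, lon, ref_lat, ref_lon):
    """Project a GPS fix into the locally referenced map frame (meters).

    Uses a simple equirectangular approximation around the reference point,
    which is fine for the short distances the local map covers.
    """
    dx = (lon - ref_lon) * M_PER_DEG_LAT * math.cos(math.radians(ref_lat))
    dy = (lat - ref_lat) * M_PER_DEG_LAT
    return dx, dy

def choose_guidance(dist_to_turn_m, gps_accuracy_m):
    """Pick which subsystem drives the turn decision.

    Far from the next mapped turn, the global GPS route dominates; once the
    remaining distance is within the GPS fix's own uncertainty, the locally
    referenced vision map takes over to place the actual turn.
    """
    return "vision" if dist_to_turn_m <= gps_accuracy_m else "gps"
```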