It's probably fair to say that the concept of self-driving cars is one of the most controversial subjects in the automotive industry. Although the idea is that autonomy equals safety, with an ultimate goal of accident-free roads, the notion of a car being capable of the same split-second decision-making as a human, from both a safety and an ethical point of view, seems far-fetched.
More knowledge of how robot cars can be made to work would help, and a project called D-Risk offers an insight. The project aims to compile the world's largest library of real-life near-misses, known in the world of autonomous vehicle development as 'edge cases'. For autonomous vehicles to cope with the weirdest of scenarios – ones a human takes at face value and deals with on the fly – they will need prior experience of scenarios like them.
This is where the difference between artificial intelligence (AI) and merely programming a computer to perform a series of tasks is important. AI involves machine learning – something that a straightforward computer, even a very powerful one, isn't able to do. If a bog-standard computer is programmed with a series of scenarios, it can recognise them when they crop up, but if a scenario changes slightly, it might not recognise it at all. An artificially intelligent machine (such as an autonomous vehicle), on the other hand, can take what it has learned and figure out how to cope with a similar but not identical scenario, in much the same way as a human driver. The more diverse the information an AI system is fed, the more capable it will be of adapting to slightly different scenarios.
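To make that distinction concrete, here is a minimal sketch in Python using scikit-learn's nearest-neighbour classifier. The feature encoding (obstacle size, distance, speed) is invented purely for illustration and bears no relation to how any real driving system represents scenarios; the point is only to contrast an exact-match rule table with a model that generalises from what it has seen.

```python
# Toy sketch (hypothetical features): rule lookup vs. learned generalisation.
from sklearn.neighbors import KNeighborsClassifier

# Each scenario is encoded as [obstacle_size_m, distance_m, speed_kmh];
# labels are the responses the system was trained on.
known_scenarios = [
    [1.5, 40.0, 50.0],   # cow in the carriageway
    [0.8, 25.0, 70.0],   # debris fallen from a lorry
    [0.3, 10.0, 30.0],   # small object, close, low speed
]
responses = ["swerve", "brake", "brake"]

# A rule-based system only recognises exact matches.
rules = {tuple(s): r for s, r in zip(known_scenarios, responses)}
novel = [1.4, 38.0, 52.0]          # slightly different cow-like scenario
print(rules.get(tuple(novel)))     # -> None: the lookup fails

# A learned model interpolates from its training examples.
model = KNeighborsClassifier(n_neighbors=1).fit(known_scenarios, responses)
print(model.predict([novel])[0])   # -> "swerve"
```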
Run by urban innovation firm DG Cities, together with a number of partners, including Imperial College London, the D-Risk project has been running a survey throughout January and February this year. The survey is a call to road users, including cyclists and pedestrians, to submit examples of edge cases they’ve experienced or witnessed to feed the development of autonomous vehicle AI.
The call for submissions went out on social media, and the wackier the better. One driver told of how they swerved to avoid a cow but then nearly hit a table in the fast lane. It's that combination of events – each not necessarily unusual on its own, such as an animal in the carriageway or something that has fallen off the back of a lorry – that might confuse an untrained machine were they to happen simultaneously.
Once the data is compiled, it will be cleaned up both manually and by computer. Mischievous made-up scenarios created by overactive imaginations won’t harm the data and could even contribute to it, so long as they’re deemed plausible. Once the survey is complete, the data will be used to develop AI that will power a virtual driving test for autonomous vehicles to ensure they’re safe enough to drive solo on public roads.
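D-Risk hasn't published the details of that clean-up stage, but the kind of pass described – dropping duplicate submissions while keeping made-up scenarios that pass a plausibility check – might look something like the sketch below. The record format, fields and thresholds are entirely hypothetical.

```python
# Hypothetical sketch of a submission clean-up pass; field names and
# plausibility thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class EdgeCase:
    description: str
    location: str
    speed_kmh: float

def is_plausible(case: EdgeCase) -> bool:
    """Cheap automated sanity checks; anything failing them would go
    to manual review rather than being silently dropped."""
    return 0 <= case.speed_kmh <= 200 and len(case.description) > 10

def clean(submissions: list[EdgeCase]) -> list[EdgeCase]:
    seen = set()
    kept = []
    for case in submissions:
        key = (case.description.lower().strip(), case.location)
        if key in seen:            # discard verbatim duplicates
            continue
        seen.add(key)
        if is_plausible(case):     # keep invented but plausible scenarios
            kept.append(case)
    return kept
```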
Join the debate
The only way AI will work is if a computer is given a consciousness and can make its own mind up about what to do, like we humans do day in, day out, and that'll never happen because we wouldn't be in control. We'd have to have an override button, but what if the AI-enabled computer recognises this? Will it work out how to turn that facility off? Autonomy in areas of heavy traffic is where I can see AI being useful, but as a driver, I'd never quite trust a computer. It's the human thing, really.