These unusual or difficult conditions are frequently referred to as "edge cases." While each is extremely rare on its own, there are potentially millions of them, and collectively they account for the majority of the risk in autonomous driving. Humans are reasonably adept at handling them because we can use context to assess a situation and react in sensible ways. If a large animal steps out in front of you, for example, you may not know whether it is a camel or a tapir, but you will recognise the need to stop or manoeuvre around it. An autonomous vehicle (particularly one that relies heavily on cameras to see the world) would first need to process a large patch of dark pixels, recognise that they indicate an object ahead, and then interpret that the object is a large animal and not, say, a poster on the back of a bus, before deciding what to do. Once an autonomous vehicle has learned such a situation, it can react considerably faster (in some cases faster than a human), but it remains limited to the edge cases it has "learned." Thus, to turn autonomy from hype into reality, we must develop a way to handle a broad range of edge cases.
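To make the perceive-then-decide pipeline described above concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (the object classes, the confidence threshold, the `decide` policy) is a hypothetical simplification, not a real autonomous-driving API; the point is only to show how a system that maps recognised categories to actions has no sensible branch for situations it was never trained on, beyond a cautious default.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical labels a perception model might emit; real systems use far
# richer taxonomies and probabilistic outputs.
class ObjectClass(Enum):
    LARGE_ANIMAL = auto()
    POSTER_ON_BUS = auto()
    UNKNOWN = auto()

class Action(Enum):
    BRAKE = auto()
    SWERVE = auto()
    CONTINUE = auto()

@dataclass
class Detection:
    label: ObjectClass   # what the perception stage thinks the dark pixels are
    confidence: float    # 0.0 to 1.0
    distance_m: float    # estimated distance to the object

def decide(detection: Detection) -> Action:
    """Toy policy: only 'learned' edge cases map to a tailored response;
    anything unrecognised falls through to a cautious default."""
    if detection.label is ObjectClass.LARGE_ANIMAL and detection.confidence > 0.8:
        # Close animal: brake; farther away: steer around it.
        return Action.BRAKE if detection.distance_m < 30 else Action.SWERVE
    if detection.label is ObjectClass.POSTER_ON_BUS:
        # A picture of an animal is not an obstacle.
        return Action.CONTINUE
    # Unknown object ahead: default to caution.
    return Action.BRAKE

# Example: a confidently detected large animal 20 m ahead triggers braking.
print(decide(Detection(ObjectClass.LARGE_ANIMAL, 0.92, 20.0)))
```

The limitation the paragraph describes shows up directly in this sketch: any situation that does not match an enumerated, learned category can only trigger the generic fallback, which is why covering a broad range of edge cases matters so much.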