As driverless cars become a mainstream reality, AI is playing a central role in removing the remaining barriers to fully autonomous operation and in humanizing how these vehicles behave. The biggest challenge self-driving cars must overcome on the road is reacting to the randomness of traffic flow, the unpredictability of other drivers, and the fact that no two driving situations are ever the same. This is where AI comes into play.
AI will outmaneuver human drivers
The latest autonomous technology is adept at handling this type of diverse environment. By using deep learning and sensor fusion, it’s possible to build a complete three-dimensional map of everything that’s going on around the vehicle to empower the car to make better decisions than a human driver ever could.
However, this requires massive amounts of computing power to interpret all the harvested data, because the sensors themselves are typically “dumb sensors” that merely capture information. Before it can be acted on, that information has to be interpreted. For example, a video camera records 30 frames per second (fps), where each frame is an image made up of thousands of pixels, each carrying several color values.
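To make the scale of that raw data concrete, here is a minimal back-of-the-envelope sketch. The 1280x720 resolution and one byte per color channel are illustrative assumptions, not figures from the text:

```python
# Rough estimate of the raw data a single "dumb" camera produces,
# before any interpretation happens. Parameters are assumptions:
WIDTH, HEIGHT = 1280, 720   # pixels per frame (assumed 720p camera)
CHANNELS = 3                # red, green, blue color values (1 byte each)
FPS = 30                    # frames captured per second

bytes_per_frame = WIDTH * HEIGHT * CHANNELS
bytes_per_second = bytes_per_frame * FPS

print(f"One frame:  {bytes_per_frame / 1e6:.1f} MB")
print(f"One second: {bytes_per_second / 1e6:.1f} MB")
```

Even at this modest resolution, a single camera emits tens of megabytes per second, and every byte must be interpreted before the car can act on it.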
There is a massive amount of computation required to take these pixels and figure out, “Is that a truck?”, “Is that a stationary cyclist?”, or “In which direction does the road curve?” It is this type of computer vision, coupled with deep neural network processing, that self-driving cars require.
Deep learning adds context to AI
Moving toward true AI, deep learning is a family of machine learning algorithms that attempt to model high-level concepts in data by using architectures of multiple non-linear transformations. Various deep learning architectures such as deep neural networks (DNNs), convolutional neural networks (CNNs), and deep belief networks are being applied to fields such as computer vision, automatic speech recognition, natural language processing, and music/audio signal recognition, where they have proven to be astoundingly responsive and accurate. Using DNNs, a car can navigate freeways, country roads, and gravel driveways, and drive in the rain, after only 3,000 miles of supervised driving.
Legislators recognize AI as licensed driver
To eliminate uncertainty about whether legislators intend to move this technology forward, U.S. vehicle safety regulators have declared that the AI piloting a self-driving Google car will be considered a legal driver under federal law.
In a recent letter sent to Google, NHTSA confirmed that it “will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle occupants.”
The stage is set for AI to dominate our roads, and not only in racecars on closed circuits.
AI in Drones
With high-quality drones now available for just a few hundred dollars, many consumers and businesses are taking to the skies. But many drones are plummeting to Earth, too. The drones on the market today are missing a key component needed to make them useful: the intelligence to fly autonomously. AI is helping drones evolve, starting with learning how to fly by themselves, which opens up scores of opportunities:
Learning to Fly by Crashing
The gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids large-scale real-world data is the fear of crashes. One research team proposed to bite the bullet and collect a dataset of crashes itself: drones are being built whose sole purpose is to crash into objects. The team uses all this negative flying data, together with positive data sampled from the same trajectories, to learn a simple yet powerful policy for UAV navigation.
After 11,500 collisions, the resulting algorithm can fly the drone autonomously, even in narrow, cluttered environments, around moving obstacles, and amid featureless white walls and even glass doors. The algorithm that controls the drone is simple: it splits the image from the AR Drone’s forward camera into a left image and a right image, and if one of those two images looks less “collision-y” than going straight, the drone turns in that direction. Otherwise, it continues moving forward.
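The decision rule described above is simple enough to sketch in a few lines. In this illustrative sketch the collision scores stand in for the outputs of the learned network on each image crop, and the function name is our own, not from the paper:

```python
def choose_action(left_score, straight_score, right_score):
    """Pick a heading from estimated collision likelihoods (0 = safe, 1 = crash).

    The three scores stand in for what the learned network would produce
    for the left crop, the full forward image, and the right crop.
    """
    # Find the side crop that looks less "collision-y"
    best_side, best_score = min(
        [("left", left_score), ("right", right_score)],
        key=lambda pair: pair[1],
    )
    # Turn toward that side only if it beats going straight;
    # otherwise keep moving forward.
    if best_score < straight_score:
        return best_side
    return "forward"
```

For example, `choose_action(0.1, 0.7, 0.8)` steers left, while `choose_action(0.9, 0.2, 0.8)` keeps flying straight. The whole policy is a pair of comparisons; all the intelligence lives in the network that produces the scores.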
How well does this work? It’s usually not as good as a human pilot, except in relatively complex environments, like narrow hallways or hallways with chairs. But compared to a baseline approach using monocular depth estimation, it’s massively better, somewhere between 2x and 10x the performance (in both time in the air and distance flown), depending on the environment. The biggest benefit comes from navigating around featureless walls and glass doors, both of which are notoriously challenging for depth estimation.
Using AI and drones to reduce Elephant Poaching (aerial surveillance)
The Air Shepherd program is using deep learning neural networks to boost the capabilities of its drones. The AI is taught what elephants, rhinos, and poachers look like, so it can accurately pinpoint and mark them in video. It will now be put to work sifting through all the footage the foundation’s drones beam back in real time, including infrared footage taken at night.
The AI’s job is to pore over these videos and quickly identify the presence of poachers so they can be stopped before they even reach the animals’ herds. It is the perfect addition to the Air Shepherd program, which aims to use cutting-edge software and drones to stop poaching in Africa.
AI-led drones in warfare
Though AI-enabled autonomous weapons are a sensitive topic, the fight against terror has a new deterrent. Researchers have developed an artificial intelligence that can defeat human pilots in combat. This new AI, known as ALPHA, is designed for military drones, or ‘unmanned aerial vehicles’ (UAVs), and is based on an algorithm created at the University of Cincinnati. ALPHA can currently process sensor data and plan combat moves for four drones in less than a millisecond (over 250 times faster than the eye can blink), reaction times far beyond human abilities.
The AI isn’t powered by a supercomputer; it runs on an ordinary $500 desktop PC, which means high performance at low cost. This is achieved through efficient algorithms.
Although we usually expect machines to process all the available data before making decisions, ALPHA takes a more human approach to solving problems, simplifying the variables to consider only the most relevant information, such as whether an opponent poses a greater threat. Instead of using precise numeric parameters, ALPHA’s algorithms are based on language, or ‘fuzzy logic’: it makes decisions via if-then rules. That reduces the number of branches in the decision-making tree, which lowers the computing power required to find the best strategy.
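The flavor of a fuzzy if-then rule can be sketched in a few lines. The rule, the membership functions, and the thresholds below are all illustrative assumptions, not ALPHA’s actual logic:

```python
# Toy fuzzy rule: IF opponent is close AND fast THEN threat is high.
# Instead of a hard cutoff ("closer than 5 km"), each linguistic term
# has a membership degree between 0 and 1.

def membership_close(distance_km):
    """Degree to which an opponent counts as 'close' (assumed ramp)."""
    if distance_km <= 2:
        return 1.0
    if distance_km >= 10:
        return 0.0
    return (10 - distance_km) / 8  # linear ramp between 2 km and 10 km

def membership_fast(speed_mach):
    """Degree to which an opponent counts as 'fast' (assumed ramp)."""
    return max(0.0, min(1.0, (speed_mach - 0.5) / 0.5))

def threat_level(distance_km, speed_mach):
    # Fuzzy AND is commonly taken as the minimum of the memberships.
    return min(membership_close(distance_km), membership_fast(speed_mach))
```

A single graded rule like this replaces many crisp branches ("if distance < 2 and speed > 1.0 …", "elif distance < 4 …"), which is how fuzzy logic shrinks the decision tree the system must search.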
ALPHA is built on a so-called ‘genetic algorithm’ — rules inspired by how genes are inherited from one generation to the next. Genetic algorithms are a type of ‘evolutionary algorithm’, mimicking the process of adaptation by natural selection (survival of the fittest). In this case, bits of computer code are ‘bred’ to create new combinations, with the most efficient versions chosen by the selection process.
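The breeding-and-selection loop can be sketched as follows. This is a generic genetic algorithm under stated assumptions, not ALPHA’s code: genomes are bit strings and the fitness function is a toy stand-in (count of 1-bits) rather than a combat simulation:

```python
import random

def fitness(genome):
    # Toy stand-in for "how well does this rule set perform?"
    return sum(genome)

def crossover(a, b):
    # Breed two parents: single-point crossover of their bit strings
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip each bit with small probability to keep exploring
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve(pop_size=30, length=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # survival of the fittest
        parents = pop[: pop_size // 2]        # keep the best half
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the best candidates survive each generation unchanged, the top fitness never decreases, and over a few dozen generations the population converges toward high-scoring combinations, the same dynamic, at vastly larger scale, that bred ALPHA’s rule sets.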
What the Future holds
The stage is set for AI to dominate our roads and our skies. As driverless cars become mainstream, they may completely alter the paradigm of private car ownership. In the case of drones, the future holds great promise: they may come to dominate the aerial surveillance, monitoring/maintenance, and delivery sectors, and slowly evolve into the next “driverless car” for aerial transport.