Machine Vision Empowering Advanced Driver Assistance Systems

Advanced driver assistance systems (ADAS) are increasingly popular with drivers, both for the safer time they provide on the road and for the added comfort. Further, mounting pressure from governments may soon make ADAS required standard equipment on all new vehicles, rather than the option it is today. Camera-based machine vision is a key part of ADAS.

It’s clear that advanced driver assistance systems are poised to make it big in the coming years. In fact, ABI Research predicts that the ADAS market will balloon to $130 billion in 2015 from only $10 billion in 2011. A large part of that will be camera-based systems.

Camera-based advanced driver assistance systems are primarily eyed for in-car driver monitoring and prediction. The camera captures images of the driver, and algorithms detect such things as the driver not looking at the road ahead or falling asleep at the wheel. Camera systems can work on their own to provide a safer driving experience, or in conjunction with other sensors, such as radar, ultrasonic, LIDAR, photonic mixer device (PMD), and near-field and far-field object detection, to provide alerts and assistance. Working in conjunction allows camera-based ADAS to cut down on false positives, the unnecessary alerts and warnings from other sensors. Alternatively, the camera systems can help determine whether more than an alert is needed.

For example, the camera can tell that the driver is looking attentively at the road and the car in front, making alerts unnecessary and potentially distracting. If the sensors that monitor the front of the car try to send an alert about the car ahead, the camera system can intercept and mute it. If the driver is not attentive, however, the camera system can do more than let the collision alert through: it can initiate braking even before the alert sounds.
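
As a rough sketch, that gating policy might look like the following Python snippet. The function, inputs, and threshold are purely illustrative stand-ins for what a real sensor fusion layer would provide, not any particular vendor's logic.

    def forward_collision_response(driver_attentive: bool,
                                   time_to_collision_s: float,
                                   brake_threshold_s: float = 1.5) -> str:
        """Pick a reaction to a forward-collision warning from radar or LIDAR."""
        if time_to_collision_s > brake_threshold_s:
            # Time to spare: an attentive driver needs no distraction,
            # while an inattentive one gets a warning.
            return "suppress_alert" if driver_attentive else "sound_alert"
        # Collision is imminent; if the camera says the driver is not watching,
        # escalate straight to autonomous braking rather than just alerting.
        return "sound_alert" if driver_attentive else "initiate_braking"

    # E.g. a drowsy driver with under a second to react:
    print(forward_collision_response(driver_attentive=False, time_to_collision_s=0.9))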

Camera-based advanced driver assistance systems also give the driver an extended view of the car’s surroundings, while being able to detect and classify objects. Unlike systems that rely on the wireless communication of data from other cars or from the infrastructure, camera-based ADAS make use of visual input. The images are sent to a central processing unit, where algorithms analyze them to check the driver’s situation as well as the car’s surroundings. Such systems would be able to detect a pedestrian about to step in front of the car, for instance.
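
To make that images-in, detections-out flow concrete, here is a minimal Python sketch built on OpenCV's stock HOG pedestrian detector. It assumes OpenCV is installed and uses a saved frame as a stand-in for the live camera feed; a production system would run far more capable models on dedicated hardware.

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("road_scene.jpg")  # stand-in for one forward-camera frame
    if frame is None:
        raise SystemExit("no test image found")

    # Scan the frame at multiple scales for pedestrian-shaped regions
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        print(f"possible pedestrian at x={x}, y={y}, size {w}x{h}")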

Types of camera-based ADAS

There are several types of camera-based advanced driver assistance systems available today. There are those that help you park or back up, and those that give you a surround view of your car. As mentioned above, there are also camera systems that monitor the driver’s state and drowsiness. Finally, there are systems that alert you when you drift out of your lane and help you avoid collisions.

The images captured by the camera are transferred over a high-speed connection, either to a video screen for display or to a processor for machine vision. The system also has microcontrollers that communicate with the car’s other modules and handle other system control functions.
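
For a flavor of how such a module might talk to the rest of the car, the brief sketch below uses the python-can library to broadcast a warning on a CAN bus. The channel, arbitration ID, and one-byte payload encoding are invented placeholders; real values would come from the vehicle's CAN database.

    import can

    # Assumed Linux SocketCAN setup; other interfaces need different parameters
    bus = can.Bus(interface="socketcan", channel="can0")

    # Hypothetical encoding: 0x01 in the first payload byte = lane departure warning
    msg = can.Message(arbitration_id=0x321, data=[0x01], is_extended_id=False)
    bus.send(msg)
    bus.shutdown()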

The number and types of cameras used differ depending on the application. For instance, a basic analog camera suffices for simple backup and surround-view displays, while parking assistance calls for a more advanced smart camera. Front camera systems need smart cameras as well as plenty of data processing power to process the video quickly and tell other modules what needs to be done.

These camera systems require considerable processing capability. Consider Freescale’s SCP2200 Image Cognition Processor. In camera-based advanced driver assistance systems, the SCP2200 provides an array-processing vision analytics unit capable of 34 billion operations per second, alongside an ARM9 RISC microcontroller. The array processor handles the high- and medium-level processing needed for real-time ADAS decision making, such as object detection and driver awareness, while lower-level processing and system control run on the microcontroller.

Such performance is needed because camera-based ADAS differ from sensor-based systems: they do not make decisions based on simple parameters such as proximity or speed. To get a feel for how different they are, it is instructive to look at lane departure warning systems.

In infrared (sensor-based) systems, for instance, infrared lights are used to monitor lane markings. A detection cell tracks the reflections of the infrared beams to see if the car goes over a lane marking. If it detects that your car has moved over a marking while the turn signal is off, it sends you an alert. If there is already another car alongside, however, that alert may come too late.
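
A toy Python model makes the limitation plain: the sensor-based alert can only fire after a reflection reading shows the car already crossing a marking. The threshold here is invented purely for illustration.

    def ir_lane_alert(reflection_intensity: float,
                      turn_signal_on: bool,
                      marking_threshold: float = 0.8) -> bool:
        # The detection cell only sees a strong reflection once the car is
        # over the painted marking, i.e. after the departure has already begun.
        return reflection_intensity >= marking_threshold and not turn_signal_on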

In camera-based ADAS, the camera looks ahead and monitors the visible lane markings. It sends the images to the system, which then calculates the divergence angle of the markings and thus the car’s lateral divergence from the lane’s center. Algorithms then project the car’s future position, taking into account its speed, steering angle, and yaw rate. As a result, camera-based systems can alert you before your car ever leaves the lane, helping prevent the departure in the first place. This makes camera-based systems even safer.
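
A simplified version of that predictive calculation fits in a few lines of Python. This sketch estimates time-to-line-crossing from lateral offset and heading alone; a real system also folds in steering angle and yaw rate, and all numbers here are illustrative.

    import math

    def time_to_lane_crossing(lateral_offset_m: float,   # distance from lane center
                              lane_half_width_m: float,
                              speed_mps: float,
                              heading_rad: float) -> float:
        """Seconds until the car crosses the marking, or inf if drifting back."""
        lateral_velocity = speed_mps * math.sin(heading_rad)
        if lateral_velocity <= 0:
            return math.inf  # not drifting toward this marking
        return (lane_half_width_m - lateral_offset_m) / lateral_velocity

    # E.g. 0.4 m off-center in a 3.6 m lane at 25 m/s, drifting 1 degree off-axis:
    print(round(time_to_lane_crossing(0.4, 1.8, 25.0, math.radians(1.0)), 2))  # ~3.2 s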

The pitfalls of camera-based ADAS

While the visual input from camera-based systems provides better information, such systems are not perfect just yet. Cameras are still very vulnerable to the elements and may not be suitable for every type of weather. There is also the case of the multi-camera car: too many cameras can strain image storage and processing power, which can in turn lead to response-time problems. In short, delay can creep in between the camera picking up the image of the driver nodding off, the image reaching the processing unit, and the system starting the braking action, which rather defeats the purpose of making driving safer and drivers more alert.

Cameras are also more expensive and bulkier than sensors, and bulkier means heavier. Heavy equipment in your car adds to your fuel consumption. To really take off, then, camera-based ADAS needs to become smaller and less expensive, so that OEMs have no trouble incorporating it into a car’s overall aesthetics. It also needs to consume less energy while remaining powerful in processing performance.
