“At Toyota, our focus is not only on protecting people in case of an accident, but also on preventing that accident from happening in the first place,” said Chuck Gulash, Director of Toyota’s Collaborative Safety Research Center (CSRC). “While the auto industry will never eliminate every potential driver distraction, we can develop new ways to keep driver attention and awareness where it needs to be – on the road ahead.”
Gulash discussed three specific safety research initiatives aimed at better leveraging vehicle design and interaction to help drivers keep their eyes on the road, hands on the wheel and brain engaged and aware. These included Toyota’s DAR-V new-concept research vehicle and the funding of two university research programs.
Research Vehicle Helps Reduce Potential Distractions Before Driving
“Cars have become an interaction of multiple screens. Initially, there was the windscreen, the rear window and the rear and side-view mirrors,” said Gulash. “We now have multiple gauge clusters, large information screens and heads-up displays all feeding us information and competing for our attention.”
The DAR-V was developed in partnership with Microsoft Research to help reduce driver distractions before the key is even in the ignition. Utilizing Microsoft technologies such as Kinect, the interactive systems integrated into the design of the vehicle display important, highly personalized information on the side window when the driver approaches the car.
Using a combination of gesture control, voice and the key fob, drivers can navigate information such as updates on traffic and the weather, appointments and schedules for the day ahead, and even route details that might include a gas station if the vehicle is low on fuel. By addressing these critical daily priorities before even setting foot in the vehicle, a driver potentially has more mental bandwidth to focus on driving.
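Toyota has not published the software behind DAR-V, but the flow described above (recognize the driver, gather traffic, weather, calendar and fuel information, then render a short briefing on the glass) can be sketched in outline. Everything below, from the profile fields to the quarter-tank threshold, is an illustrative assumption rather than Toyota's implementation:

```python
# Hypothetical sketch of how a DAR-V-style pre-drive briefing might be
# assembled; none of these services or field names are Toyota's actual APIs.
from dataclasses import dataclass


@dataclass
class DriverProfile:
    name: str
    calendar: list[str]        # today's appointments, as plain strings
    preferred_route: str       # identifier for the driver's usual route


def build_briefing(profile: DriverProfile, fuel_level: float,
                   traffic_delay_min: int, weather: str) -> list[str]:
    """Assemble the lines shown on the side window for a recognized driver."""
    lines = [
        f"Good morning, {profile.name}",
        f"Weather: {weather}",
        f"Traffic on {profile.preferred_route}: +{traffic_delay_min} min",
    ]
    lines += [f"Next: {appt}" for appt in profile.calendar[:3]]
    if fuel_level < 0.25:                  # assumed quarter-tank threshold
        lines.append("Route includes a fuel stop")
    return lines


if __name__ == "__main__":
    driver = DriverProfile("Alex", ["9:00 stand-up", "11:30 dentist"], "I-94 W")
    for line in build_briefing(driver, fuel_level=0.18,
                               traffic_delay_min=12, weather="Rain, 54°F"):
        print(line)
```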
“We need to start thinking of the car and the driver as teammates, sharing the common goal of saving lives,” said Gulash. “The best teammates learn from each other. They watch, listen and remember. They adapt. They communicate. And they assist, as needed. In doing so, over time, a foundation of trust is built. Together, the teammates are building a common situational awareness of their driving environment.”
Because the DAR-V system can recognize and differentiate between individuals, the system might also be used to reduce driver distractions in other ways. For example, children might play “games” designed to help them buckle their seatbelts quickly, easing the stress on parents and helping them focus more of their attention on the road.
MIT AgeLab Observes the Human Factors of Voice Command
Gulash also discussed a study undertaken at MIT’s AgeLab, which Toyota helped to fund. The results were published in a white paper by Dr. Bryan Reimer and Bruce Mehler of MIT; its purpose was to expand understanding of the human factors of voice command.
Researchers found that the mental demands on drivers while using voice command were actually lower than expected, potentially because drivers compensate by slowing down, changing lanes less frequently or increasing the distance to other vehicles. However, in a number of the voice interactions studied, drivers took their eyes off the road for longer than expected during voice command tasks. This effect was often more pronounced among older drivers, some of whom were found to physically orient their bodies towards the voice command system’s graphical interface when engaging with it.
Stanford Studies the Human Factors of Autonomous Driving
This idea of building trust by sharing tasks is being taken to a new level in a collaborative project between the CSRC and Stanford University.
Using one of the most advanced driving simulators in the country, researchers are studying how drivers interact with new automated safety technologies that are increasingly capable of taking over responsibility for driving the car. The setup combines EEG sensors to track brain activity, skin sensors to measure emotional arousal and eye-tracking headgear to follow directional glances, allowing researchers to align, on a single timeline, what’s happening inside the car, what’s happening outside the car and what’s happening inside the driver’s brain.
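The press materials do not describe how that alignment is done, but a minimal sketch of the idea, assuming hypothetical sensor names and sample rates rather than anything specific to the Stanford simulator, would pair each simulator frame with the sensor reading closest to it in time:

```python
# Illustrative alignment of asynchronous sensor streams onto one timeline.
# The stream names, rates and values are assumptions for this sketch only.
import bisect


def nearest_sample(timestamps: list[float], values: list[float], t: float) -> float:
    """Return the sample whose timestamp is closest to simulator time t."""
    i = bisect.bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    best = min(candidates, key=lambda j: abs(timestamps[j] - t))
    return values[best]


def align(sim_times: list[float],
          streams: dict[str, tuple[list[float], list[float]]]) -> list[dict]:
    """Pair every simulator frame with the nearest reading from each sensor."""
    aligned = []
    for t in sim_times:
        frame = {"t": t}
        for name, (ts, vals) in streams.items():
            frame[name] = nearest_sample(ts, vals, t)
        aligned.append(frame)
    return aligned


if __name__ == "__main__":
    sim_times = [0.00, 0.02, 0.04]                  # 50 Hz simulator frames
    streams = {
        "eeg_alpha": ([0.000, 0.004, 0.008, 0.040], [1.1, 1.0, 0.9, 1.3]),
        "skin_conductance": ([0.00, 0.03], [4.2, 4.5]),
        "gaze_x_deg": ([0.01, 0.02, 0.05], [-3.0, 0.5, 12.0]),
    }
    for frame in align(sim_times, streams):
        print(frame)
```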
The simulator is unique in its ability to instantly shift from fully automated control to the driver in full control to mixed control. The research will help inform design improvements to automated systems so they work better in partnership with the driver and improve safety for everyone.
For example, the project will help researchers understand how a driver responds to a sudden “takeover now!” alert compared with less aggressive commands or explanations. The project will also study how driver abilities are affected by prolonged periods in fully automated mode, including potential reductions in reaction time or situational awareness.
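As a rough illustration of the kind of measurement such a study involves, the toy model below (purely an assumption for this article, not the simulator's software) tracks the control mode, issues a takeover prompt of a given style and records how long the driver takes to resume manual control:

```python
# Toy model of handing control back to the driver and timing the response.
# Modes and alert styles are illustrative; this is not the simulator's code.
from enum import Enum, auto


class Mode(Enum):
    AUTOMATED = auto()
    MIXED = auto()
    MANUAL = auto()


class TakeoverTrial:
    def __init__(self, alert_style: str):
        self.mode = Mode.AUTOMATED
        self.alert_style = alert_style   # e.g. "urgent" vs. "gradual" (assumed labels)
        self.alert_time = None

    def issue_alert(self, sim_time: float) -> None:
        """Request that the driver resume control."""
        self.alert_time = sim_time
        self.mode = Mode.MIXED           # system keeps assisting until the driver acts

    def driver_takes_wheel(self, sim_time: float) -> float:
        """Driver responds; return the measured reaction time in seconds."""
        self.mode = Mode.MANUAL
        return sim_time - self.alert_time


if __name__ == "__main__":
    trial = TakeoverTrial(alert_style="urgent")
    trial.issue_alert(sim_time=120.0)
    print("Reaction time:", trial.driver_takes_wheel(sim_time=121.8), "s")
```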
“These are questions that need to be answered,” Gulash concluded, “not only to help build a product, but also to build a foundation of understanding and guidelines for how we proceed with further research into the human factors of automated vehicles.”
Source: Toyota