At CES 2019, Mercedes-Benz revealed the Urbanetic autonomous shuttle, a futuristic proof-of-concept designed to illustrate the kind of work being done behind closed doors, and the direction in which its Vans unit is headed. The vehicle also put a face to the various industry buzzwords bandied about at what has become the most important tech show on the automotive calendar.
Autonomous shuttles such as these are edging further into the public eye, albeit the Urbanetic was limited to a single cordoned-off lane in this case. Running alongside traffic on the Vegas Strip, the Urbanetic’s size becomes apparent: it is enormous. A bulbous roof protrudes well above nearby SUVs, and with no driver controls there is, in theory, room for 12 passengers. Bizarrely, the front headlights are almost identical to those of a Ford Transit van, and all that space is designed to ferry riders around the city—even predicting where crowd surges may take place. While explicitly a research vehicle, the Urbanetic builds on Daimler’s earlier F 015 concept—a luxury vehicle as opposed to a shuttle bus—which was shown off around San Francisco back in 2015.
The Urbanetic autonomous shuttle was shown during CES 2019
Alongside the development of people and product movers, the German automaker is also heavily invested in autonomous drive technology for privately owned passenger cars. The 2017 Mercedes-Benz E-Class launched with DRIVE PILOT, a suite of driver assistance systems that allows the car to control its steering and acceleration in specific highway conditions, and under the driver’s watchful eye. It is akin to other ‘highway pilots’ such as Cadillac Super Cruise, Volvo Pilot Assist and Tesla Autopilot, all of which require the driver’s hands—and brain—at the ready.
Under the vision of Chief Executive Dieter Zetsche, Daimler is pushing for fully autonomous systems that provide the option for human control, and driverless shuttles that do not. Tackling the former is Mercedes-Benz Cars, which is evaluating its autonomous systems on a global scale.
Around the world
Michael Hafner is Head of Automated Driving and Active Safety at Mercedes-Benz Cars. Since October 2016, he has led the series development of semi-autonomous systems such as DRIVE PILOT. A doctor of automation technology, Hafner has been with Daimler for the last 17 years, and is deeply involved in the automaker’s push for a road-legal fully autonomous system. Testing across multiple markets is an undertaking of epic proportions, but also entirely necessary, he says. The ‘brain’ of the autonomous car must be able to cope with countless forms of road layouts, driving styles and climates. But what are the most challenging markets in which an autonomous vehicle can be tested?
“We asked ourselves exactly that question a few years ago,” Hafner tells M:bility. At the 2017 IAA in Frankfurt, Daimler’s autonomous driving team sent a standard S-Class, albeit with slightly modified software and a trunk full of testing equipment, on a trip around the world. Over five months, the vehicle would investigate differences in road culture between the US, Europe and China, as well as in Australia and South Africa. This, says Hafner, was “to deliberately find situations in other countries we need to take into consideration.”
So far, testing in China has proven particularly challenging. “Chinese traffic is much denser, and we have seen many situations with partially obscured traffic signs,” explains Hafner. China has become a major test bed for autonomous vehicles, and Mercedes-Benz has been working with Internet giant Baidu to hone its system for local roads. Demand for autonomous vehicles is generally expected to soar here, where consumers already favour alternatives to the private vehicle. “The mobility context in China is vastly different from mature markets,” says Bill Russo, an ex-Vice President at Chrysler and head of Shanghai-based advisory firm Automobility. “Public transportation comes first, then on-demand mobility services—such as ride-hailing, taxis, micro transits, ride-pooling and car-sharing—and then private ownership.”
Elsewhere, it’s not just signage and traffic that can prove difficult. “In Australia, wildlife is a huge issue on rural roads, and in South Africa pedestrians and other vulnerable road users can be found on all types of roads,” says Hafner. “These are all rather challenging situations for which we are preparing.”
Evaluating how an autonomous system reacts to plastic dummies on a test track is one thing. Testing on public roads can throw up entirely new situations, and ensuring the vehicle can handle the unexpected is vital. “Testing is crucial for practically any technical product before it enters the market, and when it comes to cars and other vehicles, testing is also elementary for the safety of the product,” says Hafner. “Unlike consumer electronics, a glitch can cause real safety issues, so a good Beta version will not be good enough for a release into the customer’s hands.”
Chinese traffic has proven particularly challenging
Before a test vehicle can hit public roads, it needs to be proven out through simulation. According to Elektrobit, an expert in hardware and software validation, as much as 95% of self-driving vehicle testing can be carried out in this way. Some programmes limit test vehicles to drive only on public routes that replicate successfully simulated conditions. Virtual testing allows millions of miles of driving to be completed in a short space of time. Real-world scenarios can be repeated over and over, allowing engineers to test and make tweaks in the safety of the lab. Unfortunately, the real world does not always play out as expected, and on-road tests cannot be avoided.
“We have seen many surprising situations during our multi-million kilometres of public testing which would never have been encountered in mere simulation,” explains Hafner. Situations seen on the road can be programmed into the simulation software, but “real world testing remains essential,” he affirms. “There are some situations you simply cannot predict beforehand, and we constantly need to ensure that the simulated situations are realistic.”
One challenge is learning the laws of the road; another is learning how to drive safely and efficiently amid other traffic. Late manoeuvres, aggressive acceleration and rapid braking could not only prove dangerous, but also off-putting for the passenger. This raises the question of how an autonomous car should be taught to drive. “The definition of a ‘good’ driver depends very much on the perspective of the one assessing the driving. It also depends on weather and traffic conditions and—not to forget—cultural aspects,” says Hafner. “Anyone who has driven in another country might sometimes perceive the driving there as ‘crazy’. However, a rather smooth and uneventful ride that does not put anyone in harm’s way is probably something that most people can agree on. In general, an autonomous car will be programmed to behave defensively, to obey traffic rules and of course to drive comfortably—this is how we would like our self-driving cars to behave.”
New challenges arise
Autonomous vehicles are also learning how to adapt their driving in relation to other road users and pedestrians, just as a normal driver should. A dog on a leash is less likely to run out into the road, for example, than a stray dog without a human owner by its side. The vehicle, in this case, would slow down, take a slightly wider berth and err on the side of caution. Similar scenarios are being played out for the likes of toddlers, cyclists and a myriad of other entities that may be around the vehicle.
In the test bed of San Francisco lies another rapidly growing trend: e-scooters and e-bikes. While rules are being established for how and where these scooters can ride, many riders can be seen whizzing in and out of traffic. For the autonomous car, predicting where these actors may turn next could prove tricky. “Thankfully, our machine learning methods can handle that task,” says Hafner. “In general, we would classify riders of e-scooters and e-bikes as vulnerable road users—just like pedestrians, riders of regular bikes, riders of motorcycles, people in wheelchairs or even on hover-boards for that matter!
“Even though the mode of transportation might differ, the deep learning algorithms will still recognise a person on some form of transportation device and proceed with the necessary caution. Having to teach our cars to understand every new device on the road, in every colour or brand, would be a fight against windmills that we could not possibly win.”
Trust me – I’m a robot
Much interest has been placed on the so-called ‘handover’ situation, in which the vehicle transfers control back to the human driver. This is already an issue on roads today, with semi-autonomous systems often thrusting control back to the driver with little warning. The driver should of course be alert and ready to take over, but the human-machine interface (HMI) is far from effective. Some systems simply remove a small green icon on the dashboard to indicate the system has switched off.
There is also the interaction between car and pedestrian to consider; in the case of a driverless vehicle, it is crucial that other road users are not left guessing. There have been a number of attempts to crack the issue. In 2016, Swedish firm Semcon proposed the ‘smiling car’, which used LEDs on the front of the vehicle to indicate it was allowing a pedestrian to cross the road. Others suggest digital symbols could indicate whether the vehicle is coming to a stop, or just about to pull away.
“We believe that clearly communicating to the outside world as to whether the car is being operated in autonomous mode or not will help to increase acceptance of this technology as a mode of transportation in everyday traffic,” says Hafner. His team has conducted a significant amount of research into this issue, and concluded that turquoise could be a unique colour to indicate that autonomous mode is engaged. In addition, distinct patterns of light could “clearly signal” whether the car is stopping, accelerating or notifying nearby pedestrians that the car is aware of their presence. “Imagine you are about to cross the street at a crosswalk and are unsure if the driver—or in this case the car—has recognised you. These lights will increase trust, safety and acceptance,” he explains. “We have introduced this concept with our ‘Cooperative Car’ and are using it to conduct further research in this direction.”
The Cooperative Car is a modified S-Class with 360-degree light signalling
In November 2018, Daimler hosted a debate in Berlin to discuss how trust can be established between human and vehicle. The Cooperative Car, a modified S-Class with 360-degree light signalling, was posed as a solution. “People need to be able to quickly and reliably gauge what an autonomous vehicle is going to do next,” said Alexander Mankowsky, a Futurist at Daimler’s Futures Studies & Ideation unit. “The vehicle must therefore provide information about its intentions in a way that people can grasp immediately and intuitively.”
Speaking to Automotive World earlier, in March 2017, Mankowsky noted that trust in automation “is a very serious issue,” and warned that “empathy with an automated car won’t come naturally.” So-called cognitive models, he said, would help consumers to understand the capabilities and limitations of an autonomous vehicle. “In doing so, we are about to replace blind trust—which can be dangerous—with ‘informed trust’ to make people comfortable with automated cars in [the] public space.”
Selling the idea
Looking ahead, the marketing of autonomous drive technology will also prove an interesting watch. While safety advocates widely present autonomous vehicles as a means of reducing traffic deaths, many automakers emphasise the technology as a comfort feature that relieves the stress of driving. Sitting in traffic can be boring, so say many brands today, and semi-autonomous technology can help drivers to relax and leave the car to do most of the work—a potentially dangerous precedent to set. Many crashes have already been seen involving drivers who have become over-reliant on semi-autonomous systems.
“Overreliance, or ‘blind trust’, in a system can be very dangerous,” says Hafner. “Hence, our overall goal is to create what we call ‘informed trust’ with our customers, so that they always know what drive state they are in, what they are allowed to do and what they still need to do themselves.
“When it comes to handing over the driving task entirely to the vehicle—where the passenger can ‘only’ start and stop the process and determine the destination—it is mainly about comfort for the customer, but the obvious safety of the vehicle and the ride goes without saying,” continues Hafner. Systems that can be engaged on a freeway should make it crystal clear what the car can and cannot do in order for the driver to relax. Guessing whether the car will handle the next bend, for example, or whether it has recognised a sudden build-up of traffic, is a stressful experience.
“Safety and comfort do not contradict each other. A comfortable driver is also a safe driver, and people who feel comfortable in a mode of transportation also feel safe,” he concludes. “But it is true that there is a need to distinguish between features that ‘support to a large degree’ and features that ‘perform the task completely’.”
This article appeared in the Q2 2019 issue of M:bility | Magazine.