Limited to speeds of up to 37mph on motorways, Automated Lane Keeping Systems have been offered a route to their legal introduction on UK roads.
The Department for Transport claimed that the technology could improve road safety by reducing human error, which contributes to over 85 per cent of accidents. “The driver will be able to hand control over to the vehicle, which will constantly monitor speed and keep a safe distance from other cars,” it said.
Self-driving technology in cars, buses, and delivery vehicles “could spark the beginning of the end of urban congestion, with traffic lights and vehicles speaking to each other to keep traffic flowing, reducing emissions and improving air quality in our towns and cities,” DfT said.
The technology could create around 38,000 new jobs in a UK industry that could be worth £42bn by 2035, the department added.
Yet Whitehall’s enthusiasm for the technology flies in the face of evidence that it is impractical, unsafe, and undesirable.
For a start, Thatcham Research said last autumn that the plan could “put road users’ lives at risk” because the current technology has “significant performance limitations.”
It listed instances where a driver would behave differently from the system, such as slowing well in advance or making an evasive lane-change to avoid debris or a pedestrian encroaching on the carriageway after a breakdown. Its assessment assumed the system would operate at 70mph, perhaps explaining why the government is looking to limit it to much lower speeds.
Meanwhile, the broader implementation of self-driving technology faces psychological barriers, according to a 2017 paper published in Nature Human Behaviour.
It listed dilemmas around autonomous ethics first among its concerns. While humans like the idea of vehicles operating under utilitarian principles, they would, surprisingly enough, prefer to ride in cars that prioritise their own lives as passengers over those of people outside.
Then there are risk heuristics and algorithmic aversion: the concern that the novel nature of autonomous vehicles will provoke an over-reaction to the inevitable accidents. Lastly, asymmetric information and the theory of the machine mind: the researchers said a lack of understanding of the underlying decision-making processes in autonomous vehicles could make it difficult for people to trust them.
The Register has long voiced reasonable concerns that the push toward self-driving technology is driven more by investors, manufacturers, and governments chasing the next hype cycle than by a desire to offer anything of significant benefit. Incremental technology developments will not in themselves overcome these problems.
In 2018, we interviewed Christian Wolmar, the author and broadcaster, who described what he calls the Holborn problem: if self-driving cars are programmed to be safe, they must come to a halt whenever pedestrians enter the road space. But in places like the streets outside London's Holborn Underground station at rush hour, people spill onto the road en masse in a completely ad-hoc fashion, which would bring self-driving vehicles to a standstill for hours.
While US authorities have sought to separate cars and pedestrians, European planners see safety improving by mixing the two groups through concepts such as shared space, although debate still rages about that idea.
Even the most enthusiastic promoters of self-driving technology seem to have lost their appetite for it. Uber's route to profitability was once mooted to depend on self-driving vehicles, only for the ride-hailing company to sell off its own efforts to develop the technology.
Maybe Prime Minister Boris Johnson’s government is not best placed to get self-driving technology in gear, famous as it is for its screeching U-turns. ®