asg70 wrote:
baha wrote: I really don't want a bunch of mad robot cars running around.
How is that different from a bunch of regular cars running around? They crash into each other and into crowds and buildings with alarming regularity. People always adapt to whatever the status quo is and it's change they fear the most.
Yup. Baha, if you're objecting to the (thus far) rather poorly planned mishmash of AV regulations, then I'll agree: government needs to get in front of this, do what it was elected and set up to do -- LEAD -- and tell the public and the AV makers/testers, VERY CLEARLY, what the acceptable legal and safety standards are.
I'd word it differently -- something like "I don't want a poorly regulated patchwork of AVs running around on a half-baked framework of loose or ill-designed standards".
However, with enough testing, the overall standard should be relative safety. Once it's conclusively proven, by some objective, measurable standard, that AVs are CLEARLY superior to human drivers, then on net they're saving lives and money. (This is a challenge. When these things are getting frequent software modifications, how do we ensure they're not being made less safe? How do we prevent catastrophic bugs from being beamed to a fleet of Teslas? Today, I don't think government is handling this at ALL. How about serious government testing and certification requirements for software changes BEFORE they're released to the public, for example?)
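To make the "certify before release" idea concrete, here's a toy sketch of the gate such a rule might imply: the vehicle only installs firmware that carries a valid certificate issued by the testing authority after it passes review. All names here are hypothetical, and a real scheme would use public-key signatures (the vehicle holding only the public key) rather than this shared-secret stand-in.

```python
import hashlib
import hmac

# Hypothetical signing key held by the certifying authority. In reality this
# would be asymmetric (e.g. Ed25519), with only the public half on vehicles.
REGULATOR_KEY = b"example-shared-secret"

def certify(firmware: bytes) -> str:
    """Authority signs a firmware image after it passes testing (hypothetical)."""
    return hmac.new(REGULATOR_KEY, firmware, hashlib.sha256).hexdigest()

def vehicle_accepts(firmware: bytes, certificate: str) -> bool:
    """Vehicle refuses any update whose certificate doesn't match the image."""
    expected = hmac.new(REGULATOR_KEY, firmware, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, certificate)

approved = b"v2.1 lane-keeping update"
cert = certify(approved)
assert vehicle_accepts(approved, cert)               # certified build installs
assert not vehicle_accepts(b"v2.1-tampered", cert)   # anything else is rejected
```

The point of the sketch is just that the check happens on the vehicle, so an uncertified build -- whether buggy or malicious -- can't be pushed to the fleet at all.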
Now, should regulations include things like serious security, to ensure it's not feasible for terrorists, smart teenagers, or criminals generally to electronically hijack AVs for fun, mayhem, and profit? Of course -- which is an example of why I strongly support seriously considered standards and regulations that look ahead and take AVs and their risks seriously.
But just blindly claiming that AVs are "worse" because we're not used to them or are uncomfortable with them, regardless of what data and safeguards exist, is an unreasonable position.
If we took that position with technology generally, we'd be a thousand or more years behind. (That would be better re things like AGW, but it's clearly not the way the world works.)