How (and why) to Ramble on your goat sideways
The liability question is actually a pretty tricky one. If a software bug causes the car to crash and kill someone, who do you blame? The driver/owner/operator/whatever? Google? The engineer at Google who wrote the code with the bug in it? If it's either of the latter two, then nobody will make a self-driving car because the risk is just too high. The former seems unfair if the "driver" was just using a feature he was told works. I don't know the answer.
--Ian
Generally the courts hold an employee harmless unless the fault was malicious or there was intent to cause harm. If it's something that can't be readily replicated, the driver will likely take the blame in the short term ("your honor, I have no idea why the car didn't stop... I had my foot on the brakes").
The legal system (civil, anyway) always goes after the deepest pockets.
In the past, automakers (or auto-equipment makers) have been held liable for damage arising from accidents in which, to some extent or another, faulty equipment either contributed to causing a collision, or increased the harm suffered as a result of a collision.
Examples of the former include defective tires and faulty or malfunctioning brake and steering systems.
Examples of the latter are things like faulty or badly-designed seatbelts, airbag systems, fuel tanks, and whatnot.
None of these things have made automakers hesitant to make cars.
In a very real sense, legislation which completely absolves automakers of any and all liability arising out of the navigational decisions of a self-driving car represents a radical shift in tort law, IF said legislation is in fact determined to cover software which is defective or improperly designed.
Why did you move to McDonough, specifically?
A design flaw in a mechanical system can often be compensated for by the living, breathing, thinking human who's sitting behind the wheel. If the car starts behaving strangely, the driver will usually do things like drive more slowly or stop and get out. Not always; there are idiots out there, and certainly some of them ignore things they should have been paying attention to, but most people are reasonably smart about this kind of thing. Conversely, if a software bug makes a self-driving car start doing strange things while it's taking the kids to school, there's no one there to stop it. That's a whole different ball game as far as liability is concerned.
Self-driving cars are something totally new: a bunch of two-ton autonomous robots running around on the roads at 60+ mph. I can think of no other software system out there with as much potential for harm, at least not without invoking SkyNet. OTOH, 30,000 people a year die in car accidents in the US, and the vast majority of those would be eliminated by a self-driving fleet.
And yes, I agree: making the manufacturer totally liability-free is not the answer. That's why I said it's a tricky question; there's no obviously right answer out there.
--Ian
Female long-term companion is relocating for work. I decided to go too. About 75% for her, and 25% because my current job makes me feel like I am barely treading water in a sea of mental retardation. The way my company does things is so backwards, and it's the cause of the majority of my stress and depression. It is time to go, and the pay ain't so hot anyway.
At the consumer level, what this law would do is eliminate the willingness of consumers to buy self-driving cars.
Since there aren't any good similar comparisons that I can make in the real world, I'll make one up:
Suppose you are a fruit procurement specialist for a large city school district. Your job is simply to buy fruit for student school lunches. You have two options for buying this fruit: "one-transaction-per-student" and "daily".
1. You can make a fruit transaction for every student that wants fruit, where you are required to specify whether you want "real fruit" or "poisonous fruit". The "poisonous fruit" can make a kid sick or occasionally kill them. Since everyone makes mistakes, it's possible to accidentally select "poisonous fruit" for a child (and in fact, kids get sick from accidentally-selected poisonous fruit at a rate of about once per week). Case history shows that other procurement specialists who selected "poisonous fruit" have been allowed to keep their jobs, as this was considered an "accident". The kid gets the selected fruit 100% of the time.
2. Your second option is to make the transaction on a daily basis. All of the student lunches will be filled with the fruit you specify for that day, but the sorting system provided by the fruit distributor is imperfect and does, very rarely, make mistakes (about 2-3 per year). However, if you select the automated option and a kid gets/eats a poisonous fruit, you legally bear full responsibility for the outcome, including medical bills, pain and suffering, and any civil and criminal responsibility for the death of the child.
One is tedious work, prone to mistakes.
The other frees you up to perform other tasks, and very rarely makes mistakes.
Which would you choose?
Why did you choose that?
What would make you choose the other option?
Does the manufacturer of the automated sorting equipment have an incentive to go through two decades of incremental improvements to their system to achieve greater (but never perfect) sorting accuracy?
Now: which of the options causes less total harm? (A rough expected-harm tally is sketched below.)
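To put rough numbers on the analogy (a back-of-the-envelope sketch; the incident rates are the ones stated above, and the 36-week school year is my own assumption):

```python
# Back-of-the-envelope tally of expected incidents per school year.
# The per-week and per-year rates come from the analogy above; the
# 36-week school year is an assumption added for illustration.

SCHOOL_WEEKS = 36  # assumed length of a school year

# Option 1: manual, per-student selection.
# Mistakes sicken a kid about once per week.
manual_incidents = 1 * SCHOOL_WEEKS

# Option 2: automated daily sorting.
# The sorter errs about 2-3 times per year; take the midpoint.
automated_incidents = 2.5

print(f"Manual selection:  ~{manual_incidents} incidents per year")
print(f"Automated sorting: ~{automated_incidents} incidents per year")
print(f"Manual causes ~{manual_incidents / automated_incidents:.0f}x more harm")
```

By that tally the automated option is roughly an order of magnitude safer, which is exactly the tension in the analogy: the option that causes less total harm is the one where the buyer personally eats all the liability.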
When I was in college, I took an elective course on ethics in computing. Mind you, this was in the mid 1990s, when the idea of a self-driving car was pure science-fiction.
For the final paper, I did a study of safety in man-machine interfaces with regard to human factors engineering. (Basically, how do you design machines to do what you want, rather than what you say, and protect you from the consequences of being an idiot without causing inadvertent harm in cases in which the man is, in fact, smarter than the machine.)
The two cases which I looked at in depth were Air France Flight 296 and the Therac-25. Fascinating stuff, really. I used to love doing heavy library research in the pre-Wikipedia era.
In the first case, fly-by-wire software designed to prevent pilots from commanding unsafe maneuvers wound up causing a fatal crash which wouldn't have happened in a conventionally-controlled aircraft. (It prevented the pilot from performing a rather extreme maneuver during a fly-by at an airshow. Unfortunately, the pilot didn't know that the airplane was going to ignore his command; he had successfully performed the same maneuver many times previously in older-generation aircraft of the same type.)
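Just to illustrate the failure mode (a minimal made-up sketch, not the actual Airbus control law; the limit value and function names are invented):

```python
# Toy envelope-protection sketch (NOT the real flight control law;
# the limit and the names here are invented for illustration).

MAX_PITCH_CMD_DEG = 15.0  # made-up protection limit

def protected_pitch(pilot_cmd_deg: float) -> float:
    """Return the pitch command the computer actually applies."""
    # The protection layer silently clamps the pilot's input.
    return max(-MAX_PITCH_CMD_DEG, min(MAX_PITCH_CMD_DEG, pilot_cmd_deg))

# The pilot asks for an aggressive pull-up he has flown many times
# in non-fly-by-wire aircraft...
commanded = 25.0
applied = protected_pitch(commanded)

# ...and gets something much tamer, with no warning that his input
# was rejected.
print(f"commanded {commanded:.0f} deg, applied {applied:.0f} deg")
```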
In the latter case, a software-controlled radiotherapy machine (used for radiation treatment of cancer) featured a user interface designed to make entering treatment parameters easy and to protect against novice errors. It had a very subtle timing-related bug that allowed massively lethal overdoses to be administered accidentally, but only when a highly-skilled operator (one who typed extremely quickly) made a small typo and then used an undocumented set of keystrokes to quickly correct it on the terminal screen, without going through the tedious process of re-entering all of the data from scratch. In previous-generation machines of the same type, there was no software interface; treatment parameters were set up in physical hardware. That allowed less-skilled operators to make serious mistakes, but it would never produce the sort of uncommanded overdose that resulted from a skilled operator entering correct treatment parameters in an unconventional way.
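And here's a toy sketch of that class of race condition, heavily simplified and entirely made up (the class name, tick counts, and modes are invented; the real Therac-25 bug involved shared variables between a data-entry task and a magnet-setup task):

```python
# Toy sketch of a Therac-25-style race (NOT the actual machine code;
# the class name, tick counts, and modes are all invented).

class ToyTherapyMachine:
    SETUP_TICKS = 8  # pretend the hardware takes 8 ticks to reconfigure

    def __init__(self):
        self.screen_mode = "xray"   # what the operator's terminal shows
        self.hardware_mode = None   # what the beam hardware is set up for
        self._latched = None
        self._ticks_left = 0

    def begin_setup(self):
        # BUG: the mode is latched once, here, and never re-checked.
        self._latched = self.screen_mode
        self._ticks_left = self.SETUP_TICKS

    def edit_mode(self, mode):
        self.screen_mode = mode  # the screen updates instantly...
        # ...but nothing restarts or invalidates the in-flight setup.

    def tick(self):
        if self._ticks_left > 0:
            self._ticks_left -= 1
            if self._ticks_left == 0:
                self.hardware_mode = self._latched

m = ToyTherapyMachine()
m.begin_setup()            # operator confirms treatment; setup starts
m.edit_mode("electron")    # fast typist corrects a typo mid-setup
for _ in range(ToyTherapyMachine.SETUP_TICKS):
    m.tick()

# The screen and the hardware now silently disagree:
print(f"screen says {m.screen_mode!r}, hardware set for {m.hardware_mode!r}")
```

The operator sees the corrected parameters on screen while the hardware quietly keeps the stale ones, which is the "correct data entered in an unconventional way" failure described above.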
Fascinating stuff...
An electronic power-steering system which faults in such a way that the steering is forced hard over to full lock?
A throttle-by-wire system which exhibits uncommanded acceleration?
Both are far worse, in objective terms, than a self-driving system which makes a calculated determination as to who will live and who will die in an accident scenario born of truly random happenstance. I'm far less concerned about genuinely random faults than about the litigation arising from the sort of deterministic behavior described in the video I posted.
I love the fact that new cars cost the same as a house these days...