1499 rating · 52 debates · 35.58% won
Topic
#5538
Make it illegal for people to drive in fully self-driving cars in most cases
Status
Finished
The debate is finished. The distribution of the voting points and the winner are presented below.
Winner & statistics
After 3 votes and with a 15-point lead, the winner is...
Intelligence_06
Parameters
- Publication date
- Last updated date
- Type: Rated
- Number of rounds: 3
- Time for argument: One week
- Max argument characters: 4,000
- Voting period: One month
- Point system: Multiple criteria
- Voting system: Open
- Minimal rating: None
1737 rating · 172 debates · 73.26% won
Description
Burden of Proof is shared
Pro: In the vast majority of cases, fully self-driving cars should not be allowed to be driven by people. Once they get in and start the car, setting specific destinations or requirements, the AI will take over and not allow them to drive. If the human is caught driving the car themselves, they will be fined a certain amount of money. Using the software to change the destination does not count as the human driving.
Con: In the vast majority of cases, we shouldn't punish people for driving their fully self-driving cars.
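For concreteness, the rule Pro describes can be sketched in code. This is a minimal sketch under assumptions of my own: the class and method names are invented for illustration, and the fine amount is a placeholder, since the description only says "a certain amount of money".

```python
# Hypothetical sketch of the rule Pro describes. All names are
# illustrative assumptions; the fine amount is a placeholder, since
# Pro's description only says "a certain amount of money".

FINE_AMOUNT = 500  # assumed placeholder value


class SelfDrivingCar:
    def __init__(self, destination: str):
        self.destination = destination
        self.ai_in_control = True  # the AI takes over once the car starts

    def set_destination(self, destination: str) -> None:
        # Per the description, changing the destination via software
        # does NOT count as the human driving, so no fine applies.
        self.destination = destination

    def human_drives_manually(self) -> int:
        # The punishable act: a human caught driving the car themselves
        # is fined "a certain amount of money".
        return FINE_AMOUNT


car = SelfDrivingCar("home")
car.set_destination("work")         # allowed: software destination change
print(car.human_drives_manually())  # fined: manual driving -> 500
```

The sketch just restates the description's three moving parts: AI control after start, exempt software destination changes, and a fine for manual driving.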
Round 1
Forfeited
Due to character limits I cannot quote the entire description, but look at it. Everything that Pro presumably intended as additional background information is instead placed inside his position. Therefore, instead of treating these as pre-set assumptions, we treat them as open claims waiting to be proven by Pro, and Pro has to prove each of them separately from one another:
- "In the vast majority of cases, fully self-driving cars should not be allowed to be driven by people." No problem with this, but good luck proving it.
- "Once they get in and start the car, setting specific destinations or requirements, the AI will take over and not allow them to drive." Will? This would exclude any case in which the AI or the car ceases to function properly. If the engine is broken, the AI will be unable to take over and "bar them from driving".
- "If the human is caught driving the car themselves, they will be fined a certain amount of money." Pro also has to prove why fining is the sole optimal form of punishment for this "crime".
- "Using the software to change the destination does not count as the human driving." Why not? This goes against the definition of "driving", which comes later.
"Driving"
Since Pro did not define "driving" in his description (I have no idea how he has not learned this yet) but instead put it in his position, where it is his burden to prove that his interpretation is the correct one, it is perfectly legitimate for me to show that he is wrong, or, in more rigorous terms, that more reliable sources disagree with him. Pro has sourced nothing so far.
Driving, in this context, is a variant of "drive" (verb): to move or travel on land in a motor vehicle, especially as the person controlling the vehicle's movement.
While this may seem like what Pro had in mind, we have to make a few things clear:
- Whether the person is the sole controller of the vehicle is not a hard requirement for "driving", as evidenced by "especially".
- "Controlling" is vague. Is mechanical transmission a requirement, or is electrical transmission enough? If strictly the former, then no person can steer a Tesla Cybertruck, because its steer-by-wire system has no mechanical linkage (see the sketch after this list). That is absurd.
- If an array of analog electrical signals intuitively counts as driving, then so does the electrical signal from pressing a single button. I instructed the car what to do next with one button; I controlled it. That is driving.
- If that is not driving, then Spotify is not "playing music" because its tracks are pre-recorded rather than performed by your own strokes; using a rice cooker ceases to be "making a meal" just because the machine automates most of the process despite you telling it what to do; and asking Alexa is not "turning on the lights", since Alexa is what actually switches the electrical circuit upon your instruction. That is absurd. If I specifically tell a machine what to do, I am operating it. Even pressing one button as an instruction to the engine would, by definition, count as driving. Therefore, not allowing humans to drive an "automated car" is in and of itself an oxymoron and should be regarded as impossible. This proves my position.
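To ground the steer-by-wire point above, here is a minimal sketch in which every function name is an illustrative assumption: a steering-wheel turn and a single "navigate" button press both reach the vehicle controller as the same kind of software command.

```python
# Illustrative sketch (all names assumed): in a steer-by-wire car,
# turning the wheel and pressing a "navigate" button both arrive at
# the vehicle controller as electrical/software commands. No mechanical
# linkage marks one of them as the "real" driving input.

def on_steering_wheel_turn(angle_degrees: float) -> dict:
    # The wheel is just a sensor; it emits a command, not torque.
    return {"command": "steer", "value": angle_degrees}

def on_destination_button(destination: str) -> dict:
    # A single button press is likewise a command to the same controller.
    return {"command": "navigate", "value": destination}

def vehicle_controller(command: dict) -> None:
    # Both inputs are handled identically downstream.
    print(f"actuating: {command['command']} -> {command['value']}")

vehicle_controller(on_steering_wheel_turn(15.0))
vehicle_controller(on_destination_button("downtown"))
```

On this model, the only difference between the two inputs is granularity, not kind, which is exactly the point of the bullet above.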
"Law"
Of course, Pro may have intended to prove the possibility of a fully automated AI car that never, NEVER requires external help. Problem: if any such car breaks, human intervention is required, making it not "fully automated". So Pro is asking for an unbreakable car, which is all but impossible.
Even if we go with "if human intervention is required, it is not fully self-driving", that would make this law completely moot, as it would cover a total of zero current and potential cases, the same way a proposal to "kill Batman when he arrives" would. It is just not worth adding that to the law.
Come again.
Round 2
ded vote con
Sure, that's fine by me too.
Round 3
Forfeited
Happy 4th birthday to my account.
What you could have actually done is bring up: "Well, what about taxi drivers? They do what I tell them to do; they take me wherever I instruct them to, just as Alexa does. In that case, following your logic, shouldn't *I* be the one driving the taxi, not the so-called taxi driver?"
This would essentially derail into whether AIs should be held morally responsible in any circumstance, at any level of intelligence. Because, as we all know, the difference between an actual taxi driver and a rice cooker is that a rice cooker is not sentient. Seeing how unclear the field of AI ethics currently is (trust me), doing that would actually have given you an edge, especially since I don't know much about moral philosophy either and can't claim that I do.
The scary thing is: the closer an AI gets to the sentience of an actual human (such as a taxi driver), the closer it gets to being able to control the car better than we do while remaining trustworthy. Basically, that would mean we ought to treat it more like a buddy, such as Chad the next block over, than a tool like Alexa or Siri. Using this example to your advantage might well have cost me the win, and I don't think I could have done anything about it in the near future.
I don't know how to survive in muddied water. I just scavenge the riverbed, seeking solid pillars protruding from the shore. All you have to do is push me off one of them.
But thanks for the concession anyway.
I know you are going to come up with some wild interpretation of the topic, especially given all that description.
The fact that Pro put all of this in his stance, rather than outside it as a general description of the debate's setup, makes it hilarious.
A wok is a tool. A rice cooker is also a tool. You see where I'm going with this?
Well, if you want to motivate people even more not to buy self-driving cars, sure.
I'm libertarian on this issue.