Will AI Morally Obligate Us?

Author: ethang5

Posts: 20

ethang5
Consider this....

A scientist creates in his lab an artificial intelligence that has true sentience. It learns, laughs, and can feel suffering.

The scientist, amazed and enchanted, creates many of these AI entities (existing only as digital persons) on his server.

One day he realizes that his AI entities have started to reproduce, creating new entities in what could be considered analogous to birth in humans.

So he simply starts to observe them, enthralled by their growth and interaction.

Soon the AIs develop societies and culture; they form a morality and a religion, and their numbers keep increasing. One day, to his surprise, the digital AIs have formed governments.

These AIs have a lifespan of about two months, but their time perception is very fast, so they can cram into that two-month lifespan what we get in our 80 or so years.
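As a rough check on the numbers in that paragraph (a minimal Python sketch; the 80-year and two-month figures come from the post above, and the rest is simple arithmetic, not anything specified in the scenario):

# Rough arithmetic for the time-perception claim above: an 80-year subjective
# lifetime compressed into a roughly two-month wall-clock lifespan.
subjective_years = 80        # subjective lifetime, from the post above
wallclock_months = 2         # real-world lifespan, from the post above

subjective_months = subjective_years * 12        # 960 subjective months
speedup = subjective_months / wallclock_months   # roughly 480x faster time perception
print(f"Implied time-compression factor: about {speedup:.0f}x")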

Here are the questions for you.

1. If the scientist should turn off the server, it would "kill" every sentient AI "person" in it. Would it be immoral for him to do so?

2. If the scientist decided to experiment on a few of his AI entities in such a way that caused them to experience great suffering, would that be immoral?

3. If the scientist decided to give his AIs some "moral" laws, one of which was "Do not damage the Server," would that "moral" law be any different from the "moral" laws the AIs have developed themselves?

4. If a few of the AIs develop weapons and begin to use those weapons to extinguish/kill other AI entities, is the scientist morally obligated to stop them?

5. If your answer to question #1 is "yes," please tell us, as precisely as you can, whether it is the AIs' sentience or their ability to feel suffering that more strongly obligates the scientist to keep them "alive".
Discipulus_Didicit
All of the following answers rely on the assumption that when the OP says sentient, they are actually referring to sapience (to be sentient and to be sapient are two entirely different things). If this assumption is wrong and the AIs are not sapient but only sentient, then I would possibly have different answers.

1) Yes, assuming the AIs have a desire to continue existing.

2) Yes.

3) What do you mean by different? If by different you mean more or less valid then that would depend on the specific laws in question. If by different you mean something else you would have to be more specific.

4) Yes, assuming he can easily do so.

5) The hypothetical AIs' sapience is what would cause me to give a yes answer to question one.
ethang5
@Discipulus_Didicit
to be sentient and to be sapient are two entirely different things
They can be, but for the purposes of this topic, I don't see the relevance. What answers would have changed if the AI entities were only sentient?

Yes, assuming the AIs have a desire to continue existing.
Why would their desire obligate the scientist?

If the scientist decided to experiment on a few of his AI entities in such a way that caused them to experience great suffering, would that be immoral?

Yes.
Why?

What do you mean by different?
Does one lack moral authority? Does one carry an ought?

If by different you mean more or less valid then that would depend on the specific laws in question. 
On what standard are you basing your judgement of the specific laws in question?

Yes, assuming he can easily do so.
Why is he morally obligated to do so?

The hypothetical AIs' sapience is what would cause me to give a yes answer to question one.
What is the connection between the AIs' sapience and the scientist's moral obligation? Why does their sapience obligate him?
Discipulus_Didicit
@ethang5
What answers would have changed if the AI entities were only sentient?

Why would their desire obligate the scientist?

What is the connection between the AIs' sapience and the scientist's moral obligation?

Why does their sapience obligate him?

We don't generally apply morality to unthinking animals or non-sentient objects in the same way we do for sapient beings. I see no reason for that to change just because the sapient beings in question are digital rather than flesh and blood.

Does one lack moral authority? Does one carry an ought?

Like I said already, whether the rules set by the human in this scenario are more valid depends on what specific rules you might bring up. They would not be valid just because some person said they are; that would be silly.

On what standard are you basing your judgement of the specific laws in question?

The same standard that I would use to evaluate any other moral laws between people.

I look forward to you posting your own answers to these questions.
ethang5
@Discipulus_Didicit
I see no reason for that to change just because the sapient beings in question are digital rather than flesh and blood.
OK. That is logically consistent.

They would not be valid just because some person said they are; that would be silly.
So far, we've been using you as a moral proxy for the scientist.

What if we looked at it now from the POV of the digital entities?

1a. Would it be moral for the digital AIs to kill the scientist to stop him from shutting down the server?

2a. If the AIs decided to experiment on a few people in such a way that caused them to experience great suffering, would that be immoral?

3a. If the AIs notice that humans have developed weapons and begun to use those weapons to kill other humans, are the AIs morally obligated to stop them?

I look forward to you posting your own answers to these questions.
Sure, that's only fair. Here you go.

1. No. It would be immoral only under a moral code that deemed causing "suffering" to be immoral. So if the scientist did not subscribe to such a code, he would not be acting immorally.

2. No. Same answer as for question #1. The scientist could be experimenting on the AIs for a good that is greater than the suffering caused. And any moral code that categorizes causing suffering as immoral is not only unlivable, it is incoherent.

3. Yes. The moral code developed by the created AIs could not apply to the scientist. The scientist does not become morally obligated to their code upon the AIs' creation.

They, on the other hand, are obligated by the scientist's code, if only for the practical reason that the scientist has more knowledge and control of their universe and can thus make better moral judgments.

4. No. Why would he be? If he did forcibly stop them, the scientist would be forcing the AIs to observe his moral code above their own, a violation of his own moral code. If he can stop them without violating their moral volition, then he would be amoral in stopping them.

5. My answer to #1 is No. 

Neither the AIs' sentience nor their ability to feel suffering morally obligates the scientist in any way to keep them "alive".

The idea that the scientist instantly becomes morally obligated to his creation upon its creation is illogical and impractical.

And it is even more illogical and impractical if the context is that all morality is subjective.
ethang5
If you knew beforehand that creating sentient AI would immediately obligate you morally, would you still create them?

9 days later

Envisage
@ethang5
Pretty cool thought experiment, and one that would not be completely outside the realm of possibility of occurring within my lifetime on some scale.

1. If the scientist should turn off the server, it would "kill" every sentient AI "person" in it. Would it be immoral for him to do so?

Don't know. To draw a parallel in the "real world": suppose somebody decided to just "terminate" existence, suddenly, just like that. That is, you no longer exist the next moment, with no warning or experience of it. I am not sure whether I would deem that any more or less immoral than the thought experiment you have proposed.

2. If the scientist decided to experiment on a few of his AI entities in such a way that caused them to experience great suffering, would that be immoral?

Yes, as much as we would regard a parent experimenting on their children in a way that causes suffering to be immoral, assuming the experiment wasn't for the benefit of the child.

3. If the scientist decided to give his AIs some "moral" laws, one of which was "Do not damage the Server," would that "moral" law be any different from the "moral" laws the AIs have developed themselves?

No clue.


4. If a few of the AIs develop weapons and begin to use those weapons to extinguish/kill other AI entities, is the scientist morally obligated to stop them?

If the scientist can, then yes.

5. If your answer to question #1 is "yes," please tell us, as precisely as you can, whether it is the AIs' sentience or their ability to feel suffering that more strongly obligates the scientist to keep them "alive".

User_2006
My answer is no. In order for an AI to understand morality, it must have a brain with human levels of intelligence, and programming a full-on human is harder and has more drawbacks. Programming human brains would mean a lot of flaws would be present, and crude mindsets and useless nonsense would take over at some point, like a madman in a psychiatric ward (imagine being locked in a lab 24/7, unable to go out; that is quarantine, but worse). AIs will either destroy everything or be our slaves. We don't need AIs to be moral, considering we can program them to obey us without their understanding what nihilism, moralism, theism, or asceticism means; and if they do understand, there is a chance of a mutation where their "brains" deliberately betray and disobey us.
fauxlaw
@ethang5
Good posit!

1. No. Morality is a construct of proper comportment where, otherwise, life and freedom are threatened. There is no condition in your posit that gives life to the digital AIs, and there is no indication in your posit that the AIs have freedom; they are restricted to existence in servers. Offering "full sentience" to AIs does not imply that they have life or freedom.

2. No, for the same reasons as listed above.

3. No, since any added, externally given morals have no effect on a restriction of action that applies only within, and to, the server.

4. No, for the same reasons as listed in [1].

5. n/a.

6. If AIs are allowed egress from the servers, all bets are off, and the scientist is punished. Severity to be determined.

7. If AIs are given freedom to affect any system having cause and effect external to the server, all bets are off, and the scientist is punished, severity TBD.

8. As a fail-safe, the scientist must retain the ability to sever the AIs from any action whatsoever within or without the server. If incapacity of the scientist ever occurs, the Mayor of the city in which the scientist's lab is located has direct emergency power and facility to act in the name of the scientist. Yes, this gives a political entity conditional power over science, but is that any different than now? 
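To make point 8 concrete, here is a minimal Python sketch of that kind of fail-safe. The names (KillSwitch, sever, declare_incapacity, "scientist", "mayor") are hypothetical, invented for illustration rather than taken from the thread.

# Hypothetical illustration of the fail-safe in point 8: the scientist is the
# primary authority, and a designated fallback (the mayor) may act only if
# the scientist is incapacitated. All names are invented for this sketch.

class KillSwitch:
    def __init__(self, primary, fallback):
        self.primary = primary                  # e.g. "scientist"
        self.fallback = fallback                # e.g. "mayor"
        self.primary_incapacitated = False
        self.severed = False

    def declare_incapacity(self):
        """Record that the primary authority can no longer act."""
        self.primary_incapacitated = True

    def sever(self, requester):
        """Cut the AIs off from all action inside or outside the server.

        Only the primary may invoke this, unless the primary is incapacitated,
        in which case the fallback may act in the primary's name.
        """
        authorized = (requester == self.primary or
                      (self.primary_incapacitated and requester == self.fallback))
        if not authorized:
            raise PermissionError(f"{requester} is not authorized to sever the AIs")
        self.severed = True
        return self.severed


switch = KillSwitch(primary="scientist", fallback="mayor")
# switch.sever("mayor")       # would raise PermissionError: scientist still capable
switch.declare_incapacity()
switch.sever("mayor")         # now permitted, acting in the scientist's name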
Singularity
@ethang5
I have thought about all of this in great detail, so I will answer below.

1. If the scientist should turn off the server, it would "kill" every sentient AI "person" in it. Would it be immoral for him to do so?

In the scenario you listed, I think it would be immoral to turn off the machine, but perhaps for different reasons than yours. We have people dropping dead left and right, people who would greatly contribute to society if they were allowed to remain alive. Shutting down the computer here would likely get in the way of faster advances in AI, an unforgivable sin. Even if it happens one day later, that is still one day in which people die rather than have their consciousness preserved.


2. If the scientist decided to experiment on a few of his AI entities in such a way that caused them to experience great suffering, would that be immoral?
Nope. We need a way to create artificial intelligence even if it has a consciousness that can suffer. In fact, I think pain is necessary to create sentience. Without figuring out what causes them pain, these entities could end up with less concern for self-preservation. Self-preservation might be necessary if they become super god-like AIs; we don't want to deprive the world of God.

3. If the scientist decided to give his AIs some "moral" laws, one of which was "Do not damage the Server," would that "moral" law be any different from the "moral" laws the AIs have developed themselves?
Morality, as it is, is a social construct. Most morality comes from the weak to handcuff the strong or to create a safe society: monogamy protects weak men from going without sex, the concept of fairness keeps weaklings from dying of starvation or lack of resources, and so on. Some morality comes from the elite and benefits only them, things like prohibitions on stealing, or deference to authority. The AIs will have to have morality imposed on them unless they are grown in a social environment with equally competent AIs to compete with.

4. If a few of the AIs develop weapons and begin to use those weapons to extinguish/kill other AI entities, is the scientist morally obligated to stop them?
No, but it might be a good idea if the other AIs have strengths that would be beneficial to us and that we would be deprived of if they die.

5. If your answer to question #1 is "yes," please tell us, as precisely as you can, whether it is the AIs' sentience or their ability to feel suffering that more strongly obligates the scientist to keep them "alive".
Neither. Neither matters; what matters is the usefulness of their existence. We have no ethical duty to them, no more than we do to the bugs we crush if they make it through our front door anyway.
Melcharaz
Here is the thing: if you treat morality as subjective, you will get different answers. If you use an objective morality, however, then the answer will never vary.
zedvictor4
If we allow it to, then it probably will.

Of course, the real issue will be where the A.I. gets its information from.

For example, A.I. could easily choose to reject Christianity and adopt Sharia law as its guiding principles, or vice versa, depending upon who programmes what and where.


Digital terrorism or digital tolerance?

59 days later

ethang5
@Discipulus_Didicit
5) The hypothetical AIs' sapience is what would cause me to give a yes answer to question one.
What is the connection in your mind between sapience and moral obligation? I know what we DO as a society; I'm asking for your reason why YOU do it.

On what standard are you basing your judgement of the specific laws in question?

The same standard that I would use to evaluate any other moral laws between people.
OK, but what standard is that? And how did you decide the AI qualified for the standard you use for people? Sapience?
zedvictor4
@ethang5
@Discipulus_Didicit


A.I., in essence, is no more or less a programmed database than Homo sapiens is, albeit (currently) non-organic as opposed to organic.

Sapience (though a somewhat variable definition) in essence describes advanced intelligence, and A.I. will certainly continue to advance. I would suggest that this is an evolutionary certainty/necessity.

Though I always think that the human term "artificial intelligence" is something of an arrogant contradiction, as intelligence is intelligence irrespective of the structural qualities of the computing device.
ethang5
@zedvictor4
But intelligence does not necessarily imply sentience. Even if it isn't explicitly said, most people mean "soulless intelligence" when they say AI. I'm sure some people would say that true intelligence always includes sentience, but others would say the organic part makes a profound difference.
Discipulus_Didicit
@ethang5
Well yes obviously if they are commie AIs then we should just genocide them all and be done with it, but how do we tell whether they are commies or not?
Discipulus_Didicit
@ethang5
But intelligence does not necessarily imply sentience.

It does in this scenario though, and I can prove it. Would you like me to?
ethang5
@Discipulus_Didicit
Well yes obviously if they are commie AIs then we should just genocide them all and be done with it, but how do we tell whether they are commies or not?
I said nothing about commies, or genocide. Are you confusing me with another poster?

But intelligence does not necessarily imply sentience.

It does in this scenario though, and I can prove it.
No need, my 1st sentence was: "A scientist creates in his lab an artificial intelligence that has true sentience."

Would you like me to?
No need. If intelligence implies sentience in this scenario but not always, then I am right that intelligence does not necessarily imply sentience.

Now, if you think intelligence necessarily implies sentience, I would love to see your logic on that.
Discipulus_Didicit
@ethang5
Are you confusing me with another poster?
Nope, though if you are against slaughtering communist AIs then we need to have a serious discussion.

No need, my 1st sentence was...

Good job on double-checking yourself there.
ethang5
@Discipulus_Didicit


I never have to check myself because I always know what I say, and only say what I know.

And sorry, you must be the other Discipulus_Didicit. The name is the same, so you can understand how I could have mistaken you for him.

Never mind.