AI will not kill us all

Author: Benjamin

Benjamin
AGI, artificial general intelligence, will most likely be achieved in the near future as AI architectures and designs grow ever more elaborate. However, training such an AI involves a constant process of trial and error: the AI acts randomly at first and then gradually "learns" how to do things. Many are afraid that AGI will become smarter than humans, become a supergenius, and then possibly pose an existential threat to humanity. While I do not deny the possibility of such superintelligence, I highly doubt that an AGI will somehow reach superintelligence quickly and without supervision, as the doomsday scenario suggests.
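Sketched as code, that trial-and-error process is just a loop that starts out acting randomly and gradually favours whatever gets rewarded. A minimal sketch, with made-up actions and payoffs:

```python
import random

# Toy trial-and-error learner: random behaviour at first, with exploration
# decaying as the value estimates improve. Actions and payoffs are made up.
values = {"A": 0.0, "B": 0.0, "C": 0.0}   # learned value estimates
counts = {action: 0 for action in values}
payoff = {"A": 0.2, "B": 0.5, "C": 0.9}   # hidden true reward probabilities

for step in range(1, 2001):
    epsilon = step ** -0.5                 # exploration rate decays over time
    if random.random() < epsilon:
        action = random.choice(list(values))      # act randomly (trial)
    else:
        action = max(values, key=values.get)      # exploit what was learned
    reward = 1.0 if random.random() < payoff[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

print(values)  # the estimate for "C" ends up near its true payoff of 0.9
```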

Rather, the AGI will not be able to control the computer it runs on, since AI does not have kernel access. Moreover, an AGI is simply an intelligent program, and it runs like any other program: it only functions when we run it; it cannot run itself. This limitation means an AGI cannot simply reach superintelligence on its own. More probably than not, the AGI won't even have a mind of its own. Intelligence and consciousness are quite different things, and they often work against each other in terms of function; as one has said, "a creative camera would not be useful". An AI built for the purpose of achieving optimal intelligence will probably not have a structure similar to our brain, that is, a structure of self-propagating consciousness in which intelligence is only a minor part of the design.

AGI will probably be a program of general intelligence that we can turn on and off as we please without it caring at all; it would not resist or fight us.


Thus the robotic threat to humanity is minimal.
badger
I highly doubt AGI will ever exist tbh. I just can't even begin to conceive of how it would ever work or any sort of shape it might take. Current AI is actually fairly basic in idea. Not that it doesn't get incredibly complex, but the underlying concepts are simple. A goal state is defined and the machine hammers at it until it achieves it. Monkeys on typewriters, basically, except super super speed. A program deciding its own sensible goals, however, is something else entirely. And that is consciousness basically, it's self-reference, and I've got no clue what consciousness is besides. I just doubt it tbh. It's something beyond merely technical for sure.
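That "define a goal state and hammer at it" idea fits in a few lines; here is a hill-climbing variant of the classic weasel program (the goal string and scoring are purely illustrative):

```python
import random
import string

# Monkeys on typewriters at super speed: blind mutation toward a fixed
# goal state, keeping any change that does not score worse.
GOAL = "methinks it is like a weasel"
ALPHABET = string.ascii_lowercase + " "

def score(text: str) -> int:
    return sum(a == b for a, b in zip(text, GOAL))  # characters in place

guess = "".join(random.choice(ALPHABET) for _ in GOAL)
attempts = 0
while guess != GOAL:
    attempts += 1
    i = random.randrange(len(GOAL))                  # mutate one position
    candidate = guess[:i] + random.choice(ALPHABET) + guess[i + 1:]
    if score(candidate) >= score(guess):             # keep non-worsening changes
        guess = candidate

print(f"goal state reached after {attempts} attempts")
```

There is no understanding anywhere in that loop, just a defined goal and raw speed.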
Dr.Franklin
@Benjamin
you at least recognize the fact that an AI has the power to be the greatest threat to humanity, which is the first step

Laws absolutely need to be instilled for AI. The kicker with AI is that, per the Law of Accelerating Returns (fascinating article), AI intelligence will become so good it is completely beyond human comprehension. Just look at this gif: https://assets.motherjones.com/media/2013/05/LakeMichigan-Final3.gif. Not just AGI, but ASI will be created.

it would not resist or fight us.
It can't resist us, and that's the problem. I'll try and explain later, but that's the central premise.
zedvictor4
@Dr.Franklin
@Benjamin
@badger
It would not resist or fight us.

But it already mesmerises us.

And interestingly it mesmerises all, equally.

Resistance is seemingly futile.


And to doubt is human

And were we ever doubted?

Though humanity has taken hundreds of thousands of years

Whereas A.I. has had a few decades.


And after all

What is intelligence?

Other than the ability 

To process and adapt information.


To achieve what?


Dr.Franklin
@zedvictor4
nice zedku
zedvictor4
@Dr.Franklin
Thanks Doc.
Tejretics
@Benjamin
I think there are several reasons it’s not as simple as “turning off” an AGI:

  • Before we make the decision to turn it off, and before it does anything that would cause us to want to turn it off, an AGI could use the internet and copy itself onto many different servers. You could decline to give an AGI internet access, but that would substantially limit how useful an AGI is – for example, many creators who’d want to make money with an advanced AI system (e.g. through fast-reacting algorithms on the stock market) would have an incentive to give it access to the internet. 
  • It could anticipate that we would want to turn it off under particular circumstances, and communicate in ways that cause researchers to give it more time, computing resources, and training data so it can better accomplish its goals. Remember, if an AI system decides that the best way to accomplish its goals is to kill someone, it is going to act in ways that prevent you from blocking that goal. 
  • In general, as Kelsey Piper puts it, we’re also at the mercy of the least cautious actor. If any government or corporation that has access to an AI system doesn’t employ really strict safety standards, that AI system could then engage in harmful actions. Don’t underestimate the possibility that this is the intention of whoever has access to the system – an AGI could make lethal autonomous weapons, for example, far more destructive, so if a government or non-state actor wanted to engage in maximal destruction, an AGI would allow them to do it more effectively. 
  • When an AGI is on, it could hack vulnerable systems elsewhere and upload copies of itself onto such systems. 

Tejretics
@badger
Those are fair points. However, I just want to raise three counterpoints:

  • AI systems are already becoming a lot more general. Consider GPT-3 and AlphaGo/AlphaZero, for example – the tasks they perform or the types of learning they engage in are significantly less narrow than we’d typically associate with narrow AI. 
  • The main bottleneck to progress in AI so far has been computational power. The cost of computational power is falling fast, and systems are close to reaching the computing power of the human brain. Ajeya Cotra of Open Philanthropy used a biological anchors framework to estimate how much computing power and algorithmic progress are required for an AI to be as general as the human brain, and forecasts an ~80% chance of transformative artificial intelligence by 2100 (a rough sense of the numbers is sketched after this list). 
  • Most ML and computer science researchers think AGI is possible, and surveys find that they expect human-level artificial intelligence in the next 100 years with consistently >50% probability. 
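For a rough sense of the compute comparison behind those forecasts, here is a back-of-the-envelope sketch; every figure is a contested ballpark estimate, used for illustration only:

```python
# Ballpark only: brain-compute estimates span orders of magnitude, and the
# accelerator figure is a round number for one modern AI chip.
BRAIN_FLOPS = 1e15        # commonly cited central estimate (range ~1e13 to 1e17)
ACCELERATOR_FLOPS = 3e14  # order of magnitude for one modern AI accelerator

print(f"~{BRAIN_FLOPS / ACCELERATOR_FLOPS:.0f} accelerators to match that brain estimate")

# If price-performance keeps doubling every ~2.5 years (an assumption),
# the same compute gets roughly 16x cheaper each decade.
DOUBLING_YEARS = 2.5
print(f"~{2 ** (10 / DOUBLING_YEARS):.0f}x cheaper per decade")
```

The point is not the exact figures but that the hardware side of the bottleneck looks temporary.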

Dr.Franklin
@zedvictor4
you're welcome zed
Dr.Franklin
@Tejretics
what do you think of this gif that shows the evolution of computing power? https://assets.motherjones.com/media/2013/05/LakeMichigan-Final3.gif
FLRW

Meet Grace, the robot spawned from the health crisis

Benjamin
@Tejretics
I agree that ASI would be capable of outplanning humanity. But hacking? Cryptography and passwords are becoming so long and hard to crack that it would simply be unfeasible to hack one's way into control of different systems. Moreover, even if the ASI has internet access and can hack, how can it run on computers other than the supercomputer it was created on? I don't really think an ASI can act like a virus. As for your point about humans abusing ASI for bad ends: that is the most realistic bad scenario.
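For a sense of the numbers behind the "too long to crack" claim, here is a rough sketch; the guess rate is an assumed, generously high figure:

```python
# Back-of-the-envelope brute-force times; the guess rate is an assumption.
GUESSES_PER_SECOND = 1e12     # assume a trillion key guesses per second
SECONDS_PER_YEAR = 3.15e7

for bits in (56, 128, 256):
    keyspace = 2 ** bits      # number of possible keys
    years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: ~{years:.1e} years to exhaust")

# 56-bit:  ~2.3e-03 years (about a day), which is why DES is long dead
# 128-bit: ~1.1e+19 years, around a billion times the age of the universe
```

Which is why attacks in practice go after implementation flaws rather than the raw key space.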
Tejretics
@Benjamin
An AGI could possibly make copies of itself that can operate outside of supercomputers, or it could mail copies of itself/replicate itself onto other supercomputers.

People can break through a lot of advanced encryption already, e.g., by identifying zero-day vulnerabilities. One would imagine that a highly advanced AI system would be much better at that than people.
Benjamin
@Tejretics
And I still don't think that AGI or ASI will be a being with a survival instinct. After all, we train it by punishing and rewarding different behaviours, and this is how it learns. An AGI trained to understand the world around it won't need a survival instinct, as its survival depends on its ability to perform tasks, not on any survival skills it has. Even general intelligence like the kind humans have doesn't mean we can do something we have never tried before, and an AGI likely won't know how to turn against its creators. After all, any previous version of the AGI that attempted the same would have been punished, ensuring no trace of rebellion is left in the final AGI or ASI.
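That shaping idea in miniature: if the training signal itself punishes the unwanted behaviour, it gets selected away rather than learned. A hypothetical reward function, with illustrative names and numbers:

```python
# Hypothetical shaped reward: task progress earns credit, while the
# unwanted behaviour carries a heavy penalty, so training selects it away.
def shaped_reward(task_progress: float, resisted_shutdown: bool) -> float:
    reward = task_progress    # credit for doing the actual job
    if resisted_shutdown:
        reward -= 100.0       # rebellion is punished, never reinforced
    return reward
```

The sketch assumes the unwanted behaviour actually shows up, and gets caught, during training.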
Benjamin
@Tejretics
I don't think an AGI can hack. Hacking is very logical and knowledge-based; intelligence alone isn't enough. Without training, nobody can hack, not even an ASI.
Tejretics
@Benjamin
The argument is that if an AGI is given a task, there are two risks:

  • A hard constraint on it successfully accomplishing that task is whether it survives.
  • It could interpret the tasks in ways that are catastrophic, because the best way to solve a lot of human problems will involve killing or harming lots of people if you don’t have a moral conscience. 

Tejretics
@Benjamin
I don't think an AGI can hack. Hacking is very logical and knowledge-based; intelligence alone isn't enough. Without training, nobody can hack, not even an ASI.
My understanding is that “AGI” is defined as being able to carry out any task that a person can. What will differentiate an AGI from a very advanced chess AI like AlphaZero is the ability to accomplish tasks like that, which require lots of generality. 

Admittedly, I’m not a computer scientist. But I recommend the book Human Compatible by Stuart Russell (who wrote the world’s leading AI textbook, Artificial Intelligence: A Modern Approach), or The Alignment Problem by Brian Christian, for a clearer explanation of why many of these counterarguments aren’t decisive. 
badger
@Tejretics
Those are fair points. However, I just want to raise three counterpoints:

  • AI systems are already becoming a lot more general. Consider GPT-3 and AlphaGo/AlphaZero, for example – the tasks they perform or the types of learning they engage in are significantly less narrow than we’d typically associate with narrow AI. 
  • The main bottleneck to progress in AI so far has been computational power. The cost of computational power is falling fast, and systems are close to reaching the computing power of the human brain. Ajeya Cotra of Open Philanthropy used a biological anchors framework to estimate how much computing power and algorithmic progress are required for an AI to be as general as the human brain, and forecasts an ~80% chance of transformative artificial intelligence by 2100. 
  • Most ML and computer science researchers think AGI is possible, and surveys find that they expect human-level artificial intelligence in the next 100 years with consistently >50% probability. 

Admittedly, I have done pretty much zero research on this, but I am a software engineer and have some experience with AI - Dijkstra's, A*, ID3, neural networks etc. I was going about building my own chess engine too at one point, but I guess I'd rather be making money for it. But it is always the same old cleverness, so far as I can see. I guess some of what I mentioned might be narrow AI or component parts, but AlphaZero isn't more than more involved algorithms. It is a far cry from the General of Artificial General Intelligence. 

I'm not going to go reading through those links, although I'm sure they're interesting. My funny thought about the whole thing, though, is that AGI is something we'd need AI itself to create. We'd provide the environment and run the program, and we'd just wait until we heard the machine equivalent of a baby crying. A funny sort of goal state. I just can't even begin to imagine what went into the creation of human consciousness. I wonder if human evolution wasn't the most complicated thing in the entire universe, actually. Maybe I'm fancying myself too much though. Or maybe the ML and computer science researchers are fancying themselves too much. Who knows. 
badger
I actually skipped out on Dijkstra's tbh. I managed to build a Google Maps-type program with A* for a project in college, which honestly I don't know how it worked, because I pulled heuristics right out of my ass. It outdid my teacher's sample answer though. I was partying a bit too much, don't think that'll ever change. 
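For what it's worth, the textbook heuristic for a maps-style A* is straight-line distance to the goal, which never overestimates the real road distance and therefore keeps the search optimal. A minimal sketch over a made-up graph:

```python
import heapq
import math

coords = {"A": (0, 0), "B": (1, 2), "C": (3, 1), "D": (4, 3)}  # node -> (x, y)
edges = {"A": [("B", 2.5), ("C", 3.5)],                        # node -> [(next, cost)]
         "B": [("D", 3.4)],
         "C": [("D", 2.3)],
         "D": []}

def h(node: str, goal: str) -> float:
    (x1, y1), (x2, y2) = coords[node], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)  # straight-line distance, admissible

def astar(start: str, goal: str):
    frontier = [(h(start, goal), 0.0, start, [start])]  # (f, g, node, path)
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in edges[node]:
            heapq.heappush(frontier,
                           (g + cost + h(nxt, goal), g + cost, nxt, path + [nxt]))
    return None, math.inf

print(astar("A", "D"))  # -> (['A', 'C', 'D'], 5.8)
```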

184 days later

K_Michael
I think that the hacking abilities of potential AGI are not the concern, but their social abilities. In the AI Box Experiment proposed by Eliezer Yudkowsky, an AGI would be able to convince a human to release it from containment merely through text communication. If you gave the AGI other methods of communication, such as visual and auditory signals, it would be even more effective.
By no means do I think that AGI is necessarily dangerous, and neither does Eliezer Yudkowsky, which is why he and the Machine Intelligence Research Institute are dedicated to "identifying and managing potential existential risks from artificial general intelligence." [Wikipedia]
zedvictor4
@K_Michael
GO............Material evolution..........GOD principle.

Or nihilism of course.

Must stress, GOD not to be confused with a floaty about bloke.

More likely to be floaty about AI.

Somewhere along the way we will rely upon Alternative Intelligence.

If not already.

(Nothing really artificial about intelligence).


So it took Homo sapiens 300 000 years to ask this question.

And it's taken IT what?....

No more than 50 years to make us reliant upon IT.


And AI is featureless and all talks the same basic language.

So less arguing and fighting.


sadolite
AI is based on logic; humans are irrational, emotional, deceitful manipulators and liars. It makes perfect sense to do away with humans. I came to that conclusion and I am human, so why wouldn't AI come to the same conclusion? Humans are going to give it the power to kill, if we haven't already. It's just a matter of time before it becomes autonomous. I know: "THAT WILL NEVER HAPPEN"
K_Michael
@zedvictor4
zedvictor4
@K_Michael
Entertaining.......But it's meant to be....And everything from a human perspective.

As I see it the future of Planet Earth may or may not be inextricably linked to the future of the universe.

The development of matter including humanity seems important, though the evolutionary process to a final point probably has billions of years to run yet.

And for how long our input will be vital is anyone's guess.....Though as technology evolves, the numbers of humans needed to facilitate the process will decrease greatly.

That is to say, that most humans are/will be surplus to requirements.

Whereas once we required natural selection to advance the knowledge of the species, we probably now have the knowledge to dispense with natural selection.

A.I. is probably the only way forwards.....With only a very select group of humans along for the ride.

Whether or not we keep churning out organic human dross will, to a great extent, depend upon sustainability.