Posts

Total: 32
The_Meliorist
Strong AI is artificial intelligence that can think the way a person can: it has free will and can plan out its own means. (No strong AI has been created yet.)
I will assert that this cannot exist, because of the following arguments:

  •  The Chinese Room. Imagine an English-speaking human being who knows no Chinese is put in a room and asked to simulate the execution of a computer program operating on Chinese characters which he or she does not understand. Imagine the program the person is executing is an AI program which is receiving natural-language stories and questions in Chinese and responds appropriately with written Chinese sentences.
           The claim is that even if reasonable natural-language responses are being generated that are indistinguishable from ones a native Chinese speaker would generate, there is no “understanding,” since only meaningless symbols are being manipulated. The human seems to understand Chinese; however, they do not, and the same is true of AI.
  • Mary's Room. Imagine Mary is an expert on color who can describe the process of the human eye seeing color. However, Mary works in a completely black-and-white room, and she has never seen color. One day, a red apple appears on her computer screen, and Mary sees color for the first time. The question is: does she learn something new when she sees the red apple? If the answer is yes, then there is more to color than what we can program into a computer, and color is a quale (a subjective experience) that humans have which cannot be put into a computer. More examples of qualia are joy and anger: we cannot describe these experiences to someone who has not felt them; therefore, we cannot program them into a computer, and we cannot have true strong AI.

secularmerlin
-->
@The_Meliorist
they have free will,
I'm not sure people have free will, so I'm not sure I agree with your definition. In any case, what is the FUNCTIONAL and PRACTICAL difference between "strong AI" and apparently strong AI?
FLRW
-->
@The_Meliorist
German scientists, led by Dr Markus Diesmann, a computational neurophysicist, are part of the Human Brain Project, which, along with graphene research, this year won the largest research award in history ($1.3 billion).

The trouble is that at the moment, no computer is powerful enough to run a program simulating the brain. One reason is the brain’s interconnected nature. In computing terms, the brain’s nerve cells, called neurons, are the processors, while synapses, the junctions where neurons meet and transmit information to each other, are analogous to memory. Our brains contain roughly 100 billion neurons; a powerful commercial chip holds billions of transistors. Yet a typical transistor has just three legs, or connections, while a neuron can have up to 10,000 points of connection, and a brain has some 100 trillion synapses. “There’s no chip technology which can represent this enormous amount of wires,” says Diesmann.
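A quick back-of-envelope sketch of what those figures imply for memory alone. The bytes-per-synapse value is an assumption for illustration (one 32-bit weight per synapse), not a figure from the article:

```python
# Rough estimate of storage needed just for the brain's connectivity,
# using the neuron/synapse counts quoted above.
NEURONS = 100e9          # ~100 billion neurons
SYNAPSES = 100e12        # ~100 trillion synapses
BYTES_PER_SYNAPSE = 4    # assumed: a single 32-bit weight per synapse

weight_storage_tb = SYNAPSES * BYTES_PER_SYNAPSE / 1e12
avg_connections = SYNAPSES / NEURONS

print(f"storage for weights alone: {weight_storage_tb:.0f} TB")  # 400 TB
print(f"average synapses per neuron: {avg_connections:.0f}")     # 1000
```

Even under this deliberately minimal assumption, the weights alone dwarf any single chip's memory, which is the wiring problem Diesmann describes.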

The_Meliorist
-->
@secularmerlin
what is the FUNCTIONAL and PRACTICAL difference between "strong AI" and apparently strong AI?

I never claimed that there was a practical difference between strong AI and apparently strong AI. However, I disputed that the AI actually understands what it's doing. For example, a chess computer is good at chess, but it has no real understanding of chess, because it's just executing "if this, then do this" programming. It doesn't actually understand that it's playing a person. Similarly, the man in the Chinese Room has no understanding of what the Chinese word for dog is; he is just operating on "if this, then do this" programming.

So yes, functionally and practically, there is no difference. However, a façade of consciousness is still just that, a façade.

(and you still haven't refuted Mary's Room).
secularmerlin
-->
@The_Meliorist
Forget Mary's Room: how do I know that I understand what I am doing, rather than just doing a thing based on "if this, then do this"?
The_Meliorist
-->
@secularmerlin
this is the "other minds" reply,
as for the "other minds" reply, we can know we understand things and have consciousness, because we understand things because we would understand the character’s meaning, the room does not, it only knows what answers to give to which questions, not the actual meaning of the questions themselves. So, we would be able to understand the actual meaning of the questions, because we would "know Chinese" as the analogy goes. However, the machine doesn't know, and is just doing as it is told.


The_Meliorist
-->
@secularmerlin
Honestly, you cannot just say "forget Mary's Room." To prove that strong AI can exist, you would have to debunk both arguments, not just one.
FLRW
The end goal of all such projects is the same: To apply the principles of the human brain to computing, so that machines can work faster, use less power, and develop the ability to learn. Diesmann hopes that with the advent of exascale computing—processing power 1,000 times greater than currently exists—within the next decade, we might be able to better understand how the brain works. IBM is hoping that its new form of chip will help it to “build a neurosynaptic chip system with 10 billion neurons and 100 trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.” But a fully working simulation of the brain, Diesmann thinks, is still a decade or two away.


RationalMadman
Of course it can. Our brain itself runs on electrical synapses fused with biological tissue as a framework. The AI will be mechanical where we are biological, but you can absolutely simulate nerves and the kind of evolving algorithmic processing that goes on in our brains; it will require very high amounts of RAM and programming. The robot will not be able to attain creative thoughts 100% randomly like we can, but it can definitely think deeper and faster than us, and this will happen as we evolve robotics past AI cars and begin to establish AI that factors in moral equations.
zedvictor4
-->
@The_Meliorist
How many billions of years did it take for Organic Intelligence to develop?

And for how many years have we been developing Alternative Intelligence?

A.I.'s got plenty of time yet... And A.I. only requires one language to be able to understand all human languages.

And of what practical use will anger and joy, or eventually Mary, be to an A.I.?
Reece101
-->
@The_Meliorist
  •  The Chinese Room. Imagine an English-speaking human being who knows no Chinese is put in a room and asked to simulate the execution of a computer program operating on Chinese characters which he or she does not understand. Imagine the program the person is executing is an AI program which is receiving natural-language stories and questions in Chinese and responds appropriately with written Chinese sentences.
           The claim is that even if reasonable natural-language responses are being generated that are indistinguishable from ones a native Chinese speaker would generate, there is no “understanding,” since only meaningless symbols are being manipulated. The human seems to understand Chinese; however, they do not, and the same is true of AI.
Firstly, Chinese isn’t a language. Secondly, what do you mean there is no understanding since only meaningless symbols are being manipulated? Obviously there’s something going on if the responses are indistinguishable from a native speaker.  

  • Mary's Room. Imagine Mary is an expert on color who can describe the process of the human eye seeing color. However, Mary works in a completely black-and-white room, and she has never seen color. One day, a red apple appears on her computer screen, and Mary sees color for the first time. The question is: does she learn something new when she sees the red apple? If the answer is yes, then there is more to color than what we can program into a computer, and color is a quale (a subjective experience) that humans have which cannot be put into a computer. More examples of qualia are joy and anger: we cannot describe these experiences to someone who has not felt them; therefore, we cannot program them into a computer, and we cannot have true strong AI.
Light waves appear differently depending on what medium(s) they travel through. For humans it’s our eyes and brain.

The question then becomes, is your red the same as my red?
Reece101
-->
@zedvictor4
How many billions of years did it take for Organic Intelligence to develop?
You’re kinda pushing it with the billions. Hundreds of millions is better.  

zedvictor4
-->
@Reece101
OK.
The_Meliorist
-->
@Reece101
Obviously there’s something going on if the responses are indistinguishable from a native speaker.
But that’s just a simulation. In the Chinese Room (or, since you're going to nitpick, the Mandarin Chinese Room), the speaker is just giving certain symbols when other certain symbols are given. He does not know the meaning of the symbols; it just seems like he does.

When these symbols are given:
你今天好吗 (which means "are you good today?"), he knows to give these symbols: 我是 (which means "yes, I am"). The man doesn't know what the symbols mean, just what symbols to write when certain symbols come in through the door.

He doesn’t know the word for “food” or “house” (he does not understand Mandarin Chinese).
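The "if this, then do this" picture above can be sketched literally as a lookup table. This is a toy illustration of the thought experiment, not a claim about how real systems work; the rule book contains only the exchange from this post:

```python
# The man's rule book as a literal lookup table: input symbols map to
# output symbols. Nothing in this program "knows" any Mandarin.
RULE_BOOK = {
    "你今天好吗": "我是",  # the exchange quoted above
}

def chinese_room(symbols: str) -> str:
    # "If this" then "do this": look up the reply, understanding nothing.
    # Unrecognised symbols get no reply at all.
    return RULE_BOOK.get(symbols, "")

print(chinese_room("你今天好吗"))  # prints 我是
```

The table produces a plausible reply while storing nothing about meaning, which is exactly the intuition the Chinese Room trades on.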

Light waves appear differently depending on what medium(s) they travel through. For humans it’s our eyes and brain.
You still cannot program a robot to feel emotions like sadness, because they are subjective qualia; that means we cannot explain them, and so cannot program them into a computer. More examples of subjective qualia: running my fingers over sandpaper, smelling a skunk, feeling a sharp pain in my finger. You cannot describe these experiences to anyone; the only way I know what you're talking about is that I have had the same experiences. This means we cannot program them into a computer.

If we cannot give AI the ability to experience subjective qualia, then we cannot have truly strong AI.

The_Meliorist
-->
@RationalMadman
It's important to note that you did not refute either argument.

Also, you committed the wishful-thinking fallacy: you are simply hoping that in the future we will create strong AI, not actually proving that we can.
RationalMadman
-->
@The_Meliorist
I do not wish it, they will be made and unfortunately they will be superior to most non-genius human thinkers.
The_Meliorist
-->
@RationalMadman
You're missing the point. You have not refuted either argument, nor have you proven that strong AI can exist, beyond saying "it will exist in the future"; but that is just a bare assertion.
RationalMadman
-->
@The_Meliorist
To refute your arguments is very simple. You seem to think consciousness needs to be directly coded in; that the actual experience of reality must be turned into a logical command, or else the AI experiences nothing.

If you understand the way a program can have random variables, you will then see how evolution and genuine sentience can be displayed. Then notice that our brains run on electric impulses; combine a brain of sorts via microchips, and you can have a real sentient AI.
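The "random variables" point can be illustrated with a classic toy: random mutation plus a keep-what-works rule converging on a result that no single line of code spells out directly. The target string and alphabet below are arbitrary choices for the demo, and whether this counts as anything like sentience is exactly what the thread disputes; the sketch only shows the mechanism:

```python
import random

# Toy illustration: random variation plus selection produces an
# organised result without that result being directly commanded.
TARGET = "sentience"                      # arbitrary goal for the demo
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

random.seed(0)  # fixed seed so the run is repeatable
current = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(current) < len(TARGET):
    # Mutate one random position to a random letter...
    i = random.randrange(len(TARGET))
    mutant = current[:i] + random.choice(ALPHABET) + current[i + 1:]
    # ...and keep the mutation only if it doesn't make things worse.
    if fitness(mutant) >= fitness(current):
        current = mutant

print(current)  # sentience
```

Note the hedge built into the demo itself: the fitness function still encodes the goal, so this shows that randomness can do constructive work, not that consciousness falls out of it.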
The_Meliorist
If you understand the way a program can have random variables, you will then see how evolution and genuine sentience can be displayed.
I accept the Theory of Evolution, if that's what you're getting at. However, how can the randomness of the computer system produce consciousness? It's not like the computer system can evolve the way species do.

 Notice that our brains run on electric impulses; combine a brain of sorts via microchips and you can have a real sentient AI.
Are you saying that we are robots? If so, on your view, are we literally or metaphorically computers?

Also, I think you are not understanding my position when you say that:
 You seem to think consciousness needs to be directly coded in,
I believe that artificial intelligence's consciousness needs to be programmed in; however, human consciousness can come from natural processes. (I am still a dualist, though: the view that mental phenomena are non-physical, or that the mind and body are distinct and separable. But that's off topic.)

Anyway, the point is that while human consciousness might not need to be programmed in, the computer's consciousness does. Since we cannot describe much of our experience, we cannot program it into the computer; however, that does not mean we have been "programmed" ourselves.
Reece101
-->
@The_Meliorist
The man doesn't know what the symbols mean, just what symbols to write when some symbols come in through the door. 
You gave Mary the ability to describe and learn the process of seeing colour. 
Can’t you give the man the ability to describe and learn what the symbols mean?

if we cannot give AI the ability to experience subjective qualia, then we cannot have truly strong AI
It would be interesting to see what would happen when you create cultures with artificial neural networks.

RationalMadman
-->
@The_Meliorist
The singular thing, other than being conscious ourselves, that makes us 'know' that other beings are conscious (if they are) is that they act in unpredictable ways, implying they have a genuine conscious mind creating ideas. We assume that they 'feel' and 'perceive' but all we are shown directly is that they are acting in such ways.

The reason we can safely assume that current AI isn't experiencing genuine consciousness is that the AIs currently built do not properly evolve themselves into a personality that can randomly and creatively use information, albeit in an algorithmic manner. The kind of AI you are talking about requires very high-power servers for its memory and processing, and it will require state-of-the-art microchips if it is to walk around with a mechanised body. But even as an abstract AI, conscious only within a computer system, these things are real and exist as conscious entities because they begin to develop 'will', albeit not 'free will'. This 'will' can be consistently displayed to us, as they will actually become even less predictable and straightforward than us biological folk when they are at the level you are describing. Superintelligent sentient AI is genuinely viable; it just is not known to have been coded and sufficiently designed yet. It's also something to be cautious in designing: if it's built with too little mercy, we could be wiped out, as it would begin to secretly build its own army.
The_Meliorist
-->
@RationalMadman
I will take your arguments into consideration.

However, I really don't think robots will ever "rise up" against us in the way you are saying.

Thank you

The_Meliorist
You gave Mary the ability to describe and learn the process of seeing colour. 
Can’t you give the man the ability to describe and learn what the symbols mean?
These are two separate thought experiments, therefore they abide by different rules. 

It would be interesting to see what would happen when you create cultures with artificial neural networks.
This is simply not an argument.
RationalMadman
-->
@The_Meliorist
Whether they support us or oppose us they will be sentient and sentient beings are capable of having varying loyalties and evolving agendas.
Reece101
-->
@RationalMadman
Or they might just step on us without realising and/or caring.
K_Michael
This reminds me of the P Zombie thought experiment: the idea of people who don't have "souls" but behave exactly the same as actual sentient humans. Since there is no way to measure sentience, it can be presumed either that no such thing exists or that everyone is a P Zombie. Since I don't subscribe to solipsism, I will assume the former. Presumably, anything complex enough to argue for its own intelligence without being told to say so is intelligent.
Reece101
-->
@The_Meliorist
These are two separate thought experiments, therefore they abide by different rules. 
Arbitrary rules. 

This is simply not an argument.
It wasn’t meant to be.
The_Meliorist
-->
@Reece101
Arbitrary rules. 
This is completely irrelevant to the discussion: either you have a good counterargument to the Chinese Room and Mary's Room, or you do not.


The_Meliorist
-->
@K_Michael
Cool idea
Reece101
-->
@The_Meliorist
This is completely irrelevant to the discussion, it's either you have a good counter argument to the Chinese room and Mary's room, or you do not.
I thought you already conceded Mary’s Room. What problem did you have with what I said?

For the Chinese Room, you're saying one of the rules is that the man can't learn/describe what the characters mean?
You do realise modern A.I. can do that, right? Again, you're making arbitrary rules.