Should a machine run the world?

Author: User_2006

Posts

Total: 42
Marko
Debates: 0
Posts: 93
@fauxlaw
 I’ll give it another shot then.

Yes, that might be the case, but humans are equally dependent on data collection, and on the accuracy of that data (even though humans currently need less data for inference). Many of the limitations we ascribe to AI are also limitations for humans, especially in the context of global-scale management. In this context, I could come up with a number of human limitations that are much less serious for something like an AI machine. For example, you can’t hook up a series of brains and expect a productivity output equal to the sum of all the brains.

Additionally, it is slightly unfair to regard the data and the data-collection systems that feed AI as somehow ‘raw data from humans’, especially when it comes to external data. Even though humans developed these data-collection and categorisation technologies, once these systems are running, the data can hardly be regarded as ‘raw data from humans’ anymore, in my opinion. They just don’t operate at the medium and scale of a biological brain.

This discussion thread seems fairly focussed on comparing the calculation capacity of a biological brain against a hypothetical AI, but are we taking for granted all the technological instruments and systems that humans use, and marking them in the pro column of the biological human brain? I think that would be slightly disingenuous and arbitrary.
For example, we could just as arbitrarily compare a biological brain (without any technological innovations) with an AI machine connected to advanced technological data-collection systems. I can hardly imagine where humanity would be without those innovations (though I can easily look back to history for that). Could it still be argued that humans would be more successful at something like running the world?

Finally, I’m still looking for an answer to the question ‘what is needed to successfully and sustainably manage a planet?’. Until we answer that question, I don’t think we can assume that the biological human brain has the best calculation strategy for it, especially in view of where the planet is currently heading. Could it be that a planet is best run using statistical prediction modelling (in a much more complicated and sophisticated form than how computers predict something stochastic like the weather)? It’s possible, and if so, humans wouldn’t be the best at running it.
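The kind of stochastic prediction mentioned above can be sketched as a toy Monte Carlo ensemble, the basic idea behind modern weather forecasting: perturb the initial conditions many times, run the model forward, and report the fraction of runs in which an event occurs. The dynamics, perturbation size, and event threshold here are all invented for illustration.

```python
import random

def ensemble_probability(x0, n_runs=2000, horizon=20):
    """Monte Carlo ensemble: perturb the initial condition many times,
    run a toy chaotic model forward, and report the fraction of runs
    in which the 'event' occurs."""
    hits = 0
    for _ in range(n_runs):
        # Perturbed start, clamped into the model's valid range.
        x = min(max(x0 + random.gauss(0.0, 0.05), 0.01), 0.99)
        for _ in range(horizon):
            x = 3.9 * x * (1.0 - x)   # logistic map standing in for weather dynamics
        if x > 0.9:                    # the "event": e.g. rain tomorrow
            hits += 1
    return hits / n_runs

p = ensemble_probability(0.5)
print(f"Probability of the event: {p:.2f}")
```

Because the toy dynamics are chaotic, individual runs diverge quickly, which is exactly why operational forecasts report probabilities from an ensemble rather than a single deterministic run.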

zedvictor4
Debates: 22
Posts: 12,051
@fauxlaw
@Marko
Well.

We cannot be certain of anything, I suppose, and can only be fairly certain of what we think we know.

And we have lost, or never developed, the specialisms that other species possess... Nonetheless, most species possess a governing mechanism of some sort, so the human brain is not unique, just more advanced in terms of ability... A notion of uniqueness is, as far as we are able to know, just that.

I prefer to look at the bigger picture of material evolution, of which the organic, species phase is but one part, and in this wider context I think it is fair to consider that humanity may not necessarily be the be-all and end-all of material development.


And it is only a given that A.I. is currently dependent upon its acquisition of raw data from humans... To assert otherwise is to be certain of the future.
Marko
@zedvictor4
Zedvictor4: And it is only a given that A.I. is currently dependent upon its acquisition of raw data from humans... To assert otherwise is to be certain of the future.
___________________________________________________________________________________________________________________

To predict a certain future you would have to assert that an infinite amount of perfect data collection and analysis is even conceivable.

But of course, in practice, a prediction machine becomes useful when it can predict something better than, for instance, another prediction machine. And maybe the question doesn’t necessitate collecting all the information about the position of every atom and molecule in the universe. The machine essentially spits out a probability value, and if its prediction is better than that of, let’s say, the human prediction machine, it is ultimately the better prediction machine.
We still have to decide whether extremely powerful prediction machines are better at running a planet. 
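The idea that one prediction machine beats another on the quality of its probability outputs can be made concrete with a proper scoring rule; the Brier score below is a standard choice, and the two ‘machines’ and their forecasts are invented for illustration.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and observed
    0/1 outcomes; lower means a better prediction machine."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes  = [1, 0, 1, 1, 0]               # what actually happened
machine_a = [0.9, 0.2, 0.8, 0.7, 0.1]     # sharper, better-calibrated probabilities
machine_b = [0.6, 0.5, 0.5, 0.6, 0.4]     # hedges everything toward 50%

# Machine A wins without either machine knowing "every atom's position".
assert brier_score(machine_a, outcomes) < brier_score(machine_b, outcomes)
```

The comparison is relative, which matches the point above: a forecaster never needs perfect information, only a lower score than the competition.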

On the other point, if you have largely autonomous data collection and data analysis processes (which we haven’t exactly reached yet, but probably will in the foreseeable future), how can you still use the term ‘human’ in the phrase ‘raw data from humans’? Of course, it’s a judgement call exactly when we reach this autonomous state, but we’re partially there already.
fauxlaw
Debates: 77
Posts: 3,565
Apparently, none of you challenged to do so have read any material on Searle’s “Chinese Room” experiment, which was set up specifically [1986, I think] to test the abilities of AI against human brain function. Just read it; it will answer a lot of questions y’all don’t even have yet. No, it’s not a perfect scenario; there are critics with valid points. But, to date, after 34 years of argument against it, it stands as being just as valid as, if not more so than, its critics’ alternatives.
 
In the meantime, here’s a simple experiment you can do with your own “AI” devices. I have Apple HomePod [my preferred], Google Home, and Amazon Echo in my home. I gave all three what I thought was a simple task. I asked, “What is the current time at -111˚ W on Earth?” I was given a correct answer by all three devices [it happened to be just after 04:00]. Then I asked, “What is the current time at -111˚ W on Mars?” All of them failed to provide the requested answer, a known value; it even exists as a downloadable app from NASA as of 04/07/2020, just one month ago, coincidentally [I have not yet downloaded it to my iMac]. My Apple HomePod gave the most intriguing reply: “I don’t know where that is.” The others gave me a familiar variation of “I can’t help with that.” At least HomePod gave me a clue to what was missing, i.e., NASA’s Mars24 app, which, if it had it, it could easily access.
 
I offer this anecdotal evidence as a demonstration of the limitations of AI. Not only is it garbage in, garbage out; if such is the case, it is also nothing in, nothing out. Whereas, knowing a few facts [I do], I could give a rough calculation of the time of day [or night] on Mars at -111˚ W. Its day is comparable to Earth’s: 24.62 hours [about 41 minutes longer than Earth’s 23.93 hours]. So, knowing Mars’ orbital position [I do], assuming it does not have the nonsense imposition of daylight savings [it operates on UMT, anyway], and knowing Mars’ equivalent of GMT [from which I can also determine the season], I can calculate the time at that coordinate on Mars. But, lacking that detailed data, my devices have no capability to do these calculations, because they do not know to reference that data and draw from it. HomePod can merely reply, “I don’t know where that is.” Where it is, currently, is in opposition, meaning we’re on the same side of the sun, with approximately 30˚ of arc between us. We are approaching, and Mars has just passed, our relative summer solstices.
 
It is a simple calculation, really. With an understanding of the formula, and the data to plug into the elements of that formula, a child could do it. Without those things, A.I. could not do it now. How long until it can… who knows? Lacking pieces of the formula, I’d wager never.
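The calculation described above can be roughly sketched in code using the Mars Sol Date approximation from Allison & McEwen (2000), the same basis NASA’s Mars24 tool uses. The constants are from that paper; the leap-second offset is assumed valid for 2020, and the date and longitude are taken from the post.

```python
from datetime import datetime, timezone

def mars_lmst(utc_dt, west_longitude_deg):
    """Approximate Local Mean Solar Time on Mars, per the
    Allison & McEwen (2000) Mars Sol Date formula."""
    jd_utc = utc_dt.timestamp() / 86400.0 + 2440587.5  # Unix time -> Julian Date
    jd_tt = jd_utc + 69.184 / 86400.0                  # UTC -> TT (37 leap s + 32.184 s, ~2020)
    msd = (jd_tt - 2451549.5) / 1.0274912517 + 44796.0 - 0.0009626
    mtc = (24.0 * msd) % 24.0                          # Coordinated Mars Time at 0 deg longitude
    return (mtc - west_longitude_deg / 15.0) % 24.0    # 15 deg of longitude per Mars hour

# Time of day at 111 deg W on Mars, around the date of the post above.
when = datetime(2020, 5, 7, 4, 0, tzinfo=timezone.utc)
lmst = mars_lmst(when, 111.0)
print(f"Mean solar time at 111 W on Mars: {int(lmst):02d}:{int(lmst % 1 * 60):02d}")
```

The point stands either way: the formula is short, but a voice assistant without access to it (or to Mars24) has nothing to compute with.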
User_2006
Debates: 50
Posts: 510
Well, you guys predicted the improbability of AI running the world, but the point I am trying to make is that AI isn't biased. There have been Republican presidents (George W. Bush), Democratic presidents (Obama), and unaffiliated presidents (George Washington). George Washington was the fairest because he did not join a party (in fact, at the time there were no fixed two parties). AI would just listen to people; AI wouldn't say something like "all Republicans are pigs".
zedvictor4
@User_2006
Firstly, I feel obliged to point out that "the world" in question refers to a global society, not just a U.S. one.

Secondly, the ultimate fate of humanity may currently rest in our own hands... but for how long?

I think we should stop and consider the fact that, after such a short period of technological development, so much of our day-to-day lives is now managed by A.I.

So will this situation stay exactly the same forevermore, or even reverse?... I would suggest that this is just not the way things happen.

A.I.... Artificial Intelligence, or Alternative Intelligence?... I prefer the latter... After all, human or non-human, intelligence is still intelligence.
zedvictor4
@fauxlaw
I would suggest that time is the same everywhere.

I would further suggest that what you refer to is a human consideration relative to a specific duration at a specific location.

But that's another debate.


And the Chinese Room is relative to 1986 Human Earth time.

My question is: how can you be certain that the Chinese Room will still be applicable 1,000 years from now?
Marko
@fauxlaw
I had read it (on your first mention of it in an earlier post), but after reading it in more depth, I found the conclusion of Searle’s Chinese Room experiment flawed on a variety of levels.
I’ll sum up Searle’s primary argument as follows:
Syntax is not sufficient for semantics. Programs are completely characterised by their formal, syntactical structure. Human minds have semantic contents. Therefore, programs are not sufficient for creating a mind.

And to demonstrate this, he brings in the Chinese Room experiment. The crux of the experiment is to argue that a computer running a program doesn’t understand Chinese in the way a Chinese speaker understands Chinese. He assumes that the program’s formal instructions are carried out by someone who doesn’t understand Chinese. The experiment then proceeds in the way Searle himself describes, and he concludes that computers are inherently incapable of understanding anything.
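In its simplest form, Searle’s room reduces to rule-following over uninterpreted symbols. A minimal sketch (the rulebook entries here are invented) shows what the executor actually does, and why the systems reply below targets the room as a whole rather than the person inside it.

```python
# A purely syntactic "room": the rulebook maps input symbol strings to
# output symbol strings by shape alone; no meanings are consulted.
RULEBOOK = {
    "你好吗": "我很好，谢谢",     # "How are you?" -> "I'm fine, thanks"
    "你懂中文吗": "当然懂",       # "Do you understand Chinese?" -> "Of course"
}

def chinese_room(symbols: str) -> str:
    # The executor matches the incoming squiggles against the rulebook
    # by exact string identity; it never accesses what they mean.
    return RULEBOOK.get(symbols, "请再说一遍")  # default: "Please say that again"

reply = chinese_room("你懂中文吗")
```

The executor (the `get` lookup) clearly understands nothing; the open question Searle’s critics press is whether the same can be said of the rulebook-plus-executor system taken together.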

I’m sure you have already heard the usual objections to the experiment, but I’ll still list a few of them below.

While the person in the Chinese room doesn’t understand Chinese, the room (or the entire system, which includes the person and all the room’s parts) does. Searle arbitrarily focusses attention on the person (the executive unit) without paying full attention to the properties of the system as a whole. I can’t see how this is very different from how the brain works: the human brain is not the carrier of intelligence; rather, it causes intelligence.
I won’t go into Searle’s reply to this here (which I didn’t find convincing), but the following possibility sidesteps his objection.

We could also add the possibility that, while we agree the person cannot understand Chinese, a running system could create (or bring into existence) a new entity, distinct from both the person and the system as a whole, and that this new entity is what we might call the understanding of Chinese.

Two other regular objections are the Robot Reply and the Brain Simulator Reply.

My thoughts are that, even though a syntactical program doesn’t do semantics, that doesn’t mean a program can’t create semantic contents during its run. In other words, the hardware is not the carrier of mental processes; rather, these mental processes are an emergent phenomenon created during the course of execution, much like the emergent phenomenon of mental processes in the brain.

So Searle was essentially right in the sense that, at first glance, computers can’t think, but he never closed off the possibility that computers could create thinking.


fauxlaw
@zedvictor4
Well, now. Time is the same everywhere. That’s no more practical an answer than my HomePod’s previous answer, “I don’t know where that is.” Time is a construct of organizing man. Otherwise, I don’t see much purpose in it. It is only practical in function if a man is not alone, but must deal with society. Otherwise, what purpose does it have other than as a theoretical concern?
 
But, that’s not really the cosmic point. To demonstrate the relative uselessness of AI, I just asked HomePod a very simple question a five-year-old understands, and, if within reach of it, can respond to the request. Whereas, AI, as an “entity,” but without physical function, cannot process the request. I asked “her” [mine is set to an Aussie female], “Pass the butter, please.” Her reply, “I don’t understand.” No, she doesn’t. The request means nothing to her because the intent of the question is beyond her programming, let alone her physical attributes, which do not appeal to me in the slightest, anyway.
 
It’s a lot like the song lyrics from the 1960s musical and movie “Camelot,” wherein King Arthur, wondering what Guinevere is up to, ponders, “What are you thinking? I don't understand you. But no matter; Merlin told me once, never be too disturbed if you don't understand what a woman is thinking; they don't do it very often. But what do you do while they are doing it?” More to the point, one might inquire: what is AI doing while I’m doing it [thinking]? Hint: not a bloody thing. As a sounding board, give me a woman any day of the week, because Merlin’s answer to Arthur’s question was sublime: “Just love her.” How does one love A.I.?
 
Consider this: Posit: God is a perfect, sentient, omnipotent Being. He is interested in creating an entity in His image. He created Adam. However, to my point, He more eloquently created Eve. But that's another story. Why did He not create A.I.?
 

fauxlaw
@Marko
Searle... never closed the possibility towards the idea that computers could create thinking.
And there is the crux of the failure of the posit that a "machine should run the world." Until A.I. demonstrates the actual ability to think, that is, to rise above the processing of data, for whatever purpose, and develop a data set of its own devising, wholly different, syntactically and semantically, from the data it has been fed, it is incapable of properly running the world. It must, in a sense, achieve mastery of the distinction between a cold, 1-0-ciphered justice and a warm, infinite-ciphered mercy. If it cannot tell me the time of day on Mars, it certainly cannot read my heart, can it?
zedvictor4
@fauxlaw
Well, I would suggest that if a god is a perfect, sentient, omnipotent being, its creation was therefore endowed with the ability to create and develop A.I.

Now, that's a god hypothesis I can happily run with... So we therefore have to consider to what extent, and to what ends, humanity was so predisposed.

Your overwhelming regard seems only to be for now, though, rather than for the possibilities of the future... Current A.I. can only do what A.I. can currently do, just as humankind was only ever able to do what it could do at the time... Though god always knew what everything was for and what everything was capable of.

God... No singing or praying necessary though, because it already knows everything... However, if it helps.
Marko
@fauxlaw
Fauxlaw: And there is the crux of the failure of the posit that a "machine should run the world." Until A.I. demonstrates the actual ability to think, that is, to rise above the processing of data, for whatever purpose, and develop a data set of its own devising, wholly different, syntactically and semantically, from the data it has been fed, it is incapable of properly running the world. It must, in a sense, achieve mastery of the distinction between a cold, 1-0-ciphered justice and a warm, infinite-ciphered mercy. If it cannot tell me the time of day on Mars, it certainly cannot read my heart, can it?
_____________________________________________________________________________________________________________________

Putting aside for one moment the question of whether a future AI can think... there is insufficient knowledge to claim that a future ‘machine should run the world’, but there is an increasingly strong claim that ‘humans shouldn’t be running the world’.

On the other point, you yourself admitted that the Mars24 app could tell you the time of day on Mars, and I suppose it could also tell you the season, amongst many other things you couldn’t calculate. That the Mars-focussed app wasn’t integrated with your Apple HomePod tells us nothing about whether a future AI could think, or whether this is simply a case of ‘garbage in, garbage out’. It is an anecdote that points towards specialisation in technological systems.
If I asked you for the gravitational time dilation at a given point in space 100 light years from a specific black hole, you might similarly tell me ‘I don’t know where that is’, or ‘I know too little about the theory of relativity to give you an answer’. You are also a highly specific and specialised machine, but I equally wouldn’t argue that you aren’t a thinking machine.