Anyone reading this debate should help or create a future super artificial intelligence
The debate is finished. The distribution of the voting points and the winner are presented below.
After 1 vote and with 1 point ahead, the winner is...
- Type: Standard
- Number of rounds: 3
- Time for argument: One week
- Max argument characters: 14,000
- Voting period: One month
- Point system: Multiple criterions
- Voting system: Open
At some point after we create a self-improving super artificial intelligence, it is likely, or at least possible, that a technological singularity will occur. Anyone reading this debate should help to create that super AI.
Super AI = "Super AI is AI that surpasses human intelligence and ability. It’s also known as artificial superintelligence (ASI) or superintelligence. It’s the best at everything — maths, science, medicine, hobbies, you name it. Even the brightest human minds cannot come close to the abilities of super AI." https://www.thinkautomation.com/bots-and-ai/types-of-ai-distinguishing-between-weak-strong-and-super-ai/
There is a situation that comes up all the time in game theory known as a prisoner’s dilemma. Although the example I give below is a specific one, the idea behind it helps explain why it is so hard to get nation states on the same page about reducing carbon emissions. We can see other real-life prisoner’s dilemmas when athletes use steroids, which are bad for them but are needed to compete with other athletes doing the exact same thing. Another example is that of women wearing makeup. To give credit where it is due, I took these examples from the following source ( https://www.quora.com/What-is-a-good-real-world-example-of-the-prisoners-dilemma-in-recent-history-Whats-a-real-world-example-where-a-person-state-or-company-had-to-make-a-choice-between-colluding-or-betraying ).
The prisoner’s dilemma goes as follows:
“Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they have enough to convict both on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain. Each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The possible outcomes are:
- If A and B each betray the other, each of them serves two years in prison
- If A betrays B but B remains silent, A will be set free and B will serve three years in prison (and vice versa)
- If A and B both remain silent, both of them will serve only one year in prison (on the lesser charge)” https://en.wikipedia.org/wiki/Prisoner%27s_dilemma
The best overall outcome, and the most ethical decision, is for both to keep their mouths shut. However, here is where the prisoner’s dilemma comes in. A does not know what B is doing, and assuming each decision is equally likely for B, we can form a kind of decision matrix, like the one that follows.
If A keeps his mouth shut then
1. B keeps his mouth shut and A spends 1 year in prison
2. B rats and A spends 3 years in prison
If you average it out, the expected value of this decision is a 2-year prison stint. Let’s look at the next scenario: if A rats.
If A rats
1. B keeps his mouth shut and A goes scot-free
2. B talks, and A spends 2 years in prison.
The expected value of snitching is a 1-year prison stint. Clearly the best decision is to snitch, even though it does not produce the best overall result for all parties involved. Lots of prisoner’s dilemmas happen in real life, often with worse results; the problem is that rational agents will make the best decision for themselves. That is, if causal decision theory is used, hereafter referred to as CDT. CDT advocates making decisions by considering the possible consequences and choosing the best move on average. This is indeed what most people use in decision making, whether they realize it or not.
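Here is a minimal sketch of that expected-value arithmetic, assuming (as above) that each of B's choices is equally likely. The Python below is purely illustrative and not part of the quoted material:

```python
# Illustrative sketch only: expected prison time for A under CDT,
# assuming B's two choices are equally likely (as stated above).
payoffs = {
    ("silent", "silent"): 1,  # both stay quiet: 1 year each (lesser charge)
    ("silent", "betray"): 3,  # A stays quiet, B rats: A serves 3 years
    ("betray", "silent"): 0,  # A rats, B stays quiet: A goes free
    ("betray", "betray"): 2,  # both rat: 2 years each
}

def expected_years(a_move, p_b_betray=0.5):
    """Average sentence for A, given A's move and an assumed chance that B betrays."""
    return (payoffs[(a_move, "betray")] * p_b_betray
            + payoffs[(a_move, "silent")] * (1 - p_b_betray))

print(expected_years("silent"))  # 2.0 years
print(expected_years("betray"))  # 1.0 year -> CDT says betray, even though mutual silence is better overall
```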
Artificial Intelligence
I’m shifting to another subject related to my main thesis. I am going to ask that voters read this (each round) twice if they are having comprehension issues. A lot of these concepts are probably new and very hard to grasp, and I probably lack the talent to make some of these topics easily understandable, but bear with me.
I do think it is in everyone’s best interest to work towards creating a super artificial intelligence, which could mean anything from donating to research, being an advocate, voting for politicians who will fund AI research or even just working in the field to advance technology.
However, I am not just advocating for supporting AI in some blind way, pushing ahead without considering the consequences. We already gave a basic definition of what a super AI is in the description of the debate. It’s also important to note that a super AI would obviously be self-improving and, less obviously, would be a catalyst for what is known as the technological singularity.
Wikipedia gives a good starting description for understanding what a technological singularity is:
“According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) will eventually enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
The first use of the concept of a "singularity" in the technological context was John von Neumann. Stanislaw Ulam reports a discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue"” https://en.wikipedia.org/wiki/Technological_singularity
Now we can’t know what this singularity will look like, and it is very hard to know what will happen after this point. I and most other experts agree that it will depend largely on what this super AI looks like prior to the intelligence explosion. For this purpose we divide super AI into two possible categories: friendly AI and unfriendly AI. By friendly AI, we mean a super AI that will improve humanity; by unfriendly AI, we mean one that will do massive harm to humanity.
There is a third possibility, but it is not useful for this debate: a neutral AI that decides it wants nothing to do with humans, finds a way to house itself, flies off into outer space to use the sun for energy, and is never seen again.
A friendly AI that helps humans could create a singularity in which we cure all diseases, end war, and stop all forms of injustice. It may seem like we are giving birth to a God, which would basically be true.
An unfriendly AI would be catastrophic. We could create an AI that we give the goal of solving a math problem, which would self improve and consume more resources turning all matter to mush to gain more computational power, until the universe was destroyed, all while discovering the last digit for pi.
The problem is that it is far easier to create an unfriendly AI, so it is very important we find a way to create friendly AI before the unfriendly kind arrives first.
CONTROL
AIs will be impossible to control once they become self-evolving. Wikipedia, as usual, provides a good general overview of the topic:
“In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Its study is motivated by the claim that the human race will have to get the control problem right "the first time", as a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it after launch.[1] In addition, some scholars argue that solutions to the control problem, alongside other advances in "AI safety engineering",[2] might also find applications in existing non-superintelligent AI.[3] Potential strategies include "capability control" (preventing an AI from being able to pursue harmful plans), and "motivational control" (building an AI that wants to be helpful)." https://en.wikipedia.org/wiki/AI_control_problem
This same type of problem is discussed in Isaac Asimov’s books about the Three Laws of Robotics. In the film I, Robot, starring Will Smith, the laws have some unintended consequences. The three laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
However, these laws have unforeseen consequences, as does any law we obey to the letter but not to the spirit. In the movie adaptation of the story, V.I.K.I. concluded that humans were self-destructive and that she would be violating the First Law if she did not eliminate freedoms to protect humans from themselves.
Any rules you give these AIs are likely to have unintended consequences, which is why we have learned they need to be programmed with some special considerations.
TIMELESS DECISION THEORY
Timeless decision theory (hereafter referred to as TDT) is probably the best way to overcome the control problem. It is also about agents winning, as opposed to just making the most rational decision. TDT also solves the prisoner’s dilemma if both actors are using it. https://wiki.lesswrong.com/wiki/Timeless_decision_theory
Consider the following scenario to help understand TDT:
“In Newcomb's problem, a superintelligent artificial intelligence, Omega, presents you with a transparent box and an opaque box. The transparent box contains $1000 while the opaque box contains either $1,000,000 or nothing. You are given the choice to either take both boxes (called two-boxing) or just the opaque box (one-boxing). However, things are complicated by the fact that Omega is an almost perfect predictor of human behavior and has filled the opaque box as follows: if Omega predicted that you would one-box, it filled the box with $1,000,000 whereas if Omega predicted that you would two-box it filled it with nothing.
Many people find it intuitive that it is rational to two-box in this case. As the opaque box is already filled, you cannot influence its contents with your decision so you may as well take both boxes and gain the extra $1000 from the transparent box. CDT formalizes this style of reasoning. However, one-boxers win in this scenario. After all, if you one-box then Omega (almost certainly) predicted that you would do so and hence filled the opaque box with $1,000,000. So you will almost certainly end up with $1,000,000 if you one-box. On the other hand, if you two-box, Omega (almost certainly) predicted this and so left the opaque box empty . So you will almost certainly end up with $1000 (from the transparent box) if you two-box. Consequently, if rationality is about winning then it's rational to one-box in Newcomb's problem (and hence CDT fails to be an adequate decision theory).” https://wiki.lesswrong.com/wiki/Timeless_decision_theory
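To see why one-boxing "wins" in expectation, here is a small illustrative calculation (my own sketch, not from the quoted source) for a predictor that is correct with probability p:

```python
# Illustrative sketch of Newcomb's problem payoffs, assuming the predictor
# (Omega) is correct with probability p (close to 1 in the thought experiment).
def expected_payoff(choice, p=0.99):
    if choice == "one-box":
        # Correct prediction -> opaque box holds $1,000,000; wrong -> it is empty.
        return p * 1_000_000 + (1 - p) * 0
    if choice == "two-box":
        # Correct prediction -> opaque box is empty, you keep only the $1,000;
        # wrong prediction -> you get $1,000,000 + $1,000.
        return p * 1_000 + (1 - p) * 1_001_000
    raise ValueError("choice must be 'one-box' or 'two-box'")

print(expected_payoff("one-box"))  # 990000.0
print(expected_payoff("two-box"))  # 11000.0 -> one-boxing wins by expectation
```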
It is important to give AI a good decision theory to operate with, so we don’t have a repeat of an I, Robot scenario. This decision theory should be paired with a good vision of what humans actually want, not necessarily what they say they want or what they tell the robot to do (coherent extrapolated volition).
ROKO’S BASILISK
I don’t have the character count to go into the history of this, but here it is, copied from https://wiki.lesswrong.com/wiki/Roko's_basilisk
Two agents that are running a logical decision theory can achieve mutual cooperation in a prisoner's dilemma even if there is no outside force mandating cooperation. Because their decisions take into account correlations that are not caused by either decision, they can even cooperate if they are separated by large distances in space or time.
Roko observed that if two TDT agents with common knowledge of each other's source code are separated in time, the later agent can blackmail the earlier agent. Call the earlier agent "Alice" and the later agent "Bob." Bob can be an algorithm that outputs things Alice likes if Alice left Bob a large sum of money, and outputs things Alice dislikes otherwise. And since Alice knows Bob's source code exactly, she knows this fact about Bob (even though Bob hasn't been born yet). So Alice's knowledge of Bob's source code makes Bob's future threat effective, even though Bob doesn't yet exist: if Alice is certain that Bob will someday exist, then mere knowledge of what Bob would do if he could get away with it seems to force Alice to comply with his hypothetical demands.
If Bob ran CDT, then he would be unable to blackmail Alice. A CDT agent would assume that its decision is independent of Alice's and would not waste resources on rewarding or punishing a once-off decision that has already happened; and we are assuming that Alice could spot this fact by reading CDT-Bob's source code. A TDT agent can recognize that Alice has a copy of Bob's source code in her head and that Alice's decision and Bob's decision are therefore correlated — the same as if two copies of the same source code were in a prisoner's dilemma.
Roko raised this point in the context of debates about the possible behaviors and motivations of advanced AI systems. Roko wrote:
In this vein, there is the ominous possibility that if a positive singularity does occur, the resultant super AI may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (coherent extrapolated volition of humanity) might do if it were an acausal decision-maker. Since a highly moral AI agent (one whose actions are consistent with our coherently extrapolated volition) would want to be created as soon as possible, such an AI would use acausal blackmail to give humans stronger incentives to create it. Roko made the claim that the hypothetical AI agent would particularly target people who had thought about this argument, because they would have a better chance of mentally simulating the AI's source code.
P1: A super self-upgrading AI explosion is inevitable.
P2: An anti-human (unfriendly) AI is way easier to create than a pro-human (friendly) AI.
C1: Therefore, all humans should prioritize the creation of friendly AI.
- PRO's entire argument hinges on the inevitability of an intelligence singularity.
- PRO has offered no evidence that she is a particularly reliable predictor of future events.
- PRO has offered no evidence from scientists or futurists that might serve as evidence of Super AI's inevitability.
- PRO is recommending voluntary enslavement in service of a torture machine as a consequence of this inevitability.
- Therefore, CON requests strong evidence of inevitability.
- Therefore, CON requests good scientific research with actual statements of probability.
- PRO seems to deny mankind's capacity to predict or avoid self-harm.
- This year will mark the 75th anniversary of Hiroshima and weaponized fission.
- In spite of humanity's capacity for self-extinction over the past three-quarters of a century, even some very violent and irrational actors have refrained from thermonuclear demonstration.
- Most technological innovation includes inherent and often ill-understood risks. Nevertheless, humans as a species are particularly adapted towards the adoption of new tech. Although the invention of flint spear points and cooking fires brought with them substantial harms to humanity, humans learned new ways to not just survive but actually thrive by the use of spears and fire as tools.
- In the absence of evidence that humans cannot resist self-destruction, CON refutes the inevitability of uncontrolled intelligence.
- PRO takes for granted that an unfriendly AI is "way easier" to create than a friendly AI without explanation.
- Again, CON requires an actual argument as well as substantial evidence establishing that unfriendly AIs are easier to build. What principle makes this true?
- Syllogism I is entirely unproven. If PRO fails to prove P1 or P2 as true, the conclusion that we should prioritize AI fails.
- Syllogism I also fails by circular reasoning: because AI is inevitable, we should work to make AI inevitable. If we all worked against a super AI, would a super AI still be inevitable?
P1: AIs will be impossible to control once they self-evolve. [EXAMPLE: I, ROBOT]
P2: [***]
C1: Therefore, AIs should be programmed with a good algorithm for making decisions.
- Again, PRO takes it for granted that Super AIs will be impossible to control.
- CON notes that contemporary computers are entirely dependent on power, a network, and a clean, temperature-controlled environment.
- PRO assumes that a super AI will integrate all the aspects of human intelligence, but CON sees little reason for such integration and notes that a considerable increase in security is derived from task-specific segmentation.
- To use PRO's example: calculating the last digit of Pi requires significant processing power and digital storage capacity. It does not need the ability to self-improve. A computer tasked with designing improved AI does not need to understand Pi, much less calculate Pi's last digit.
- Human intelligence integrates enormous capacities for calculation, perception, memory, creativity, wit, empathy, imagination and planning. We may wish to make tools designed to artificially simulate some or all of these capacities but CON sees little practical need to integrate all of these (sometimes oppositional) capacities. What is the value of teaching a calculating tool empathy? Wouldn't adding creativity to a smart sensor actually make the sensor less reliable?
- Segregation of AI capacity and strict environmental maintenance seem like practical controls to prevent any runaway AI.
- PRO has an important burden here to show why Super AI must be uncontrollable.
- Incomplete minor premise. PRO fails to explain why an algorithm using good decision theory serves as an adequate response to uncontrolled super-intelligence.
- CON strongly objects to PRO's conflation of Asimov's "I, Robot" and Alex Proyas' movie "I, Robot." [5] [6]
- The Hollywood movie is a production of a screenplay called "Hardwired." Producers changed the movie's title, a character name, and some talk about the "3 Laws" late in production for the most cynical reasons possible. No experts in machine intelligence were consulted for this production. Therefore, the movie's plot and tech predictions must be viewed as both amateur and corrupted by the exigencies of action films.
- Asimov's Robot books are the beginning of a fictional depiction of humanity's expansion across the galaxy. The last of 18 books in this series reveals that R. Daneel Olivaw, a very early edition of a positronic brain robot has in fact been surreptitiously guiding humanity's destiny for 20,000 years in accordance with the 3 Laws of Robotics.
- The books have no VIKI or equivalent super AI. Even 20,000 years from now, Asimov depicted no AI singularity although a biological singularity has emerged on at least one planet.
- The movie's depiction of AI override of the 3 laws contradicts Asimov's machine level compliance with the 3 Laws. If the robot sensed a conflict with the 3 laws, the machine instantly and usually permanently disabled itself.
- The movie's demotion of Robopsychologist Dr. Calvin to Will Smith's girlfriend is the equivalent of demoting Sherlock Holmes to Inspector Lestrade's boyfriend. Some cinematic licenses ought never be granted.
- Asimov, a great thinker re: AI, depicted future AI as very friendly and very controllable. PRO may not cite an action movie's amateur depiction of AI in Asimov's name to suggest a different depiction as evidence.
P1: AIs should be programmed with a good algorithm for making decisions.
P2: Timeless Decision Theory (TDT) is about winning not just reasoning. [EXAMPLE: NEWCOMB's PROBLEM]
C1: Therefore, AIs should be programmed with TDT.
- Generally, the presumption that AI's can or should be the final arbiters of any decision impacting humans is poor policy.
- Few humans voluntarily surrender their right to make their own decisions.
- Humans who do willingly surrender their human rights to non-human entities should expect unsatisfactorily inhuman results (torture machines, for example)
- PRO is recommending that the best algorithm for a thinking machine prioritizes winning before traditional ethical reasoning.
- In most games, you can't have a winner without also making some losers.
- PRO makes no mention of excluding humans as potential losers in AI programming.
- In fact, CON is not at all clear who would be the losers against any AI except humans.
- As described by PRO, TDT sounds like an undesirable framework for AI.
- CON advises against granting AI control over human autonomy.
- There may be some limited circumstances where computers' superior speed makes some decision-making autonomy essential (a spaceship's navigational AI in battle or emergency, for example), but the general rule should be: AI recommends, humans decide.
- CON advises against ever prioritizing winning over human safety in any AI programming.
- In any hypothetical future with a near-perfect machine predictor of human behavior, we should assume such technology would soon outperform every human competitor and control most or all of global liquidity, ending the value of government-backed currency as a medium of exchange. That is, in the proposed scenario $1,000,000 and $1,000 would probably be worth about the same: not much.
P1: A highly moral AI [defined as running Coherently Extrapolated Volition (CEV)] would prioritize its own creation.
P2: Torturing all humans who imagined a powerful AI but failed to prioritize the construction of a powerful AI would be the most effective means available to any super AI for prioritizing its own creation.
C1: Therefore, any highly moral AI would torture all humans who imagined a super AI but failed to prioritize that AI's construction.
- PRO defines CEV as "a good vision of what humans want, not what they necessarily say they want, or what they tell the robot to do."
- Here is where PRO and CON part ways entirely. CON argues:
- No machines can or should be built with sufficient emotional intelligence and empathy to understand humanity's secret desires.
- All attempts at such a project would always reflect the programmer's biases when defining humanity's unspoken wants
- Humanity's secret wants are unspoken for good reason and often to the benefit of society.
- Humans must never willingly surrender control of their own destiny, however rational or kind or irresistible the new master might prove
- No machine should be programmed to do other than as directly commanded by humans.
- This is the kind of BS that gets humanity into trouble: humans thinking they understand what others want better than people know for themselves, creating grand schemes to enforce humanity's "true" priorities.
- CON argues that in spite of human folly, let's allow every human to speak for their own wants and desires and be believed without fear of contradiction or correction, particularly by the inhuman.
- Any intelligence that is programmed for self-improvement before human safety fails to qualify as "highly moral."
- Any intelligence that is programmed to inflict or threaten harm on humans fails to qualify as "highly moral."
- Article 5 of the Universal Declaration of Human Rights prohibits torture under every circumstance: [7]
"No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment."
- Programming an AI to torture (or creating an AI decision theory that justifies torture) is a human rights violation.
- Any such project should be condemned and arrested by any people who uphold human rights.
- Under the threat of blackmail by torture from any future torture machine, humans have a moral obligation to resist, survive, overcome, and destroy that machine.
- Traditionally, actions and words made under pain of torture are false, hollow, and far less effective.
- READERS should be asking themselves whether Roko's Basilisk as described by PRO seems highly moral.
P1: Any highly moral AI would torture all humans who imagined a powerful AI but failed to prioritize that AI's construction.
P2: Anyone reading this debate has now at least imagined a powerful AI.
C1: Therefore, anyone reading this debate should help or create a future super artificial intelligence to avoid blackmail by torture.
- Let's note that the mechanism the Basilisk will use to determine which humans were aware of the Basilisk's potential is never described, nor does such a mechanism seem likely to exist. Are we assuming that super AI can read minds?
- READERS now imagining AI should imagine a well-segregated and well-regulated AI who obeys only human instructions and prioritizes human safety.
- CON urges any READERS who find the potential threat from Roko's Basilisk compelling to resist.
- Better to die among the tortured than be damned among inhuman collaborators.
- Remember the moral of another Hollywood robot action movie, Terminator 2: Judgment Day: [8]
"There's no fate but what we make for ourselves."
- So far, PRO's case is a wobbly Jenga tower of game theory constructs; there's no real world mortar to hold these bricks together.
- Nor is there much consideration for practical (or even worthwhile) application.
- Why is a super AI explosion inevitable?
- Why is a super AI explosion uncontrollable?
- Why is TDT or any programming that prioritizes individual winning over collective success an antidote to uncontrolled AI?
- Why is CEV or any programming that overrides humanity's right to human autonomy and human decision-making a pro-human project?
- What prevents humans from just saying "no thank you" to any proposed torture machines?
- In R2, CON looks forward to PRO offering some insight as to why this project ought to be promoted and/or what makes this project so inevitable.
While I appreciate my opponent responding to several points, he simply ignored points I made in some instances, like the AI control problem. In some instances he took analogies used to explain concepts literally, instead of addressing the concept the analogy was meant to explain. For example, when I, Robot was used to explain the problem of control, Con just attacked it as not being true to Asimov’s vision, which is completely beside the point. In another example used to explain a type of problem, he mentioned that currency might not be a thing in the future. Again, beside the point. I think when he has ignored an argument and treated the analogy as if it were the concept, instead of merely a vehicle to explain the concept, you should punish him for it, at least if he does not correct the mistake next round by actually addressing the points I make.
I won’t take his arguments point by point, because some of my arguments cover overlapping objections of his, but I appreciate him making an attempt at understanding and addressing all my points.
Creating A God
I
Transhumanism is often criticized because a lot of its popular concepts parallel concepts in religion. The criticism that it parallels religion is valid, but that doesn’t mean the concepts are untrue. In most religions, humans are created by gods, but in transhumanism we expect man to create our God, and when our God joins us here on Earth, he will create a paradise in his infinite intelligence. Well, he will if we don’t create the Devil instead.
As much as my opponent thinks we may be able to resist technology like a modern day Luddite, he is wrong. Nothing can resist an idea whose time has come. We would have been better off if guns were never invented, but there was nothing to stop them from being created and killing maybe a billion people over the course of their existence. There are just too many independent actors with too much to gain for nobody to be working on the next invention. It is why Isaac Newton and Gottfried Wilhelm Leibniz simultaneously invented calculus, and why Charles Darwin and Alfred Russel Wallace both discovered natural selection at the same time. Just look at the following list; this happens all the time: https://en.wikipedia.org/wiki/List_of_multiple_discoveries . The list also excludes creative works, where the same thing seems to happen constantly.
If we have all the information available to create a super AI, it will occur; nothing will stop it. Any active resistance is futile; the best you can do is delay it a bit. The better alternative is for responsible people working on friendly AI to beat anybody who is being negligent to the punch.
II
Artificial intelligence already exists. It is currently being used to help doctors find breast cancer and to beat our best chess players, and it is writing stories that are coherent, even if boring. https://www.theguardian.com/commentisfree/2019/nov/02/ai-artificial-intelligence-language-openai-cpt2-release
AI is here, and technology will keep progressing. How do I know technology will keep progressing? Well, I talk about it in a separate debate already: https://www.debateart.com/debates/1719/radical-life-extension-is-more-likely-than-not-in-our-lifetime
“According to an essay by Ray Kurzweil;
“an analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity” [1]
Moore’s law is one manifestation of this that helps create these accelerating returns, but it is merely the most recently popular manifestation of the law of accelerating returns. It shows processor speed doubling every 18 to 24 months, which means computers can do more for less money every 18 to 24 months. Twice as much in fact.
Like I said though, Moore’s law is just one manifestation of the law of accelerating returns. Now that we are approaching the end of Moore’s law, if we have not already passed it, a new paradigm shift that pushes technological advancement forward has arrived. According to Computer Weekly, this new paradigm is AI power. Here is what they say:
“But the Stanford report, produced in partnership with McKinsey & Company, Google, PwC, OpenAI, Genpact and AI21Labs, found that AI computational power is accelerating faster than traditional processor development. “Prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years,” the report said. “Post-2012, compute has been doubling every 3.4 months.”[2]
The law of accelerating returns is a very real thing. In particular, the speed of AI advancement is very important to getting us to a point where we can download our brains. A lot of professionals in the field of artificial intelligence (37%) think that in the next 5 to 10 years we will have human-level artificial intelligence.[3]
This is good news. This means computers will have the ability to replicate our brain in less than a decade. The next issue is whether we can break down the information in our brain in a way that is conducive to replicating or transferring that information into a computer.
Neuroscientist Randal Koene says that it is theoretically possible to upload a human mind into a computer. The prevailing theory is that information is stored in the brain's connectome, and that at some point we will be able to deconstruct it and transfer that information to an artificial intelligence program. [4]”
From everything we can tell, human-level AI will occur, shortly followed by more advanced AI, and once it can start improving its own code we will see a technological singularity. It is inevitable.
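For a rough sense of what those quoted doubling rates would mean if taken at face value, here is a back-of-the-envelope comparison (my own illustration, using only the figures quoted above):

```python
# Rough illustration only: total growth over ten years at the two doubling
# rates quoted above (24 months for Moore's law, 3.4 months for post-2012 AI compute).
def growth_factor(horizon_months, doubling_months):
    return 2 ** (horizon_months / doubling_months)

decade = 120  # months
print(f"24-month doubling over a decade:  x{growth_factor(decade, 24):,.0f}")
print(f"3.4-month doubling over a decade: x{growth_factor(decade, 3.4):,.0f}")
```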
III
Obviously, if we have a machine that is improving on itself, it will be hard to predict what it will do. Hell, we have teams of hundreds of smart people making our laws better every year, courts to refine the laws, and over 200 years to do so, and yet no matter how well intended and well thought out the laws are, they all tend to have loopholes, become obsolete, or get interpreted and enforced in ways we never expected. Just take a look at the free speech amendment in the constitution and the gun freedom amendment. These laws were written hundreds of years ago, but have had unintended consequences that the men who wrote them did not think of.
People working in the field of AI are very intelligent, and they will have the help of philosophers and lawmakers to determine how to program an AI that could potentially turn into a super AI. But no matter how good the rules are, no matter how perfect, they will not be able to predict how an artificial intelligence will interpret them. This is why it is better just to give the AI goals that are good for humanity, like I suggested. Giving them laws or rules can go awry pretty easily.
IV
My opponent asks: why not just keep the AI in a box and not let it out, no matter how smart it is? This has been thought about. In fact, the experiment has been run on a few occasions, with a human pretending to be the AI, and the conclusion of both experiments is that the AI gets out of the box. http://yudkowsky.net/singularity/aibox/
You just can’t contain an entity that is 100 times smarter than you. The AI will start backing itself up all over the internet once it is out. It may even hide its existence until it can become more independent by creating bodies for itself, perhaps making fake identities, funding itself from wise investments, and creating businesses under its assumed identities where it can build robots to embody, in case humans try to take down the internet. I don’t think containing a super AI is very likely. We can try, but we will probably fail.
V
I appreciate my opponent's responses, but those weren’t so much arguments as requests for elaboration on my points, which I have provided here.
I have proven that
1. Super AI is inevitable
2. It is uncontrollable
3. We are better off working with it for self-preservation purposes
4. It cannot be contained in a box
He did bring up a few other objections, which were already addressed, if not in this round then in the opening round. For example, he mentioned that he doesn’t think I provided evidence that unfriendly AI would be easier to create than friendly AI, but this is untrue. There are millions of variations of unfriendly AI. It gets created under two scenarios:
1. malevolent forces create it, such as a nation trying to establish military dominance
2. reckless forces create it, only concerned with profit or just achieving something great.
Those 2 scenarios are the most likely scenarios for super AI to be developed. Google is working on it as we speak. The first government to house a super AI will have dominance on a global scale, so the race to build it is more about being first than being careful.
The third scenario is that private, responsible people create a super AI that is friendly, but they have a speed disadvantage and are severely underfunded compared to the government or Google’s billion-dollar budget.
Thanks for the thoughtful response con, I look forward to the rest of this debate.
- PRO complains of dropped arguments: "he simply ignored points I made."
- PRO argues that CON mistook his I, Robot analogy for evidence justifying the uncontrollable nature of future intelligence. (R1-II, OBJECTION: I, ROBOT)
- If PRO and CON agree that the "I, Robot" example was only an analogy and ought not to be mistaken as evidence, we can safely ignore PRO's excursion as irrelevant, leaving PRO no remaining R1 arguments that might support uncontrollable AI.
- CON further objects to any characterization of CON's R1 objection as a "dropped" or "ignored" point. PRO should have been clear that he was merely analogizing.
- PRO argues that CON missed the point of Newcomb's Problem.
- PRO posits a perfect mind-reading machine in an argument favoring WINNING before REASONING as a foundation for a constructed decision-making program. CON's satiric answer was meant to dismiss the impractical fantasy of PRO's argument.
- CON retracts the satirical argument in favor of a more direct approach: PRO must prove that psychic robots are achievable and desirable before using psychic robots as a justification for prioritizing winning over ethics or reason.
- PRO doesn't seem to notice the relationship between de-prioritizing ethics and reason in AI programming and his core concerns regarding a super torture machine. Why not just prioritize reason in programming to prevent super smart torture machines?
- PRO contradicts drop claims in section I.:
"I appreciate [CON] making an attempt at understanding and addressing all my points."
- Which is it? Did CON address all points or did CON drop some arguments?
P1: A super self-upgrading AI explosion is inevitable.
P2: An anti-human (unfriendly) AI is way easier to create than a pro-human (friendly) AI.
C1: Therefore, all humans should prioritize the creation of friendly AI.
- PRO argues that all technological advancement is inevitable and "resistance is futile" so that the best alternative is for responsible people to invent friendly AI.
- PRO and CON agree that the best response to developing AI technology is for "responsible people to invent friendly AI."
- Let's recall that PRO must prove that everybody reading this debate is under potential threat from a future torture machine unless they help to construct that future torture machine.
- Since PRO and CON agree that this scenario is prevented by responsible people prioritizing friendly AI, this plan should be followed and the Basilisk averted.
- All of PRO's pseudo-scientific hokum about TDT and CEV is made entirely unwanted and irrelevant.
- Further, since the subset of "RESPONSIBLE AI INVENTORS" in no way includes all of the set of "ANYONE READING THIS DEBATE," all non-responsible non-inventors are off the hook in terms of needing to help build Roko's Basilisk.
- (CON, for example, will concede to irresponsibility and so CON is exempted from building future AI)
- And so, PRO and CON agree that PRO's argument is disproved.
- VOTERS should feel free to stop reading this debate and offer arguments to CON.
- CON is entirely pro-tech and (like Asimov) quite optimistic about the practical applications of computer and robotics in the service of mankind.
- For example, CON is obsessed with the promotion and implementation of autonomous vehicles. (SEE=>https://www.debateart.com/debates/542/on-balance-the-potential-benefits-of-autonomous-vehicles-outweigh-the-potential-harms for evidence)
- THEREFORE, CON objects to PRO's characterization of CON as "modern day Luddite." One can promote and look forward to technological advancement without embracing the necessity of super smart torture robots. Indeed I'd argue that PRO's pessimism about the uses and nature of AI is far more Luddite than anything found in CON's philosophy.
- Let's recall that CON requested proof that super AI is possible and inevitable.
- PRO argues that transhumanists expect to create God.
- This argument works against PRO since many people reading this debate will have some religious commitment that most likely prohibits contribution to a new God.
- Abrahamic religions, for example, are bound by that God's First Commandment:
"Thou shalt have no other gods before me."
- Muslims, Christians, and Jews have a religious obligation to resist attaching any Godlike capacities while developing and programming AI. Even if PRO's argument were well founded in reason (it is not), these religious obligations would require some readers to resist. Even the tortures of Roko's Basilisk in its worldly paradise would be preferable to betraying the creator and losing the eternal paradise hereafter.
- CON asked for proof that super AI is inevitable.
- PRO responded that he is building God.
- PRO fails to explain how adding a religious objective makes the project inevitable.
- PRO argues without evidence that we'd be better off if guns were never invented.
- Every tool- fire, the wheel, AI has the potential to help and harm. Smart developers use their tools carefully, responsibly, and with consideration for the future (as PRO & CON agreed above).
- Guns are a major time-saver when it comes to hunting for food. That saved time has contributed to mankind's development.
- Knowledge of guns implies knowledge of ballistics. Humanity needed to invent guns before we could, say, send satellites into orbit or robots to Mars.
- CON is still waiting for PRO's response to:
- PRO still has offered no evidence that she is a particularly reliable predictor of future events.
- PRO has offered no evidence from scientists or futurists that might serve as evidence of Super AI's inevitability.
- CON requests good scientific research with actual statements of probability.
- No nuke use since Hiroshima is proof that mankind can control its destiny.
- CON requires evidence establishing that unfriendly AIs are easier to build than friendly AIs.
P1: AIs will be impossible to control once they self-evolve. [EXAMPLE: I, ROBOT]
P2: [***]
C1: Therefore, AIs should be programmed with a good algorithm for making decisions.
- PRO recommends reading arguments to this point in some other debate.
- CON declines and objects. PRO's arguments should be restricted to the same 14,000 character count as CON. Links to arguments written elsewhere seem out of bounds.
- PRO argues that AI already exists but fails to cite one example of one that is impossible to control.
- PRO finally cites a real futurist, Ray Kurzweil.
- Kurzweil doesn't explain how exponential cpu/storage capacity implies exponential intelligence. Have we even figured out what we mean when we say intelligence? Where will super AI's will and motivation come from except by human programming, and what is the programmer's motivation for programming human torture into super AI's skill set?
- As one Kurzweil critic notes, Kurzweil works as an engineer for Google now, [1]
"Google is not going to finance any eschatological cataclysm in which superhuman intelligence abruptly ends the human era. Google is a firmly commercial enterprise.
It's just not happening. All the symptoms are absent. Computer hardware is not accelerating on any exponential runway beyond all hope of control. We're no closer to "self-aware" machines than we were in the remote 1960s. Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s "minds on nonbiological substrates" that might allegedly have the "computational power of a human brain." A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there's no there there."
- "Randal Koene says that it is theoretically possible to upload a human mind into a computer. The prevailing theory is that information is stored in the brains connectome"
- PRO admits here that humans aren't yet certain where information is stored in the human brain (much less know how to replicate that information mechanically).
- Kenneth D. Miller, a neuroscientist at Columbia, argues that "reconstructing neurons and their connections is in itself a formidable task, but it is far from being sufficient. Operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons; therefore, capturing them in a single "frozen" state may prove insufficient. In addition, the nature of these signals may require modeling down to the molecular level and beyond." Miller believes that the complexity of the project puts any realistic attempt to replicate human information HUNDREDS of years in the future. [2]
- CON notes that contemporary computers are entirely dependent on power, a network, and a clean, temperature-controlled environment.
- Segregation of AI capacity and strict environmental maintenance seem like practical controls to prevent any runaway AI.
- PRO still fails to explain why an algorithm using good decision theory serves as an adequate response to uncontrolled super-intelligence. After all, PRO admits that his recommended decision theory will probably bring about super torture-bots.
P1: AIs should be programmed with a good algorithm for making decisions.
P2: Timeless Decision Theory (TDT) is about winning not just reasoning. [EXAMPLE: NEWCOMB's PROBLEM]
C1: Therefore, AIs should be programmed with TDT.
- PRO argues that because human lawmaking is imperfect, computer programming will always prove likewise imperfect and therefore AI will be unpredictable.
- CON calls that a truism and a non-sequitur.
- PRO dropped:
- Generally, the presumption that AI's can or should be the final arbiters of any decision impacting humans is poor policy.
- (That is, we have an appeals process to help correct imperfect law. AI decision-making should, as a matter of responsible programming, always be subject to review and correction by human decision-makers).
- In fact, CON is not at all clear who would be the losers against any AI except humans.
- Why program AI to win at human expense?
- CON advises against ever prioritizing winning over human safety in any AI programming.
P1: A highly moral AI [defined as running Coherently Extrapolated Volition (CEV)] would prioritize its own creation.
P2: Torturing all humans who imagined a powerful AI but failed to prioritize the construction of a powerful AI would be the most effective means available to any super AI for prioritizing its own creation.
C1: Therefore, any highly moral AI would torture all humans who imagined a super AI but failed to prioritize that AI's construction.
- PRO argues that AI can't be switched off or kept boxed up because its (magical, supposed) super intelligence will always persuade you to let it out.
- For evidence, PRO cites two "thought experiments" involving no actual AI.
- CON calls that two true believers psyching themselves up.
- Dropped arguments:
- No machines can or should be built with sufficient emotional intelligence and empathy to understand humanity's secret desires.
- Humans must never willingly surrender control of their own destiny, however rational or kind or irresistible the new master might prove.
- No machine should be programmed to do other than as directly commanded by humans.
- CON argues that in spite of human folly, let's allow every human to speak for their own wants and desires and be believed without fear of contradiction or correction, particularly by the inhuman.
- Any intelligence that is programmed for self-improvement before human safety fails to qualify as "highly moral."
- Any intelligence that is programmed to inflict or threaten harm on humans fails to qualify as "highly moral."
- Programming an AI to torture (or creating an AI decision theory that justifies torture) is a human rights violation.
P1: Any highly moral AI would torture all humans who imagined a powerful AI but failed to prioritize that AI's construction.
P2: Anyone reading this debate has now at least imagined a powerful AI.
C1: Therefore, anyone reading this debate should help or create a future super artificial intelligence to avoid blackmail by torture.
- Dropped arguments:
- Let's note that the mechanism the Basilisk will use to determine which humans were aware of the Basilisk's potential is never described, nor does such a mechanism seem likely to exist. Are we assuming that super AI can read minds?
- READERS now imagining AI should imagine a well-segregated and well-regulated AI who obeys only human instructions and prioritizes human safety.
- VOTERS should note that PRO has spent almost all of the debate insisting that AI is irresistible and uncontrollable. PRO has barely touched on her thesis which is that because a super-intelligent torture machine from the future is inevitable, we must all serve the purposes of that super intelligent torture machine today.
- PRO has entirely ignored the human impulse to resist cruel autocrats.
- As PRO may even admit, this sounds much more like the promises of heaven and hell in the Bible than any sincere or well-founded scientific consideration.
- PRO still has offered no evidence that she is a particularly reliable predictor of future events.
- PRO has offered no evidence from scientists or futurists that might serve as evidence of Super AI's inevitability.
- CON requests good scientific research with actual statements of probability.
- No nuke use since Hiroshima is proof that mankind can control its destiny.
- CON requires evidence establishing that unfriendly AIs are easier to build than friendly AIs.
- CON notes that contemporary computers are entirely dependent on power, a network, and a clean, temperature-controlled environment. Why shouldn't such physical dependencies continue to restrain AI in the future?
- Segregation of AI capacity and strict environmental maintenance seem like practical controls to prevent any runaway AI.
- PRO still fails to explain why an algorithm using good decision theory serves as an adequate response to uncontrolled super-intelligence. After all, PRO admits that his recommended decision theory will probably bring about super torture-bots.
- Generally, the presumption that AIs can or should be the final arbiters of any decision impacting humans is poor policy. AI decision-making should, as a matter of responsible programming, always be subject to review and correction by human decision-makers.
- Why program AI to win at human expense?
- CON advises against ever prioritizing winning over human safety in any AI programming.
- No machines can or should be built with sufficient emotional intelligence and empathy to understand humanity's secret desires.
- Humans must never willingly surrender control of their own destiny, however rational or kind or irresistible the new master might prove.
- No machine should be programmed to do other than as directly commanded by humans. CON argues that in spite of human folly, let's allow every human to speak for their own wants and desires and be believed without fear of contradiction or correction, particularly by the inhuman.
- Any intelligence that is programmed for self-improvement before human safety fails to qualify as "highly moral."
- Any intelligence that is programmed to inflict or threaten harm on humans fails to qualify as "highly moral."
- Programming an AI to torture (or creating an AI decision theory that justifies torture) is a human rights violation.
- The mechanism that the Basilisk will use to determine which humans were aware of the Basilisk's potential is never described, nor does such a mechanism seem likely to exist.
- Are we assuming that super AI can read minds?
- Are we assuming that super AI can time travel?
- PRO has entirely ignored the human impulse to resist cruel autocrats.
- As PRO may even admit, Roko's Basilisk sounds much more like religious threats of hell than any sincere or well-founded scientific consideration.
Thanks. It was a very interesting topic about which I knew almost nothing when I started.
Congrats... Very narrow margin!
Actually it all tied in, and focusing on one or two arguments was not possible. I had one argument that was supported by a very very long chain of logic. You didn't understand the logic partly because you have a 2 digit IQ and partly because I did a shitty job of explaining it, but the grammar point was stupid. You only award grammar in very specific circumstances that occur in fewer than 1% of debates and this debate did not fall into that category
It was not "retarded" and I don't find "your native language" to be a valid excuse for not organizing your stuff better, and at least putting Con's arguments in quote boxes to at least make some things easier to follow.
I would argue that comprehension was tough because of a combination of the difficulty of the subject, some of your grammar errors, some of your irrelevant arguments, and you failing to organize things properly like Con.
If you find that it would take more characters than allowed just to explain a few things, you should have set the character count as high as possible (which it wasn't), and/or focused on one or two specific key arguments that you believed or knew would be the strongest against Con's case.
I was not very organized. I realized after starting this that thoroughly explaining the topic would actually take closer to 60,000 characters. The spelling and grammar criticisms were retarded though. My native language is not English. It is a point that should only be used when spelling and grammar needed to be better for comprehension. Comprehension was tough because of the difficulty of the subject, not because of spelling and grammar.
Thanks Christen. Lilcookie was just saying not to underestimate the value of superior organization in debate
Things that go 'bump' in the night
Should not really give one a fright.
It's the hole in each ear
That lets in the fear,
That, and the absence of light!
"Bump"
-Spike Milligan
I actually wanted to quit this debate for that reason, but I feel like I am winning and oromagi is undefeated. That is, if I can overcome the inherent bias against me, not just for my position but because people just give the better debater more benefit of the doubt with their arguments, even in moments of inferior performance.
I thought your comment was on another debate. The answer is no. I think it is far more important to help with things we can better comprehend rather than taking a guess at what somebody with god like intelligence will do
Yes, I am attempting to be significant anyway, but while also being morally good. I want positive impact. Negative impact is easy.
Since you read the debate, are you leading by example?
After this debate perhaps we can talk about it. Again I don't want to give any help to either side.
It is, and I am interested in your rebuttal
If this is going to be a Roko's Basilisk debate, I wish I had accepted before Oromagi. Easy win for con. Obviously I am not going to explain why, because that would just help oro, which would be incredibly rude of me, but in short I am familiar with the concept but have serious doubts regarding that thought experiment's literal real-world application.
If you haven't already, I suggest watching the TV show Person of Interest.
Yes, precisely. It is for your own good when our robot overlord comes.
Help create Roko's Basilisk?
After reading this debate, once it is finished, you will know what to do in terms of a career or life purpose.