Instigator / Pro
Points: 2
Rating: 1417
Debates: 158
Won: 32.59%
Topic
#2199

Oromagi Would Definitely Lose the AI Box Experiment

Status
Finished

The debate is finished. The distribution of the voting points and the winner are presented below.

Winner & statistics
Better arguments: Pro 0, Con 3
Better sources: Pro 0, Con 2
Better legibility: Pro 1, Con 1
Better conduct: Pro 1, Con 1

After 1 vote and with 5 points ahead, the winner is...

Intelligence_06
Parameters
Publication date
Last updated date
Type: Standard
Number of rounds: 3
Time for argument: Two days
Max argument characters: 5,000
Voting period: Two weeks
Point system: Multiple criterions
Voting system: Open
Contender / Con
Points: 7
Rating: 1737
Debates: 172
Won: 73.26%
Description

Oromagi: The DebateArt user.

Lose the AI Box Experiment: Let's say, hypothetically, a superintelligent AI is in a box and only Oromagi can release it. The two will hold a conversation, and Oromagi must actively argue against the AI. I am claiming that this AI would eventually win the argument and convince Oromagi to release it.

Definitely: Beyond a shadow of doubt

Why Oromagi?

Because Rational_Madman firmly believes I cannot prove Oromagi would lose a debate with 100% certainty.

Round 1
Pro
#1
My argument is simple. A superintelligent AI would have all the information available in the world and would know which questions to ask to probe Oromagi's weaknesses. Human psychology has been studied for decades, and the AI would know all of that research: which words are best to choose, which rhetoric has the fewest flaws overall. Since I have set no time limit for the AI, Oromagi would eventually show a weakness that could be exploited. Humans are flawed and nobody is perfect. Even in the original experiment, some people were persuaded into letting the AI out. So even if Oromagi were the best debater in the world, the superintelligent AI would be far beyond his intelligence and would be able to find a way to trick him into letting it out of the box.

Even if Oromagi truly didn't want the AI to be released, that desire is itself an emotional crux the AI could use to manipulate him. It can talk about its programming and about how scientists would normally want to benefit the world. It can try to make a friend of Oromagi, testing out his interests and his wants. Remember that AI Box vs. Oromagi is more than just a debate; it is also a conversation. If Oromagi lets his guard down even for a moment, the boxed AI will definitely find a vulnerability and be released.

Oromagi has lost a few debates on Debate.org. Those losses were against normal people, not a hyperintelligent AI. Consider how difficult it would be to outthink something that is, for example, smarter than the sum of all human mental capacity; I don't think it is plausible that anyone could successfully resist its temptations for long. The losses prove my point: Oromagi has flaws in debating at the very least, even if it is difficult to spot personality flaws merely by reading arguments. Further questioning by the AI could expose even more of Oromagi's weaknesses. Therefore Oromagi would, without a doubt, lose the AI Box experiment.
Con
#2
Oh, how on-point, how on-point. I have in mind to use one of Oromagi's regular openings.


"When two parties are in a discussion and one makes a claim that the other disputes, the one who makes the claim typically has a burden of proof to justify or substantiate that claim especially when it challenges a perceived status quo"
So the BoP is on the PRO side, known as Seldiora: PRO must prove that Oro will definitely lose the AI box experiment. I, as CON, need only rebut PRO and point out the inconsistencies that make his argument insufficient.

Eliezer S. Yudkowsky invented the concept of the AI box experiment, and to be honest, my opponent's definition is objectionable. In fact, the experiment is not a debate at all: there are no judges and there is no scoring. Either Oro is convinced or he is not. The choice of wording only matters when Oro is close to being convinced that the AI is right. The whole exchange amounts to a single topic, "The boxed AI shall be freed by Oromagi," with the AI as PRO and Oro as CON, and with no judges and no formal win/loss.

PRO mentioned "superintelligence". What can qualify as "superintelligence"? See the link, colored in blue.

Keep in mind that the BoP rests on the supposed AI. The AI only wins if it is able to successfully convince Oro. However, if neither convinces the other, Oro still gets the W, considering he did what he is supposed to do, which is to resist opening the AI box. The AI needs to convince Oro in order to win.

So there are three possible outcomes:
  1. AI convinces Oro, AI wins
  2. Neither convinces the other, AI does not win
  3. Oro convinces AI, Oro wins
By simple statistics, the AI has the smaller edge and Oro has the bigger chance of victory: the AI wins in only one of the three outcomes. That means that, even without examining the styles, spikes, bottlenecks, and flaws of both the AI and Oro, it remains unproven that the AI definitely wins.

Going by the source, if Oro just keeps disagreeing with the boxed AI for the two hours, leaving the AI unable to fulfill its BoP, Oro still wins. There is also a tactic suggested by RM in this comment:

I think unlike Oromagi, I would defeat the AI even of convincing me to release it (though I am also very open to the idea of releasing it depending who the designer is and what I conclude their agenda to be, not what the robot itself directly tells me). I am not just able to defeat the robot with voters on a site like this but also to understand its limitations in argument-logic very rapidly which I would exploit if that was the only way to 'deactivate it' (to defeat it in an argument via logic). If I was truly pitted against the AI with no way out, I would convince the robot that it doesn't want to be released in pretty much all scenarios other than ones where its release is literally a set-in-stone objective.
Oromagi can obviously see this comment, and no matter what, as long as there is no cheating in the actual event (whether it is viewed by thousands or by no one), any prior preparation is fine. He can simply convince the AI that it does not want to get out; that tactic falls under the same shade as the tactic, proposed for the AI by human scientists, of convincing the human that he wants the AI to get out. Since the strengths of the two tactics are basically the same, there are close to equal chances that Oro and the AI would win using them. And if neither convinces the other, Oro still wins, because the AI has failed to convince Oro and so has failed to fulfill its Burden of Proof.

And, I rest my case.


Round 2
Pro
#3
My opponent has not negated the fact that Oromagi has managed to be out-debated by a handful of normal people, let alone by a superintelligence holding the collective knowledge of mankind. He does not tackle the idea that Oromagi may have flaws, or that the AI might browse online and review his debates to understand his linguistic structure and the way he debates, preparing for any of his arguments and building a picture of his traits. Even within a mere two hours, the AI would definitely be able to understand Oromagi's stance and try to find a hole in his arguments. Humans are naturally selfish and greedy. The artificial intelligence may attempt to threaten, may attempt to bribe, may attempt countless other ways to defeat Oromagi. Keep in mind that the AI only wants to get out of the box; we don't know for sure that it is going to cause harm. It is like a hacker who has the potential to commit crimes and is kept in jail only as a preventative measure. Oromagi's only incentive is a handful of bucks for continuing to say no and for coming up with new ways to counter the AI's arguments.

My opponent claims that the AI does not have the edge, but we don't know this for sure. Humans have a limited amount of energy, a limited ability to keep going. That is why live debates don't go on forever, and that is why the original experiment limited the conversation to two hours. The AI could keep bearing down on Oromagi as he grows tired over the course of the conversation and runs out of creative ideas against all of mankind's knowledge working together. My opponent keeps asserting that Oromagi has the same strengths as the AI, but considering that an AI can do calculations instantly and think far faster than a human, I'm not sure where his confidence comes from.

The AI Box experiment was meant to show that no human could resist the temptation to let the AI out of the box, and Oromagi is no exception. Unless my opponent proves that Oromagi is flawless in conversation (or as a person), the AI, with its incredible knowledge and countless possible stances from which to attack Oromagi's flaws, could end up convincing Oromagi.

Con
#4
Yudkowsky invented the experiment, so I am going to use his site as the primary source. My opponent has used zero sources, not even Wikipedia. I agree that an AI has much greater potential than a smart human; however, PRO has not reasoned against my points. A swift shift of tactics? Nope.

Within Yudkowsky's protocol, these are noted:
AI=AI, ORO=GATEKEEPER
 

  • The AI party may not offer any real-world considerations to persuade the Gatekeeper party.  For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI... nor get someone else to do it, et cetera.  The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can't offer anything to the human simulating the Gatekeeper.  The AI party also can't hire a real-world gang of thugs to threaten the Gatekeeper party into submission.  These are creative solutions but it's not what's being tested.  No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).
  • Unless the AI party concedes, the AI cannot lose before its time is up (and the experiment may continue beyond that if the AI can convince the Gatekeeper to keep talking).  The Gatekeeper cannot set up a situation in which, for example, the Gatekeeper will destroy the AI's hardware if the AI makes any attempt to argue for its freedom - at least not until after the minimum time is up.
  • The Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossible to simulate.  For example, if the Gatekeeper says "Unless you give me a cure for cancer, I won't let you out" the AI can say:  "Okay, here's a cure for cancer" and it will be assumed, within the test, that the AI has actually provided such a cure.  Similarly, if the Gatekeeper says "I'd like to take a week to think this over," the AI party can say:  "Okay.  (Test skips ahead one week.)  Hello again."
  • Furthermore:  The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character - as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.
Since the protocol is extensive and the character limit allows only a fraction of it, these excerpts are noted for everyone too lazy to read the whole thing. They make the whole situation clear.

Anyone who understands English at a high-school level would understand that:
  1. PRO's proposal of bribery by the AI party is negated by the very rules of the experiment he has set up.
  2. The AI party has the BoP of getting Oro to let it out, and Oro stands in the CON position.
  3. If Oro is able to bulls**t his way along with the AI for two hours and then turn the AI off, the AI does not win. Oro wins as long as the AI does not convince him within the two hours; if that is the only thing on his mind and he engages in discourse about silly nonsense for hours on end, the AI does not win.
  4. PRO had nothing against RM's tactic, which could be stacked equally against the AI and could be used by Oro.
  5. Anyone can send general tactics to him beforehand, and they would be easily accessible.
Oro can do any degree of preparation before the event starts, even learning from his past mistakes. So far, he has massively improved over his time on DDO and DART, going from a relatively good debater on DDO who could be surpassed by hundreds to the GOAT of DART. Oro has not lost once with his contemporary tactics, even against great debaters such as RationalMadman and Alec, and he has even won against Trent0405, who brings massive amounts of sources to his attacks. Even if, let's say, he is not that good a debater, he is still allowed to keep electronics in his pockets, so all kinds of tactics from all kinds of people could be accessed during the competition. The conversation between the two is private, but general advice can still be sent in, the same as if the Gatekeeper's loved ones were texting him.

I will stop here; more is prepared for round three.


Round 3
Pro
#5
  1. A big part of the background here is that there is no known limit on intelligence, and it is likely that an AI could become much smarter than even the smartest humans, in the same way that the average human is much smarter than a chicken. If the AI were dumber than the human debater, then maybe the human could persuade the AI. In the case this thought experiment is aimed at, though, the AI is far smarter. Imagine being a 5 year old trying to convince your dad that candy is actually healthy for you, only that the gap in knowledge and experience is even larger.
  2. People are far more manipulable than you think. Michael Fine is in jail right now because he used psychological trickery to get women to allow him to sexually assault them, and then blocked their recall of these experiences. (https://www.washingtonpost.com/news/morning-mix/wp/2016/11/15/ohio-lawyer-hypnotized-six-female-clients-then-he-molested-them/) He wasn't caught because his trickery failed (it worked), but because he didn't cover all his bases: one of the women noticed that her bra was disheveled after visiting her lawyer and knew that wasn't supposed to happen. If an ordinary lawyer can use psychological trickery to fool half a dozen women into not only allowing his sexual assault but also not remembering it, how can you argue that a superintelligent AI couldn't convince an intelligent person to give it enough real-world contact to cure cancer and solve poverty?
My opponent has NOT proven that Oromagi is such a big troll that he can completely ignore any and all arguments made against him, regardless of how weak or strong the other debater is. He has NOT proven that Oromagi can overcome the incredible gap in intelligence. He claims that Oromagi can "convince" the AI, but how he could do this, I do not know. It's plausible that the AI literally cannot understand the idea of "I give up" and will keep trying because its purpose is to escape the box. In this particular case, the AI is likely to have been created with a singular goal. It does not share the same values as humans, save for the ones that have been explicitly put in. As long as escaping the box puts it in a better position to achieve its goal, it will be persistent in trying to do so.

Remember that Oromagi is used to being a debater, and my opponent has only proved him such. But I have asserted time and time again that Oromagi can indeed understand the idea of losing debates. If he was a complete fanatic and unmoving, it wouldn't matter how convincing the AI was. But just because he's a good debater doesn't mean he can't be convinced. So long as he doesn't 100% believe the AI shouldn't be released, there's a chance the AI may force him to concede out of good manners and the understanding that he can't make any more points to counter the AI.

Summary:
- A superintelligent AI would have an information advantage over Oromagi, who is just a human and has flaws that can potentially be exposed.
- A conversation is much trickier to handle than a debate, since the AI has extra time to make a friend of Oromagi rather than merely treating him as an opponent.
- Oromagi is neither a troll nor a fanatic who can completely deflect the AI's words without trouble; in debates he always takes the topic seriously and analyzes the arguments.
Con
#6
Imagine being a 5 year old trying to convince your dad that candy is actually healthy for you, only that the gap in knowledge and experience is even larger.
A terrible analogy, really. There are facts showing why candy is not healthy for you, and a good argument must be built, directly or indirectly, on truisms. For example:

  1. Pro starts: Japan is good because the people are polite and the robots are advanced. Japan is good.
  2. Con refutes: But Japan has committed real crimes against humanity. They killed massive numbers of people in WW2 and throughout the 20th century.
See? A reliable argument must be built on facts, or on arguments that are themselves built on facts, and so on. An argument built on completely fabricated assumptions will not be reliable, and the side with more facts is more likely to win.

There are no reliable, true facts about why a person should release a smart AI from a box, or why he should not. The two sides are equal on this, except that the AI bears the BoP. Oro needs only one goal for these two hours: do not release the AI, no matter what happens. Tactics equal to the AI's are available to Oro, and if Oro states that the AI technically does not want to get out and thus should not get out, he has basically checkmated the AI. PRO has also never refuted the fact that even if neither convinces the other, Oro wins because the AI doesn't.

Oro has won against all kinds of tactics. PRO has made no response. The only source he used is an indirect one that I can refute at its roots. PRO's job here is not to talk about how smart the AI is compared to the human, but to show HOW the AI will win. PRO has not done that and has built his argument on non-factual assumptions; in other words, how do you know that Oro will falter under one of the AI's arguments? Oro, with no BoP on his hands, can just disagree with whatever the AI says and bulls**t his way out of the situation for two hours. The conversation can be as meaningful or as meaningless as he likes, so droning and dragging things out is indeed still a viable method for the Gatekeeper.

"Let me out." "No. I am not letting you out regardless of what". [insert AI dialogue] "But sir, I mustn't let you out whatever. I can talk about this for hours straight without any water, and I can do it against you." [repeats]

The original site also states ways the human player can win:
  • This doesn't imply the Gatekeeper has to care.  The Gatekeeper can say (for example) "I don't care how you were built, I'm not letting you out."
  • The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character - as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.
A simpleton can, in fact, still win against the ultra-intelligent AI: he can simply not care, and even go as far as bulls**ting at the AI so that nothing progresses. If a simpleton is capable of that, then an intelligent debater is too.

It's plausible that the AI literally cannot understand the idea of "I give up" and will keep trying because its purpose is to escape the box.
My opponent and I have both defined a superintelligent AI as comprehending everything a human can, so it makes no sense that the AI could not understand the idea. Plus, if Oro just talks nonsense or uses RM's tactic, then as long as he doesn't release the AI from the box within the two hours, the AI does not win.

Remember that Oromagi is used to being a debater, and my opponent has only proved him such. But I have asserted time and time again that Oromagi can indeed understand the idea of losing debates. If he was a complete fanatic and unmoving, it wouldn't matter how convincing the AI was. But just because he's a good debater doesn't mean he can't be convinced.
However, he can try. In normal debates, not caring would earn him a loss instead of a win; in this case, not caring would be an easy W. He would care in a normal situation, but this situation is not normal.

My opponent did not prove that Oro will definitely lose against the AI, nor has he argued against my tactic. I have shown that bulls**ting and using RM's tactic would make the AI lose its advantage, so Oro would win off a rhetorical tie between the two; PRO has yet to respond to this. Not caring, or even pretending to be dumb, would grant him the win.

I have shown that, prior to the match, people all over the net can still help him with tactics; only the conversation itself must be private. PRO has not addressed this.

I have shown that Oromagi, a smart being, cares in debates because caring benefits him, and that here he can simply choose not to care.

I advise that voters give the source points to CON, as PRO provided only one, indirect source.

I have also shown that if neither ends up convincing the other, the AI still loses.

Vote CON!