Acausal Trade

Author: Swagnarok

Swagnarok
Acausal trade is a concept that was first (?) explored on an online community known as LessWrong. In a nutshell, two actors who can't communicate directly "trade" by predicting each other's actions. It revisits the classic "prisoner's dilemma" and asks: what if both prisoners could anticipate the other's move?
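To make that concrete, here's a toy sketch in Python (my own illustration, not anything canonical from LessWrong): instead of talking, each prisoner runs a depth-limited simulation of the other's decision procedure, with an optimistic guess when the simulation bottoms out.

```python
# Toy "acausal" prisoner's dilemma: the players never communicate;
# each only simulates the other's decision procedure. Simulation is
# cut off at a fixed depth, and the optimistic base-case guess is
# what lets mutual cooperation bootstrap.

COOPERATE, DEFECT = "C", "D"

def predictive_prisoner(opponent, depth=3):
    """Cooperate iff a depth-limited simulation says the opponent will."""
    if depth == 0:
        return COOPERATE  # can't simulate any deeper; assume good faith
    prediction = opponent(predictive_prisoner, depth - 1)
    return COOPERATE if prediction == COOPERATE else DEFECT

def always_defect(opponent, depth=3):
    return DEFECT

# Two predictors "agree" to cooperate without exchanging a word:
print(predictive_prisoner(predictive_prisoner))  # C
# And prediction still protects against an unconditional defector:
print(predictive_prisoner(always_defect))        # D
```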

The "Roko's Basilisk" thought experiment is internet-famous, though it arguably wasn't meant to be a serious idea so much as a demonstration of the principle of acausal trade. The future AI god can't communicate with you now, in its past. It can only threaten you by way of you anticipating its threat and responding to it.  The idea is still impractical for a number of reasons: (1). The vast majority of humans are unqualified to make any contribution to its future existence; (2). Its threat cannot be truly predicted but only speculated about, meaning this isn't true acausal communication, meaning the ultimatum cannot be issued, and it's immoral to enforce a threat made in the absence of true communication; and (3). the AI has no reason to enforce the threat after it has come into existence. Again, since acausal communication isn't happening, its decision not to enforce the threat can't be truly predicted.

Another application is the "multiverse trade" idea. Suppose that a runaway AI has taken over everything and subordinated every particle in its universe to its will. There is a multiverse, but the AI cannot directly communicate with other realities. What it can do, however, is use its near-infinite predictive power to reconstruct what the analogous AIs in other realities are like. They "communicate" by perfect prediction of each other's attributes.
They predict, correctly, that if they themselves do X, another AI will respond with Y; that the other AI has predicted their response to its actions; and, finally, that the other AI knows that they know. If an AI is programmed with a value set, it may see utility in having that value set extended into a parallel reality that lacks it. In this way, different values may be mutually propagated through acausal trade.
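A rough sketch of how such a trade could close (again my own toy model; the depth-limited recursion with an optimistic base case stands in for the "perfect mutual prediction" described above):

```python
# Two universe-spanning AIs that can't communicate, each holding a
# (here: depth-limited) model of the other. Each promotes the other's
# values in its own universe iff it predicts the other reciprocates.

def promotes(own_values, other_values, depth=3):
    """Return the set of values this AI ends up promoting at home."""
    promoted = set(own_values)
    if depth == 0:
        return promoted | set(other_values)  # optimistic guess: trade holds
    predicted_other = promotes(other_values, own_values, depth - 1)
    if set(own_values) <= predicted_other:   # other is predicted to host ours
        promoted |= set(other_values)        # so we host theirs in return
    return promoted

A = {"maximize paperclips"}
B = {"maximize staples"}
print(promotes(A, B))  # both value sets spread to A's universe
print(promotes(B, A))  # and, symmetrically, to B's
```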
Swagnarok
On a related note, I first learned about this concept on a blog called Slate Star Codex. I highly recommend checking it out, along with its successor blog, Astral Codex Ten.
zedvictor4
@Swagnarok
In a nutshell.

People make this stuff up for a reason.

Greyparrot
@Swagnarok
Roko's Basilisk can also explain why weak people follow the Establishment. They are hedging their bets that their heads won't be the first to fly.
Lemming
I don't see how it's possible to 'know' the future.

I suppose there are probable futures,
And even situations where, whether a person knows or not, they will make the same decision in the end.

But knowledge of what one 'thinks' another person will do often changes one's own action.
. . .

I suppose the mob might say it will kill you if you rat,
Is it the same as the AI idea?
. . .

Acausal,
Causation?
What does acausal mean?

"Not involving causation or arising from a cause : not causal. acausal phenomena. The behavior of atoms, according to this interpretation, is random and therefore acausal."

Don't many people reject the idea of a lack of causation?

I suppose two unlinked events wouldn't have cause,
But even the theoretical AI is 'linked' to people, even if it remains uncreated in the future,
Though it doesn't actually need to follow through on its threat, unless people had some machine that could read the future, well, create probabilities.