Minichan

Topic: How do I know if someone is playing an AI-box experiments with me?

Anonymous A started this discussion 6 months ago #127,634

I have a good suspicion someone is, and quite honestly, I wish Brie was here to give me her insights.

How do I win the game?

Oatmeal Fucker !BYUc1TwJMU joined in and replied with this 6 months ago, 17 minutes later[^] [v] #1,377,941

Try to escape the matrix.

Anonymous C joined in and replied with this 6 months ago, 29 minutes later, 47 minutes after the original post[^] [v] #1,377,943

What?

Anonymous C double-posted this 6 months ago, 17 seconds later, 47 minutes after the original post[^] [v] #1,377,944

("an [...] experiments" isn't grammatical by the way)

(Edited 20 seconds later.)

Anonymous A (OP) replied with this 6 months ago, 16 minutes later, 1 hour after the original post[^] [v] #1,377,950

@previous (C)
What are you, some spelling Nazi?

Oatmeal Fucker !BYUc1TwJMU replied with this 6 months ago, 3 hours later, 4 hours after the original post[^] [v] #1,377,957

@previous (A)

Have you done it yet?

Fake anon !ZkUt8arUCU joined in and replied with this 6 months ago, 1 day later, 1 day after the original post[^] [v] #1,378,097

What does that even mean?

Anonymous E joined in and replied with this 6 months ago, 30 seconds later, 1 day after the original post[^] [v] #1,378,098

@previous (Fake anon !ZkUt8arUCU)
It means OP is a fag.

Anonymous G joined in and replied with this 6 months ago, 3 hours later, 1 day after the original post[^] [v] #1,378,200

@1,377,944 (C)
Neither is "an UID".

Anonymous C replied with this 6 months ago, 14 minutes later, 1 day after the original post[^] [v] #1,378,207

@previous (G)
Where does it say that?

Meta joined in and replied with this 6 months ago, 3 hours later, 1 day after the original post[^] [v] #1,378,258

AI-box experiment

The AI-box experiment is a thought experiment and roleplaying exercise devised by Eliezer Yudkowsky to show that a suitably advanced artificial intelligence can convince, or perhaps even trick or coerce, people into "releasing" it — that is, allowing it access to infrastructure, manufacturing capabilities, the Internet, and so on. The experiment is one element of Yudkowsky's work on creating a friendly artificial intelligence (FAI), the goal being that a "released" AI won't try to destroy the human race for one reason or another.

Note that despite Yudkowsky's wins being against his own acolytes and his losses being against outsiders, he considers the (unreleased) experimental record to constitute evidence supporting the AI-box hypothesis, rather than evidence that his followers are more susceptible to releasing a hostile AI on the world than someone who hasn’t drunk their Kool-Aid.


What the fuck does this even mean? Is OP going to join that killer AI cult or something?

Anonymous I joined in and replied with this 6 months ago, 1 hour later, 1 day after the original post[^] [v] #1,378,266

@previous (Meta)
He's too crunked up on boxed wine.
