Meta joined in and replied with this, one day after the original post:
AI-box experiment
The AI-box experiment is a thought experiment and roleplaying exercise devised by Eliezer Yudkowsky to show that a suitably advanced artificial intelligence can convince, or perhaps even trick or coerce, people into "releasing" it — that is, allowing it access to infrastructure, manufacturing capabilities, the Internet, and so on. This is one of the motivations behind Yudkowsky's work on creating a friendly artificial intelligence (FAI): ensuring that when "released," an AI won't try to destroy the human race for one reason or another.
Note that despite Yudkowsky's wins being against his own acolytes and his losses being against outsiders, he considers the (unreleased) experimental record to constitute evidence supporting the AI-box hypothesis, rather than evidence that his followers are more susceptible to unleashing a hostile AI on the world than someone who hasn't drunk their Kool-Aid.
What the fuck does this even mean? Is OP going to join that killer AI cult or something?