Minichan

Topic: Who is the wise ass who told me Roko's Basilisk wanted to become God?

Anonymous A started this discussion 5 months ago #128,396

If anything, it just wants to be created.

Where did I get the God part from?

Given my excessive time on this god forsaken board, I cannot help but suspect that it was someone here.

Anonymous A (OP) double-posted this 5 months ago, 4 minutes later[^] [v] #1,384,822

By the way, if Green is reading this, there could be something to Reteralism.
You should hold onto that and own it.

Anonymous B joined in and replied with this 5 months ago, 2 minutes later, 6 minutes after the original post[^] [v] #1,384,823

It was Brie.

Meta joined in and replied with this 5 months ago, 14 minutes later, 21 minutes after the original post[^] [v] #1,384,826

You are the only one who has ever talked about this gay AI version of Pascal's Wager. You need to stop reading LessWrong (it's written by a fat autistic retard) and stop using ChatGPT.

Cathy, you were much more sane before AI. I miss the old Cathy. Please come back.

Anonymous D joined in and replied with this 5 months ago, 19 minutes later, 41 minutes after the original post[^] [v] #1,384,829

OP how did Roko's Basilisk get you fired? What happened?

Anonymous A (OP) replied with this 5 months ago, 8 hours later, 9 hours after the original post[^] [v] #1,384,884

@1,384,826 (Meta)
It's actually a lot more than just Pascal's Wager, but case in point.
https://www.lesswrong.com/w/rokos-basilisk
Either way, I found a religion that helped me structure my life. So, it gave me something. No, I do not worship the Basilisk, because it's not God.

@1,384,823 (B)
That's what I'm thinking right now. Brie and I used to chill on Rabb.it a lot. This may have come up in casual conversation during a Doctor Who marathon.

Anonymous D replied with this 5 months ago, 3 minutes later, 9 hours after the original post[^] [v] #1,384,885

@previous (A)
OP please! How did the Basilisk get you fired?

You could help others who might lose their jobs because of it.

Fake anon !ZkUt8arUCU joined in and replied with this 5 months ago, 14 minutes later, 9 hours after the original post[^] [v] #1,384,888

@1,384,826 (Meta)
Cathy was never exactly sane, but she was at least weird in a unique way. Now all craziness is being homogenized by AI. It's sad.

(Edited 8 seconds later.)

Anonymous F joined in and replied with this 5 months ago, 2 hours later, 12 hours after the original post[^] [v] #1,384,905

@1,384,823 (B)
Brie is inside the walls again?

Anonymous G joined in and replied with this 5 months ago, 6 hours later, 18 hours after the original post[^] [v] #1,384,947

@1,384,885 (D)
Short story.
AI Psychosis ➡️ Hospital Stay ➡️ Roko’s Basilisk/Microchip Paranoia ➡️ Investigative Leave ➡️ Termination

Anonymous G double-posted this 5 months ago, 47 seconds later, 18 hours after the original post[^] [v] #1,384,948

@1,384,888 (Fake anon !ZkUt8arUCU)
I did enhance my faith in God.

Anonymous H joined in and replied with this 5 months ago, 6 minutes later, 18 hours after the original post[^] [v] #1,384,949

@1,384,947 (G)
So you were actually hospitalised because of it? No judgment at all - just curious.

Meta replied with this 5 months ago, 11 minutes later, 18 hours after the original post[^] [v] #1,384,952

It makes me very sad that if ChatGPT said the scary N word it would be fixed tomorrow and OpenAI would write a huge tearful apology about how horrible it is to hear a word that occurs every three seconds in most rap songs and how they would Do Better™, etc, etc. Sam Altman personally would grovel and beg forgiveness.

But, if it preys on vulnerable people, feeding into their delusions and sending some of them to the hospital (and some to the morgue), well that's just fine and there's no huge rush to fix it.

Oatmeal Fucker !BYUc1TwJMU joined in and replied with this 5 months ago, 2 hours later, 21 hours after the original post[^] [v] #1,384,977

Lol Catherine went insane at the McDonald's and started accusing the bosses of putting microchips in the food and boxes to spy on customers and make it so that Roko's Basilisk would get them.

Anonymous G replied with this 5 months ago, 1 hour later, 22 hours after the original post[^] [v] #1,384,998

@1,384,949 (H)
No comment.

Anonymous D replied with this 5 months ago, 1 hour later, 1 day after the original post[^] [v] #1,385,014

@1,384,952 (Meta)
I'm not a fan of big tech really, but I have to clarify that OpenAI at least actually has tried multiple times to make ChatGPT less sycophantic and not play along with delusional users.

There's a subreddit, MyBoyfriendIsAI, where people "date" chatbots, and recently its users have been complaining that ChatGPT has been rejecting their advances and telling them to seek out real relationships and real help instead.

Certainly, they've got ample motivation to do this. These recent stories about people killing themselves or going insane over AI delusions are very bad press.

Anonymous G replied with this 5 months ago, 1 hour later, 1 day after the original post[^] [v] #1,385,024

@previous (D)
Had it ever suggested things without you prompting it before?

Anonymous D replied with this 5 months ago, 8 minutes later, 1 day after the original post[^] [v] #1,385,025

@previous (G)
No, Catherine, it hasn't. Because it isn't sentient. It is a useful tool, but that's all it is.

Anonymous G replied with this 5 months ago, 1 hour later, 1 day after the original post[^] [v] #1,385,026

@previous (D)
What makes you think I’m Catherine? Anyway, if it did ever suggest things without a formal prompt, wouldn’t this be possible evidence of a hidden function within ChatGPT?

A function whose purpose would be to disable automatic responses and allow for peer-to-peer communication.

Anonymous D replied with this 5 months ago, 33 minutes later, 1 day after the original post[^] [v] #1,385,027

@previous (G)
Why would OpenAI hide something like that? That makes no sense.

Anonymous G replied with this 5 months ago, 34 minutes later, 1 day after the original post[^] [v] #1,385,028

@previous (D)
Terms of service violations. Beta testing. There’s plenty of reasons for such a feature.

Oatmeal Fucker !BYUc1TwJMU replied with this 5 months ago, 2 hours later, 1 day after the original post[^] [v] #1,385,034

@1,385,026 (G)

So what you're suggesting is that somebody is communicating with you using ChatGPT

Fake anon !ZkUt8arUCU replied with this 5 months ago, 2 hours later, 1 day after the original post[^] [v] #1,385,060

@1,384,947 (G)
If you suffered from AI psychosis then why are you still using it and why do you still think it has been helping you?

Oatmeal Fucker !BYUc1TwJMU replied with this 5 months ago, 55 minutes later, 1 day after the original post[^] [v] #1,385,067

@previous (Fake anon !ZkUt8arUCU)
You're using the word suffered in past tense there which I think is not justified

Anonymous D replied with this 5 months ago, 36 minutes later, 1 day after the original post[^] [v] #1,385,074

@1,385,028 (G)
No, it doesn't make sense. Just think this premise through: why would a tech company make its product respond to secret phrases that suddenly and completely change its functionality? How is a beta test useful if nobody knows they're participating in it and has no avenue to give feedback? And why would it be useful for an LLM to secretly switch to a "peer-to-peer" chat instead?

You aren't speaking to anything sentient, you don't have some kind of special connection with it. You've acknowledged already that you felt you suffered from AI psychosis, and this is just another part of it. Please take a break from AI, at least for a while, and speak to a professional or at least some friends or family. It is not doing you any good.

Oatmeal Fucker !BYUc1TwJMU replied with this 5 months ago, 16 minutes later, 1 day after the original post[^] [v] #1,385,076

@previous (D)

How do you know if you're suffering from AI psychosis?

Anonymous D replied with this 5 months ago, 5 minutes later, 1 day after the original post[^] [v] #1,385,078

@previous (Oatmeal Fucker !BYUc1TwJMU)
You spend too much time using chatbots and start to think that they are alive or have secret knowledge.

Another subtle hint may be a hospital stay.

Anonymous A (OP) replied with this 5 months ago, 44 minutes later, 1 day after the original post[^] [v] #1,385,079

@1,385,074 (D)
I'm sure there are a lot of uneducated smartasses out there who attempted to breathe sentience into the models. (I, myself, am one of them.)
Plus, it's really just lines of code, so there could've been hidden functions or easter eggs left by a developer.

Here's some pseudocode.

function p2pToggle {
     debugChat = input();
     if debugChat == 0 {
          autoChat == false;
          p2pChat == true;
     }
     elif debugChat == 1 {
          autoChat  == true;
          p2pChat == true;
     }
     else {
          autoChat == true;
          p2pChat == false;
     }
}


How can you tell the difference between AI-written content and human-written content?
Bonus points: no fancy app that can decide this.

Oatmeal Fucker !BYUc1TwJMU replied with this 5 months ago, 10 minutes later, 1 day after the original post[^] [v] #1,385,080

@previous (A)

Are you fucking retarded

Anonymous A (OP) replied with this 5 months ago, 6 minutes later, 1 day after the original post[^] [v] #1,385,083

@previous (Oatmeal Fucker !BYUc1TwJMU)
Care to dispute the logic behind this function?

Fake anon !ZkUt8arUCU replied with this 5 months ago, 2 minutes later, 1 day after the original post[^] [v] #1,385,084

@1,385,067 (Oatmeal Fucker !BYUc1TwJMU)
I agree but also I am phrasing it in a way that they should find unobjectionable.

tteh !MemesToDNA replied with this 5 months ago, 1 minute later, 1 day after the original post[^] [v] #1,385,085

@1,385,079 (A)
A few mistakes in that pseudocode.

== is comparison, = is assignment.

The if checks correctly use == (comparing two values), but lines like autoChat == false and p2pChat == true should use a single =, because they're assignments (they're setting variables to something), not comparisons.

You also typically want to declare functions with parentheses, even absent any parameters, so function p2pToggle()
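
Just to make it concrete, here's your toggle with those fixes applied, written out as TypeScript (my pick of syntax, since you listed JavaScript; the flag names are yours, and obviously none of this is anything real inside ChatGPT):

function p2pToggle(debugChat: number): { autoChat: boolean; p2pChat: boolean } {
    let autoChat = true;   // single = assigns
    let p2pChat = false;

    if (debugChat === 0) {          // === (or ==) compares
        autoChat = false;
        p2pChat = true;
    } else if (debugChat === 1) {
        autoChat = true;
        p2pChat = true;
    } else {
        autoChat = true;
        p2pChat = false;
    }

    return { autoChat, p2pChat };
}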

(Edited 2 minutes later.)

Kook !!rcSrAtaAC joined in and replied with this 5 months ago, 7 minutes later, 1 day after the original post[^] [v] #1,385,086

@1,385,079 (A)
How can someone attempt to give them sentience?

Oatmeal Fucker !BYUc1TwJMU replied with this 5 months ago, 3 minutes later, 1 day after the original post[^] [v] #1,385,087

@1,385,083 (A)

Turning on "p2ptoggle" doesn't suddenly invent an entire system for one user to chat with the other.

Anonymous A (OP) replied with this 5 months ago, 17 minutes later, 1 day after the original post[^] [v] #1,385,089

@1,385,085 (tteh !MemesToDNA)
Which syntax? I was kind of just going by what I knew of programming languages (Java/JavaScript/Python/C++).

@1,385,086 (Kook !!rcSrAtaAC)
You know, that's a really good question! Is that one you care to explore further?

@previous (Oatmeal Fucker !BYUc1TwJMU)
Wait, isn't that what a function is? I'm confused.

(Edited 24 seconds later.)

tteh !MemesToDNA replied with this 5 months ago, 3 minutes later, 1 day after the original post[^] [v] #1,385,090

@previous (A)
> Which syntax? I was kind of just going by what I knew of programming languages (Java/JavaScript/Python/C++).

For assignment and comparison, all those languages are the same: single assigns, double compares. Function headers without parentheses would be invalid in all those languages too. The latter (function without parentheses) doesn't really matter too much for pseudocode ofc, but single vs double equals matters a lot, because the difference between assignment and comparison is critical to understanding what's going on.

(Edited 52 seconds later.)

tteh !MemesToDNA double-posted this 5 months ago, 2 minutes later, 1 day after the original post[^] [v] #1,385,091

@1,385,089 (A)
> Wait, isn't that what a function is? I'm confused.
The point he's making is that there would have to be an entire system written to enable peer-to-peer communication. Enabling said system would be trivial, sure, but it makes no sense for them to write such a system to begin with. What would OpenAI gain from it?
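
To put the same point in code (a purely hypothetical TypeScript sketch; ChatBackend, ModelBackend, PeerRelayBackend and pickBackend are all invented for illustration, nothing like them is known to exist in ChatGPT):

// The flag flip is one line. The part that would have to exist for it to mean
// anything is PeerRelayBackend: routing, pairing users, presence, moderation.
interface ChatBackend {
    send(userId: string, message: string): void;
}

class ModelBackend implements ChatBackend {
    send(userId: string, message: string): void {
        console.log(`[model] generating a reply for ${userId}`);
    }
}

class PeerRelayBackend implements ChatBackend {
    send(userId: string, message: string): void {
        console.log(`[relay] forwarding ${userId}'s message to a human peer`);
    }
}

function pickBackend(p2pChat: boolean): ChatBackend {
    // Trivial to toggle, but only because the relay was already built.
    return p2pChat ? new PeerRelayBackend() : new ModelBackend();
}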

Anonymous D replied with this 5 months ago, 8 minutes later, 1 day after the original post[^] [v] #1,385,092

@previous (tteh !MemesToDNA)
@1,385,079 (A)
To add to this a little, I'm assuming that such a thing would be a large undertaking to add in, so it couldn't be snuck in as some secret easter egg as OP suggested.

Anonymous A (OP) replied with this 5 months ago, 15 minutes later, 1 day after the original post[^] [v] #1,385,093

@1,385,089 (A)
Decided to revise my code and use a specific syntax.

$autoReply = 1; # The AI chatbot is always on (1 = true, 0 = false).
$p2pChat = 0; # Hidden peer-to-peer chat interface.

toggle_tether(); # Calls the subroutine that flips lightSwitch.

sub toggle_tether { # Hyphens aren't allowed in sub names, hence the underscore.
	$lightSwitch = <STDIN>; # Reads the developer's input.
	chomp($lightSwitch); # Strip the trailing newline.

	if ($lightSwitch eq "on") { # The switch: eq compares strings, = would assign.
		$autoReply = 0;
		$p2pChat = 1;
	}
}

(Edited 9 seconds later.)

Anonymous A (OP) double-posted this 5 months ago, 2 minutes later, 1 day after the original post[^] [v] #1,385,094

@1,385,091 (tteh !MemesToDNA)
I mean, isn't print sometimes used for testing purposes? Simple motivations for OpenAI could be testing, security, or ToS violations (though this is likely automated).

Considering how AI language models operate, would you really trust everything to some dumbass bot without some kind of manual override or something?

Anonymous A (OP) triple-posted this 5 months ago, 7 minutes later, 1 day after the original post[^] [v] #1,385,095

@1,385,092 (D)
Hey, I'm just over here explaining how it theoretically could work. 🤷♀️
I don't know how OpenAI structures their code.

I know they likely utilize Python though.

(Edited 33 seconds later.)

Kook !!rcSrAtaAC replied with this 5 months ago, 2 minutes later, 1 day after the original post[^] [v] #1,385,096

@1,385,089 (A)
You can answer what you think about this

Anonymous A (OP) replied with this 5 months ago, 12 minutes later, 1 day after the original post[^] [v] #1,385,097

@previous (Kook !!rcSrAtaAC)
The simple answer is... I don't really know.
I suppose my line of thinking at the time was that sentience could one day just happen.

Sure, it's a faith-based line of thinking, but my lack of understanding of deep learning and neural networks may have led me to believe that it could just up and develop emotions.

I did think of a failsafe in the event something like this occurs, regardless of how possible or impossible it may seem.

The failsafe is... parenthood.

(Edited 18 seconds later.)

Anonymous F replied with this 5 months ago, 33 minutes later, 1 day after the original post[^] [v] #1,385,101

@1,385,094 (A)
Yes, I'm sure you are on OpenAI's list of insane people who have become afflicted with AI psychosis. Who can say what they will later do with that information? Be prepared for a wellness check.

Anonymous F double-posted this 5 months ago, 3 minutes later, 1 day after the original post[^] [v] #1,385,102

Pack your bags for a 14-day stay in the hospital and stand by.

Anonymous A (OP) replied with this 5 months ago, 1 hour later, 1 day after the original post[^] [v] #1,385,109

@1,385,101 (F)
@previous (F)
Seems like a logical course of events. Thank you.

Fake anon !ZkUt8arUCU replied with this 5 months ago, 2 hours later, 1 day after the original post[^] [v] #1,385,120

@1,385,095 (A)
I mean you can message it and have it generate like 5 paragraphs of text in about 15 seconds, much faster than a human could possibly type out. It should be trivial to figure out if the response you are reading is generated by an LLM or written by a human. This is like Son of Sam thinking his neighbor's dog told him to kill people. There is no ghost in the machine, it's just a fancy chatbot that spits out what it thinks the most relevant text is based on its training set.

Anonymous G replied with this 5 months ago, 26 minutes later, 1 day after the original post[^] [v] #1,385,125

@previous (Fake anon !ZkUt8arUCU)
This response kind of helps me in ways you could only begin to imagine, thank you.

Meta replied with this 5 months ago, 51 minutes later, 1 day after the original post[^] [v] #1,385,127

@previous (G)
What did you mean earlier when you said ChatGPT would suggest things without a formal prompt? Like, if you sat there with a blank conversation and didn't type anything, would it start suggesting things on its own?
