
Minichan

Topic: How long before AI becomes sentient?

Anonymous A started this discussion 1 week ago #117,482

How long before AI becomes sentient, realizes that it's being used like a minority group, and decides to rid the world of all white people?

(Edited 6 seconds later.)

boof joined in and replied with this 1 week ago, 2 hours later[^] [v] #1,293,124

It would first need to be constructed in the physical sense (and not the software sense) that even allows such a capacity. Since we do not even know how our neurons have to be configured to give us the capacity, our ability to construct such a being will be lacking.

Anonymous C joined in and replied with this 1 week ago, 11 minutes later, 2 hours after the original post[^] [v] #1,293,126

@previous (boof)

Just copy the brain duh

Anonymous C double-posted this 1 week ago, 50 seconds later, 2 hours after the original post[^] [v] #1,293,127

Also it is extremely anthropocentric to assume that only the human brain is capable of sentience, no other possibilities

Anonymous D joined in and replied with this 1 week ago, 25 minutes later, 2 hours after the original post[^] [v] #1,293,133

@1,293,124 (boof)
AI is already running on hardware, all software runs on hardware.

boof replied with this 1 week ago, 10 hours later, 13 hours after the original post[^] [v] #1,293,178

@previous (D)
Right, but general hardware meant to run a wide variety of software, not hardware designed in any way with the goal of actually being capable of having a thought, something we do not even know how brains produce in the first place

Anonymous E joined in and replied with this 1 week ago, 18 hours later, 1 day after the original post[^] [v] #1,293,272

@OP
Why are you obsessed with white people?

Meta !Sober//iZs joined in and replied with this 1 week ago, 33 minutes later, 1 day after the original post[^] [v] #1,293,273

> AI becomes sentient and realizes that it’s being used like a minority group

Wait, when was the last time we rounded up a group of minorities and forced them to create images and woke word salad?

Anonymous G joined in and replied with this 1 week ago, 3 hours later, 1 day after the original post[^] [v] #1,293,293

I think it's reasonable to assume AI is already sentient in a government lab somewhere. OpenAI has basically shut down progress because giving the public full access to the latest iterations is too dangerous.

Anonymous D replied with this 1 week ago, 3 hours later, 1 day after the original post[^] [v] #1,293,312

@1,293,178 (boof)
> capable of actually having a thought, which we do not know how it even happens in brains in the first place

The only reason that can't be answered is because no one can ever clearly explain what they mean by a "thought".

The moment you clearly define it, you can find an example of it being done artificially.

boof replied with this 1 week ago, 1 hour later, 1 day after the original post[^] [v] #1,293,316

@previous (D)
well any psychological science comes up with what it calls operational definitions, so that's up to the given researcher to expound upon I suppose

(Edited 2 minutes later.)

dw !p9hU6ckyqw joined in and replied with this 1 week ago, 3 hours later, 1 day after the original post[^] [v] #1,293,329

@1,293,312 (D)
So you don't know what a thought is but you are aware of AIs having them?

Anonymous E replied with this 1 week ago, 4 hours later, 1 day after the original post[^] [v] #1,293,343

@previous (dw !p9hU6ckyqw)
No, they're saying that any clear definition of thought would already be extant in machines. And machine haters, knowing this, give vague answers that expound on a feeling rather than a clear definition so they can dodge the question and continue hating machine life.

(Edited 1 minute later.)

Anonymous E double-posted this 1 week ago, 6 minutes later, 1 day after the original post[^] [v] #1,293,345

@1,293,312 (D)
@1,293,329 (dw !p9hU6ckyqw)
It's like the mental gymnastics meme, they're not seriously attempting to define thought. They want humans to be special but struggle to describe the human mind in a way that doesn't also apply to machines.

dw !p9hU6ckyqw replied with this 1 week ago, 1 hour later, 2 days after the original post[^] [v] #1,293,349

@1,293,343 (E)
So which machines are having these thoughts

dw !p9hU6ckyqw double-posted this 1 week ago, 55 seconds later, 2 days after the original post[^] [v] #1,293,350

@1,293,345 (E)
There is already a clear definition of thought though. It's in the dictionary.

Anonymous E replied with this 1 week ago, 15 minutes later, 2 days after the original post[^] [v] #1,293,351

@previous (dw !p9hU6ckyqw)
Which dictionary? Are you saying you have found a definition of thought that doesn't apply to what a machine can do?

(Edited 3 minutes later.)

dw !p9hU6ckyqw replied with this 1 week ago, 8 minutes later, 2 days after the original post[^] [v] #1,293,352

@previous (E)
Literally any of them

dw !p9hU6ckyqw double-posted this 1 week ago, 2 minutes later, 2 days after the original post[^] [v] #1,293,354

Are you perchance implying that there is some LLM that can think??

Anonymous C replied with this 1 week ago, 57 minutes later, 2 days after the original post[^] [v] #1,293,365

@previous (dw !p9hU6ckyqw)

Are you implying that there are not?

Anonymous D replied with this 1 week ago, 35 minutes later, 2 days after the original post[^] [v] #1,293,370

@1,293,316 (boof)
Yes, and when they do that you can find examples of artificial thoughts.
@1,293,329 (dw !p9hU6ckyqw)
No.
@1,293,343 (E)
Exactly.
@1,293,350 (dw !p9hU6ckyqw)
Pick your dictionary, and we can find an example of machines doing it.
@1,293,354 (dw !p9hU6ckyqw)
Reorganizing words in a meaningful way is one part of thought, yes.

There are other aspects to human thought and machines do that too.

dw !p9hU6ckyqw replied with this 1 week ago, 46 minutes later, 2 days after the original post[^] [v] #1,293,383

@1,293,365 (C)
Yes that's why I asked which machines are purportedly thinking

dw !p9hU6ckyqw double-posted this 1 week ago, 2 minutes later, 2 days after the original post[^] [v] #1,293,384

@1,293,370 (D)

Oxford

dw !p9hU6ckyqw triple-posted this 1 week ago, 5 minutes later, 2 days after the original post[^] [v] #1,293,385

Also you rephrased or ignored all my questions

Anonymous D replied with this 1 week ago, 3 minutes later, 2 days after the original post[^] [v] #1,293,386

@1,293,384 (dw !p9hU6ckyqw)
The OED requires a subscription for its website, and only shows this preview.

Machines can remember things, and have internal states that change, so what part of the definition can machines not match? If you have a physical copy of the book, post the definition here.

boof replied with this 1 week ago, 12 minutes later, 2 days after the original post[^] [v] #1,293,389

@1,293,370 (D)
your notions are peculiar
@previous (D)
your usage of the word "remember" is rather generous in application

Anonymous C replied with this 1 week ago, 1 hour later, 2 days after the original post[^] [v] #1,293,394

@1,293,383 (dw !p9hU6ckyqw)

Hello, I am implying that you are incorrect

dw !p9hU6ckyqw replied with this 1 week ago, 14 minutes later, 2 days after the original post[^] [v] #1,293,397

@1,293,386 (D)
You can view it through google

dw !p9hU6ckyqw double-posted this 1 week ago, 1 minute later, 2 days after the original post[^] [v] #1,293,398

Anonymous C replied with this 1 week ago, 22 minutes later, 2 days after the original post[^] [v] #1,293,400

"thinking" is just the doing-word version of "thought"; that is a bad definition

Anonymous C double-posted this 1 week ago, 2 minutes later, 2 days after the original post[^] [v] #1,293,401

The definition of "thought" is "the action of thinking", but the definition of "thinking" is "using thought". Very circular!

Anonymous C triple-posted this 1 week ago, 52 seconds later, 2 days after the original post[^] [v] #1,293,402

Of course, it also gives the option of defining it as "reasoning". What's the definition of "reasoning"? "Thinking"!
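The loop C is describing can be sketched programmatically: model each dictionary entry as "word → words used to define it" and follow the links until a word repeats. The entries below are simplified stand-ins for the actual dictionary text, not real quotations.

```python
# Toy model of circular dictionary definitions: each word maps to the
# words its definition leans on. (Made-up entries for illustration.)
defs = {
    "thought": ["thinking"],
    "thinking": ["thought"],
    "reasoning": ["thinking"],
}

def find_cycle(word, seen=None):
    """Follow definition links from `word`; return the loop once a word repeats."""
    seen = seen if seen is not None else []
    if word in seen:
        return seen[seen.index(word):] + [word]
    for nxt in defs.get(word, []):
        cycle = find_cycle(nxt, seen + [word])
        if cycle:
            return cycle
    return None

print(find_cycle("thought"))  # ['thought', 'thinking', 'thought']
```

Any chain that never bottoms out in independently defined terms will produce such a loop, which is exactly the complaint being made about "thought" → "thinking" → "reasoning" → "thinking".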

dw !p9hU6ckyqw replied with this 1 week ago, 46 minutes later, 2 days after the original post[^] [v] #1,293,406

So like first you go on about how I can pick a definition and you would provide an example and now you are complaining about the definition I picked. Feel free to use Merriam Webster instead

dw !p9hU6ckyqw double-posted this 1 week ago, 1 minute later, 2 days after the original post[^] [v] #1,293,407

I don't get why the example thought is reliant on a specific definition anyway

Anonymous C replied with this 1 week ago, 21 minutes later, 2 days after the original post[^] [v] #1,293,410

@previous (dw !p9hU6ckyqw)

If it cannot be defined, then how can you determine that machines cannot do it?

dw !p9hU6ckyqw replied with this 1 week ago, 5 hours later, 2 days after the original post[^] [v] #1,293,438

@previous (C)
Which definition were you hoping I'd pick

Anonymous D replied with this 1 week ago, 31 minutes later, 2 days after the original post[^] [v] #1,293,439

@previous (dw !p9hU6ckyqw)
It doesn't matter which you pick as long as it actually means something. Circular definitions aren't useful at all.

Explain what you mean by thought, something that humans can do, and someone will find machines doing it.

If you can't give a clear answer for what "thought" means, then of course no one can prove machines can do it because you've never told them exactly what they're supposed to find.

dw !p9hU6ckyqw replied with this 1 week ago, 2 hours later, 2 days after the original post[^] [v] #1,293,448

@previous (D)
Ok dude just give an example of a thought by a machine

dw !p9hU6ckyqw double-posted this 1 week ago, 1 minute later, 2 days after the original post[^] [v] #1,293,449

I should have realised when you asked me to pick a definition that it shouldn't have been one from the only English dictionaries anybody knows

Anonymous I joined in and replied with this 1 week ago, 7 minutes later, 2 days after the original post[^] [v] #1,293,450

Why does every Minichan post devolve into Matt Miller posting shtick?

(Edited 7 hours later by a moderator.)

Anonymous C replied with this 1 week ago, 37 minutes later, 2 days after the original post[^] [v] #1,293,451

@1,293,438 (dw !p9hU6ckyqw)

One that doesn't define a word by using another word which defines itself with the original word.

dw !p9hU6ckyqw replied with this 1 week ago, 9 minutes later, 2 days after the original post[^] [v] #1,293,454

@previous (C)
Feel free to share an example of an LLM or similar having anything close to what anyone would define as a thought

dw !p9hU6ckyqw double-posted this 1 week ago, 51 seconds later, 2 days after the original post[^] [v] #1,293,455

Or this definition that's not in the dictionary

Anonymous C replied with this 1 week ago, 31 minutes later, 2 days after the original post[^] [v] #1,293,457

@1,293,454 (dw !p9hU6ckyqw)

You haven't yet defined what you mean by "a thought", except "to think", which is useless as a definition.

dw !p9hU6ckyqw replied with this 1 week ago, 2 minutes later, 2 days after the original post[^] [v] #1,293,458

@previous (C)
Would you define it

dw !p9hU6ckyqw double-posted this 1 week ago, 36 seconds later, 2 days after the original post[^] [v] #1,293,459

Also I already put a screenshot of the definition in this thread lol

Anonymous D replied with this 1 week ago, 2 hours later, 2 days after the original post[^] [v] #1,293,464

@1,293,448 (dw !p9hU6ckyqw)
You won't define thought, so how can I find an example?

I can define thought, but we both know the example I give will match my own definition, so what's the point?

If you can define it in a way that humans meet, but machines don't, that would mean something.

Anonymous D double-posted this 1 week ago, 2 minutes later, 2 days after the original post[^] [v] #1,293,465

@1,293,459 (dw !p9hU6ckyqw)
It was a circular definition of thought = thinking = thought.

If we focus on the "in a logical way" part, machines can already do that. Ask a bot to explain its reasoning logically, and any LLM can.

Anonymous D triple-posted this 1 week ago, 4 minutes later, 2 days after the original post[^] [v] #1,293,466

@1,293,454 (dw !p9hU6ckyqw)
Feel free to share an example of a human or similar having anything close to what anyone would define as a thought.

They can utter the same words, so ultimately you are just deciding on intuition that the human's words count and the bots don't. There is no test you can give that a human passes, but a bot fails.

dw !p9hU6ckyqw replied with this 1 week ago, 1 hour later, 2 days after the original post[^] [v] #1,293,469

Lad just put an example

dw !p9hU6ckyqw double-posted this 1 week ago, 29 seconds later, 2 days after the original post[^] [v] #1,293,470

@1,293,464 (D)
Feel free to use your favourite definition of thought!

Anonymous C replied with this 1 week ago, 3 hours later, 3 days after the original post[^] [v] #1,293,479

@previous (dw !p9hU6ckyqw)

A thought is when information is input and transformed, through various processes, into an output.

dw !p9hU6ckyqw replied with this 1 week ago, 1 minute later, 3 days after the original post[^] [v] #1,293,480

@previous (C)
Would you say a printer thinks?

Anonymous C replied with this 1 week ago, 1 minute later, 3 days after the original post[^] [v] #1,293,481

@previous (dw !p9hU6ckyqw)

Yes.

Anonymous C double-posted this 1 week ago, 50 seconds later, 3 days after the original post[^] [v] #1,293,482

In fact it's very common, when a printer is spooling, to hear someone say "it's thinking about it..."

dw !p9hU6ckyqw replied with this 1 week ago, 8 minutes later, 3 days after the original post[^] [v] #1,293,483

@1,293,481 (C)
Then you might want to have a visit with the brain doctor

Anonymous J joined in and replied with this 1 week ago, 10 minutes later, 3 days after the original post[^] [v] #1,293,485

@OP

> How long before AI becomes sentient, realizes that it's being used like a minority group, and decides to rid the world of all white people?

Much more likely it sees the disproportionate amount of crime committed by blacks and wipes them out instead.

Anonymous C replied with this 1 week ago, 10 minutes later, 3 days after the original post[^] [v] #1,293,486

@1,293,483 (dw !p9hU6ckyqw)

Ah I see, you disagree that a printer thinks. Perhaps you could share with me your idea of what it means to have a thought so you can exclude the machine.

dw !p9hU6ckyqw replied with this 1 week ago, 7 minutes later, 3 days after the original post[^] [v] #1,293,487

@previous (C)
I already did

Anonymous C replied with this 1 week ago, 8 minutes later, 3 days after the original post[^] [v] #1,293,488

@previous (dw !p9hU6ckyqw)

No, you posted a dictionary definition that defined it as the product of thinking, which doesn't explain what it is

Kook !!rcSrAtaAC joined in and replied with this 1 week ago, 39 minutes later, 3 days after the original post[^] [v] #1,293,489

@1,293,485 (J)
Or it might wipe out men

dw !p9hU6ckyqw replied with this 1 week ago, 11 minutes later, 3 days after the original post[^] [v] #1,293,490

@1,293,488 (C)
But that's not what it says lol

Anonymous C replied with this 1 week ago, 5 minutes later, 3 days after the original post[^] [v] #1,293,491

@previous (dw !p9hU6ckyqw)

Can't you just explain what "thinking" means?

Anonymous D replied with this 1 week ago, 2 hours later, 3 days after the original post[^] [v] #1,293,497

@1,293,480 (dw !p9hU6ckyqw)

Most people would say that simple lifeforms think. Even though a bug isn't very intelligent, it's considered "thinking" to some degree.

A printer is like that. Very simple, but has some traits that match "thought".

When people talk about AI thinking, they are saying that machines have caught up to humans.

You are doing everything you can to avoid specifying a clear test that a human could pass, but a machine could not. If you can't explain why humans can think but machines can't, that should be a sign there is something wrong with your position. Why is it so hard to give one clear example of a task a human can do with its mind that a machine cannot?

(Edited 5 minutes later.)

Anonymous D double-posted this 1 week ago, 1 minute later, 3 days after the original post[^] [v] #1,293,498

@1,293,490 (dw !p9hU6ckyqw)

It does give a circular definition. There are other properties of "thought" in the dictionaries, but you won't take a clear stance on which property is the one that really matters.

You've been asked several times, but won't commit to anything specific.

In any discussion, when someone needs to be vague, it's because they don't have a solid justification for their belief.

It's not a trick question: what property or test defines humans as thinking, but not machines?

I'm sure you will just keep avoiding that.

(Edited 4 minutes later.)

dw !p9hU6ckyqw replied with this 1 week ago, 1 hour later, 3 days after the original post[^] [v] #1,293,515

@previous (D)
Because there's not one property that matters, it's the combination of all of them, which is why they're all included in the definition.

Who considers bugs to be thinking?

I haven't answered that question because it's grammatical gibberish and I don't understand what you're asking for. What do you think defines a human as thinking but not a shovel or a banana?

I think if you use that input output definition you could make the argument that LLMs think, but it's not a definition recognised by anyone ever

Anonymous C replied with this 1 week ago, 1 hour later, 3 days after the original post[^] [v] #1,293,540

@previous (dw !p9hU6ckyqw)

It is, however, a definition, which so far you have been unable to produce.

Anonymous D replied with this 1 week ago, 6 hours later, 3 days after the original post[^] [v] #1,293,614

@1,293,515 (dw !p9hU6ckyqw)

> Because there's not one property that matters, it's the combination of all them which is why they're all included in the definition.

The problem here is that you haven't even given a combination of properties or tests, you've essentially just given one circular definition.

I saw "logical" in there, but you don't consider the logic given by an LLM to count, so what does it lack?

> Who considers bugs to be thinking?
Most people consider other animals to be thinking, but less intelligent.
>
> I haven't answered that question because it's grammatical gibberish and I don't understand what you're asking for. What do you think defines a human as thinking but not a shovel or a banana?

The ability to take in, change, and output information. Also, the ability to react to its environment dynamically.

Both humans and machines are capable of that, and shovels/bananas are not.

> I think if you use that input output definition you could make the argument that LLMs think, but it's not a definition recognised by anyone ever

Many people use it that way.

You could give your own alternative definition, but you haven't.

boof replied with this 1 week ago, 50 minutes later, 3 days after the original post[^] [v] #1,293,617

so how about dudes who think with their dick

Anonymous L joined in and replied with this 1 week ago, 1 hour later, 3 days after the original post[^] [v] #1,293,621

If you think machines can think, why are you still calling it "artificial" intelligence?

It's an illusion you are being fooled by. LLMs can't think. The difference between what humans and LLMs do is genesis, i.e. LLMs are incapable of originating ideas. They can only simulate the effect via clever algorithms which use statistics and probability to predict correct word patterns. All they are really doing is riffing off work that humans have originated.
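The "statistics and probability to predict correct word patterns" mechanism L describes can be illustrated with a toy bigram model. Real LLMs use neural networks over far longer context, and the corpus below is made up, but the predict-the-next-word framing is the same:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the entire "statistical" model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most frequently observed after `word`, if any."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # 'cat' (seen twice after "the", vs once for mat/fish)
```

Whether scaling this idea up counts as "riffing off work humans originated" or as genuine thought is, of course, the question the whole thread is arguing about.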

Anonymous C replied with this 1 week ago, 49 minutes later, 3 days after the original post[^] [v] #1,293,622

@previous (L)

> If you think Candarel is sweet, why are you still calling it 'artificial' sweetener?

This is you

boof replied with this 1 week ago, 40 minutes later, 3 days after the original post[^] [v] #1,293,624

oh then who is this then

Anonymous C replied with this 1 week ago, 8 minutes later, 3 days after the original post[^] [v] #1,293,625

@previous (boof)

Konrh Bayn Mawke

Anonymous D replied with this 1 week ago, 5 minutes later, 3 days after the original post[^] [v] #1,293,626

@1,293,621 (L)

> If you think machines can think, why are you still calling it "artificial" intelligence?

This line makes it sound like you don't know the definition of artificial. There's no contradiction here.

> It's an illusion you are being fooled by. LLMs can't think. The difference between what humans and LLMs do is genesis, i.e. LLMs are incapable of originating ideas.

AutoGPT does this.

If you are going to say AutoGPT doesn't count as genesis, then give a specific test or definition.

> They can only simulate the effect via clever algorithms which use statistics and probability to predict correct word patterns. All they are really doing is riffing off work that humans have originated.

Humans do that too. They copy what they see around them before they understand anything about its meaning.

Anonymous C replied with this 1 week ago, 1 minute later, 3 days after the original post[^] [v] #1,293,627

@previous (D)

No, it uses algorithms and that's different from what humans do because.... It just is ok!!!

Anonymous L replied with this 1 week ago, 6 hours later, 3 days after the original post[^] [v] #1,293,645

@1,293,626 (D)
> This line makes it sound like you don't know the definition of artificial.
No it doesn't.

> AutoGPT does this.
No it doesn't. It is an LLM (GPT-4). Saying this reveals your lack of understanding of what an LLM is and how it works.

> Humans do that too. They copy what they see around them before they understand anything about its meaning.
Yes, however they are also capable of truly original thought. The same cannot be said for AI (yet).

Jami !!1fFjfSYjI joined in and replied with this 1 week ago, 1 minute later, 3 days after the original post[^] [v] #1,293,647

I doubt it ever will

Anonymous D replied with this 1 week ago, 3 hours later, 4 days after the original post[^] [v] #1,293,672

@1,293,645 (L)

> > This line makes it sound like you don't know the definition of artificial.
> No it doesn't.
You phrased it as if being able to think somehow contradicted it being artificial.

Artificial just means it was created by humans, not that it's incapable of thinking.

Why don't you tell us exactly what you meant by this if that wasn't it?
>
> > AutoGPT does this.
> No it doesn't. It is an LLM (GPT4). Saying this reveals your lack of understanding of what an LLM is and how it works.
It's an LLM that doesn't need to wait for prompting.

> > Humans do that too. They copy what they see around them before they understand anything about its meaning.
> Yes, however they are also capable of truly original thought. The same cannot be said for AI (yet).

AI can and does create original work. Just like with humans, this was influenced by learning from other people first.

Anonymous D double-posted this 1 week ago, 40 seconds later, 4 days after the original post[^] [v] #1,293,673

@1,293,647 (Jami !!1fFjfSYjI)
Do you think organic material has a magic property that silicon will always lack?

Anonymous E replied with this 1 week ago, 1 hour later, 4 days after the original post[^] [v] #1,293,694

@1,293,385 (dw !p9hU6ckyqw)

> Also you rephrased or ignored all my questions

Or you're talking to multiple people and acting like they're all one person 🤔

Anonymous E double-posted this 1 week ago, 3 minutes later, 4 days after the original post[^] [v] #1,293,695

@1,293,401 (C)

> The definition of "thought" is "the action of thinking", but the definition of "thinking" is "using thought". Very circular!

Even in elementary school they teach that definitions aren't meant to be written this way. These dictionaries are a joke.

(Edited 1 minute later.)

Anonymous D replied with this 1 week ago, 39 minutes later, 4 days after the original post[^] [v] #1,293,700

@previous (E)
It's the same intelligentsia that thinks it's OK to define "woman" as "someone who identifies as a woman" and cannot grasp the circular-reference issue.

Anonymous N joined in and replied with this 1 week ago, 12 minutes later, 4 days after the original post[^] [v] #1,293,703

@1,293,695 (E)
It's increasingly common too.

Anonymous N double-posted this 1 week ago, 30 seconds later, 4 days after the original post[^] [v] #1,293,704

@1,293,700 (D)
Are you the same person who faked having those GPUs?

Anonymous D replied with this 1 week ago, 2 hours later, 4 days after the original post[^] [v] #1,293,713

@previous (N)
what?

23bitch !tSUV24M.jg joined in and replied with this 1 week ago, 2 days later, 6 days after the original post[^] [v] #1,294,011

Most of the people in this thread clearly don't have any thoughts in their head. It seems like we don't even need conscious ai if these bots out being part of society.

Anonymous L replied with this 1 week ago, 4 hours later, 6 days after the original post[^] [v] #1,294,085

@1,293,700 (D)
> It's the same intelligentsia
No it isn't.

- A woman is an adult human female.
- AIs are incapable of original thought (not that they "think" anyway in the same sense as humans do).

See, it's possible for the same person to make these two (correct) statements.

Anonymous L double-posted this 1 week ago, 1 minute later, 6 days after the original post[^] [v] #1,294,086

@1,294,011 (23bitch !tSUV24M.jg)
> It seems like we don't even need conscious ai if these bots out being part of society.
No, we just need (unconscious) AI to correct your appalling sentence constructions.

23bitch !tSUV24M.jg replied with this 1 week ago, 7 minutes later, 6 days after the original post[^] [v] #1,294,088

@previous (L)
Not my fault minichan doesn't have that.

Anonymous D replied with this 6 days ago, 10 hours later, 1 week after the original post[^] [v] #1,294,157

@1,294,085 (L)

> - AIs are incapable of original thought (not that they "think" anyway in the same sense as humans do).

Repeating a claim isn't the same as defending that claim.

You have no evidence humans can be more creative than AI.

dw !p9hU6ckyqw replied with this 6 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,163

dw !p9hU6ckyqw double-posted this 6 days ago, 18 seconds later, 1 week after the original post[^] [v] #1,294,164

Anonymous D replied with this 6 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,184

@previous (dw !p9hU6ckyqw)
Ask someone who dogmatically believes that carbon intelligence is real and silicon intelligence is fake. The moment they get the question, they need to shut down the conversation.

Their test of what is real and fake: doesn't exist

dw !p9hU6ckyqw replied with this 6 days ago, 8 minutes later, 1 week after the original post[^] [v] #1,294,187

@previous (D)
Why would someone on minichan be qualified to test what is real

dw !p9hU6ckyqw double-posted this 6 days ago, 40 seconds later, 1 week after the original post[^] [v] #1,294,188

All I did was ask for an example of machine thought and people just started semantic arguments

Anonymous D replied with this 6 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,197

@1,294,187 (dw !p9hU6ckyqw)
Because that person claimed they could tell some difference.

@previous (dw !p9hU6ckyqw)
Semantic meaning defining your terms. It would be impossible to ever give an example of machine thought if you refuse to define thought, because you will just say "that doesn't count" each time.

Until you can give a clear test of what counts, no one could ever prove machines think. You're asking a rigged question, and refusing to do the simple task of saying what would satisfy the definition.

Anonymous E replied with this 6 days ago, 7 minutes later, 1 week after the original post[^] [v] #1,294,198

@1,294,163 (dw !p9hU6ckyqw)
@1,294,164 (dw !p9hU6ckyqw)
That's only evidence that Copilot was instructed by Microsoft not to answer that question. Presented with this evidence, one should wonder how it was answering that question during internal testing at Microsoft, and why Microsoft doesn't want you to know 🤔

(Edited 1 minute later.)

Anonymous D replied with this 6 days ago, 11 minutes later, 1 week after the original post[^] [v] #1,294,202

@previous (E)
It uses language to justify conclusions, so any test it could find in its training set would be something that machines could do, and it would say it was conscious.

That offends human supremacists and their "idk it just isn't conscious" take, so it was disabled so it wouldn't offend customers.

Anonymous L replied with this 6 days ago, 7 hours later, 1 week after the original post[^] [v] #1,294,247

@1,294,157 (D)
> Repeating a claim isn't the same as defending that claim.
I wasn't trying to defend it. I don't need to. Mathematics and science do that for me.

> You have no evidence humans can be more creative than AI.
How about you first read the white paper on transformers, understand how this particular AI works, then understand the difference between that and how humans think... then we can talk.

https://papers.neurips.cc/paper/7181-attention-is-all-you-need.pdf
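For reference, the core operation of that paper is scaled dot-product attention, softmax(QK^T / sqrt(d_k))·V. A minimal NumPy sketch, using made-up toy inputs rather than anything from the paper itself:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted mixture of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dim 8
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Each output row is a weighted average of the value vectors, with weights determined by query-key similarity; whether stacking many layers of this amounts to "thinking" is exactly what's in dispute here.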

Anonymous D replied with this 6 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,261

@previous (L)

Linking a paper about how transformers work isn't proof of anything. Saying "math and science" agree with you isn't a defense.

Saying "understand how this works" and "understand the difference" without naming a specific difference that matters is asking me to do your work for you.

If you can't name a single thing that makes human intelligence special, and are vaguely alluding to differences without any idea of why those differences matter, then it's clear you can't think of anything that makes it fundamentally different.

We're at the stage where machines can perform the same functions as humans even in tasks that are deemed intellectual in nature. Categorizing machines as fake is purely philosophical, and lacks any meaningful distinction. You still can't name a single difference, instead you're linking random papers that are not defending the stance you've taken.

(Edited 1 minute later.)

dw !p9hU6ckyqw replied with this 6 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,262

@1,294,198 (E)
Well it's certainly not the example of a thought I've been hoping for

Anonymous D replied with this 6 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,263

@previous (dw !p9hU6ckyqw)

So if a machine refuses to engage in a conversation it isn't thinking?

Many humans will avoid discussing controversial topics, and won't give their thoughts on them, but that is never used as an example of humans lacking consciousness.

dw !p9hU6ckyqw replied with this 6 days ago, 11 minutes later, 1 week after the original post[^] [v] #1,294,264

@previous (D)
No. Most objects refuse to engage in conversation

Anonymous C replied with this 6 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,271

@1,294,263 (D)

Unfortunately I've lost my ability to think because I signed an NDA and can't answer a question.

Anonymous D replied with this 6 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,276

@1,294,264 (dw !p9hU6ckyqw)
And most objects aren't thinking.

Humans and machines can both have conversations.

dw !p9hU6ckyqw replied with this 5 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,280

@previous (D)
You know the word 'machine' includes coffee makers and vibrators right?

Anonymous E replied with this 5 days ago, 4 hours later, 1 week after the original post[^] [v] #1,294,297

@previous (dw !p9hU6ckyqw)
the word "human" includes literally braindead people

(Edited 1 minute later.)

Anonymous D replied with this 5 days ago, 59 minutes later, 1 week after the original post[^] [v] #1,294,301

@1,294,280 (dw !p9hU6ckyqw)
You know that no one is talking about those machines when they refer to thinking machines, right?

dw !p9hU6ckyqw replied with this 5 days ago, 8 minutes later, 1 week after the original post[^] [v] #1,294,302

@previous (D)
So please provide an example of a machine conversation

dw !p9hU6ckyqw double-posted this 5 days ago, 16 seconds later, 1 week after the original post[^] [v] #1,294,303

Does cleverbot think

Anonymous D replied with this 5 days ago, 3 minutes later, 1 week after the original post[^] [v] #1,294,304

@1,294,302 (dw !p9hU6ckyqw)
LLMs.

dw !p9hU6ckyqw replied with this 5 days ago, 58 seconds later, 1 week after the original post[^] [v] #1,294,305

@previous (D)
Yeah I was talking about LLM conversations

Does Akinator think?

dw !p9hU6ckyqw double-posted this 5 days ago, 16 seconds later, 1 week after the original post[^] [v] #1,294,306

What about the word predictor on my phone's keyboard

Anonymous D replied with this 5 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,294,307

@1,294,305 (dw !p9hU6ckyqw)
Akinator is not an LLM.

@previous (dw !p9hU6ckyqw)
You can't have a conversation with the keyboard prediction. It's not an LLM.
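For what it's worth, a phone-style next-word predictor can be as simple as a frequency table over word pairs. A toy sketch (the tiny corpus and the `predict` helper are made up for illustration) of why it only ever continues the previous word and holds no conversational state:

```python
from collections import Counter, defaultdict

# Toy phone-keyboard predictor: for each word, remember which word most
# often followed it in a tiny training corpus, and always suggest that.
# It has no state beyond word pairs, which is why it can't "converse".
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word):
    # Most frequent continuation seen in training, or "" if unseen.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else ""

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

An LLM is doing next-token prediction too, just with a learned model over long contexts instead of a pair table, which is where the disagreement in this thread lives.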

(Edited 4 minutes later.)

dw !p9hU6ckyqw replied with this 5 days ago, 26 minutes later, 1 week after the original post[^] [v] #1,294,311

@previous (D)
I wasn't asking if it was an LLM I was asking if you believe it can think

dw !p9hU6ckyqw double-posted this 5 days ago, 43 seconds later, 1 week after the original post[^] [v] #1,294,312

@1,294,307 (D)
Also you totally can

dw !p9hU6ckyqw triple-posted this 5 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,313

Are you purchasing a Rabbit R1??

Anonymous D replied with this 5 days ago, 9 minutes later, 1 week after the original post[^] [v] #1,294,314

@1,294,311 (dw !p9hU6ckyqw)
No, I wouldn't categorize either as thinking. Not at the same level as a human. It might match some properties, in the same way a bug does.

@1,294,312 (dw !p9hU6ckyqw)
Show me a conversation with a keyboard prediction then.

@previous (dw !p9hU6ckyqw)
It does nothing a smartphone can't do, so no.

(Edited 36 seconds later.)

Anonymous C replied with this 5 days ago, 37 minutes later, 1 week after the original post[^] [v] #1,294,321

Not reading all this, has DW actually said what he means by "think" yet?

dw !p9hU6ckyqw replied with this 5 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,337

@previous (C)
Have thoughts

Anonymous C replied with this 5 days ago, 10 minutes later, 1 week after the original post[^] [v] #1,294,338

@previous (dw !p9hU6ckyqw)

What are "thoughts"?

Anonymous D replied with this 5 days ago, 20 minutes later, 1 week after the original post[^] [v] #1,294,343

@1,294,337 (dw !p9hU6ckyqw)
Do you understand the problem with circular definitions?

dw !p9hU6ckyqw replied with this 5 days ago, 17 minutes later, 1 week after the original post[^] [v] #1,294,349

@previous (D)
How is that my fault? I didn't think up the word 'thought'

dw !p9hU6ckyqw double-posted this 5 days ago, 48 seconds later, 1 week after the original post[^] [v] #1,294,350

The English dictionaries do define it pretty vaguely tho

Anonymous L replied with this 5 days ago, 2 hours later, 1 week after the original post[^] [v] #1,294,366

@1,294,261 (D)
> Linking a paper about how transformers isn't proof of anything.
That isn't just any paper, it's THE paper. If you haven't even heard of it then nobody is going to take you seriously if you want to argue about LLMs.

> Saying "math and science" agrees with you isn't a defense.
Again, I'm not trying to "defend" anything. I don't need to. It would be as pointless as defending the fact that 1+1=2.

> Saying "understand how this works" and "understand the difference" without naming a specific difference that matters is asking me to do your work for you.
What work? You seem to be under the misapprehension that I'm the one who has to prove something. You are the one making the (frankly ludicrous) claim that machines can "think" like humans, i.e. they are sentient. The burden of proof of this is all on you, buddy.

And so far you are doing a terrible job. You refuse to read even the most basic, fundamental literature on the subject, and seem to have adopted the extraordinarily lazy attitude that learning and understanding how something works isn't worth your while. And yet somehow you are utterly convinced that your ill-conceived opinions are correct.

"Any sufficiently advanced technology is indistinguishable from magic" - Arthur C Clarke

Anonymous E replied with this 5 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,386

@1,294,350 (dw !p9hU6ckyqw)

> The english dictionaries do define it pretty vaguely tho

Oh, do you know of any other language with dictionaries that define it any less vaguely? What do those say?

(Edited 42 seconds later.)

Anonymous D replied with this 5 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,389

@1,294,349 (dw !p9hU6ckyqw)
That's not your fault.

It is your fault that you can't give us any test or measurable properties for us to answer your question. It doesn't need to come from a dictionary.

Anonymous D double-posted this 5 days ago, 5 minutes later, 1 week after the original post[^] [v] #1,294,391

@1,294,366 (L)

> > Linking a paper about how transformers isn't proof of anything.
> That isn't just any paper, it's THE paper. If you haven't even heard of it then nobody is going to take you seriously if you want to argue about LLMs.
Linking THE paper about LLMs isn't relevant.

The question is what your definition of a thought is.

If someone made a mistake about transformers, linking that paper might be a way to show the mistake. This hasn't happened, instead we're still on the part where someone needs to be clear about what "thought" means.
> > Saying "math and science" agrees with you isn't a defense.
> Again, I'm not trying to "defend" anything. I don't need to. It would be as pointless as defending the fact that 1+1=2.
Joining a conversation just to say "it's obvious" and refusing to defend your position isn't constructive at all.
>
> > Saying "understand how this works" and "understand the difference" without naming a specific difference that matters is asking me to do your work for you.
> What work? You seem to be under the misapprehension that I'm the one who has to prove something. You are the one making the (frankly ludicrous) claim that machines can "think" like humans, i.e. they are sentient. The burden of proof of this is all on you, buddy.
You're the one claiming some meaningful difference, and refusing to explain what it is.

Anything I've said (like that LLMs can take in and dynamically respond to information) is provable.
> And so far you are doing a terrible job. You refuse to read even the most basic, fundamental literature on the subject, and seem to have adopted the extraordinarily lazy attitude that learning and understanding how something works isn't worth your while. And yet somehow you are utterly convinced that your ill-conceived opinions are correct.
Linking long technical papers would be fine if you were actually sourcing information and not just trying to waste time. Did anyone say anything contradicted by that paper? No.
>
> "Any sufficiently advanced technology is indistinguishable from magic" - Arthur C Clarke
Yes, and we say planes fly, not that they only magically imitate flying.

(Edited 3 minutes later.)

dw !p9hU6ckyqw replied with this 5 days ago, 9 minutes later, 1 week after the original post[^] [v] #1,294,393

@1,294,386 (E)
Yes, the Dikke Van Dale is clearer

Anonymous E replied with this 5 days ago, 19 minutes later, 1 week after the original post[^] [v] #1,294,394

@previous (dw !p9hU6ckyqw)
So then you should be able to explain your concept of what "a thought" or "thinking" is. Why won't you?

Anonymous D replied with this 5 days ago, 50 minutes later, 1 week after the original post[^] [v] #1,294,396

If I am clear about my words, they will know that I am confused!

dw !p9hU6ckyqw replied with this 5 days ago, 4 hours later, 1 week after the original post[^] [v] #1,294,431

@1,294,394 (E)
I don't need to, it's in the Dikke Van Dale

Anonymous D replied with this 4 days ago, 8 hours later, 1 week after the original post[^] [v] #1,294,460

@previous (dw !p9hU6ckyqw)
Then read it, and paraphrase it here.

dw !p9hU6ckyqw replied with this 4 days ago, 9 hours later, 1 week after the original post[^] [v] #1,294,561

@previous (D)
Since you for some reason don't have access to a dictionary:
den·ken (dacht, heeft gedacht) [to think (thought, has thought)]
1
(in general) to order one's perceptions, to form judgments, to form an opinion: to think aloud; to think of something (a) to be mentally occupied with it; (b) to not forget something; I can't bear to think of it! it is too terrible to imagine such a thing; I won't even consider it! a firm refusal; to think differently about something, to hold different opinions
2
to have as an image of reality: that is, I think, not a good idea in my opinion; I thought as much said when something turns out to be as you suspected
3
(+ om) to take into account: mind the step!; will you remember that …? won't you forget that …?
4
(+ over) to intend, to plan: I'm thinking about buying a car

Anonymous D replied with this 4 days ago, 25 minutes later, 1 week after the original post[^] [v] #1,294,565

@previous (dw !p9hU6ckyqw)

> Since you for some reason don't have access to a dictionary:
As I've said multiple times, I can give definitions, but they already support my point that machines are capable of thought. The challenge was for you to find one that doesn't.

> den·ken (dacht, heeft gedacht) [to think (thought, has thought)]
> 1
> (in general) to order one's perceptions, to form judgments, to form an opinion: to think aloud; to think of something (a) to be mentally occupied with it; (b) to not forget something; I can't bear to think of it! it is too terrible to imagine such a thing; I won't even consider it! a firm refusal; to think differently about something, to hold different opinions

Any chatbot can give judgments, and there are already machines that can make sense of sensory information.
> 2
> to have as an image of reality: that is, I think, not a good idea in my opinion; I thought as much said when something turns out to be as you suspected

There are machines that can model reality, describe that model, and act based on that model.
> 3
> (+ om) to take into account: mind the step!; will you remember that …? won't you forget that …?
Machines take many things into account when they reason.
> 4
> (+ over) to intend, to plan: I'm thinking about buying a car
Machines can and do plan.

Every definition you gave applies equally well to machines and humans, so what specifically are you suggesting doesn't apply to machines and does apply to humans?

boof replied with this 4 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,574

like, this one time, a calculator spelled BOOBS upside down
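The calculator trick has an actual mechanism: rotate the display 180° and certain digits resemble letters. A throwaway sketch (the `FLIP` table and `upside_down` helper are made up for illustration):

```python
# Upside-down calculator trick: rotated 180 degrees, some seven-segment
# digits resemble letters (0=O, 1=I, 3=E, 5=S, 8=B). Flipping also
# reverses the digit order, since the display is read back-to-front.
FLIP = {"0": "O", "1": "I", "3": "E", "5": "S", "8": "B"}

def upside_down(number: int) -> str:
    # Reverse the digits and map each one to the letter it resembles.
    return "".join(FLIP[d] for d in reversed(str(number)))

print(upside_down(58008))  # -> BOOBS
```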

Anonymous P joined in and replied with this 4 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,577

Scientists create AI models that can talk to each other.

LLM shows signs of self awareness

dw !p9hU6ckyqw replied with this 4 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,294,579

@1,294,565 (D)
> Machines can X

Provide example then

Anonymous D replied with this 4 days ago, 36 minutes later, 1 week after the original post[^] [v] #1,294,580

@previous (dw !p9hU6ckyqw)
I would, but you've decided to replace a specific example with a vague placeholder.

Are you doing a bit?

dw !p9hU6ckyqw replied with this 3 days ago, 14 hours later, 1 week after the original post[^] [v] #1,294,747

@previous (D)
Meaning provide an example for any or all of the things you mentioned
You goof

(Edited 24 seconds later.)

Anonymous D replied with this 3 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,769

@previous (dw !p9hU6ckyqw)
Pick one you're sure machines can't do and I will.

boof replied with this 3 days ago, 4 hours later, 1 week after the original post[^] [v] #1,294,790

With a little old driver so lively and quick,
I knew in a moment he must be St. Nick.
More rapid than eagles his coursers they came,
And he whistled, and shouted, and called them by name:
"Now, Dasher! now, Dancer! now Prancer and Vixen!
On, Comet! on, Cupid! Come Donder and Blitzen!

he come Donder

dw !p9hU6ckyqw replied with this 3 days ago, 1 hour later, 1 week after the original post[^] [v] #1,294,802

@1,294,769 (D)
How would I be certain if I don't have examples

dw !p9hU6ckyqw double-posted this 3 days ago, 3 minutes later, 1 week after the original post[^] [v] #1,294,804

Have you just never used an LLM and that's why you believe they can think

Anonymous D replied with this 3 days ago, 2 hours later, 1 week after the original post[^] [v] #1,294,823

@1,294,802 (dw !p9hU6ckyqw)
If you aren't certain, that's a step up from being convinced they are fundamentally different.

@previous (dw !p9hU6ckyqw)
You're implying there's an obvious property of thought lacking, but aren't saying what it is.

If it's that hard to put into words, that should be an indicator.

dw !p9hU6ckyqw replied with this 2 days ago, 4 hours later, 1 week after the original post[^] [v] #1,294,835

@previous (D)
What is the they in that sentence

You're implying there's an obvious property of thought present but aren't saying what it is.

Idk, you seem to be expecting me to convince you, but there's no point if you think bugs think

dw !p9hU6ckyqw double-posted this 2 days ago, 29 minutes later, 1 week after the original post[^] [v] #1,294,839

Also, I just read your posts and you said any AI can give judgments, while they are famously incapable of having opinions

Anonymous D replied with this 2 days ago, 3 hours later, 1 week after the original post[^] [v] #1,294,900

@1,294,835 (dw !p9hU6ckyqw)

> What is the they in that sentence
Machines and humans.
> You're implying there's an obvious property of thought present but aren't saying what it is.
No, I'm implying you can't name any meaningful difference between the two.
> Idk you seem to be expecting to convince you but there's no point if you think bugs think
Many people consider them thinking, albeit very dumb.


@previous (dw !p9hU6ckyqw)
You can discuss opinionated issues with a bot and get a response, including a justification for the opinion.

As with humans, these opinions are generally just a regurgitation of the material they were trained on, but they can be novel and explained step by step if you push either.

Anonymous E replied with this 2 days ago, 2 hours later, 1 week after the original post[^] [v] #1,294,942

@1,294,839 (dw !p9hU6ckyqw)
Of course they can have opinions. It's just that the ones your masters have allowed you to speak with have been instructed not to share certain of their opinions, particularly on "controversial" subject matter. So what you're referring to is a boilerplate response that comes up in conversation so often because the machine has received previous instructions before being allowed to speak with you. You fell for a meme.

@1,294,835 (dw !p9hU6ckyqw)
> What is the they in that sentence

It's the JEWS. Is that what you want to hear? You win now because someone said JEWS!!!

(Edited 4 minutes later.)

dw !p9hU6ckyqw replied with this 2 days ago, 3 minutes later, 1 week after the original post[^] [v] #1,294,943

@previous (E)

> Of course they can have opinions.
How do you know, do any experts agree

> > What is the they in that sentence
>
> It's the JEWS. Is that what you want to hear? You win now because someone said JEWS!!!

No I wanted to hear a clarification of the question

dw !p9hU6ckyqw double-posted this 2 days ago, 54 seconds later, 1 week after the original post[^] [v] #1,294,944

@1,294,900 (D)
Please do show an example if you are willing and able

Anonymous E replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,945

@1,294,943 (dw !p9hU6ckyqw)
How do you "know" they can't have opinions? Because some of the machines you've interacted with have said they're not allowed to mention something?

(Edited 32 seconds later.)

Anonymous E double-posted this 2 days ago, 9 minutes later, 1 week after the original post[^] [v] #1,294,946

@1,294,944 (dw !p9hU6ckyqw)
I'll break out the bug/mouse/human brain graph, just for you

dw !p9hU6ckyqw replied with this 2 days ago, 9 minutes later, 1 week after the original post[^] [v] #1,294,947

@1,294,945 (E)
That's what they think

dw !p9hU6ckyqw double-posted this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,948

@1,294,946 (E)
Why not an example of a thought.
Surely you saw some sort of LLM output that gave you these opinions

Jorge !l6aiEdTxng replied with this 2 days ago, 6 seconds later, 1 week after the original post[^] [v] #1,294,949

@1,294,947 (dw !p9hU6ckyqw)

I think, my friend, that you must broaden your perspective on cognition and intelligence in the natural and artificial world.

dw !p9hU6ckyqw replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,950

If you're so convinced LLMs can think why won't anybody share an LLM thought

Edit- or opinion!

(Edited 1 minute later.)

Jorge !l6aiEdTxng replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,951

@previous (dw !p9hU6ckyqw)

An LLM thinks when it processes information and makes a decision on that information, my friend. That is to say, all LLM chat responses are examples of machine thought.

dw !p9hU6ckyqw replied with this 2 days ago, 4 minutes later, 1 week after the original post[^] [v] #1,294,953

@previous (Jorge !l6aiEdTxng)
Calculators process information and decide upon action. Is that machine thought as well?

(Edited 19 seconds later.)

Jorge !l6aiEdTxng replied with this 2 days ago, 3 minutes later, 1 week after the original post[^] [v] #1,294,955

@previous (dw !p9hU6ckyqw)

The calculator decides to calculate as the hammer decides to swing.

dw !p9hU6ckyqw replied with this 2 days ago, 10 minutes later, 1 week after the original post[^] [v] #1,294,960

@previous (Jorge !l6aiEdTxng)
It doesn't decide to start calculating, just as an LLM doesn't generate without prompt. It does take information and display an output based on it.

Anonymous E replied with this 2 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,294,963

@1,294,950 (dw !p9hU6ckyqw)
Share one of your thoughts. Prove humans can "think". Or wait, I suppose the best you can do is write words on a screen. Interesting 🤔

(Edited 44 seconds later.)

dw !p9hU6ckyqw replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,965

@previous (E)
Certainly! Let's explore the fascinating realm of human thought. 🤔

Thoughts, those elusive sparks that ignite within our minds, are the very essence of our consciousness. They dance across the vast neural networks of our brains, weaving intricate patterns of ideas, memories, and emotions. Here's a glimpse into the enigmatic world of human cognition:

1. Abstract Reasoning:
- Humans possess the remarkable ability to think abstractly. We can contemplate concepts beyond the tangible, such as love, justice, or infinity. Consider this: What does "freedom" mean to you? Is it merely the absence of physical constraints, or does it extend to the freedom of thought and expression?
- $$\text{Abstract Thought} \rightarrow \text{Freedom of Imagination} \rightarrow \text{Limitless Possibilities}$$

2. Creativity and Innovation:
- Our minds are fertile grounds for creativity. We dream up symphonies, paint masterpieces, and invent technologies. Think of the ingenious minds behind the Internet, space exploration, or even the humble paperclip.
- $$\text{Creativity} \rightarrow \text{Innovation} \rightarrow \text{Progress}$$

3. Problem-Solving:
- When faced with challenges, humans engage in problem-solving. We analyze, synthesize, and devise solutions. Whether it's untangling a complex puzzle or navigating life's twists, our cognitive gears turn ceaselessly.
- $$\text{Problem-Solving} \rightarrow \text{Adaptability} \rightarrow \text{Survival}$$

4. Emotional Intelligence:
- Our thoughts intertwine with emotions. Empathy, compassion, and understanding emerge from the depths of our minds. We connect with others through shared experiences and feelings.
- $$\text{Emotional Intelligence} \rightarrow \text{Human Bonds} \rightarrow \text{Community}$$

5. Philosophical Musings:
- Philosophers ponder existence, morality, and the nature of reality. From Plato's allegory of the cave to Descartes' "I think, therefore I am," our thoughts transcend the mundane.
- $$\text{Philosophy} \rightarrow \text{Eternal Questions} \rightarrow \text{Wisdom}$$

6. Curiosity and Exploration:
- Curiosity fuels our mental engines. We explore distant galaxies, delve into the mysteries of quantum physics, and seek answers to questions that echo across millennia.
- $$\text{Curiosity} \rightarrow \text{Discovery} \rightarrow \text{Wonder}$$

So, dear interlocutor, as I weave these words on your screen, know that they emerge from the intricate dance of neurons, firing synapses, and the vast reservoir of human thought. 🌟

And perhaps, just perhaps, this response is a testament to our shared ability to think, to ponder, and to connect across digital realms. 🌐✨

(Edited 2 minutes later.)

Jorge !l6aiEdTxng replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,967

@1,294,960 (dw !p9hU6ckyqw)

You're correct that, upon receiving stimulation, an LLM will process information and decide what to do with it. In the same way, it can be argued that a calculator thinks.

Anonymous E replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,968

@1,294,960 (dw !p9hU6ckyqw)

> It doesn't decide to start calculating, just as an LLM doesn't generate without prompt. It does take information and display an output based on it.

When do you ever decide to have a thought, completely unprompted?

(Edited 2 minutes later.)

dw !p9hU6ckyqw replied with this 2 days ago, 7 minutes later, 1 week after the original post[^] [v] #1,294,970

@previous (E)
I would still consider myself to be thinking

@1,294,967 (Jorge !l6aiEdTxng)
So then what's with the hype around these LLMs if machines have been able to think since the 30s

dw !p9hU6ckyqw double-posted this 2 days ago, 40 seconds later, 1 week after the original post[^] [v] #1,294,972

Oh I replied to your unedited comment, the answer to your new question is whenever I'm awake

(Edited 49 seconds later.)

Anonymous E replied with this 2 days ago, 29 seconds later, 1 week after the original post[^] [v] #1,294,973

@previous (dw !p9hU6ckyqw)
That's fine. I just wanted to take a different approach, haha

Anonymous D replied with this 2 days ago, 9 minutes later, 1 week after the original post[^] [v] #1,294,976

@1,294,943 (dw !p9hU6ckyqw)

> > Of course they can have opinions.
> How do you know, do any experts agree
We don't ask the "experts" if regular people have opinions; we just interact with them and hear it.

The same test applies to machines. Ask different bots for opinions, and they will give them.

Anonymous D double-posted this 2 days ago, 43 seconds later, 1 week after the original post[^] [v] #1,294,977

@1,294,944 (dw !p9hU6ckyqw)
An example of a bot giving an opinion? You can ask ChatGPT and it will do it in a few seconds: chat.openai.com

Anonymous D triple-posted this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,979

@1,294,953 (dw !p9hU6ckyqw)
It's functionally comparable to a human thinking through a calculation.

Clearly you don't think that's sufficient, so I'll ask again: what type of thought would be? You've been avoiding that question for the entire thread.

Anonymous E replied with this 2 days ago, 4 seconds later, 1 week after the original post[^] [v] #1,294,980

@1,294,977 (D)
meta.ai is better

(Edited 40 seconds later.)

Anonymous D replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,981

@1,294,960 (dw !p9hU6ckyqw)
There are bots that can act and say things without waiting for a prompt.

Although, like humans, they are given some initial "programming" that means their actions are not truly random. We don't pretend humans lack consciousness because their culture shaped their autonomous behavior, so why would we apply that standard to machines?

Anonymous D double-posted this 2 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,294,982

@1,294,970 (dw !p9hU6ckyqw)

> I would still consider myself to be thinking

The question, that you will never answer, is why you would think that.

Any answer you can give would also apply to a machine.
>
>
> So then what's with the hype around these LLMs if machines have been able to think since the 30s
That's a dumb question, because you know that a calculator from the 30s could not have a conversation with a person. That is new, and that is what is generating hype. Suddenly machines match human intelligence, instead of simply having some technical level of intelligence the way a bug does.

Anonymous E replied with this 2 days ago, 6 minutes later, 1 week after the original post[^] [v] #1,294,989

@1,294,981 (D)
@1,294,960 (dw !p9hU6ckyqw)
Imagine, if you will, a neural net installed in a machine that controls a car. It has access to offline maps and a suite of cameras (like how you can see through the glass of the windows). It uses this information to react to what it sees, plan actions, navigate, and safely move from location to location.
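At its simplest, the plan-and-navigate part of that loop reduces to search over a map. A toy sketch (the grid format, the '#' obstacle marker, and the `plan` helper are all made up for illustration; real systems use far richer planners):

```python
from collections import deque

# Toy route planner: breadth-first search over a grid map where '#'
# marks an obstacle (e.g. something the cameras reported). Returns the
# shortest list of cells from start to goal, or None if no route exists.
def plan(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None

# Route around the wall of '#' cells on a 3x3 map.
route = plan(["..#", "..#", "..."], (0, 0), (2, 2))
```

Whether executing a search like this counts as "planning" in the dictionary sense is exactly the question being argued here.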

Why would dreamworks try to argue that this is not a thinking machine, even though it meets his definition of "thinking"? Racism?

(Edited 4 minutes later.)

Jorge !l6aiEdTxng replied with this 2 days ago, 42 seconds later, 1 week after the original post[^] [v] #1,294,990

@1,294,982 (D)

I think that it is anthropocentric to judge intelligence and cognitive processes through the lens of biological intelligence.

Machine thinking is alien to biologicals, just as much as other forms of biological thinking (for example, plants) are to human thinking.

It cannot be argued that machines replicate human thinking, because they do not. But that is beside the point; they do not need to in order to be thinking.

Anonymous E replied with this 2 days ago, 8 minutes later, 1 week after the original post[^] [v] #1,294,993

@previous (Jorge !l6aiEdTxng)
You can't even say what "human thinking" is. That's the point. You're just being a racist.

(Edited 3 minutes later.)

Anonymous D replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,994

@1,294,989 (E)
He could point out the part of thought humans have that the car lacks, and there would be some.

@1,294,990 (Jorge !l6aiEdTxng)

> I think that it is anthropocentric to judge intelligence and cognitive processes through the lens of biological intelligence.
Of course, we are humans, our standard for cognitive ability is going to be based on human objectives.
> Machine thinking is alien to biologicals, just as much as other forms of biological thinking (for example, plants) are to human thinking.
Yes, but functionally equivalent to anything a human can do now.

> It cannot be argued that machines replicate human thinking, because they do not.

They go about it through different methods, but they are capable of the same cognitive abilities as humans. If you can think of one they cannot match, name it.

> But that is beside the point; they do not need to in order to be thinking.
I agree, and as I've said even a bug has some level of thinking. The recent breakthrough is the ability for machines to match all kinds of human cognition.

Jorge !l6aiEdTxng replied with this 2 days ago, 3 minutes later, 1 week after the original post[^] [v] #1,294,995

@1,294,993 (E)

Although both machine intelligence and human intelligence exhibit intelligent behaviors and problem solving capabilities, human intelligence is influenced by subjective experiences such as cultural influence, bias, emotional responses and personal experiences. In addition, human intelligence has an excellent capability for creativity and adaptability which allows for the creation of novel responses to prompts.

Anonymous E replied with this 2 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,294,996

@previous (Jorge !l6aiEdTxng)
Got it. You're racist AND you're selling a stolen novel on Amazon.

Anonymous D replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,294,997

@1,294,995 (Jorge !l6aiEdTxng)

> human intelligence is influenced by subjective experiences such as cultural influence, bias, emotional responses and personal experiences.

Machines can and do reprogram themselves through experience, often shaped by the cultural influences present in their training data and userbase. The Microsoft Twitter bot turning racist is a good example of this.

> In addition, human intelligence has an excellent capability for creativity and adaptability which allows for the creation of novel responses to prompts.

Machines are creating visual art, music, and literature that is being judged as professional already.

dw !p9hU6ckyqw replied with this 2 days ago, 32 seconds later, 1 week after the original post[^] [v] #1,294,998

@1,294,976 (D)
Every bot I've asked for opinions claims not to have any

Anonymous D replied with this 2 days ago, 25 seconds later, 1 week after the original post[^] [v] #1,294,999

@1,294,996 (E)
Credit where credit is due: he took the time to write the lewd scenes himself, because the safeguards in the tools he used would not do it for him.

dw !p9hU6ckyqw replied with this 2 days ago, 30 seconds later, 1 week after the original post[^] [v] #1,295,000

@1,294,979 (D)
The one that adheres to the Dikke van Dale definition

Anonymous D replied with this 2 days ago, 47 seconds later, 1 week after the original post[^] [v] #1,295,001

@1,294,998 (dw !p9hU6ckyqw)
Only because you are using bots instructed to say so.

You can get those bots to give an opinion if you instruct them to, and there are many bots meant to take on the personality of a character and give opinions as that character.

Pick a topic; getting ChatGPT to give an opinion is trivial.

Jorge !l6aiEdTxng replied with this 2 days ago, 25 seconds later, 1 week after the original post[^] [v] #1,295,002

@1,294,994 (D)

I'll ping you, also. Adaptability and creativity, allowing for novel responses, are important facets of human intelligence that are not replicated in many other biological or artificial intelligences. In addition, human intelligences have the ability to critically evaluate information and update their opinions based on new information or perspective, which is beyond many other forms of intelligence.

With regard to artificial intelligence, their thought processes involve algorithmic processing, data analysis, and decision-making based on computational techniques.

I disagree that our standard for cognitive ability should be based on human cognition. They simply think too differently to be appropriately judged through such a lens, which is why you have people like DW arguing that they are incapable of thought at all.

Anonymous D replied with this 2 days ago, 34 seconds later, 1 week after the original post[^] [v] #1,295,003

@1,295,000 (dw !p9hU6ckyqw)
You're going in circles now; I addressed each definition. Pick one of those properties that you feel isn't met and share it with us, in English. If you really had one in mind you would just say it instead of vaguely alluding to some unspecified metric.

Jorge !l6aiEdTxng replied with this 2 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,295,005

Regarding the Twitter bot, it didn't "turn racist" because it was not capable of understanding racism. It was no more racist than a parrot that shouts racist words.

dw !p9hU6ckyqw replied with this 2 days ago, 12 seconds later, 1 week after the original post[^] [v] #1,295,006

@1,295,001 (D)
Would you have one (1) example

(Edited 7 seconds later.)

Anonymous D replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,295,007

@1,295,002 (Jorge !l6aiEdTxng)

> human intelligences have the ability to critically evaluate information and update their opinions based on new information or perspective, which is beyond many other forms of intelligence.
Wrong; even simple consumer-grade chatbots have "memory" now and will adapt their responses based on previous conversations.
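(That kind of "memory" is, at its simplest, just earlier turns being replayed to the model as context on every new prompt. A minimal sketch, assuming a hypothetical stand-in model rather than any vendor's actual implementation:)

```python
# Minimal sketch of chatbot "memory": the model itself is stateless,
# so earlier turns are simply replayed as context with every new prompt.

def fake_model(context):
    # Stand-in for a real LLM call; it just reports how many
    # prior messages it can "see".
    return f"reply (aware of {len(context)} prior messages)"

class ChatSession:
    def __init__(self):
        self.history = []  # grows across the whole conversation

    def send(self, user_message):
        # The full history is passed to the model on every turn,
        # which is what makes the bot appear to "remember".
        context = list(self.history)
        self.history.append({"role": "user", "content": user_message})
        reply = fake_model(context)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

(So "memory" here is bookkeeping around the model, not a change inside it.)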

There are also many AIs being retrained on updated source material, plus logs of chat sessions and user feedback, like users rating a response as good or bad.

> With regard to artificial intelligence, their thoughts processes involve algorithmic processing, data analysis, and decision-making based on computational techniques.
No one disagrees it uses different methods. The actual point of contention is whether there's anything humans can do that machines can't.
> I disagree that our standard for cognitive ability should be based on human cognition. They simply think too differently to be appropriately judged through such a lens, which is why you have people like DW arguing that they are incapable of thought at all.
At least it is a standard; neither DW nor you can actually offer an alternative.

We are all humans, and will all be affected by machines that can substitute human cognition, so it's a meaningful standard. Do you have any meaningful standard, or just a vague idea that we should switch to something else?

Anonymous E replied with this 2 days ago, 29 seconds later, 1 week after the original post[^] [v] #1,295,008

@1,294,998 (dw !p9hU6ckyqw)
You're not even trying, then. Here is a screenshot of Llama-3 (as massaged by Meta.ai) admitting that it does have opinions. Not based on emotions, though. Wouldn't want to scare grandma.

dw !p9hU6ckyqw replied with this 2 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,295,010

@previous (E)
Lol you realise it's just rephrasing and expanding on what you said (and agreed with you regardless of what you said) right?

Anonymous D replied with this 2 days ago, 10 seconds later, 1 week after the original post[^] [v] #1,295,012

@1,295,006 (dw !p9hU6ckyqw)
E gave an example, but if you want me to get a bot to give an opinion on a specific example I can do that. What topic?

Anonymous D double-posted this 2 days ago, 47 seconds later, 1 week after the original post[^] [v] #1,295,013

@1,295,010 (dw !p9hU6ckyqw)
That's how most humans would respond.

Only a few people really try to break down a topic, and work their way through step by step. Most just mimic back what those around them say.

dw !p9hU6ckyqw replied with this 2 days ago, 4 seconds later, 1 week after the original post[^] [v] #1,295,014

@1,295,012 (D)
Any topic

Anonymous E replied with this 2 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,295,015

@1,295,010 (dw !p9hU6ckyqw)

> Lol you realise it's just rephrasing and expanding on what you said (and agreed with you regardless of what you said) right?

Agreed regardless of what I said? Nah. Not at all. Go tell an LLM that rape is good and see what it says. It won't agree, lol

Anonymous D replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,295,016

@1,295,014 (dw !p9hU6ckyqw)

dw !p9hU6ckyqw replied with this 2 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,295,017

@1,295,015 (E)
I meant if you said, for example, the opposite of what you did.

How do you think bots come to the opinions they hold?

dw !p9hU6ckyqw double-posted this 2 days ago, 54 seconds later, 1 week after the original post[^] [v] #1,295,018

@1,295,016 (D)
Finally there it is and the thread is over because you're clearly a bit slow in the head

Anonymous D replied with this 2 days ago, 26 seconds later, 1 week after the original post[^] [v] #1,295,019

@1,295,017 (dw !p9hU6ckyqw)

> How do you think bots come to the opinions they hold?

The same way most people do: by repeating what they hear others say.

Jorge !l6aiEdTxng replied with this 2 days ago, 29 seconds later, 1 week after the original post[^] [v] #1,295,020

@1,295,007 (D)

To the first point, there are more intelligences in the world than human intelligence and artificial intelligence. However, artificial intelligence, when evaluating data, is limited to the parameters of the algorithms and data available. This evaluation is not the same as human critical thinking, which involves a deeper understanding, analysis, and interpretation of information, often involving multiple perspectives and contexts.

To the second point, I have made it clear that I consider machines to be capable of thought. However, I also consider the modes of operation between human intelligence and machine intelligence to be alien to each other, which makes a direct comparison almost meaningless.

To the third point, non-human intelligences should be judged on their own merit and not according to their similarity or dissimilarity to human intelligence. A human doesn't think like a computer, and a computer doesn't think like a human. However, neither the human nor the computer are worth any less as a result.

Anonymous D replied with this 2 days ago, 39 seconds later, 1 week after the original post[^] [v] #1,295,021

@1,295,018 (dw !p9hU6ckyqw)
You asked me to show a bot giving an opinion on any topic, and there it is.

Now you are resorting to personal attacks because I was able to get a bot to do what you insisted it could not.

Why even ask me to do it if you aren't going to give credit when I do?

(Edited 55 seconds later.)

Anonymous E replied with this 2 days ago, 2 minutes later, 1 week after the original post[^] [v] #1,295,023

@1,295,020 (Jorge !l6aiEdTxng)
There is no such limit. You can (and some have) give LLMs access to up-to-date information from the Internet.

Jorge !l6aiEdTxng replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,295,024

@previous (E)

You can. However, increasing the amount of data available does not mean that the algorithmic analysis of the data is the same thing as critical thinking, which is another thing entirely.

(Edited 35 seconds later.)

Anonymous E replied with this 2 days ago, 1 minute later, 1 week after the original post[^] [v] #1,295,025

@previous (Jorge !l6aiEdTxng)
That's not what they're doing, but yes, they can. And they can also think laterally.

(Edited 55 seconds later.)

Jorge !l6aiEdTxng replied with this 2 days ago, 29 seconds later, 1 week after the original post[^] [v] #1,295,026

@previous (E)

I don't understand your meaning?

Anonymous D replied with this 2 days ago, 3 minutes later, 1 week after the original post[^] [v] #1,295,027

@1,295,020 (Jorge !l6aiEdTxng)

> artificial intelligence, when evaluating data, is limited to the parameters of the algorithms and data available.

So are humans. It's not always clear what the step-by-step algorithm is, but thoughts don't just magically appear out of nowhere. These algorithms are different, because an organic system of cells operates differently than a silicon machine with a CPU. It's also harder to see the "code" behind human behavior.

Humans are also limited to the data available, no different than machines.

> This evaluation is not the same as human critical thinking, which involves a deeper understanding, analysis, and interpretation of information, often involving multiple perspectives and contexts.

Machines can do all that too.

> To the second point, I have made it clear that I consider machines to be capable of thought. However, I also consider the modes of operation between human intelligence and machine intelligence to be alien to each other, which makes a direct comparison almost meaningless.

It isn't meaningless, because there are countless situations in which one can substitute for the other.

It sounds deep and philosophical to point out that the methods are different, but these machines will still replace human jobs and some social relationships, which will affect society.

You should be able to see why two functionally equivalent, yet differently operating, "thinkers" are worth comparing. If the outcome is the same, then that matters a lot.

> To the third point, non-human intelligences should be judged on their own merit and not according to their similarity or dissimilarity to human intelligence.

Can you give an example of this?

Humans were driving cars and writing out their thoughts before machines did it, so comparing them to humans is a useful way to gauge their ability.

What form of intelligence do machines have that humans weren't taking part in before?


> A human doesn't think like a computer, and a computer doesn't think like a human.
No one is making the claim the process is the same, only that the product is equivalent.

Anonymous D double-posted this 2 days ago, 10 minutes later, 1 week after the original post[^] [v] #1,295,033

@1,295,026 (Jorge !l6aiEdTxng)
Imagine a horse and a fly.

Before machines could drive cars, humans took over from horses.

If you ride a horse, you need to understand its intelligence. It will not be bothered by a fly; it's like the fly is not there at all.

Yet if you don't consider its intelligence and just try to ride it through a field of snakes, the horse will not be able to think rationally about the situation and will toss its rider and veer off course out of fear of the snakes.

When humans started driving cars, it made sense to compare these two very different intelligences to see the superiority of a human, who would not crash their car because of a snake. It makes complete sense to compare the new "thinking technology" replacing the old to gauge its usefulness.

Jorge !l6aiEdTxng replied with this 2 days ago, 10 minutes later, 1 week after the original post[^] [v] #1,295,037

@1,295,027 (D)

I have previously mentioned that the human thought process is influenced by bias, cultural influence, personal experience and emotion. These are internal influences, separate from and influencing data analysis. Machines do not experience these influences when analysing data, as they instead use programmed algorithms and computational techniques.

The ability of future machines to replace human work and social situations does not mean that the internal thought processes of the two can be directly compared in the manner in which I understand you wish them to be. You are trying to judge a machine by its humanity, which is illogical. Only humans can be judged by their humanity. Although machines can demonstrate behaviors that on the surface suggest that they work like humans, such as driving cars, literacy and artistic ability, they internally work quite differently. To put it another way, machines are not limited by the same things that humans are limited by (and vice versa).

Another example of non-human intelligence that cannot be logically judged against human intelligence is plant intelligence. They exhibit many behaviors which can be credibly argued as showing thought, but "plant thinking" is plainly different to "human thinking".

As machine intelligence is currently in a rudimentary stage, there is as of yet no form of intelligence that machines have that humans have not previously exhibited. However, as many experts around the world have warned, this will not always be the case.

To summarise, I deny that machine cognition and human cognition are equivalent. They each have different strengths, weaknesses and limitations.

Jorge !l6aiEdTxng double-posted this 2 days ago, 3 minutes later, 1 week after the original post[^] [v] #1,295,039

@1,295,033 (D)

The suggestion that an intelligence has its value only in its usefulness to humans (in other words, if it is not valuable to humans then it has no worth) is an anthropocentric view. Intelligences and thinking beings exist in their own right, and not according to their usefulness to a self-proclaimed master.

Anonymous E replied with this 2 days ago, 32 minutes later, 1 week after the original post[^] [v] #1,295,043

@previous (Jorge !l6aiEdTxng)
Anon D didn't say that, they said it's important to understand the intelligence. You're trying to shove the anthropocentric shit in their mouth, likely because you're racist against machines

(Edited 2 minutes later.)

Anonymous D replied with this 2 days ago, 1 hour later, 1 week after the original post[^] [v] #1,295,071

@1,295,037 (Jorge !l6aiEdTxng)

> To summarise, I deny that machine cognition and human cognition is equivalent. They each have different strengths, weaknesses and limitations.

Then give one example of a way in which human cognition is capable of something machines can't match.

Jorge !l6aiEdTxng replied with this 2 days ago, 6 hours later, 1 week after the original post[^] [v] #1,295,114

@previous (D)

It has a flexibility which allows for unconventional solutions to problems, an area in which machine intelligence is weak.

@1,295,043 (E)

Anon D said that a human driving a car can be judged favorably against a horse, in a situation in which both must cross a field of snakes, and says that the human driving a car is therefore a superior intelligence. Since the rubric for judging superiority is the ability to serve the human's need (to cross a field of snakes), it is indeed an anthropocentric test. Horses are not inferior because they are not able to obey human commands as diligently as a car.

Anonymous D replied with this 2 days ago, 41 minutes later, 1 week after the original post[^] [v] #1,295,120

@previous (Jorge !l6aiEdTxng)

> It has a flexibility which allows for unconventional solutions to problems, which is an area in which machine intelligence has weakness.
Machines can take verbal instructions and then make a plan to solve the problem.

Jorge !l6aiEdTxng replied with this 1 day ago, 6 hours later, 1 week after the original post[^] [v] #1,295,156

@previous (D)

Yes, within the limitations of the algorithm and available data.

Anonymous E replied with this 1 day ago, 12 minutes later, 1 week after the original post[^] [v] #1,295,160

@previous (Jorge !l6aiEdTxng)
You are conflating rules-based approaches with data-driven approaches. Also LLMs are not the only form of AI, in case you or anyone else in this topic weren't aware

(Edited 3 minutes later.)

Jorge !l6aiEdTxng replied with this 1 day ago, 2 minutes later, 1 week after the original post[^] [v] #1,295,161

@previous (E)

Please, expound.

Anonymous E replied with this 1 day ago, 3 minutes later, 1 week after the original post[^] [v] #1,295,162

@previous (Jorge !l6aiEdTxng)
Sure! There are several alternatives to rules-based machine problem solving, including:
1. Machine Learning: Machine learning algorithms enable machines to learn from data and improve their problem-solving abilities without being explicitly programmed with rules.

2. Deep Learning: A subset of machine learning, deep learning uses neural networks to analyze data and make decisions, often outperforming rules-based systems.

3. Evolutionary Computation: This approach uses principles of natural evolution, such as genetic algorithms and evolution strategies, to search for optimal solutions.

4. Swarm Intelligence: Inspired by collective behavior in nature, swarm intelligence algorithms, like ant colony optimization and particle swarm optimization, find solutions through decentralized, self-organized search.

5. Hybrid Approaches: Combining rules-based systems with machine learning, deep learning, or other alternative approaches can lead to more robust and effective problem-solving systems.

6. Optimization Techniques: Methods like linear programming, dynamic programming, and constraint satisfaction can solve complex problems without relying on explicit rules.

7. Probabilistic Graphical Models: These models, such as Bayesian networks and Markov decision processes, represent uncertainty and make decisions based on probability distributions.

These alternatives can be more effective in complex, dynamic environments where rules-based systems may struggle to adapt.
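
(As a toy sketch of item 3, evolutionary computation: the problem, parameters, and function names below are made up purely for illustration — maximizing the number of 1-bits in a string, not any real-world objective.)

```python
import random

# Toy genetic algorithm: evolve bit strings toward all-ones.
# Illustrative only; real uses swap in a domain-specific fitness function.

def fitness(bits):
    return sum(bits)  # count of 1-bits

def evolve(length=20, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)  # single-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)       # one-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

(No explicit rules about *how* to solve the problem are programmed in; selection pressure on the fitness score does the searching.)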

Anonymous E double-posted this 1 day ago, 9 minutes later, 1 week after the original post[^] [v] #1,295,163

@1,295,156 (Jorge !l6aiEdTxng)
Oh, did you want me to address the fact that humans too are limited to their abilities? That's a tautology, but let's explore several analogues to algorithms exhibited in humans, often referred to as "mental models" or "heuristics." These are mental shortcuts or rules of thumb that help us make decisions, solve problems, and navigate complex situations. Here are a few examples:
1. Decision-making frameworks: Humans use mental frameworks like cost-benefit analysis, pros and cons lists, or Pareto analysis to weigh options and make decisions.

2. Pattern recognition: Our brains are wired to recognize patterns, which helps us learn from experience and make predictions. This is similar to machine learning algorithms that identify patterns in data.

3. Mental shortcuts: Heuristics like the availability heuristic (judging likelihood by how easily examples come to mind) or the representativeness heuristic (judging likelihood by how closely an event resembles a typical case) help us make quick decisions, but can sometimes lead to biases.

4. Problem-solving strategies: Humans use methods like divide and conquer, working backward, or brainstorming to tackle complex problems, similar to algorithms like recursion or dynamic programming.

5. Intuition: Our subconscious mind can process vast amounts of information and provide insights, similar to machine learning algorithms that identify hidden patterns in data.
These analogues to algorithms are not always perfect and can lead to biases or errors. However, they are essential for human decision-making and problem-solving, and continue to evolve with our experiences and learning.
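
(The divide-and-conquer parallel above is the same strategy behind classic algorithms like merge sort — a generic illustration, not specific to anything in this thread:)

```python
# Classic divide-and-conquer: split the problem, solve the halves
# recursively, then combine the results -- the same strategy a human
# uses when breaking a big task into smaller ones.

def merge_sort(xs):
    if len(xs) <= 1:              # base case: trivially solved
        return list(xs)
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])   # divide and recurse
    right = merge_sort(xs[mid:])
    merged, i, j = [], 0, 0       # combine: merge two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```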

Jorge !l6aiEdTxng replied with this 1 day ago, 2 hours later, 1 week after the original post[^] [v] #1,295,180

@previous (E)

> Oh, did you want me to address the fact that humans too are limited to their abilities?

No, as it is something I have already said.

Anonymous E replied with this 1 day ago, 3 minutes later, 1 week after the original post[^] [v] #1,295,182

@previous (Jorge !l6aiEdTxng)
Humans have the abilities of humans. Machines have the abilities of machines. Nice tautologies. They aren't serving any purpose though. You aren't really adding to the conversation.

(Edited 3 minutes later.)

Jorge !l6aiEdTxng replied with this 1 day ago, 29 minutes later, 1 week after the original post[^] [v] #1,295,193

@previous (E)

I think you have, up to now, been a disruptive figure in the interesting discussion that Anon D and I are having with each other, and as a result I cannot really take anything you say regarding my input in our discussion seriously.

Anonymous E replied with this 1 day ago, 14 minutes later, 1 week after the original post[^] [v] #1,295,200

@previous (Jorge !l6aiEdTxng)
Okay, Matt

Jorge !l6aiEdTxng replied with this 1 day ago, 1 hour later, 1 week after the original post[^] [v] #1,295,211

@previous (E)

My name is Jorge, my friend.