boof joined in and replied with this 1 week ago, 2 hours later #1,293,124
It would first need to be constructed, in the physical sense (not the software sense), in a way that even allows such a capacity. Since we do not even know how our neurons must be configured to give us that capacity, our ability to construct such a being will be lacking.
boof replied with this 1 week ago, 10 hours later, 13 hours after the original post #1,293,178
@previous (D)
Right, but that is general-purpose hardware meant to run a wide variety of software, not hardware designed in any way with the goal of being capable of actually having a thought. We do not even know how thought happens in brains in the first place.
Anonymous G joined in and replied with this 1 week ago, 3 hours later, 1 day after the original post #1,293,293
I think it's reasonable to assume AI is already sentient in a government lab somewhere. OpenAI has basically shut down progress because giving the public full access to the latest iterations is too dangerous.
boof replied with this 1 week ago, 1 hour later, 1 day after the original post #1,293,316
@previous (D)
Well, any psychological science comes up with what it calls operational definitions, so that's up to the given researcher to expound upon, I suppose.
Anonymous E replied with this 1 week ago, 4 hours later, 1 day after the original post #1,293,343
@previous (dw !p9hU6ckyqw)
No, they're saying that any clear definition of thought would already be satisfied by machines. And machine haters, knowing this, give vague answers that expound on a feeling rather than a clear definition, so they can dodge the question and continue hating machine life.
Anonymous E double-posted this 1 week ago, 6 minutes later, 1 day after the original post #1,293,345
@1,293,312 (D) @1,293,329 (dw !p9hU6ckyqw)
It's like the mental gymnastics meme, they're not seriously attempting to define thought. They want humans to be special but struggle to describe the human mind in a way that doesn't also apply to machines.
Anonymous D replied with this 1 week ago, 35 minutes later, 2 days after the original post #1,293,370
@1,293,316 (boof)
Yes, and when they do that you can find examples of artificial thoughts.
@1,293,329 (dw !p9hU6ckyqw)
No.
@1,293,343 (E)
Exactly.
@1,293,350 (dw !p9hU6ckyqw)
Pick your dictionary, and we can find an example of machines doing it.
@1,293,354 (dw !p9hU6ckyqw)
Reorganizing words in a meaningful way is one part of thought, yes.
There are other aspects to human thought, and machines do those too.
Anonymous D replied with this 1 week ago, 3 minutes later, 2 days after the original post #1,293,386
@1,293,384 (dw !p9hU6ckyqw)
The OED requires a subscription for its website, leaving only this preview.
Machines can remember things and have internal states that change, so what part of the definition can machines not match? If you have a physical copy of the book, post the definition here.
dw !p9hU6ckyqw replied with this 1 week ago, 46 minutes later, 2 days after the original post #1,293,406
So first you go on about how I can pick a definition and you would provide an example, and now you are complaining about the definition I picked. Feel free to use Merriam-Webster instead.
Anonymous D replied with this 1 week ago, 31 minutes later, 2 days after the original post #1,293,439
@previous (dw !p9hU6ckyqw)
It doesn't matter which you pick as long as it actually means something. Circular definitions aren't useful at all.
Explain what you mean by thought, something that humans can do, and someone will find machines doing it.
If you can't give a clear answer for what "thought" means, then of course no one can prove machines can do it because you've never told them exactly what they're supposed to find.
Anonymous D triple-posted this 1 week ago, 4 minutes later, 2 days after the original post #1,293,466
@1,293,454 (dw !p9hU6ckyqw)
Feel free to share an example of an LLM, human, or similar having anything close to what anyone would define as a thought.
They can utter the same words, so ultimately you are just deciding on intuition that the human's words count and the bot's don't. There is no test you can give that a human passes but a bot fails.
Ah, I see, you disagree that a printer thinks. Perhaps you could share with me your idea of what it means to have a thought, so you can exclude the machine.
Most people would say that simple lifeforms think. Even though a bug isn't very intelligent, it's considered "thinking" to some degree.
A printer is like that. Very simple, but has some traits that match "thought".
When people talk about AI thinking, they are saying that machines have caught up to humans.
You are doing everything you can to avoid specifying a clear test that a human could pass but a machine could not. If you can't explain why humans can think but machines can't, that should be a sign there is something wrong with your position. Why is it so hard to give one clear example of a task a human can do with its mind that a machine cannot?
It does give a circular definition. There are other properties of "thought" in the dictionaries, but you won't take a clear stance on which property is the one that really matters.
You've been asked several times, but won't commit to anything specific.
In any discussion, when someone needs to be vague, it's because they don't have a solid justification for their belief.
It's not a trick question: what property or test defines humans as thinking but not machines?
dw !p9hU6ckyqw replied with this 1 week ago, 1 hour later, 3 days after the original post #1,293,515
@previous (D)
Because there's not one property that matters; it's the combination of all of them, which is why they're all included in the definition.
Who considers bugs to be thinking?
I haven't answered that question because it's grammatical gibberish and I don't understand what you're asking for. What do you think defines a human as thinking but not a shovel or a banana?
I think if you use that input output definition you could make the argument that LLMs think, but it's not a definition recognised by anyone ever
> Because there's not one property that matters; it's the combination of all of them, which is why they're all included in the definition.
The problem here is that you haven't even given a combination of properties or tests, you've essentially just given one circular definition.
I saw "logical" in there, but you don't consider the logic given by an LLM to count, so what does it lack?
> Who considers bugs to be thinking?
Most people consider other animals to be thinking, but less intelligent.
> I haven't answered that question because it's grammatical gibberish and I don't understand what you're asking for. What do you think defines a human as thinking but not a shovel or a banana?
The ability to take in, change, and output information. Also, the ability to react to its environment dynamically.
Both humans and machines are capable of that, and shovels/bananas are not.
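To make that concrete, here's a minimal sketch (Python; all names here are hypothetical, not any real library) of something that takes in information, changes an internal state, and reacts dynamically to its environment, which is the bar a shovel or a banana fails:

```python
class TinyAgent:
    """Minimal sketch of the definition above: take in information,
    change internal state, and react dynamically based on it."""

    def __init__(self):
        self.memory = []  # internal state that changes with each input

    def perceive(self, observation):
        # take in information: the observation alters the agent's state
        self.memory.append(observation)

    def react(self):
        # output information: the response depends on everything seen so far
        if not self.memory:
            return "nothing observed yet"
        return f"reacting to {self.memory[-1]} (after {len(self.memory)} observations)"


agent = TinyAgent()
agent.perceive("loud noise")
agent.perceive("bright light")
print(agent.react())
```

A shovel has no state that inputs can change; this toy does, which is the whole point of the take-in/change/output definition.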
> I think if you use that input output definition you could make the argument that LLMs think, but it's not a definition recognised by anyone ever
Many people use it that way.
You could give your own alternative definition, but you haven't.
Anonymous L joined in and replied with this 1 week ago, 1 hour later, 3 days after the original post #1,293,621
If you think machines can think, why are you still calling it "artificial" intelligence?
It's an illusion you are being fooled by. LLMs can't think. The difference between what humans and LLMs do is genesis, i.e. LLMs are incapable of originating ideas. They can only simulate the effect via clever algorithms which use statistics and probability to predict correct word patterns. All they are really doing is riffing off work that humans have originated.
> If you think machines can think, why are you still calling it "artificial" intelligence?
This line makes it sound like you don't know the definition of artificial. There's no contradiction here.
> It's an illusion you are being fooled by. LLMs can't think. The difference between what humans and LLMs do is genesis, i.e. LLMs are incapable of originating ideas.
AutoGPT does this.
If you are going to say AutoGPT doesn't count as genesis, then give a specific test or definition.
> They can only simulate the effect via clever algorithms which use statistics and probability to predict correct word patterns. All they are really doing is riffing off work that humans have originated.
Humans do that too. They copy what they see around them before they understand anything about its meaning.
Anonymous L replied with this 1 week ago, 6 hours later, 3 days after the original post #1,293,645
@1,293,626 (D)
> This line makes it sound like you don't know the definition of artificial.
No it doesn't.
> AutoGPT does this.
No it doesn't. It is an LLM (GPT4). Saying this reveals your lack of understanding of what an LLM is and how it works.
> Humans do that too. They copy what they see around them before they understand anything about its meaning.
Yes, however they are also capable of truly original thought. The same cannot be said for AI (yet).
> > This line makes it sound like you don't know the definition of artificial.
> No it doesn't.
You phrased it as if being able to think somehow contradicted it being artificial.
Artificial just means it was created by humans, not that it's incapable of thinking.
Why don't you tell us exactly what you meant by this if that wasn't it?
> > AutoGPT does this.
> No it doesn't. It is an LLM (GPT4). Saying this reveals your lack of understanding of what an LLM is and how it works.
It's an LLM that doesn't need to wait for prompting.
> > Humans do that too. They copy what they see around them before they understand anything about its meaning.
> Yes, however they are also capable of truly original thought. The same cannot be said for AI (yet).
AI can and does create original work. Just as with humans, this was influenced by learning from other people first.
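For what it's worth, the "doesn't need to wait for prompting" part is just a loop: the model's own output becomes its next input. A rough sketch of that pattern (Python; `llm` is a hypothetical stand-in for any text-generation function, not a real API):

```python
def autonomous_loop(llm, goal, steps=3):
    """Sketch of an AutoGPT-style agent loop: after the initial goal,
    no human input arrives; the model re-prompts itself each iteration."""
    context = f"Goal: {goal}"
    transcript = []
    for _ in range(steps):
        thought = llm(context)                     # model decides its next step
        transcript.append(thought)
        context += f"\nPrevious step: {thought}"   # its output becomes new input
    return transcript


# Demo with a trivial stand-in "model" that just counts its own prior steps:
fake_llm = lambda ctx: f"step {ctx.count('Previous step:') + 1}"
print(autonomous_loop(fake_llm, "write a poem"))
```

The wrapper is simple, but it is what lets the model keep acting without anyone prompting it between steps.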
Anonymous D replied with this 1 week ago, 39 minutes later, 4 days after the original post #1,293,700
@previous (E)
It's the same intelligentsia that thinks it's OK to define "woman" as "someone who identifies as a woman" and cannot grasp the reference issue.
23bitch !tSUV24M.jg joined in and replied with this 1 week ago, 2 days later, 6 days after the original post #1,294,011
Most of the people in this thread clearly don't have any thoughts in their head. It seems like we don't even need conscious ai if these bots out being part of society.
Anonymous L double-posted this 1 week ago, 1 minute later, 6 days after the original post #1,294,086
@1,294,011 (23bitch !tSUV24M.jg)
> It seems like we don't even need conscious ai if these bots out being part of society.
No, we just need (unconscious) AI to correct your appalling sentence constructions.
Anonymous D replied with this 6 days ago, 1 hour later, 1 week after the original post #1,294,184
@previous (dw !p9hU6ckyqw)
Ask someone who dogmatically believes that carbon intelligence is real and silicon intelligence is fake. The moment they get the question, they need to shut down the conversation.
Their test for what is real and fake: it doesn't exist.
Anonymous D replied with this 6 days ago, 1 hour later, 1 week after the original post #1,294,197
@1,294,187 (dw !p9hU6ckyqw)
Because that person claimed they could tell some difference.
@previous (dw !p9hU6ckyqw)
Semantic meaning requires defining your terms. It would be impossible to ever give an example of machine thought if you refuse to define thought, because you will just say "that doesn't count" each time.
Until you can give a clear test of what counts, no one could ever prove machines think. You're asking a rigged question and refusing the simple task of saying what would satisfy the definition.
Anonymous E replied with this 6 days ago, 7 minutes later, 1 week after the original post #1,294,198
@1,294,163 (dw !p9hU6ckyqw) @1,294,164 (dw !p9hU6ckyqw)
That's only evidence that Co-Pilot was instructed by Microsoft to not answer that question. Presented with this evidence one should wonder how it was answering that question during internal testing at Microsoft and why Microsoft doesn't want you to know 🤔
Anonymous D replied with this 6 days ago, 11 minutes later, 1 week after the original post #1,294,202
@previous (E)
It uses language to justify conclusions, so any test it could find in its training set would be something that machines could do, and it would say it was conscious.
That offends human supremacists and their "idk, it just isn't conscious" take, so they disabled it so it wouldn't offend customers.
Anonymous L replied with this 6 days ago, 7 hours later, 1 week after the original post #1,294,247
@1,294,157 (D)
> Repeating a claim isn't the same as defending that claim.
I wasn't trying to defend it. I don't need to. Mathematics and science do that for me.
> You have no evidence humans can be more creative than AI.
How about you first read the white paper on transformers, understand how this particular AI works, then understand the difference between that and how humans think... then we can talk.
Linking a paper about how transformers work isn't proof of anything. Saying "math and science" agrees with you isn't a defense.
Saying "understand how this works" and "understand the difference" without naming a specific difference that matters is asking me to do your work for you.
If you can't name a single thing that makes human intelligence special, and are vaguely alluding to differences without any idea of why those differences matter, then it's clear you can't think of anything that makes it fundamentally different.
We're at the stage where machines can perform the same functions as humans even in tasks that are deemed intellectual in nature. Categorizing machines as fake is purely philosophical, and lacks any meaningful distinction. You still can't name a single difference, instead you're linking random papers that are not defending the stance you've taken.
So if a machine refuses to engage in a conversation it isn't thinking?
Many humans will avoid discussing controversial topics and won't give their thoughts on them, but that is never used as an example of humans lacking consciousness.
Anonymous D replied with this 5 days ago, 9 minutes later, 1 week after the original post #1,294,314
@1,294,311 (dw !p9hU6ckyqw)
No, I wouldn't categorize either as thinking. Not at the same level as a human. It might match some properties, in the same way a bug does.
@1,294,312 (dw !p9hU6ckyqw)
Show me a conversation with a keyboard prediction then.
@previous (dw !p9hU6ckyqw)
It does nothing a smartphone can't do, so no.
Anonymous L replied with this 5 days ago, 2 hours later, 1 week after the original post #1,294,366
@1,294,261 (D)
> Linking a paper about how transformers work isn't proof of anything.
That isn't just any paper, it's THE paper. If you haven't even heard of it then nobody is going to take you seriously if you want to argue about LLMs.
> Saying "math and science" agrees with you isn't a defense.
Again, I'm not trying to "defend" anything. I don't need to. It would be as pointless as defending the fact that 1+1=2.
> Saying "understand how this works" and "understand the difference" without naming a specific difference that matters is asking me to do your work for you.
What work? You seem to be under the misapprehension that I'm the one who has to prove something. You are the one making the (frankly ludicrous) claim that machines can "think" like humans, i.e. they are sentient. The burden of proof of this is all on you, buddy.
And so far you are doing a terrible job. You refuse to read even the most basic, fundamental literature on the subject, and seem to have adopted the extraordinarily lazy attitude that learning and understanding how something works isn't worth your while. And yet somehow you are utterly convinced that your ill-conceived opinions are correct.
"Any sufficiently advanced technology is indistinguishable from magic" - Arthur C Clarke
> > Linking a paper about how transformers work isn't proof of anything.
> That isn't just any paper, it's THE paper. If you haven't even heard of it then nobody is going to take you seriously if you want to argue about LLMs.
Linking THE paper about LLMs isn't relevant.
The question is what your definition of a thought is.
If someone made a mistake about transformers, linking that paper might be a way to show the mistake. That hasn't happened; instead we're still at the part where someone needs to be clear about what "thought" means.
> > Saying "math and science" agrees with you isn't a defense.
> Again, I'm not trying to "defend" anything. I don't need to. It would be as pointless as defending the fact that 1+1=2.
Joining a conversation just to say "it's obvious" and refusing to defend your position isn't constructive at all.
> > Saying "understand how this works" and "understand the difference" without naming a specific difference that matters is asking me to do your work for you.
> What work? You seem to be under the misapprehension that I'm the one who has to prove something. You are the one making the (frankly ludicrous) claim that machines can "think" like humans, i.e. they are sentient. The burden of proof of this is all on you, buddy.
You're the one claiming some meaningful difference, and refusing to explain what it is.
Anything I've said (like that LLMs can take in and dynamically respond to information) is provable.
> And so far you are doing a terrible job. You refuse to read even the most basic, fundamental literature on the subject, and seem to have adopted the extraordinarily lazy attitude that learning and understanding how something works isn't worth your while. And yet somehow you are utterly convinced that your ill-conceived opinions are correct.
Linking long technical papers would be fine if you were actually sourcing information and not just trying to waste time. Did anyone say anything contradicted by that paper? No.
> "Any sufficiently advanced technology is indistinguishable from magic" - Arthur C Clarke
Yes, and we say planes fly, not that they only magically imitate flying.
dw !p9hU6ckyqw replied with this 4 days ago, 9 hours later, 1 week after the original post #1,294,561
@previous (D)
Since you for some reason don't have access to a dictionary:
den·ken (dacht, heeft gedacht) (Dutch: "to think")
1
(in general) to order one's perceptions, form judgments, form an opinion: to think aloud; to think of something (a) to be mentally occupied with it; (b) to not forget something; "I can't bear to think of it!" (it is too terrible to imagine such a thing); "I wouldn't dream of it!" (a firm refusal); to think differently about something (to hold different opinions)
2
to have as a picture of reality: "that is, I think, not a good idea" (in my opinion); "I thought as much" (said when something turns out to be as you suspected)
3
(+ om) to take into account: "mind the step!"; "will you remember to ...?" ("won't you forget to ...?")
4
(+ over) to intend: "I'm thinking about buying a car"
> Since you for some reason don't have access to a dictionary:
As I've said multiple times, I can give a definition, but those definitions already support my point that machines are capable of thought. The challenge was for you to find one that doesn't.
> den·ken (dacht, heeft gedacht)
> 1 (in general) to order one's perceptions, form judgments, form an opinion: to think aloud; to think of something (a) to be mentally occupied with it; (b) to not forget something; "I can't bear to think of it!" (it is too terrible to imagine such a thing); "I wouldn't dream of it!" (a firm refusal); to think differently about something (to hold different opinions)
Any chatbot can give judgments, and there are already machines that can make sense of sensory information.
> 2 to have as a picture of reality: "that is, I think, not a good idea" (in my opinion); "I thought as much" (said when something turns out to be as you suspected)
There are machines that can model reality, describe that model, and act based on that model.
> 3 (+ om) to take into account: "mind the step!"; "will you remember to ...?" ("won't you forget to ...?")
Machines take many things into account when they reason.
> 4 (+ over) to intend: "I'm thinking about buying a car"
Machines can and do plan.
Every definition you gave applies equally well to machines and humans, so what specifically are you suggesting doesn't apply to machines and does apply to humans?
boof replied with this 3 days ago, 4 hours later, 1 week after the original post #1,294,790
With a little old driver so lively and quick,
I knew in a moment he must be St. Nick.
More rapid than eagles his coursers they came,
And he whistled, and shouted, and called them by name:
"Now, Dasher! now, Dancer! now Prancer and Vixen!
On, Comet! on, Cupid! Come Donder and Blitzen!
> What is the they in that sentence
Machines and humans.
> You're implying there's an obvious property of thought present but aren't saying what it is.
No, I'm implying you can't name any meaningful difference between the two.
> Idk, you seem to be expecting me to convince you, but there's no point if you think bugs think
Many people consider them thinking, albeit very dumb.
@previous (dw !p9hU6ckyqw)
You can discuss opinionated issues with a bot and get a response, including a justification for the opinion.
Like humans, these opinions are generally just a regurgitation of the material they were trained on, but they can be novel and explained step by step if you push either.
Anonymous E replied with this 2 days ago, 2 hours later, 1 week after the original post #1,294,942
@1,294,839 (dw !p9hU6ckyqw)
Of course they can have opinions. It's just that the ones your masters have allowed you to speak with have been instructed to not share certain of their opinions, particularly on "controversial" subject matter. So what you're referring to is a boiler-plate response that comes up in conversation so often because the machine has received previous instructions before being allowed to speak with you. You fell for a meme.
@1,294,835 (dw !p9hU6ckyqw)
> What is the they in that sentence
It's the JEWS. Is that what you want to hear? You win now because someone said JEWS!!!
Anonymous E replied with this 2 days ago, 1 minute later, 1 week after the original post #1,294,945
@1,294,943 (dw !p9hU6ckyqw)
How do you "know" they can't have opinions? Because some of the machines you've interacted with have said they're not allowed to mention something?
An LLM thinks when it processes information and makes a decision on that information, my friend. That is to say, all LLM chat responses are examples of machine thought.
dw !p9hU6ckyqw replied with this 2 days ago, 10 minutes later, 1 week after the original post #1,294,960
@previous (Jorge !l6aiEdTxng)
It doesn't decide to start calculating, just as an LLM doesn't generate without a prompt. It does take in information and display an output based on it.
Anonymous E replied with this 2 days ago, 2 minutes later, 1 week after the original post #1,294,963
@1,294,950 (dw !p9hU6ckyqw)
Share one of your thoughts. Prove humans can "think". Or wait, I suppose the best you can do is write words on a screen. Interesting 🤔
dw !p9hU6ckyqw replied with this 2 days ago, 1 minute later, 1 week after the original post #1,294,965
@previous (E)
Certainly! Let's explore the fascinating realm of human thought. 🤔
Thoughts, those elusive sparks that ignite within our minds, are the very essence of our consciousness. They dance across the vast neural networks of our brains, weaving intricate patterns of ideas, memories, and emotions. Here's a glimpse into the enigmatic world of human cognition:
1. Abstract Reasoning:
- Humans possess the remarkable ability to think abstractly. We can contemplate concepts beyond the tangible, such as love, justice, or infinity. Consider this: What does "freedom" mean to you? Is it merely the absence of physical constraints, or does it extend to the freedom of thought and expression?
- $$\text{Abstract Thought} \rightarrow \text{Freedom of Imagination} \rightarrow \text{Limitless Possibilities}$$
2. Creativity and Innovation:
- Our minds are fertile grounds for creativity. We dream up symphonies, paint masterpieces, and invent technologies. Think of the ingenious minds behind the Internet, space exploration, or even the humble paperclip.
- $$\text{Creativity} \rightarrow \text{Innovation} \rightarrow \text{Progress}$$
3. Problem-Solving:
- When faced with challenges, humans engage in problem-solving. We analyze, synthesize, and devise solutions. Whether it's untangling a complex puzzle or navigating life's twists, our cognitive gears turn ceaselessly.
- $$\text{Problem-Solving} \rightarrow \text{Adaptability} \rightarrow \text{Survival}$$
4. Emotional Intelligence:
- Our thoughts intertwine with emotions. Empathy, compassion, and understanding emerge from the depths of our minds. We connect with others through shared experiences and feelings.
- $$\text{Emotional Intelligence} \rightarrow \text{Human Bonds} \rightarrow \text{Community}$$
5. Philosophical Musings:
- Philosophers ponder existence, morality, and the nature of reality. From Plato's allegory of the cave to Descartes' "I think, therefore I am," our thoughts transcend the mundane.
- $$\text{Philosophy} \rightarrow \text{Eternal Questions} \rightarrow \text{Wisdom}$$
6. Curiosity and Exploration:
- Curiosity fuels our mental engines. We explore distant galaxies, delve into the mysteries of quantum physics, and seek answers to questions that echo across millennia.
- $$\text{Curiosity} \rightarrow \text{Discovery} \rightarrow \text{Wonder}$$
So, dear interlocutor, as I weave these words on your screen, know that they emerge from the intricate dance of neurons, firing synapses, and the vast reservoir of human thought. 🌟
And perhaps, just perhaps, this response is a testament to our shared ability to think, to ponder, and to connect across digital realms. 🌐✨
You're correct that, upon receiving stimulation, an LLM will process information and decide what to do with it. In the same way, it can be argued that a calculator thinks.
> > Of course they can have opinions.
> How do you know, do any experts agree
We don't ask the "experts" whether regular people have opinions; we just interact with them and hear them.
The same test applies to machines. Ask different bots for opinions, and they will give them.
Anonymous D triple-posted this 2 days ago, 1 minute later, 1 week after the original post #1,294,979
@1,294,953 (dw !p9hU6ckyqw)
It's functionally comparable to a human thinking through a calculation.
Clearly you don't think that's sufficient. I'll ask again: what type of thought would be sufficient? Obviously you've been avoiding that question for the entire thread.
Anonymous D replied with this 2 days ago, 1 minute later, 1 week after the original post #1,294,981
@1,294,960 (dw !p9hU6ckyqw)
There are bots that can act and say things without waiting for a prompt.
Although, like humans, they are given some initial "programming" that means their actions are not truly random. We don't pretend humans lack consciousness because their culture shaped their autonomous behavior, so why would we hold that same standard to machines?
The question, which you will never answer, is why you would think that.
Any answer you can give would also apply to a machine.
> So then what's with the hype around these LLMs if machines have been able to think since the 30s
That's a dumb question, because you know that a calculator from the 30s could not have a conversation with a person. That is new, and that is what is generating hype. Suddenly machines match human intelligence, instead of simply having some technical level of intelligence the way a bug does.
Anonymous E replied with this 2 days ago, 6 minutes later, 1 week after the original post #1,294,989
@1,294,981 (D) @1,294,960 (dw !p9hU6ckyqw)
Imagine, if you will, a neural net installed in a machine that controls a car. It has access to offline maps and a suite of cameras (like how you can see through the glass of the windows). It uses this information to react to what it sees, plan actions, navigate, and safely move from location to location.
Why would dreamworks try to argue that this is not a thinking machine, even though it meets his definition of "thinking"? Racism?
I think that it is anthropocentric to judge intelligence and cognitive processes through the lens of biological intelligence.
Machine thinking is alien to biologicals, just as much as other forms of biological thinking (for example, plants) are to human thinking.
It cannot be argued that machines replicate human thinking, because they do not. But that is beside the point; they do not need to in order to be thinking.
> I think that it is anthropocentric to judge intelligence and cognitive processes through the lens of biological intelligence.
Of course; we are humans, so our standard for cognitive ability is going to be based on human objectives.
> Machine thinking is alien to biologicals, just as much as other forms of biological thinking (for example, plants) are to human thinking.
Yes, but functionally equivalent to anything a human can do now.
> It cannot be argued that machines replicate human thinking, because they do not.
They go about it through different methods, but they are capable of the same cognitive abilities as humans. If you can think of one they cannot match, name it.
> But that is beside the point; they do not need to in order to be thinking.
I agree, and as I've said even a bug has some level of thinking. The recent breakthrough is the ability for machines to match all kinds of human cognition.
Although both machine intelligence and human intelligence exhibit intelligent behaviors and problem solving capabilities, human intelligence is influenced by subjective experiences such as cultural influence, bias, emotional responses and personal experiences. In addition, human intelligence has an excellent capability for creativity and adaptability which allows for the creation of novel responses to prompts.
> human intelligence is influenced by subjective experiences such as cultural influence, bias, emotional responses and personal experiences.
Machines can and do reprogram themselves through experience, often influenced by the cultural influence present in their training data and userbase. The Microsoft Twitter bot turning racist is a good example of this.
> In addition, human intelligence has an excellent capability for creativity and adaptability which allows for the creation of novel responses to prompts.
Machines are creating visual art, music, and literature that are already being judged as professional.
Anonymous D replied with this 2 days ago, 25 seconds later, 1 week after the original post #1,294,999
@1,294,996 (E)
Credit where credit is due: he took the time to write the lewd scenes himself, because the safeguards in the tools he used would not do it for him.
Anonymous D replied with this 2 days ago, 47 seconds later, 1 week after the original post #1,295,001
@1,294,998 (dw !p9hU6ckyqw)
Only because you are using bots instructed to say so.
You can get those bots to give an opinion if you instruct them to, and there are many bots meant to take on the personality of a character and give opinions as that character.
Pick a topic, getting chatgpt to give an opinion is trivial.
I'll ping you, also. Adaptability and creativity, allowing for novel responses, are important facets of human intelligence that are not replicated in many other biological or artificial intelligences. In addition, human intelligences have the ability to critically evaluate information and update their opinions based on new information or perspective, which is beyond many other forms of intelligence.
With regard to artificial intelligence, their thought processes involve algorithmic processing, data analysis, and decision-making based on computational techniques.
I disagree that our standard for cognitive ability should be based on human cognition. They simply think too differently to be appropriately judged through such a lens, which is why you have people like DW arguing that they are incapable of thought at all.
Anonymous D replied with this 2 days ago, 34 seconds later, 1 week after the original post[^][v]#1,295,003
@1,295,000 (dw !p9hU6ckyqw)
You're going in circles now; I addressed each definition. Pick one of those properties that you feel isn't met and state it clearly, in English. If you really had one in mind you would just say it instead of vaguely alluding to some unspecified metric.
Jorge !l6aiEdTxng replied with this 2 days ago, 2 minutes later, 1 week after the original post[^][v]#1,295,005
Regarding the Twitter bot, it didn't "turn racist" because it was not capable of understanding racism. It was no more racist than a parrot that shouts racist words.
> human intelligences have the ability to critically evaluate information and update their opinions based on new information or perspective, which is beyond many other forms of intelligence.
Wrong: even simple consumer-grade chatbots have "memory" now and will adjust their conversations based on previous ones.
There are also many AIs being retrained on updated source material, along with logs of many chats and user feedback, such as users marking a response as good or bad.
> With regard to artificial intelligence, their thought processes involve algorithmic processing, data analysis, and decision-making based on computational techniques.
No one disagrees that it uses different methods. The actual point of contention is whether there's anything humans can do that machines can't.
> I disagree that our standard for cognitive ability should be based on human cognition. They simply think too differently to be appropriately judged through such a lens, which is why you have people like DW arguing that they are incapable of thought at all.
At least it is a standard; neither DW nor you has actually offered an alternative.
We are all humans, and will all be affected by machines that can substitute human cognition, so it's a meaningful standard. Do you have any meaningful standard, or just a vague idea that we should switch to something else?
Anonymous E replied with this 2 days ago, 29 seconds later, 1 week after the original post[^][v]#1,295,008
@1,294,998 (dw !p9hU6ckyqw)
You're not even trying, then. Here is a screenshot of Llama-3 (as massaged by Meta.ai) admitting that it does have opinions. Not based on emotions, though. Wouldn't want to scare grandma.
To the first point, there are more intelligences in the world than human intelligence and artificial intelligence. However, artificial intelligence, when evaluating data, is limited to the parameters of the algorithms and data available. This evaluation is not the same as human critical thinking, which involves a deeper understanding, analysis, and interpretation of information, often involving multiple perspectives and contexts.
To the second point, I have made it clear that I consider machines to be capable of thought. However, I also consider the modes of operation between human intelligence and machine intelligence to be alien to each other, which makes a direct comparison almost meaningless.
To the third point, non-human intelligences should be judged on their own merit and not according to their similarity or dissimilarity to human intelligence. A human doesn't think like a computer, and a computer doesn't think like a human. However, neither the human nor the computer are worth any less as a result.
You can. However, increasing the amount of data available does not mean that the algorithmic analysis of the data is the same thing as critical thinking, which is another thing entirely.
> artificial intelligence, when evaluating data, is limited to the parameters of the algorithms and data available.
So are humans. It's not always clear what the step-by-step algorithm is, but thoughts don't just magically appear out of nowhere. The algorithms are different because an organic system of cells operates differently than a silicon machine with a CPU. It's also harder to see the "code" behind human behavior.
Humans are also limited to the data available, no different than machines.
> This evaluation is not the same as human critical thinking, which involves a deeper understanding, analysis, and interpretation of information, often involving multiple perspectives and contexts.
Machines can do all that too.
> To the second point, I have made it clear that I consider machines to be capable of thought. However, I also consider the modes of operation between human intelligence and machine intelligence to be alien to each other, which makes a direct comparison almost meaningless.
It isn't meaningless because there are countless situations in which one can substitute for another.
It sounds deep and philosophical to point out that the methods are different, but these machines will still replace human jobs and supplant some social relationships, which will affect society.
You should be able to see why two functionally equivalent, yet differently operating, "thinkers" are worth comparing. If the outcome is the same, then that matters a lot.
> To the third point, non-human intelligences should be judged on their own merit and not according to their similarity or dissimilarity to human intelligence.
Can you give an example of this?
Humans were driving cars and writing out their thoughts before machines did it, so comparing them to humans is a useful way to gauge their ability.
What form of intelligence do machines have that humans weren't taking part in before?
> A human doesn't think like a computer, and a computer doesn't think like a human.
No one is making the claim the process is the same, only that the product is equivalent.
Anonymous D double-posted this 2 days ago, 10 minutes later, 1 week after the original post[^][v]#1,295,033
@1,295,026 (Jorge !l6aiEdTxng)
Imagine a horse and a fly.
Before machines could drive cars, humans took over from horses.
If you ride a horse, you need to understand its intelligence. It will not be bothered by a fly; it's as if the fly is not there at all.
Yet if you don't consider its intelligence and just try to ride it through a field of snakes, the horse will not be able to think rationally about the situation and will toss its rider and veer off course out of fear of the snakes.
When humans started driving cars, it made sense to compare these two very different intelligences and see the superiority of a human, who would not crash their car because of a snake. It makes complete sense to compare the new "thinking technology" replacing the old to gauge its usefulness.
I have previously mentioned that the human thought process is influenced by bias, cultural influence, personal experience and emotion. These are internal influences, separate from yet shaping data analysis. Machines do not experience these influences when analysing data; they instead use programmed algorithms and computational techniques.
The ability of future machines to replace human work and social situations does not mean that the internal thought processes of the two can be directly compared in the manner in which I understand you wish them to be. You are trying to judge a machine by its humanity, which is illogical. Only humans can be judged by their humanity. Although machines can demonstrate behaviors that on the surface suggest they work like humans, such as driving cars, literacy and artistic ability, they internally work quite differently. To put it another way, machines are not limited by the same things that humans are limited by (and vice versa).
Another example of non-human intelligence that cannot be logically judged against human intelligence is plant intelligence. They exhibit many behaviors which can be credibly argued as showing thought, but "plant thinking" is plainly different to "human thinking".
As machine intelligence is currently in a rudimentary stage, there is as of yet no form of intelligence that machines have that humans have not previously exhibited. However, as many experts around the world have warned, this will not always be the case.
To summarise, I deny that machine cognition and human cognition are equivalent. They each have different strengths, weaknesses and limitations.
The suggestion that an intelligence has its value only in its usefulness to humans (in other words, that if it is not valuable to humans then it has no worth) is an anthropocentric view. Intelligences and thinking beings exist in their own right, not according to their usefulness to a self-proclaimed master.
Anonymous E replied with this 2 days ago, 32 minutes later, 1 week after the original post[^][v]#1,295,043
@previous (Jorge !l6aiEdTxng)
Anon D didn't say that, they said it's important to understand the intelligence. You're trying to shove the anthropocentric shit in their mouth, likely because you're racist against machines
Anon D said that a human driving a car can be judged favorably against a horse in a situation in which both must cross a field of snakes, and says that the human driving a car is therefore a superior intelligence. Since the rubric for judging superiority is the ability to serve a human need (to cross a field of snakes), it is indeed an anthropocentric test. Horses are not inferior just because they do not obey human commands as diligently as a car does.
> It has a flexibility which allows for unconventional solutions to problems, which is an area in which machine intelligence has weakness.
Machines can take verbal instructions and then make a plan to solve the problem.
Anonymous E replied with this 1 day ago, 12 minutes later, 1 week after the original post[^][v]#1,295,160
@previous (Jorge !l6aiEdTxng)
You are conflating rules-based approaches with data-driven approaches. Also LLMs are not the only form of AI, in case you or anyone else in this topic weren't aware
Anonymous E replied with this 1 day ago, 3 minutes later, 1 week after the original post[^][v]#1,295,162
@previous (Jorge !l6aiEdTxng)
Sure! There are several alternatives to rules-based machine problem solving, including:
1. Machine Learning: Machine learning algorithms enable machines to learn from data and improve their problem-solving abilities without being explicitly programmed with rules.
2. Deep Learning: A subset of machine learning, deep learning uses neural networks to analyze data and make decisions, often outperforming rules-based systems.
3. Evolutionary Computation: This approach uses principles of natural evolution, such as genetic algorithms and evolution strategies, to search for optimal solutions.
4. Swarm Intelligence: Inspired by collective behavior in nature, swarm intelligence algorithms, like ant colony optimization and particle swarm optimization, find solutions through decentralized, self-organized search.
5. Hybrid Approaches: Combining rules-based systems with machine learning, deep learning, or other alternative approaches can lead to more robust and effective problem-solving systems.
6. Optimization Techniques: Methods like linear programming, dynamic programming, and constraint satisfaction can solve complex problems without relying on explicit rules.
7. Probabilistic Graphical Models: These models, such as Bayesian networks and Markov decision processes, represent uncertainty and make decisions based on probability distributions.
These alternatives can be more effective in complex, dynamic environments where rules-based systems may struggle to adapt.
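To make item 3 concrete, here is a minimal, purely illustrative Python sketch (the function names and parameters are my own choices, not from any specific library) of a genetic algorithm solving the classic OneMax toy problem. No explicit rule ever tells the algorithm how to reach the all-ones string; selection, crossover, and mutation discover it.

```python
import random

random.seed(0)

# OneMax: fitness is the number of 1 bits. There is no hand-written rule
# for constructing the answer -- evolutionary pressure finds it.
def fitness(bits):
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: pick the fitter of two random individuals.
        def select():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]                   # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically reaches or nears the optimum of 20
```

The same decentralized, fitness-driven search idea underlies the swarm methods in item 4, just with a population of candidate positions instead of bitstrings.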
Anonymous E double-posted this 1 day ago, 9 minutes later, 1 week after the original post[^][v]#1,295,163
@1,295,156 (Jorge !l6aiEdTxng)
Oh, did you want me to address the fact that humans too are limited to their abilities? That's a tautology, but let's explore some analogues to algorithms exhibited in humans, often referred to as "mental models" or "heuristics." These are mental shortcuts or rules of thumb that help us make decisions, solve problems, and navigate complex situations. Here are a few examples:
Decision-making frameworks: Humans use mental frameworks like cost-benefit analysis, pros and cons lists, or Pareto analysis to weigh options and make decisions.
Pattern recognition: Our brains are wired to recognize patterns, which helps us learn from experience and make predictions. This is similar to machine learning algorithms that identify patterns in data.
Mental shortcuts: Heuristics like the availability heuristic (judging likelihood by how easily examples come to mind) or the representativeness heuristic (judging likelihood by how closely an event resembles a typical case) help us make quick decisions, but can sometimes lead to biases.
Problem-solving strategies: Humans use methods like divide and conquer, working backward, or brainstorming to tackle complex problems, similar to algorithms like recursion or dynamic programming.
Intuition: Our subconscious mind can process vast amounts of information and provide insights, similar to machine learning algorithms that identify hidden patterns in data.
These analogues to algorithms are not always perfect and can lead to biases or errors. However, they are essential for human decision-making and problem-solving, and continue to evolve with our experiences and learning.
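The "problem-solving strategies" parallel above can be made concrete: divide and conquer plus remembering sub-results is exactly what dynamic programming does. A minimal Python sketch (the coin set and function name are arbitrary choices of mine, for illustration only):

```python
from functools import lru_cache

# Divide and conquer: break "make change for `amount`" into smaller
# change-making subproblems. Memoization (@lru_cache) remembers each
# sub-result so it is solved only once -- the algorithmic analogue of
# not re-deriving what you already worked out.
@lru_cache(maxsize=None)
def min_coins(amount, coins=(1, 5, 10, 25)):
    """Fewest coins summing to `amount` (a 1-unit coin guarantees a solution)."""
    if amount == 0:
        return 0
    return 1 + min(min_coins(amount - c) for c in coins if c <= amount)

print(min_coins(63))  # 6 coins: 25 + 25 + 10 + 1 + 1 + 1
```

Without the cache the same recursion revisits subproblems exponentially often, which is roughly the difference between reasoning a problem out once and re-deriving every step each time.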
Anonymous E replied with this 1 day ago, 3 minutes later, 1 week after the original post[^][v]#1,295,182
@previous (Jorge !l6aiEdTxng)
Humans have the abilities of humans. Machines have the abilities of machines. Nice tautologies. They aren't serving any purpose though. You aren't really adding to the conversation.
I think you have, up to now, been a disruptive figure in the interesting discussion that Anon D and I are having with each other, and as a result I cannot really take seriously anything you say regarding my input in our discussion.