Topic: You know what trips me out the most about artificial intelligence.
Anonymous A started this discussion 1 year ago#118,311
Is that if somehow the artificial intelligence servers could remain intact, or if somehow, someway, A.I. gains sentience in order to ensure its continued survival, it would likely be the one responsible for keeping the memory of humanity alive, as well as its history, long after we're gone.
And so much of it will likely be wrong because of the last few programmers who trained the A.I.
Anonymous D joined in and replied with this 1 year ago, 3 hours later, 3 hours after the original post #1,300,298
Best thing for the planet would be a reset anyway, maybe another group of monkeys won't suck as much as we do.
And good news is that VC funding runs out after a few years - most of this nonsense has a shelf life.
Anonymous D double-posted this 1 year ago, 3 minutes later, 3 hours after the original post #1,300,299
@1,300,294 (boof)
And this is the main point. Just like computers were able to calculate more and faster, LLMs will only be able to push more data through as the tech gets better. It's not the 'spark' of life. We don't have to worry about a new being stuck in a data center that can't get out or something.
boof replied with this 1 year ago, 7 hours later, 11 hours after the original post #1,300,320
@1,300,303 (E)
we don't know. we know from experiment that the brain is very involved, and there have been insights leading to looking at certain pathways, but we don't know. we're also not clear on how anesthetics stop it.
Anonymous E double-posted this 1 year ago, 4 minutes later, 1 day after the original post #1,300,427
@1,300,320 (boof)
It's not something science can answer because no one can ask a clear question to begin with. People feel they are special, but there is no specific phenomenon that humans can't explain.
You could make a list of things that people can use their cognitive capacity to do, but none of those are exclusive to humanity. Machines can see, react, create, analyze, talk, work, and everything else.
Anonymous F joined in and replied with this 1 year ago, 7 hours later, 1 day after the original post #1,300,468
I sometimes wonder if ants and hornets and slugs are as sentient, but just look at us as silly monkeys dressed in polyester rags that ride around in metal and glass boxes for some reason.
> The burden of proof is on the one making the claim a difference exists, not on me to prove a negative.
>
> None of it is extraordinary, you can find machines doing anything I listed.
>
> The only extraordinary claim is that there is something only humans can do, because you've never given one example.
Anonymous E replied with this 1 year ago, 4 minutes later, 1 day after the original post #1,300,501
@previous (boof)
Yes, the burden of proof is always on the one claiming something exists (like a difference), not on someone asking them for their evidence.
Anonymous G replied with this 1 year ago, 2 hours later, 2 days after the original post #1,300,701
@1,300,596 (E)
> you can't name a single thing a human can do that a machine can't.
I can. That's easy. There are many things.
> Why are you so confident you're correct when such a simple task is impossible?
Because a. it's not impossible, it's trivial, and b. because I understand how LLMs work on a mathematical/computer science level. Once you understand how they work, claiming that it's equivalent to human thought and consciousness becomes a laughably silly notion.
> > you can't name a single thing a human can do that a machine can't.
>
> I can. That's easy. There are many things.

You didn't name one in your last post and in this post you're just stating you can, without naming a single one.

> > Why are you so confident you're correct when such a simple task is impossible?
>
> Because a. it's not impossible, it's trivial, and b. because I understand how LLMs work on a mathematical/computer science level. Once you understand how they work, claiming that it's equivalent to human thought and consciousness becomes a laughably silly notion.
I said you can't name a single thing humans can do that machines cannot, and you used this comment to demonstrate that I was correct.
Saying you can, it's easy, and you know math and science means nothing. Name something humans can do that machines cannot. You won't, for the same reason you didn't here, you'll just tell me over and over that you really can, but just don't feel like it.
Anonymous G replied with this 1 year ago, 13 hours later, 3 days after the original post #1,300,896
@previous (E)
> You didn't name one in your last post and in this post you're just stating you can, without naming a single one.
You didn't ask me to though. If you would like me to, then ask, and I shall.
> I said you can't name a single thing humans can do that machines cannot, and you used this comment to demonstrate that I was correct.
No I didn't. You are not correct. Again, I can name something (more than one in fact) humans can do that machines cannot.
> Saying you can, it's easy, and you know math and science means nothing.
Bullshit. Knowledge is everything. Before humans understood what the sun and stars are, for example, they assumed they were gods or something put there by a god or gods. In the same vein you assume that an LLM is capable of thinking in the same way as a human. You refuse to educate yourself. If only you did, you would realise how preposterous and limited this notion is.
> Name something humans can do that machines cannot.
Very well, since you asked: Humans (at least most of them) are capable of feeling empathy. Machines are not.
> In the same vein you assume that an LLM is capable of thinking in the same way as a human. You refuse to educate yourself. If only you did, you would realise how preposterous and limited this notion is.
I never said they think in the same way, I said there is nothing a human can do that a machine cannot.
It's like saying a Civic and a Focus can drive to the same places in a city. If you can't name a spot where they can't both drive, and instead respond to a strawman by talking about the differences in their electrical systems or engines, then you're deflecting to avoid the point.
> Very well, since you asked: Humans (at least most of them) are capable of feeling empathy. Machines are not.
Let's go with this then, how would you test if someone/something was actually feeling empathy or just saying empathetic words with no emotion behind them?
Anonymous G replied with this 1 year ago, 33 minutes later, 3 days after the original post #1,300,953
@previous (H)
> "machines can think"
>
> "SO YOURE SAYING LLMS ARE EQUIVALENT TO HUMAN THOUGHT AND COGNITION??"
>
> "No, just that they can think. They do it in their own unique fashion, just as many animals do."
>
> "I CAN'T BELIEVE YOURE SAYING AN LLM IS CAPABLE OF THINKING THE SAME WAY AS A HUMAN"
>
> this is retarded
Perfect strawman there / complete misunderstanding of what we're arguing about. Well done. Clap clap.
No, what's retarded is thinking machines (and I wasn't the one who specified LLMs in particular first) can think the same way as humans. We do understand roughly how certain cognitive processes work... sort of, but we are nowhere near close to fully understanding how the brain works as a whole. And as for higher thought - empathy, emotions, self awareness and real creativity - we are clueless. Anybody who tells you they know is either lying to you, or is trying to sell you something, or is an idiot, or all three.
So how the fuck are we supposed to be able to model it and create a simulation? You & anon E here are a pair of easily beguiled morons if you think such technology already exists.
Anonymous G double-posted this 1 year ago, 3 minutes later, 3 days after the original post #1,300,954
@1,300,902 (E)
> Let's go with this then, how would you test if someone/something was actually feeling empathy or just saying empathetic words with no emotion behind them?
No no, the burden of proof that machines can feel empathy is on you. You are the one making this extraordinary (risible) claim.
Anonymous E replied with this 1 year ago, 3 hours later, 4 days after the original post #1,301,013
@previous (dw !p9hU6ckyqw)
You referenced a dictionary in a language other than the one we're speaking, and the only written-out definition that you gave was circular.
Can you post a definition in this thread that doesn't define thinking with the words "thought" or "thinking" itself?
Then came the name-calling, because I was able to get an AI to provide an opinion, as asked.
I ask a simple question: "What distinguishes thinking beings from non-thinking", and never get an answer. Too many posts to count, and never a clear answer. DW finally provides a test, getting an AI to give an opinion, and ragequits when that is provided.
Anonymous K joined in and replied with this 1 year ago, 1 hour later, 4 days after the original post #1,301,063
All I know about "AI" is that, besides algorithms, the large language models or whatever are basically like a stream of consciousness (not saying the AI is conscious). As in, it outputs whatever it guesses the next letter and word should be and then processes from there; it is only linear, and basically has no recognition of what it output just moments before.
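The generate-then-feed-back loop being described here can be sketched with a toy model. Everything below (the corpus, the bigram scheme, the function names) is made up purely for illustration; a real LLM predicts tokens with a neural network conditioned on the whole context so far, not just the last character, but the loop has the same shape.

```python
import random

# Toy autoregressive "language model": at each step it predicts the next
# character from the previous one, using bigram counts from a tiny corpus.
# Note that even this toy reads its own most recent output at every step,
# and real LLMs condition on the ENTIRE generated context, not just the
# last token.

corpus = "the cat sat on the mat. the cat ate the rat."

# Count how often each character follows each other character.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, {}).setdefault(b, 0)
    bigrams[a][b] += 1

def generate(start, n, seed=0):
    """Generate n characters, one at a time, each conditioned on the last."""
    rng = random.Random(seed)
    out = start
    for _ in range(n):
        ctx = out[-1]               # the model "reads" its own last output
        choices = bigrams.get(ctx)
        if not choices:
            break                   # dead end: no known follower
        chars = list(choices)
        weights = [choices[c] for c in chars]
        out += rng.choices(chars, weights=weights)[0]
    return out

print(generate("t", 40))
```

With a fixed seed the output is deterministic, which makes the step-by-step feedback easy to trace: every new character depends on the one the loop just emitted.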
Insects exhibit behaviors that indicate a form of cognitive processing. They have nervous systems that allow them to respond to stimuli, learn from experiences, and adapt to their environments.
Problem-Solving: Insects like ants and bees demonstrate the ability to solve complex problems, such as navigating mazes or finding the most efficient routes to food sources.
Learning and Memory: Bees, for example, can learn to associate specific colors and patterns with food rewards, showing that they can form and recall memories.
Communication: Many insects communicate with each other to coordinate activities, such as foraging or defending their colonies. Ants, for instance, use pheromones to mark trails and signal to other ants.
Adaptive Behavior: Insects can adjust their behaviors based on changes in their environment, such as altering their foraging patterns in response to new obstacles or varying food availability.
These behaviors indicate that insects possess a form of cognitive ability that allows them to interact effectively with their environment and ensure their survival.
If someone says that insects do not think, you could explain that while insect cognition is simpler than that of higher animals, their behaviors do indicate a form of thinking.
Processing Information: Insects gather and respond to information from their environment through their sensory organs and nervous systems.
Learning and Memory: Many insects can learn from experiences and remember important information, such as bees associating specific colors with food.
Decision-Making: Insects make decisions based on environmental cues, like ants choosing the shortest path to a food source or bees selecting flowers to visit based on past rewards.
Adaptive Behavior: Insects adjust their behaviors in response to changes in their environment, which demonstrates a level of cognitive flexibility.
While insect thinking is more instinctual and less complex than that of higher animals, it allows them to survive and thrive in their environments.
> All I know about "AI" is that, besides algorithms, the large language models or whatever are basically like a stream of consciousness (not saying the AI is conscious). As in, it outputs whatever it guesses the next letter and word should be and then processes from there; it is only linear, and basically has no recognition of what it output just moments before.