Anonymous A started this discussion 2 years ago #115,775
So far I've tested:
* A conversational GPT bot that creates websites in seconds, then gives you a link you can use to review the site and leave feedback for changes.
* An agent that monitors a git repo for comments, and when an authorized account requests a change, immediately modifies the code to add the new feature. Did that add a bug? Add another comment and the agent will fix the bug. Each time it leaves comments explaining it's changes too.
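A minimal sketch of the loop that second agent runs, as I understand it (every name here is a hypothetical placeholder, not any real service's API):

```python
# Hedged sketch of a comment-driven repo agent. fetch_new_comments and
# llm_propose_patch are hypothetical stubs, not a real library.
import subprocess
import time

AUTHORIZED = {"alice", "bob"}  # accounts allowed to request changes

def fetch_new_comments() -> list[dict]:
    """Placeholder: poll the repo host (issue/PR comments) for new comments."""
    return []

def llm_propose_patch(request: str) -> str:
    """Placeholder: ask the model for a unified diff implementing the request."""
    return ""

def run_tests() -> bool:
    """Placeholder: run the project's test suite."""
    return subprocess.run(["pytest", "-q"]).returncode == 0

def poll_forever() -> None:
    while True:
        for comment in fetch_new_comments():
            if comment["author"] not in AUTHORIZED:
                continue  # only act on requests from authorized accounts
            patch = llm_propose_patch(comment["body"])
            subprocess.run(["git", "apply"], input=patch.encode())
            if run_tests():
                # commit with a message explaining the change
                subprocess.run(["git", "commit", "-am", f"bot: {comment['body'][:60]}"])
        time.sleep(60)  # poll interval
```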
It's already a replacement for junior devs. With context windows expanding and better code models being released, it will be able to replace senior devs too, especially as it gets easier to fine-tune custom models on project outlines, relevant documentation, and programming textbooks.
Nearly every dev I hear talk about this is in complete denial, yet they'll admit it automates a lot of the grunt work of looking up functions and saves them time typing and formatting code. Even if that were all it did, automating half the job means companies need half the staff, and the remaining staff have no leverage to demand the high wages of the past.
Anonymous B joined in and replied with this 2 years ago, 1 hour later #1,279,086
I never expected we'd see "intellectual work" (programming, writing, art, medical diagnoses) automated before truck drivers. They'll be next. But here we are. Society isn't prepared for this, and we'll see a long, slow struggle to slow down the unavoidable. The more prestigious and organised professions, like doctors, will no doubt hold out the longest. But they'll get their time on the chopping block soon enough. I hope I'm still alive to see it.
Anonymous D joined in and replied with this 2 years ago, 3 minutes later, 1 hour after the original post #1,279,089
@1,279,086 (B)
You don't get it. US doctors are basically religious figures. People have a blind trust in them which keeps them in business, and will continue to do so.
Anonymous B replied with this 2 years ago, 11 minutes later, 1 hour after the original post #1,279,091
@previous (D)
It's perfectly possible for me to believe both what I said and what you said. Not sure why you thought otherwise. The less prestigious and less organised professions don't have that pull. That says nothing about their actual legitimacy.
There are already some trucks repeating drives on easy routes.
> The more prestigious and organised professions, like doctors, will no doubt hold out the longest. But they'll get their time on the chopping block soon enough. I hope I'm still alive to see it.
Google already has an AI that, combined with doctors, provides better diagnoses. Even better: remove the doctor and just use the AI, and the diagnoses get even more accurate: https://www.nature.com/articles/d41586-024-00099-4
Doctors have more lobbying power, and a monopoly on drug dealing, so it will take more than just a better AI to displace them. Their friends in government aren't going to just sign away billions in easy money.
Anonymous B replied with this 2 years ago, 40 minutes later, 2 hours after the original post #1,279,107
@previous (A)
I know they've been around. I was only commenting that I would've thought they'd have become far more common by the time AI had advanced to where we are now. I agree with what you've said in the rest of your comment as well.
Anonymous B replied with this 2 years ago, 13 minutes later, 3 hours after the original post #1,279,127
@previous (A)
The only one that I know is real is becky, but she has a gimp leg, so I'm not sure we should count her fully. I remember when she drove off the other real one, Carebear, because of how toxic becky was to her. Jealousy, I guess. But now becky's mostly gone, so we have room for another, hopefully less toxic, one to join in. Although I think we'll find a replacement with AI long before that happens.
Anonymous E joined in and replied with this 2 years ago, 6 hours later, 10 hours after the original post #1,279,344
> explaining it's changes too.
Your lack of understanding of the difference between "its" and "it's" is related to the reason we don't rely on AI (at least not in the form of an LLM) to author software that works reliably. The devil is in the details.
I mean, AI-produced stuff is OK as long as you don't give a shit about quality and you're not interested in creating something new and different.
Also there are many good ways of automating grunt work that don't involve AI, but you still have to have a human driving it at some point.
> AI has better quality than humans in most tasks now.
It's funny how the automobile industry has been working on self-driving cars for decades, has spent hundreds of billions of dollars on prototypes, and still can't get it to work.
tteh !MemesToDNA joined in and replied with this 2 years ago, 17 minutes later, 19 hours after the original post #1,279,406
A friend of mine used ChatGPT 4 to "fix" at least two major issues he was facing with a personal project lol. The code it provided was superficially appealing at first glance but introduced a whole host of new problems, which ChatGPT promptly apologised for and then failed to fix repeatedly before deciding it simply wasn't capable. The language wasn't exactly a popular one - Rexx - but it was almost comical how badly it performed.
I know there are better tools (and GitHub's Copilot etc.) but it's amusing to me how many people think ChatGPT is imminently going to replace competent programmers.
The time he wasted pissing about with ChatGPT could easily have been spent just fixing it himself (or summoning the help of someone familiar with Rexx). Maybe this story speaks more to my friend being retarded. Funny though.
Anonymous E double-posted this 2 years ago, 2 minutes later, 23 hours after the original post #1,279,437
@1,279,406 (tteh !MemesToDNA)
Yeah, anybody who thinks ChatGPT can write better code than a human clearly hasn't spent a lot of time working in the software development industry.
Anonymous A (OP) replied with this 2 years ago, 33 minutes later, 1 day after the original post #1,279,444
ITT:
> it can't replace senior devs today...
Using LLMs to code at all wasn't an option a year ago. Now they automate away a lot of the work, reducing the total demand for devs. Humans are still required, but in smaller numbers than before, which is why this year's layoffs are so big.
In another year it will be several times better, and they'll need even fewer devs.
Self-driving cars already exist. Mercedes has a certified Level 3 system.
Anonymous A (OP) replied with this 2 years ago, 8 minutes later, 1 day after the original post #1,279,449
Meat programmers have very little working memory and need to constantly restart their thought processes. When they finally finish a piece of code, they have to come back later to fix the errors.
Large language models can do all this in seconds, then be prompted to check for runtime errors and fix those in a few more seconds. LLMs can even create project outlines, and they save the time spent looking up individual functions. Creating regular expressions is done with ease.
An experienced programmer has a longer context window and more training than an out-of-the-box coding AI that is barely a year old. The competitive edge for ape wetware will persist in a minority of settings for a few years longer, but it will keep shrinking until the efficiency and breadth of knowledge of coding AI replace humans completely.
Anonymous B replied with this 2 years ago, 47 seconds later, 1 day after the original post #1,279,450
@1,279,406 (tteh !MemesToDNA)
Extremely myopic not to see how quickly LLMs have improved in just 5-10 years. It's like thinking the Wright brothers had just created a cool novelty that would never affect transportation because they couldn't stay in the air for more than 3 seconds at a time. It's as if, because LLMs didn't somehow magically appear to us in a fully finalized state, they'll never rival or exceed human capability.
> The code it provided was superficially appealing at first glance but introduced a whole host of new problems, which ChatGPT promptly apologised for and then failed to fix repeatedly before deciding it simply wasn't capable.
This honestly sounds indistinguishable from many people I've worked with. Not all, but many.
Anonymous B replied with this 2 years ago, 10 minutes later, 1 day after the original post #1,279,453
@previous (A)
Depends on what you mean by impossible. Transformers were first published in 2017. ChatGPT is not a stock transformer, but it's remarkably close. The recurring theme in ML is that data and scale are the most important things. At least among the many transformer variants people have come up with, the particulars don't appear to matter all that much. So I think in principle we could've had something like ChatGPT in 2017 or 2018 if enough hardware had been allocated to training massive models earlier.
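For the curious, the core of that 2017 paper really is small. Here's a toy NumPy version of the scaled dot-product attention at its heart (illustrative only; real models add multi-head projections, residual connections, and MLP blocks):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention from 'Attention Is All You Need' (2017)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])     # how much each query attends to each key
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)               # softmax over keys
    return w @ V                                # weighted mix of value vectors

x = np.random.randn(4, 8)                       # 4 tokens, 8-dim embeddings
print(attention(x, x, x).shape)                 # (4, 8): one mixed vector per token
```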
Anonymous A (OP) replied with this 2 years ago, 4 minutes later, 1 day after the original post #1,279,454
@previous (B)
We could have had airplanes before the Wright brothers, but we didn't, so the airline industry didn't develop during that time.
All the applications that use code LLMs, all the progress toward replacing corporate devs with those applications, and all the training have happened only very recently.
In a year, the development and rollout of coding LLMs is going to be unrecognizable.
Anonymous B replied with this 2 years ago, 7 minutes later, 1 day after the original post #1,279,456
@previous (A)
> We could have had airplanes before the Wright brothers, but we didn't, so the airline industry didn't develop during that time.
Yes, it would've been impossible to have commercial airlines before the advent of flight, but there was nothing preventing the development of flight itself before that time, at least at the theoretical level. The physical theory needed to understand flight had been developed something like a century earlier. It's similar with AI, although the gap was years, not a century or more as with flight.
Anonymous A (OP) replied with this 2 years ago, 7 minutes later, 1 day after the original post #1,279,457
@previous (B)
I'm talking about the development pace itself.
We could have had, but didn't have, development during that time. For the past year it's actually been happening, and the growth is too fast for most people to keep track of.
In one more year, the development poured into this tech will have more than doubled. Those already involved will have spent twice as long building their products. Many more organizations will be investing in it, and users will be even more familiar with how to kick off automated development processes.
How are new devs even going to get hired? Who wants to spend the time onboarding someone who takes 100x longer to write the same code? Each new dev has to learn everything for themselves, but when the AI is updated, every copy of it learns at once. In what domain are humans going to come out ahead in the long run?
Unlimited context windows have been tested, and there are services that will have an entire repo maintained by a GPT. AutoGPT can take a goal prompt and continuously iterate on that goal without a human pushing it along.
When you first turn it on, it will create 2 new bugs trying to add one feature, but leave it running for a week and it will quickly run out of errors to solve, because it doesn't need breaks or sleep.
AutoGPT and git-integrated LLMs are even more recent than GPT-4.
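The pattern is simple enough to sketch (hedged pseudocode; `llm` is a stub standing in for a real model call, not any actual API):

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    return "DONE"

def iterate_on_goal(goal: str, max_steps: int = 1000) -> list[str]:
    """Feed the model its own progress until it declares the goal met."""
    log: list[str] = []
    for _ in range(max_steps):
        action = llm(f"Goal: {goal}\nDone so far: {log}\nNext action, or DONE?")
        if action == "DONE":
            break
        log.append(action)  # e.g. "run tests", "fix the failing test", ...
    return log

print(iterate_on_goal("make the test suite pass"))
```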
Anonymous E replied with this 2 years ago, 10 minutes later, 1 day after the original post #1,279,464
@1,279,438 (G)
> Yeah right, I'm going to give you tips on evading detection. Do you think it works that way?
Um... OK. Sorry, do I think what works what way?
Anonymous B replied with this 2 years ago, 5 minutes later, 1 day after the original post #1,279,465
@1,279,462 (A)
Thanks. That's not unlimited, though, is it? I've seen a handful of methods like this to compress context info, but they're not limitless.
It seems like the field is slowly rediscovering LSTMs in one form or another, which is ironic, because transformers were proposed as a replacement for them. Now the pendulum seems to be swinging back the other way.
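The compression methods I've seen mostly look something like this sketch (`summarize` stands in for a model call; this is my assumption, not any specific product's API). The running state is exactly what makes it feel LSTM-ish:

```python
def summarize(text: str, budget: int = 500) -> str:
    """Placeholder: ask a model to compress `text` to roughly `budget` chars."""
    return text[-budget:]

state = ""  # compressed memory, analogous to an RNN/LSTM hidden state
for turn in ["user: hi", "bot: hello", "user: fix my code"]:
    state = summarize(state + "\n" + turn)  # fold each turn into the state
print(state)
```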
Anonymous A (OP) replied with this 2 years ago, 16 minutes later, 1 day after the original post #1,279,466
@previous (B)
The system will switch back and forth depending on its current task. It doesn't always need to know the entire program's context; most of the time it just needs to write a simple function that helps a dev finish a task.
Management has already realized it can cut down to a skeleton crew and still get the same requirements checked off for the client, as long as the remaining engineers are using the new tools.
Some companies are already using automated speech and calling to communicate with clients and develop a project plan.
LLMs can take those tasks and delegate them to devs (or AutoGPT), who then use Copilot to write new features even faster.
An LLM with vision can evaluate what the user sees, without needing human QA to identify bugs in the UI.
Converting AWS or Azure credits into an app described by a customer should let anyone make a call and receive a product within a day.
Anonymous B replied with this 2 years ago, 13 minutes later, 1 day after the original post #1,279,472
@previous (A)
> recurrent neural nets aren't going to be used all the time, it's just an option to fall back on.
I think you're dead wrong on this point. The human brain is highly recurrent and that's probably no accident.
Any RNN can be unrolled into a non-recurrent, but larger, model: every timestep just gets its own copy of the same computation. Think about it for a second; sharing one set of weights across timesteps is why recurrence provides an incredible efficiency gain.
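Concretely, a toy NumPy illustration: the recurrent form reuses one shared step at every timestep, while the unrolled form is the same computation written as a fixed-depth feedforward stack, which in general would need its own weight copy per layer:

```python
import numpy as np

rng = np.random.default_rng(0)
W_h, W_x = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def step(h, x):
    """One RNN timestep: h_t = tanh(h_{t-1} W_h + x_t W_x)."""
    return np.tanh(h @ W_h + x @ W_x)

xs = [rng.normal(size=4) for _ in range(3)]

# recurrent form: the same `step` (same weights) reused at every timestep
h = np.zeros(4)
for x in xs:
    h = step(h, x)

# unrolled form: identical math as a depth-3 feedforward pass
h_unrolled = step(step(step(np.zeros(4), xs[0]), xs[1]), xs[2])
assert np.allclose(h, h_unrolled)
```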
They're a devil to train, but I have no doubt it will be figured out soon enough. LSTMs were just a glimpse of being able to train them semi-efficiently. I think once it is figured out, no one will even consider using feedforward-only transformers.
My bet is that models combining RNNs with a healthy amount of brute-force transformer-style exponentialness w.r.t. context length, plus a wider diversity of optimization functions (auto-generation + RL + maybe adversarial), all rolled together and trained together, will get us to and beyond human levels. I mean a holistic approach, not doing one training task and then hacking on some fine-tuning or RL at the end.
Anonymous B replied with this 2 years ago, 4 minutes later, 1 day after the original post #1,279,478
@previous (A)
Now that the discussion is getting into specifics and you can't continue just blowing hot air, you step out. Maybe stop reading press releases and learn what you're talking about before trying to engage in one of these conversations again 😉
Anonymous B replied with this 2 years ago, 4 minutes later, 1 day after the original post #1,279,482
@1,279,480 (A)
More likely you didn't understand what I said and instead of trying to, you gave up. I guess it's easier to convince yourself you don't need to actually know anything about ML, because anything you don't understand must be GPT and therefore meritless.
Hey wait a minute, I thought you were claiming the opposite about GPT just a moment ago?