Every day there are more big job cuts at tech and games companies. I haven’t seen anything explaining why they all seem to be happening at once like this. Is it coincidence, or is there something driving all the job cuts?
This is true right now. If you know how to use AI tools, it’s not that hard to work 5-10x faster as a programmer than it used to be. You still have to know what you’re doing, but a lot of the grunt work and typing that used to make up the job is now basically gone.
I have no idea, but I can’t possibly imagine that that’s having no impact on resource allocation and hiring / firing decisions.
deleted by creator
Want to have a programming contest where speed is a factor?
I actually looked this up, and the studies seem to agree with you. One says a 55% increase in speed, and another says 126%.
All I can really say is, I’d agree that a single 3-hour task isn’t really representative of the actual overall speedup, and my experience has been that it can be a lot more than that. It can’t replace the human who needs to understand the code, what needs to happen, and what’s going wrong when it’s not working, but depending on what you’re doing it can be a huge augmentation.
What you’re missing is that 95% of programming projects fail, and it’s never because the programmer didn’t code fast enough.
Speed-up isn’t why I have a team instead of being a solo act.
There’s also the pure reality that, yeah, it’s easier today to get a project off the ground than ever before, and AI is good at that, but you know what AI is absolute shit at? Modifying ludicrously cumbersome, undocumented, brutally hacked together legacy code and addressing technical debt - the two most common tasks of most actual software engineers.
True.
Good thing most companies aren’t stuck with ludicrously cumbersome, undocumented, brutally hacked together legacy code bases. /s
I can’t even type that with a straight face.
So true.
Working with GPT actually helped me write better code, because it’s more familiar with good patterns in unfamiliar languages and frameworks and can write idiomatically. It got me out of one-language-centric habits I hadn’t known I had.
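To give a concrete (made-up) example of the kind of one-language-centric habit I mean, here's a C-style loop carried over into Python next to the idiomatic version the model nudged me toward:

```python
nums = [1, 2, 3]

# The habit carried over from index-based languages:
squares = []
for i in range(len(nums)):
    squares.append(nums[i] * nums[i])

# The idiomatic Python the model suggested instead:
squares = [n * n for n in nums]
```

Both produce the same result; the point is that the second reads like the language it's written in.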
Yes, it’s 100% true that the person driving the AI needs to have good design sense of what they want the final system to look like, and still work well with their team. You can fuck it up faster if you can code faster, absolutely that’s true.
What you said, I know that. Do the people that run these companies know that?
Yeah. Exactly. They did the same to various degrees when web frameworks first hit the scene, and numerous other advancements before and after.
But as you said, the new tech genuinely does make us both faster and better.
It just doesn’t fix the crap parts of the job that the CEOs always hope it will. (As someone else pointed out, it specifically doesn’t magic wand away decades of technical debt, haha.)
I would say a 3-hour task isn’t representative in the other direction. When you tackle a 1000-hour task, you’ll probably spend more than 1000 hours working out what the requirements even are. A significant portion of my workday is meetings, not coding.
And with a long-form task, you’ll go back reading existing code much more often than writing new code, too.
I love the speed-up. And I’m sure it factors into CEO and CIO decisions. But they’re on their way to learning, once again, that “code faster” never had anything to do with success or failure in efforts that require programmers.
Source: I sought great power, and I became one of the fastest coders, but it didn’t make my problems or my boss’s problems go away.
Can you elaborate on this part? What’s your idea of proper usage?
So maybe I don’t know what I’m talking about. I will only share what I have experienced from using them. In particular I haven’t messed with Copilot very much after the upgrade to GPT-4, so maybe it’s a lot more capable now.
In my experience, Copilot does a pretty poor job at anything except writing short blocks of new code where the purpose is pretty obvious from context. That’s, honestly, not that helpful in a lot of scenarios, and it makes the flow of generating code needlessly awkward. And at least when I was messing with it there didn’t seem to be a way to explicitly hint to it “I need you to look at this interface and these other headers in order to write this code in the right way.” And, most crucially, it’s awkward to use it to modify or refactor existing blocks of code. It can do small easy stuff for you a little faster, but it doesn’t help with the big stuff or with modifying existing code, which is most of your work day.
To me, the most effective way to work with AI tools was to copy and paste back and forth from GPT-4 – give it exactly the headers it needs to look at, give it existing blocks of code and tell it to modify them, or have it generate blocks of boilerplate to certain specifications (“make tests for this code, make sure to test A/B/C types of situations”). Then it can do 20-30 minutes’ worth of work in a couple of minutes. And critically you get to hold onto your mental stamina; you don’t have to dive into deep focus in order to go through a big block of code looking for things that use the old semantics and convert them to the new ones. You can save your juice for big design decisions or task prioritization and let it do the grunt work. It’s like power tools.
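To illustrate the boilerplate-tests part, here's a toy example of what I mean. `slugify` below is a stand-in for “this code” in the prompt, and the test block is the kind of output I'd expect back from “make tests for this code, make sure to test A/B/C types of situations”:

```python
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# The kind of boilerplate the model can churn out in seconds:
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_already_lower():
    assert slugify("already lower") == "already-lower"

def test_extra_whitespace():
    assert slugify("  spaced   out ") == "spaced-out"
```

None of it is hard to write by hand; the win is that you don't spend focus on it.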
Again, this is simply my experience – I’ll admit that maybe there are better workflows that I’m just not familiar with. But to me it seemed like after the GPT-4 transition was when it actually became capable of absorbing relatively huge amounts of code and making new code to match with them, or making modifications of a pretty high level of complexity in a fraction of the time that a human needs to spend to do it.
I wonder if it might be the specific type of work that you do that allows for this. I don’t pay for ChatGPT, so I wouldn’t know the quality of the code it outputs with GPT-4, but I personally wouldn’t blindly trust any code that comes out of it regardless, meaning I’d have to read through and understand all the generated code (do you save time by skipping this part, maybe?), and reading code always takes longer and is overall more difficult than writing it. On top of that, the actual coding part only accounts for a small fraction of the work I do. So much of it is spent deciding what to code in order to reach a certain end goal, and a good chunk of the coding (in my case at least) is for things that are much easier to describe with code than with words. So I’m still finding it hard to imagine how you could possibly get anything more than a 1.5x output improvement.
The main time savings I’ve found with generative AI is in writing boilerplate code, documentation, or writing code for a domain that I’m intimately familiar with since those are very easy to skim over and immediately know if the output is good or not.
I actually got curious about it specifically because of this thread, and earlier today did a little experimentation with Copilot’s Cmd-I feature as compared with copying and pasting to GPT. I’m actually pretty convinced now that the issue is that Copilot is using a cheaper model for reasons of computational cost. Given the exact same task I was giving to GPT, Copilot struggled to create code that could even compile, even after multiple rounds of me trying to help it, whereas GPT-4 was able to just give output that worked.
I think the assumption that it’s being set up under is that people will be doing a ton of queries throughout the work day, more so than the average GPT-4 user will type into the chat interface, and so they can’t realistically do all that computation on people’s behalf for $20/month.
(Edit: And this page makes some statements about “priority access” to GPT-4, indicating that they’re throttling access to the more capable models depending on demand.)
In practice, the majority of the time I’m carefully looking over diffs anyway before committing anything, since as you mentioned the vast majority of work time is spent modifying existing code. So the times it messes up aren’t a really serious issue. But again I think (after some pretty minimal experimentation today) that the real issue you’re seeing is just that GPT-4 is way more capable at this stuff than GPT-3.5 / Copilot.
But this is guessing based on some pretty minimal experimentation with it. I sounded real confident in my initial statement but now that I’m looking at it maybe that’s not warranted.
Do you work in a technical role? I’ve dabbled in using AI to help out when working on projects, but I would say it’s hit or miss on actually helping, as in sometimes it helps me move a bit faster and sometimes it slows me down.
However, that’s just for the raw “let’s write some code part of the work”. Anything beyond that in my roles and responsibilities doesn’t really intersect with what AI can currently do, so I’m not sure where I would get a 5-10x speed-up from.
Honestly I’m not sure if I’m taking a wrong approach or if everyone else is blowing things out of proportion.