AI’s Threat To Software Developers

Disclosure: The company I work for is very heavily invested in generative AI, specifically developer tools. As a result, I have benefited financially from the recent popularity of AI, and I’ve been involved in building software products that incorporate modern generative AI. These are my opinions, but my closeness to AI has surely influenced them.

I’ve been seeing a lot of dread and fear from my peers about generative AI, and worry that it’s a threat to our industry and our livelihoods.

If this is a thing that keeps you up at night, let me give you a few salient points that might help put your mind at ease:

  • Have you ever actually tried leaning on generative AI to do anything nontrivial for programming? It largely can’t, and it certainly isn’t going to help you with big refactorings on large codebases because the context window would need to be way too large.
  • Have you tried troubleshooting something complicated by relying only on the AI? AI can’t troubleshoot because it can’t interact with the system it’s allegedly helping you write. It can’t get far past platitudes. They’re somewhat personalized platitudes, sure, but writing software is a discipline where you get backed into super specific corners.
  • Have you tried using an LLM-based tool with your large codebase to ask questions about the codebase itself? It gets hit-and-miss really fast.
  • Have you tried asking an LLM to do anything remotely creative? It falls flat.
  • Have you ever asked an LLM for a code sample on how to accomplish something, only to find that it generates code referencing APIs that don’t actually exist but sound plausible? That’s not really a bug; it’s core to how LLMs are designed: they have no notion of whether an API exists. (There’s a sketch of this failure mode right after this list.)
  • Have you tried leaning on AI tools to make your writing great? At best you get text with this milquetoast voice and sameness to it. It’s forgettable and completely lacking in character.
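
To make that hallucinated-API point concrete, here is a small sketch of what the failure mode looks like in practice. The endpoint URL and the fetch_json() call are invented for illustration; only the plain requests calls in the corrected version are real.

    import requests

    # What an LLM might confidently hand you. It reads fine, but the
    # requests library has no fetch_json() function; it is a plausible
    # invention, which is exactly the failure mode described above.
    #
    #     data = requests.fetch_json("https://api.example.com/users")
    #
    # The working equivalent sticks to calls that actually exist:
    resp = requests.get("https://api.example.com/users", timeout=10)
    resp.raise_for_status()  # surface HTTP errors instead of carrying on
    data = resp.json()
    print(data)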

LLMs as we know them today aren’t capable of logical reasoning (at best, the next generation of models will put on a convincing imitation of reasoning). They’re incapable of interacting with the outside world as part of their operation. All they do is receive input and generate statistically plausible-sounding output using that input plus their training data.
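
Here is a toy sketch of that loop with made-up numbers, just to show why plausibility and truth come apart: the sampling step weighs candidate tokens by probability and never asks whether the chosen one refers to anything real.

    import random

    # Toy illustration, not a real model: the core loop of an LLM is
    # "given the text so far, pick a statistically likely next token."
    # The candidate tokens and probabilities below are invented for this example.
    prompt = "To read the file, call the method named"
    next_token_probs = {
        "read_text": 0.5,      # plausible and real (pathlib.Path.read_text)
        "read_file": 0.3,      # plausible; exists in some libraries
        "load_contents": 0.2,  # plausible-sounding; may not exist at all
    }

    # The choice is weighted by probability alone; nothing in this step
    # checks whether the selected name refers to an API that exists.
    token = random.choices(
        list(next_token_probs), weights=list(next_token_probs.values())
    )[0]
    print(prompt, token)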

And these deficiencies of LLMs aren’t mere kinks that can be ironed out. Even if you added several orders of magnitude more tokens to the prompt and could feed it your entire company’s code plus the code for all of your dependencies, the LLM would still fall short, because all that extra context doesn’t help it understand what it’s helping you write. It doesn’t understand anything in the first place.

To build an AI that can make software would require building a fundamentally different set of tools altogether.

When ChatGPT came onto the scene a couple of years ago it felt like it came out of left field, but ChatGPT is actually the result of decades of incremental research. It feels like it has evolved fast in the last two years, but that’s with billions of dollars being poured into improving it. At the end of the day, ChatGPT and its peers are all still fundamentally LLMs, and (say it with me) all LLMs do is receive input and use that input with their training data to generate statistically plausible output.

Now, that’s not useless! LLMs can be really handy while programming. They can be great at helping you start a project. You can ask them high-level questions to help you learn new things (assuming you’re asking about something they have a lot of training data for). They can often cut through the toil of certain tasks, like writing tests.
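
As a rough illustration of that test-writing kind of toil, here is the sort of boilerplate an LLM is genuinely decent at drafting. The slugify() helper and its expected behavior are hypothetical, defined inline just to keep the example self-contained and runnable.

    import re

    import pytest

    def slugify(text: str) -> str:
        """Hypothetical helper, included only so the tests below can run."""
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

    # The kind of repetitive, well-specified test table an LLM can draft
    # quickly; a human still has to decide whether these are the right cases.
    @pytest.mark.parametrize(
        ("raw", "expected"),
        [
            ("Hello World", "hello-world"),
            ("  Trim me  ", "trim-me"),
            ("Already-Slugged!", "already-slugged"),
        ],
    )
    def test_slugify(raw, expected):
        assert slugify(raw) == expected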

But the moment you try to really lean hard on the LLM to help you with something (programming or not), you realize its “knowledge” is ultimately very shallow, and it gets really hand-wavy as soon as you ask about anything nontrivial.

I understand why LLMs feel scary. They make for remarkable demos. We’ve been conditioned for decades to think of computers as really rigid and picky about how you communicate with them, and ChatGPT suddenly comes along and breaks that assumption with uncannily human-seeming responses. But the corollary is that we also assume ChatGPT’s output is reliable and consistent the way a computer’s is, and it’s not; its output is just as loosey-goosey as the input we get to give it. It looks plausible, and in many cases it’s factually correct, but it’s never correct because it knew something; the math just happened to work out in its favor.

You can’t do knowledge work without knowledge.

I’m not suggesting we be complacent. Complacent industries get disrupted all the time. But so much of the fear I see from engineers about generative AI is based on the assumption that it’s capable of things it can’t actually do. If you look past the hype and try to really understand what today’s AI can and can’t do, you’ll realize that AI is about as capable of stealing your engineering job as Alexa or Siri.

A much bigger worry for me (and something we saw happening in ’22–23) is companies getting into a panic about their spending and deciding to lay off engineers, usually with the claim that they’re overstaffed. The companies probably aren’t actually overstaffed, but when you fire people you see those cash savings instantly, while the effects of cutting a chunk of your team don’t show up until months or years down the road, so leadership never really learns anything.

Software engineering is one of the few roles that has enjoyed decent leverage in negotiating compensation over the last 20 years, and companies resent engineers for it.

Companies can lay off their engineers whether or not LLMs can fill the gap (and they do lay them off), but reality eventually catches up with them.

One response to “AI’s Threat To Software Developers”

  1. Incredibly profound and relatable. As a current Computer Science undergrad at Georgia Tech, it’s insane to see the number of people around me using these LLMs as substitutes instead of tools. In group settings it becomes quite evident which members use their LLMs as crutches rather than as prosthetics. It’s interesting because these same peers are the ones stuffing their resumés with “experiences” and “knowledge of application” of tools and frameworks that they haven’t truly implemented, but instead replicated from their GPT substitutes. Initially, like most of us, I was guilty of using these tools myself to take advantage of a system still unfamiliar with the realm of “AI.”

     I am terrified at the thought of the next generation, currently in middle/high school, choosing to reject the pursuit of knowledge, learning, and application in favor of using these LLMs to do their assignments for an easy A. After all, without a proper understanding of these LLMs, schools lack the tools to accurately measure individual knowledge independent of contamination from artificial sources. Even when these students make it to college, it becomes harder to lean on these LLMs: it takes longer to craft the right prompt, and the shortcut edges toward obsolete as higher education demands deeper knowledge. Soon they’ll find themselves in a limbo, unable to progress any further as individuals, because they’ve been living on a false foundation of imitation skills.

     Even as these LLMs become stronger, more accurate, dare I say even more “intelligent,” I don’t believe they can ever become replacements, because, as you said, ask an LLM to “do anything remotely creative” and it “falls flat.” I’m still unsure of what I’m even trying to convey, but in all honesty, I’m scared. I’m not scared that AI is encroaching on our ability to get jobs (we already know it fails to accurately represent large codebases); I’m scared that basic entry-level programming can be substantially replaced by pre-programmed, specialized LLMs designed to automate simple tasks. Not only does this significantly dent the job market, it also creates a job environment where positions are only available to the best of the best, the people at the highest skill level. I’m not afraid of work, but I’m afraid of the negative impact this has on the chance to learn and apply skills in a real-world environment. How can a CS student like me be expected to find an entry-level internship in something like Python automation or organization, when those basic skills are already being replicated almost perfectly by LLMs because of their simplicity?

     I fear a bottleneck effect where students can make it far enough to get their degrees but are limited in their ability to apply these skills, leading to a butterfly effect where the failure to build those skills degrades their performance once they reach a higher position. And even if some entry-level positions aren’t outright replaced, their basic functions could be, causing job postings to demand ever-increasing “basic” requirements. It begs the question: will pursuing a CS degree even be worth it? I think so, but only with intense dedication, appreciation, and passion.

     Yes, it may filter out the ones with tunnel vision for income and flexibility; it may filter out those who are lazy or unmotivated; it may even (and unfortunately) filter out those whose parents forced them to pursue a career in CS because “it is the future!” Again, don’t ask me what I’m trying to convey, because even I can’t organize all my thoughts and feelings with my ADHD brain, but I think there is a responsibility that both companies and students should be cognizant of: neither should be overly reliant on LLMs; use them as tools rather than crutches. Students should pursue independent knowledge that they can accurately and adeptly apply, so that companies will want them. Otherwise, everyone gets screwed.
