I quit my FAANG job because it'll be automated by the end of 2025

Jasper Gilley
March 2025

Until this February, I had gainful employment at [redacted FAANG co] doing machine learning engineering for fine-tuning LLMs on language translation tasks. It was a great gig, and I enjoyed the work and my coworkers. However, taking a medium-term look at the market dynamics surrounding my employment prompted me to quit a few weeks ago. I'm now convinced that my former job there will be obsolete by the end of the year.

Engineering jobs, ca. early 2025

For context: I started my career as a backend engineer at a late-stage startup before transitioning into ML work, first at a few early-stage AI startups and finally at FAANGco. It's wild how much the industry has already changed since I started out less than five years ago, in 2021. My first job consisted of taking on atomic tasks in the form of Jira tickets that had been assigned points for difficulty and progressively knocking them out, sometimes via pair programming with another engineer (gasp!). If this work model isn't obsolete yet, it certainly feels like a relic of a dying age. Frontier companies' cost per line of code must be approaching a tenth of what it was back then. Most of the microservices repos at my first company could now probably be built in a day by one person with the aid of an advanced coding agent like Claude Code. It's not hard to imagine why demand for junior devs has all but disappeared when most backend work at these sorts of companies now consists of debugging thorny edge cases that only a senior+ engineer is likely to be able to tackle.

ML work is different insofar as it involves hardware that is nontrivially constrained in cost, availability, and execution time, even at hyperscalers. For a while, it seemed to me like the high cost of botched experiments would be a saving grace against automation in this field. AI slop code might be fine when it comes to the frontend of your Next.js side project, but when it makes false assumptions that cause silent errors in your training runs, you've got a problem.

This might continue to be the case for the foreseeable future to some extent, but there are two catches, from the perspective of an ML engineer who would like to keep their job:

  1. Just as the cost per line of code has fallen by an order of magnitude already, the cost per hour of debugging is currently in freefall with the aid of coding agents that comb through your code much faster than you can, and
  2. Is a job that consists of you being the context management glue on either end of an AI system cum debugger-in-chief for all the gnarliest problems that frontier agents can't quite solve yet really one worth having?

A little while ago, I picked up on a pervasive sense in the SF tech scene that as AI coding gets better, the place you want to be is in a rest-and-vest job at a publicly traded FAANG company that will let you slowly automate your own job while you collect RSUs.

Sorry, the Efficient Market Hypothesis is more true than that.

  1. Managerial expectations can increase faster than AI coding agents can get better
  2. This means that engineering jobs will increasingly be rate-limited not by code-writing but by infrastructure management, documentation writing/AI context management, and testing. And, most frustratingly of all, talking to non-technical people
  3. A lot more people are capable of doing at least a semi-decent job of the "paste spec → plug into coding agent → test" loop than of the traditional loop that also included writing all the code (a minimal sketch of that newer loop follows this list)
  4. Tech cos aren't that greedy about money on short timescales, but they are greedy on medium timescales. They don't want to do layoffs during good times for reputational reasons but they'll definitely do them during medium-to-bad times when there's lots of money to be saved.
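
To make that third point concrete, here's a minimal, hypothetical sketch of what that loop can look like. It isn't tied to any particular product: `run_coding_agent` is a placeholder for whatever agent you use (Claude Code or otherwise), and the test step simply shells out to pytest.

```python
# A minimal sketch of the "paste spec -> plug into coding agent -> test" loop.
# run_coding_agent is a hypothetical placeholder, not a real API.

import subprocess

MAX_ATTEMPTS = 3


def run_coding_agent(spec: str, feedback: str = "") -> None:
    """Hypothetical stand-in: hand the spec (plus any test feedback) to a
    coding agent and let it edit the working directory."""
    raise NotImplementedError("wire this up to your agent of choice")


def tests_pass() -> tuple[bool, str]:
    """Run the project's test suite and return (passed, combined output)."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def spec_to_working_code(spec: str) -> bool:
    feedback = ""
    for _attempt in range(MAX_ATTEMPTS):
        run_coding_agent(spec, feedback)   # "plug into coding agent"
        passed, feedback = tests_pass()    # "test"
        if passed:
            return True                    # ship it
    return False                           # escalate to a human
```

The point isn't the code itself, it's that the scarce skill in this loop is writing a good spec and a good test, not typing the implementation.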

This is the state of engineering now. What does engineering look like by the end of 2025 or 2026?

Right now, most of the engineering job you don't spend in the IDE consists of writing and clarifying specs, disambiguating them with stakeholders, and checking whether things work as intended. These tasks are mostly beyond current models, less because they're fundamentally outside the scope of AI cognition and more because they involve navigating internal software, referencing documents, and devising and running reasonable tests of whether a project actually serves its intended purpose.

You don't have to extrapolate the capabilities of current computer-use agents very far at all to imagine them being able to autonomously do this sort of fairly menial context-gathering. As someone with a background in GUI agents (I worked on a startup in the field and consulted for another leading startup that you've definitely heard of), I'd be surprised if we don't have this capability by the end of 2025. At this point, it's about keeping relevant information in context, disregarding some of it when necessary, and knowing where to look to load it back. None of this seems prohibitive for a slightly more advanced version of current agent systems. This isn't even taking into account non-GUI-based tools for providing context to AI agents, like Anthropic's MCP, which could well further speed adoption and accuracy at companies that are on the frontier of these things.
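
Here's a toy illustration of that pattern: keep what's relevant under a budget, drop the rest, and remember where to reload it from. Everything below is illustrative; it isn't the API of MCP or of any particular agent framework.

```python
# A toy sketch of the context-management pattern described above.
# All names are illustrative, not any real framework's API.

from collections import OrderedDict


class ContextManager:
    def __init__(self, budget_chars: int, load_fn):
        self.budget = budget_chars   # crude stand-in for a token budget
        self.load_fn = load_fn       # knows "where to look": wiki, repo, tickets...
        self.items = OrderedDict()   # key -> text, ordered by recency

    def add(self, key: str, text: str) -> None:
        self.items[key] = text
        self.items.move_to_end(key)
        self._evict()                # disregard stale context when over budget

    def get(self, key: str) -> str:
        if key not in self.items:    # not in context? load it back from source
            self.add(key, self.load_fn(key))
        self.items.move_to_end(key)
        return self.items[key]

    def _evict(self) -> None:
        # Drop the least recently used items until we're back under budget.
        while sum(len(t) for t in self.items.values()) > self.budget and len(self.items) > 1:
            self.items.popitem(last=False)

    def render(self) -> str:
        """What actually gets pasted into the agent's prompt."""
        return "\n\n".join(f"## {k}\n{v}" for k, v in self.items.items())
```

In practice the budget would be measured in tokens and the loader would hit internal docs, a repo, or a ticketing system rather than a dictionary, but the shape of the problem is the same.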

As a sidenote, one of the key distinctions for agentic systems is whether you're telling them to use first-party or third-party software. Tasks like "order me a sandwich on the UberEats website" are fairly likely to get blocked by CAPTCHAs or other bot-detection systems if you do them too often (I promise you, you can't fool profiling algorithms at scale unless you're a nation-state actor). You yourself, however, can make sure that your bots don't get blocked from accessing internal documentation via approved channels. As a result, GUI agents might see much faster adoption for internal tasks than external ones.

All in all, I think the evidence points to a scenario where by the end of this year or next, engineering and other pure knowledge work IC jobs can be mostly done by AI agents, with some fraction of the people who formerly held those IC jobs acting as de facto product managers cum infrastructure janitors for agent swarms.

On one hand, this is great! We'll have essentially infinite working software at a price of ≈$0/line of code. The actual job of being an AI agent product manager cum infrastructure janitor, however, sounds awfully boring to me personally. That's why I quit my FAANG job a few weeks ago.

What should humans do in the era of infinite machine intelligence?

Contrary to what some tech people seem to think, there are factors of production other than intelligence. There may even be factors of production that are intrinsically tied to being human, beyond the raw intelligence produced by some people's brains. For the time being, those non-intelligence factors should retain economic value.

If you have a knowledge work IC job and are not interested in being an AI agent wrangler, where you go from here is obviously closely tied to your personal attributes and skills. To help get a clearer picture of what those are, I suggest talking to a language model. Specifically, I'd brainstorm jobs such that even if they could be done very well by AI systems, people wouldn't want to use AI systems to do them.

You can always stay on as an AI wrangler, of course. But for me, part of the appeal of getting into a profession like engineering was the intellectual challenge of using intelligence to solve problems, and it seems to be specifically the intelligence part of the job that's most directly in the line of fire for automation. Humans aren't defined by their intelligence, of course, but they're certainly not defined by their unique capacity to be infrastructure janitors for AI agents either.

Personally, I think some types of sales jobs fall under the category of "even if they could be done by AI agents, nobody would want them to be." It's broadly in the rational economic interest of both buyer and seller (social proof against scamming for the buyer; price discrimination for the seller) that there be a human in the loop at this phase of the deal lifecycle. So the next step in my career is going to be talking to people about great AI products that are being built and helping them figure out if they'd find these products useful. If you're building a great product and would like somebody to tell people about it, please don't hesitate to reach out (my email is my name, Jasper Gilley, at gmail dot com, all lowercase)!

