If you’re a knowledge worker, like:

  • Programmer
  • Writer
  • Designer
  • Artist

you’ve seen AI get exponentially better over the past few years.

Will AI Take My Job?

Yes and no. GPT recently passed Google’s L3 interview, a test I probably couldn’t have passed myself. Though that’s more a reflection of how bad the interview process is than of how good the AI is. Anybody can write code and solve issues with StackOverflow. Not many can juggle office politics and come up with solutions that span everything from high-level strategy down to the minute details.

AI can neither lead nor architect.

AI can’t even debug, unless the code is very basic. I use ChatGPT + GPT-3 to develop, and oftentimes I find a solution much quicker on StackOverflow than by telling them to code it up for me.

Conversely, ChatGPT + GPT-3 are very fast at solving known issues and boilerplate, which is extremely helpful. Interview problems are hard, but it’s very easy for an AI to memorize all the solutions and the right way to explain them, because the corpus is all over LeetCode and StackOverflow.

AI is as good as the test-taker who memorized all the solutions and regurgitated them on the final exam, but it has no fundamental understanding of what’s going on.

So yes, AI will take your job if your skills are basic enough that you can be destroyed in a contest of rote memorization and regurgitation of known answers.

And no, your job won’t be taken if you constantly have to come up with new and novel solutions, or deal with extremely complicated things.

Complicated things are those where explaining the problem to an AI in a prompt takes much longer than solving it manually. Consider a codebase with 20k+ lines of code, where you know all the intricacies of the code and the spec.

Now let’s say there’s a complicated bug. It may take you two hours to fix. But “ramping up” an AI to fix it is extremely difficult, since your prompt would need to be a brain dump of everything you know.

One day you may be able to dump the entire codebase into the AI, but it’s unlikely to know all the corner cases and use cases of your code. Example:

  1. I wanted to develop a rich text editor
  2. There is a ton of JavaScript to write
  3. I ask ChatGPT to write the JavaScript
  4. It gets the ‘order’ of things wrong and just doesn’t consider basic things:
    a. It might clear a listener in one function that was just instantiated in another function (see the first sketch after this list).
    b. If I tell it to ‘replace the highlighted text,’ it writes code that adds text somewhere well past where the user highlighted. And after much prompting, it still doesn’t understand what I’m asking for (the second sketch below shows how little code the correct version takes).
    c. It’ll create CSS classes that overwrite Bootstrap/Bulma/built-in classes, despite the fact that I tell it explicitly to avoid creating those classes.
    d. It doesn’t consider corner cases, and you’ll have to prompt every single corner case yourself; this takes a long time, and it’s still unlikely to get it right in the end. If you ever find yourself ‘fighting with the AI,’ just go to StackOverflow.
    e. It doesn’t really consider the consequences of code it provided earlier in the chat. If it gave you a piece of code two messages ago and now gives you another piece to complement it, the two are very likely incompatible, and you have to tell it why before it can correctly combine everything.
    f. For coding, it can’t accurately track what’s going on once the conversation gets long. The best solutions are the ones where it gives you the HTML/CSS/JS in one response; if the answer has to span multiple responses, it gets less accurate. A shame, because longer responses always run into ChatGPT’s “Error Generating A Response.” The model is limited in the number of tokens it can work with: self-attention compares every token against every other token, so reading 1,000 tokens (roughly 750 words) and outputting another 1,000 already costs on the order of millions of operations per layer. This scales poorly, so if you toss in a large codebase where the input alone is ~100K tokens, you’re looking at roughly ten billion operations per layer, and it’s never going to give you a great solution (back-of-the-envelope numbers after this list).
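
To make (a) concrete, here’s a minimal sketch of the mismatched-listener bug, with hypothetical element and function names rather than anything from my actual editor:

    // The bug in (a): the listener being removed is not the one that was added.
    const editor = document.querySelector('#editor');
    const updateToolbar = () => { /* refresh bold/italic button state */ };

    function attachEditorEvents() {
      // Registers an anonymous handler...
      editor.addEventListener('input', () => updateToolbar());
    }

    function detachEditorEvents() {
      // ...but this is a silent no-op: each arrow function is a new object,
      // never reference-equal to the one added above, so the original
      // listener stays attached and can double-fire or leak.
      editor.removeEventListener('input', () => updateToolbar());
    }

    // The fix: add and remove a reference to the exact same handler.
    editor.addEventListener('input', updateToolbar);
    editor.removeEventListener('input', updateToolbar);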
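
For (b), the correct version takes only a few lines with the browser’s built-in Selection and Range APIs, which is what makes the add-text-somewhere-else answers so frustrating:

    // Replace whatever the user has highlighted with `replacement`.
    function replaceHighlightedText(replacement) {
      const selection = window.getSelection();
      if (!selection || selection.rangeCount === 0) return; // nothing highlighted
      const range = selection.getRangeAt(0);
      range.deleteContents();                                 // remove the highlighted text
      range.insertNode(document.createTextNode(replacement)); // insert in its place
      selection.removeAllRanges();                            // clear the selection
    }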
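
And the back-of-the-envelope math behind (f), treating attention as a plain n² comparison per layer and ignoring heads, layers, and hidden size:

    // Rough attention cost: every token attends to every other token.
    const attentionOps = (tokens) => tokens * tokens;

    attentionOps(2000);   // 4,000,000 ops: 1K tokens in + 1K tokens out
    attentionOps(100000); // 10,000,000,000 ops: a 100K-token codebase as input alone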

High-Level People Are Safe, For Now

To my earlier point in 4f, you could instead train a model on your codebase, which could take months. But an AI’s learning rate on your codebase is probably still much better than the speed at which a human could read and understand everything that’s going on.

I suspect that proper prompt engineering, feeding an AI a detailed spec plus the corresponding code, could make it an expert in your codebase, and thereby do your job even if you’re a deep domain expert (a rough sketch of the idea is below). I haven’t seen this in action, though, so I can’t say much about it beyond believing it’s possible.
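
Purely as a hypothetical sketch of that idea (nothing here is a real library; llm stands in for whatever completion API you use): chunk the spec and code, keep the pieces relevant to the question, and pack them into the prompt.

    // Hypothetical: retrieval-style prompting over a codebase.
    async function askAboutCodebase(llm, files, question) {
      // Naive relevance filter: keep files that mention words from the question.
      const keywords = question.toLowerCase().split(/\W+/).filter(Boolean);
      const relevant = files.filter(({ path, source }) =>
        keywords.some((k) => (path + '\n' + source).toLowerCase().includes(k))
      );

      // Crude budget so the prompt stays under the model's token limit
      // (roughly 4 characters per token, so 12,000 chars is about 3K tokens).
      const context = relevant
        .map(({ path, source }) => `// ${path}\n${source}`)
        .join('\n\n')
        .slice(0, 12000);

      return llm(`Relevant code:\n${context}\n\nQuestion: ${question}`);
    }

The relevance filter is the whole game here, and keyword matching won’t capture the corner cases and spec knowledge from point 4, which is exactly the gap.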

But even if it can code and debug large and complicated codebases, the idea for a new product will still come from human beings.

After all, AI only trains on available data, and new, novel ideas, by definition, aren’t available yet.

So we’re safe, for now—until AI figures out how to come up with ideas and essentially becomes sentient.