Tech Workshop ... AI and coding; a display of a cognitive abyss

Recently we’ve all been witnessing more and more pieces of software and, for fuck’s sake, even hardware fall into the trap of AI code. From relatively mild things like allowing AI-assisted contributions (which isn’t exactly reassuring and we’ll get into that) to full-on embracing the “hollow text generator” machine. I’m not going to go into the ethical side of things in this write-up because that’d take an entire book which would take me multiple lifetimes to finish. What I’d like to dive into here is how delving into the AI toolchains revealed a MASSIVE cognitive canyon in tech developers’ minds, both technical and social. Long story short: Techies are idiots. Yes, we are, and it’s our damn responsibility to do something about it. And no, the solution doesn’t lie in the tech field. It very much lies in the field you oh so arrogantly shove away as “useless” (been there myself and luckily didn’t allow my braincells to fry). You want the long story long? Well, find a cosy corner and let’s dive in.

As you’ve probably caught from the intro, I don’t like “AI”. I personally refuse to call present day’s “bullshit generator” AI because that stuff isn’t even remotely intelligent. Tbh, it’s barely knowledgeable. AI-generated imagery makes me roll my eyes, AI-generated memes make me want to pull the spine out of the “creator’s” arsehole (yeah, nice job, self-titled “liberator”/”democracy defender”) and whenever someone mentions the words “AI summary”, I have the utmost urge to summarily grind their face into a paste with the heaviest book I can find. Being a creative person in my hobby time, it makes my blood boil to even use the term “content creator”. But I’m not here to talk about this. I’m here to talk about my “work” field, or at least part of it: coding and the involvement of AI in it. And whoo boy do we need to talk, because the level of flippant shit-flinging in this field is like watching a RW-nutjobs meeting “for better XYZ”.

AI-assisted coding != Vibe-coding

This. This right here is something everyone in this discussion should drill into their skull. These two, while related, are very distinct things. How? AI-assisted coding means you’re using an AI tool to help you put the pieces of code together, with you, the programmer, providing the competence to the tool. You’re essentially turning the AI tool of your choice into a second pair of hands; a really dumb and massively energy-inefficient pair of hands which you have to constantly watch and guide so it doesn’t do anything stupid. And to make matters worse, it won’t even learn anything because guess what, it doesn’t know how to learn (but guess who would actually learn something *looks at many undergrads/juniors*). No, pouring a bucket of knowledge into a machine isn’t learning. It needs to understand why and when. And no, statistics aren’t the solution.

Vibe-coding (I’m seriously going to find whoever coined the term and fix their vibes by cracking their nuts, if they have any), on the other hand, is prompting the AI tool until it produces something that looks like what you want (spoiler alert, it’s absolutely nothing like what you want) without a sliver of an idea what the fuck you’re doing. Like, in the case of the former there’s at least a somewhat competent person using an incompetent tool. In this case … You know the so-many-times-repeated story of “if you lock a crowd of monkeys in a room and let them randomly type, one of them will eventually type out a Shakespeare play”, right? Well, congratulations. You’re exactly that; a monkey randomly typing. Except you have no idea what a typewriter is, or who Shakespeare is, or why you’re even in that room in the first place.

Now, why am I making this distinction? Because people tend to conflate the two. This results in a rather flippant and reactionary (sounds familiar?) environment surrounding the field where, instead of trying to find some way out of this mess, people are too busy throwing shit at one another (sounds familiar?). Should you steer clear of vibe-coded projects? Absolutely, because the person behind them has no idea what they’re doing. Should you be wary of a project which allows AI-assisted contributions? Absolutely, because doing so puts a HUGE competence requirement on the maintainers of the project. This jump is so massive that the competence needed will very likely exceed the capabilities of the maintainers, which can either result in the team spiralling further down into the vibe-coding hole (then it’s time to jump ship) or the team seriously re-evaluating the AI strategy and making sure there are strict guardrails and competence development going along with the tool assessment (I know this sounds corporate as shit but if you adopt corporate tools, you’re going to need corporate-level procedures). And the reason is …

AI is a power amplifier

It’s been a while since I stumbled upon some articles discussing the effects of AI on “productivity”, and two thoughts from them stuck in my mind: “The workers who use AI burn out way quicker.” and the other being the name of this section, “AI is a power amplifier.” And if you think about it, AI is indeed exactly that. It doesn’t make your work faster, it makes it more intense. And this amplification applies to everything; there’s suddenly more to do, more to keep an eye on, more to evaluate. More, more, more. But the human mind can’t handle more. Not without costs in other areas. Which then leads to, you guessed it, burnout.

The increase in cognitive requirement is absolutely gigantic and will easily exceed one’s cognitive abilities. In terms of code, this increase means you have to put more effort into making sure the code you’re working with is competently made, because you need to account for both your margin for error and the margin of error of the AI tool. On the reviewer side, you need to put more effort into the actual review, because there’s an increased chance of dealing with code that was made not by someone who merely has some holes in their knowledge (at which point it’s up to you to be their mentor for a bit) but by someone who probably has no idea what they did, and you have to teach them or at least lead them to the relevant knowledge. Notice how I’m not saying to keep them away from the AI toolchain, and you’re probably wondering why that’s not a solution. The answer to that is simple:

Sloppy code is sloppy code. Period.

So, let’s consider a situation like this. A maintainer of a project rejects a contribution because they don’t accept anything AI-assisted, but they extend the “grace” of having the code redone by the contributor without using any AI tools. The contributor comes back after some time, providing human-written code, only to have it rejected again because it doesn’t meet the required standards.

What’s the problem? The contributor learnt nothing. Partly because they don’t know any better (the previous AI venture didn’t provide them with the knowledge required to handle the task properly) AND they weren’t provided any guidance. The latter, however, is the fault of the maintainer for not giving the contributor any direction. I’m not saying you’re supposed to mentor the contributor all the time. But if the will to learn is there, giving them at least some guidelines will do a lot of good in the long run. Just to give an idea, this is what I learnt years ago from my coach:

Shunning the person away without giving them directions hurts both sides. It hurts the project because it gives off a hostile environment, and it hurts the contributor because the sloppy coder will stay a sloppy coder.

Correct code is correct code, no matter how it’s made

Let’s take a look at something different. Take a piece of code that’s integrated into a project. The project then passes all validation with no issues and, to top it off, it even satisfies any formal verification rules. Now, after you have all the results, you find out the code was AI-assisted. And yes, AI-assisted, NOT vibe-coded. Should the code be rejected? Think about the chain of events: the piece of code went through a review WITHOUT being recognised or declared as AI-assisted, it was seamlessly integrated into the project, which then passed all validation and verification criteria, including formal verification. There are three layers of proof speaking against rejection. Would rejecting the code be the correct approach?

One valid argument you could make is that AI-assisted code is recognisable. I’d disagree, on the grounds of the mechanism which “powers” programming languages: formal languages. Programming languages, unlike natural languages, have a much more rigid structure when it comes to syntax and semantics. There’s no hidden or secondary meaning that can be derived from context. And guess what operates well within a rigid structure of rules? Algorithms. Even algorithms which are heavily influenced by statistics, like those used in machine learning. Sure, you can reach a different result when it comes to optimising the solution for a given task, but if that rule set is properly specified to the AI tool, with this guidance it’ll very likely reach the same solution a competent and experienced programmer would. Remember the “power amplifier” thing? In this specific use case the AI is indeed a cognitive amplifier in the sense that it allows you to “search” through the set of possible solutions quicker. At the same time, however, it increases the cognitive load because you must be able to properly evaluate the proposed solution, which may be something you wouldn’t come up with in the first place. Not to mention the solution can have caveats, such as not being portable (if you know about genetic algorithms and their use in programmable HW design, then you might be familiar with some of them). But if one can’t distinguish between AI-assisted and fully human-made code when they both achieve the solution within given constraints, then either the review process is weak or the possible solutions are equivalent regardless of the approach.
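To make that indistinguishability point concrete, here’s a minimal sketch (all the names are mine and purely illustrative, not anyone’s actual review tooling): two implementations of the same function, one in a deliberately “hand-rolled” style and one in a terse “generated-looking” style, both passing the same behaviour-only review gate. Nothing in the gate encodes provenance, because provenance isn’t an observable property of the code’s behaviour.

```python
def sum_of_squares_a(xs):
    # "Hand-rolled" style: explicit accumulator loop.
    total = 0
    for x in xs:
        total += x * x
    return total


def sum_of_squares_b(xs):
    # "Generated-looking" style: terse one-liner.
    return sum(x * x for x in xs)


def passes_review(impl, cases):
    """Accept an implementation iff it matches the spec on every case.

    Note what's NOT a parameter here: who (or what) wrote the code.
    The gate only observes behaviour.
    """
    return all(impl(xs) == expected for xs, expected in cases)


# The "spec": input/expected-output pairs standing in for the
# validation and verification criteria.
cases = [([], 0), ([3], 9), ([1, 2, 3], 14), ([-2, 2], 8)]

assert passes_review(sum_of_squares_a, cases)
assert passes_review(sum_of_squares_b, cases)
```

Both versions sail through identically; to reject one of them, you’d need information from outside the entire validation chain.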

So … what now?

That’s a good question. At the present time, AI tools are so horribly inefficient for what they can do that I personally can’t justify their use, especially when the increased cognitive load is taken into account as well. It’s simply not within people’s mental capacity to handle the requirements, and throwing more AI at the problem will only make it worse. And let’s be honest, when you think about the potential uses, what do you end up with? You either get a second pair of hands which you have to constantly babysit, or you get a huge kick in the face showing you how utterly inept you are at the task you want to do, or you get a tool to do things you don’t even want to do in the first place, at which point why are you even considering doing them?

But what to do now, when so many people just won’t accept the problems? Well, let me at least try to propose the following when encountering a project which uses AI:

There we go. This is, at least in my eyes, the nearest road to navigating the messed-up landscape of tech people grossly overestimating their mental abilities only to end up face first in the meatgrinder. Trust me, we techies are idiots who have a master class in denial and an absolute ultra-elite class in lack of interpersonal skills (btw, that lack is deliberate). On the bright side, both can be fixed. The former by actually sitting down and exchanging knowledge, and the latter by sticking our heads out of our dungeons and, you know, talking to people. You might find out that the amount of stupid people is way lower than everyone is trying to tell you, including your “comrades”. And believe me, the social issues are a way bigger troublemaker than any technical issues in our pet projects going industrial scale.

R.R.A.