Rule of thumb for AI usage

PSA: Introducing the hot-take

Okay, I should probably start with slightly less controversial topics than AI, but as I said, this blog was created because I had things I wanted to say, and this is one of them. It is also an opportunity to introduce a new category of blog posts, the Hot-take: a piece that isn't necessarily a massively researched and cited piece of work. This is where you will find short observations and ad hoc thoughts or musings that are currently tumbling around my mind. So here we go.

The AI Rule of thumb.

Unless you have been living under a rock for the past year and a half, you know that the world has gone AI crazy. I have repeatedly heard the introduction of ChatGPT described as the internet-browser moment for AI solutions. And while several sides of that analogy bear unpacking, there is something else I want to focus on in this post.

So these days everyone is trying to figure out when they should and shouldn't use AI. I will not profess to have the ultimate answer to that question. But inspired by the Changelog Friends episode from 18 Aug 2023, where they discuss the need for a rule of thumb for the use of AI, I wanted to give it a go and contribute my current rule of thumb:

If you don't need AI to do what you want to do, you can use AI for it

and its complement

If you need AI to do it, you should not use AI

Based on these two rules of thumb, it should be clear that I am interested in neither a blanket ban nor a full-on free-for-all. So I think it might be worth unpacking my statement a little. I see AI in its current state as a great enhancer of preexisting abilities. That is to say, if you are certain that you could have done the task at hand without AI, you can use it to increase the productivity of your work.

For example, I commonly use GitHub Copilot in my development work, but I treat it as little more than an advanced autocomplete. Even after several months of using it, I feel that the number of unedited lines Copilot has provided is in the double digits, compared to the thousands of lines of total code written in the same time.

Today I would consider it a dereliction of duty if I did not use Copilot in my coding work. However, as soon as I start doing something that I do not know how to do, such as engaging with a new technology, library, or framework, I stop using Copilot until I am confident that I understand how it works and can do the task without Copilot. Only then do I start using it again.

Needless to say, I think that AI has no place as the primary teaching or training tool for any new skill. That is not to say it might not one day be useful for that, but in its current state it is far too prone to hallucinations, misunderstanding context, or subtly changing facts and responses in ways that only highly skilled people would catch. I sure know I have caught AI in some sneaky stuff.

People engaging with AI to learn things they do not already know run the risk of being given erroneous information. The only way to check or confirm the responses you get from the AI is conventional research on the topic, undoing all the efficiency gained from using AI by reintroducing the same steps you would have taken without it, but now with additional uncertainty about the response.

While I have nothing to back this claim up with, I suspect that uncritical use of AI will increase the overall fragility of our software and data systems, while also increasing the next generation of developers' dependency on the tool.

Given our current times, this is unlikely to be my last post on AI, but for now I wanted to share my thoughts on a workable rule of thumb that allows for experimentation and testing of the limits of the technology, while reducing some of the potential risks of uncritical use.
