There's a lot of hype around using AI to write code, and I wanted to share my experience after testing it out.
Spoiler alert: I really dislike the term “vibe coding.” What a joke.
My Experience Using AI for Coding
I don’t use AI to write or refactor code—I just don’t find it that useful.
When I first tried it out, I noticed that any time I needed to fix or improve something in my codebase, using AI actually slowed me down. It often took longer to get something working, and even when it did work, I didn’t fully understand what the AI-generated code was doing.
A good example of this is when I decided to build a financial reporting desktop app that pulls data from a remote database server. I chose Rust for it, and I wanted to build the app with Tauri, a framework I hadn’t used before.
As a test, I decided to use ChatGPT to help me get started.
Big mistake.
I ended up spending more time fixing bugs and wrestling with environment/setup issues than actually building the app. It took me nearly an entire day just to get the app to start up and display something.
Frustrated, I scrapped the AI-first approach and decided to build it myself from the Tauri docs and examples, only turning to ChatGPT to explain things I didn’t fully understand.
By simply following the Tauri docs and examples, I was able to get 75% of the app built in just half a day. For comparison, with AI I had barely reached 5% progress in the same amount of time, and I had no real understanding of what was going on.
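To give a sense of what the docs walk you through: the core pattern is a plain Rust function exposed to the frontend as a “command”. Here’s a minimal sketch in that style. It’s not my actual app; fetch_report and the Report struct are made-up stand-ins, and it assumes a project already scaffolded per the Tauri 1.x getting-started guide.

```rust
// src-tauri/src/main.rs (sketch; assumes a scaffolded Tauri 1.x project)
use serde::Serialize;

// Hypothetical payload type; commands can return anything that implements Serialize.
#[derive(Serialize)]
struct Report {
    title: String,
    total: f64,
}

// A Tauri "command": a plain Rust function the webview frontend can invoke by name.
// In the real app, this is where the remote database query would live.
#[tauri::command]
fn fetch_report() -> Report {
    Report {
        title: "Q1 summary".into(),
        total: 0.0, // placeholder instead of real data
    }
}

fn main() {
    tauri::Builder::default()
        // Register commands so the frontend can call them.
        .invoke_handler(tauri::generate_handler![fetch_report])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```

On the frontend, you call it by name with invoke("fetch_report") from the @tauri-apps/api package and get back the serialized result. That one request/response pattern covers a surprising amount of the plumbing.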
How I Do Use AI
I use AI to help me with proofreading and editing blog posts like this one—because I know writing blog posts is definitely not my strongest skill.
When it comes to coding, I only use AI occasionally—for example, if I need a quick explanation of a concept or function that I’m not familiar with. But for actual code writing or debugging? Not so much.
What I Think About the Trend
AI definitely has its place, but I’ve noticed some concerning trends.
There seem to be more bugs creeping into open-source projects and packages, and I wouldn’t be surprised if AI-generated code is partly to blame. Some of this code just screams “written by Copilot” or ChatGPT: generic, sometimes insecure, and often not well thought through.
I’ve also noticed that some developers who rely heavily (or exclusively) on AI tools are slowly forgetting how to code. Repetition is key to learning, and if you're not writing and debugging code yourself, you’re not truly learning. There’s way too much copying and pasting going on.
Let’s also not forget how these AI tools are trained: mostly on publicly available code from GitHub, Stack Overflow, and other websites. A lot of that code is just... okay at best. And yes, that includes some of mine—no one writes perfect code.
Much of the best, most secure, and best-tested code lives inside private enterprise systems, and that code isn’t available for AI to train on.
Hope this didn’t come off as too much of a rant! It’s just something that’s been on my mind for a while, and I figured it was time to post about it.
Would love to hear other people’s experiences too.