It’s been two years since ChatGPT kicked off the AI hype, and the excitement is still going strong. People and companies alike have been talking about AI’s potential to revolutionize industries, with many seeing it as the future of technology. Companies have started applying AI to replace certain jobs, automate tasks, and even assist in creative processes. The hype is especially loud around AI’s ability to code, build functional software, and design user interfaces with minimal human input.
We’ve witnessed a wave of new startups introducing AI solutions across various sectors, including design, coding, and content generation. AI-driven design platforms, AI code editors, and AI writing tools have taken off, signaling a shift toward integrating AI into everyday workflows.
I’ve been using AI tools like ChatGPT, GitHub Copilot, and Cursor for some time, and I can say they’ve genuinely boosted my productivity, making my work more efficient and providing valuable assistance in solving problems.
But, despite all these advancements, the question remains: is AI truly a game changer?
Yes, AI can code
In the world of software product engineering, there’s a growing belief that AI can completely take over the coding process. While it’s true that AI can generate code, the reality is a bit more nuanced. Coding, in product engineering, isn’t just about writing lines of syntax; it’s a tool we use to solve problems thoroughly and thoughtfully.
AI can write code, but it doesn’t have a deep understanding of the problem at hand. Engineers don’t just write code for the sake of it. We think through challenges, design solutions that are scalable and maintainable, and ensure the software will meet users’ needs. An important part of this process is considering the trade-offs between different solutions, something that requires human judgment and experience. Should we go for a solution that’s easier to implement but harder to scale? Or one that’s more complex but more robust in the long run? AI, at its current stage, lacks that level of insight and judgment.
You might challenge me here, suggesting that maybe we haven’t trained AI enough to reason, or that we might use more advanced reasoning models in the future to handle these kinds of decisions. But from my perspective, AI is ultimately just a combination of algorithms designed to generate the most reasonable sequence of characters or pixels based on patterns in data. What sounds reasonable to one person may not make sense to someone else, depending on their perspective or experience. Making judgments as an engineer is about more than just generating an answer; it’s about actively collecting all the context around a problem and using your understanding to make the most suitable decision in that context. This is where human judgment and experience play a role, and it’s something that AI, as it stands, cannot fully replicate.
For example, you might prompt AI to create a simple form in ReactJS. The result may be functional, but will it consider performance implications? Will it handle unnecessary re-renders, which can hurt user experience, especially in larger applications? AI may not be aware of these concerns unless specifically instructed. This is where the experience and intuition of an engineer are essential. An engineer would know that optimizing performance in React requires careful handling of state changes, re-renders, and the use of hooks to avoid performance bottlenecks. AI doesn’t have the judgment or understanding to make these kinds of decisions on its own. It simply follows the instructions given, but doesn’t have the ability to weigh long-term effects or user impact unless explicitly told.
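To make the re-render concern concrete, here is a minimal sketch of the mechanism behind it, written in plain TypeScript rather than React itself. React’s `memo` skips re-rendering a child when its props are shallowly equal to the previous props; an inline callback defeats that check because every render creates a brand-new function object, which is exactly what `useCallback` exists to avoid. The `shallowEqual` helper and the two simulated “renders” below are illustrative stand-ins, not React’s actual internals.

```typescript
// Shallow equality, comparable in spirit to the check React.memo performs.
function shallowEqual(
  a: Record<string, unknown>,
  b: Record<string, unknown>
): boolean {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) => a[k] === b[k]);
}

// Simulate two renders of a parent that passes an inline handler:
// each call creates a fresh function, so the props are never equal.
const renderWithInlineHandler = () => ({ label: "Submit", onClick: () => {} });
const props1 = renderWithInlineHandler();
const props2 = renderWithInlineHandler();
console.log(shallowEqual(props1, props2)); // false: the child would re-render

// Simulate useCallback: the same function reference survives renders,
// so the shallow comparison passes and the re-render is skipped.
const stableHandler = () => {};
const renderWithStableHandler = () => ({ label: "Submit", onClick: stableHandler });
console.log(shallowEqual(renderWithStableHandler(), renderWithStableHandler())); // true
```

The point is not that AI cannot emit `useCallback`; it is that knowing when referential stability matters, and when it is premature optimization, is a judgment call.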
Yes, AI can generate functional software
Another common misconception is that you can simply prompt AI to build software from scratch. While this is true to some extent, the question remains: will the software be maintainable? AI might be able to create code, but is it elegant, scalable, and capable of handling future requirements? Will the code follow best practices for readability and maintainability, or will it be a quick solution that works in the short term but becomes harder to maintain over time? An engineer with experience in designing systems can anticipate these challenges and structure the code in a way that can grow and evolve with the product. AI, on the other hand, might deliver something that works for now but lacks the foresight needed to ensure its long-term viability.
Let’s take an example: a simple task of adding a new field to an API. AI might generate the code to do so, but will it ensure backward compatibility? Will it consider the various clients that use the API and how they might be affected by the changes? These are the types of considerations that AI often overlooks because it doesn’t have enough context. An engineer would think about how the change might break existing integrations or whether versioning the API is necessary to avoid disrupting clients. These are important decisions that ensure the long-term health of a software product, and they require experience and foresight.
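One common way an engineer keeps such a change backward compatible is to make the new field optional and fill in a default when it is absent, so payloads from older clients still parse. The sketch below assumes a hypothetical user API; the names (`UserResponse`, `nickname`, `parseUser`) are illustrative, not from any real service.

```typescript
// A hypothetical API response. The new field is optional so that
// existing clients and old stored payloads, which never include it,
// keep working unchanged.
interface UserResponse {
  id: string;
  name: string;
  nickname?: string; // newly added field
}

// Parse a payload and supply a default for the new field when it is
// missing, so downstream code can treat it as always present.
function parseUser(json: string): Required<UserResponse> {
  const raw = JSON.parse(json) as UserResponse;
  return { ...raw, nickname: raw.nickname ?? raw.name };
}

const legacy = parseUser('{"id":"1","name":"Ada"}');
console.log(legacy.nickname); // "Ada": an old payload still parses

const current = parseUser('{"id":"2","name":"Alan","nickname":"al"}');
console.log(current.nickname); // "al"
```

If the new field instead changed the meaning of an existing one, defaulting would not be enough, and versioning the endpoint might be the safer call. Recognizing which situation you are in is precisely the kind of context-dependent judgment the paragraph above describes.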
You might ask, what if we build a network of AI agents that generate both client and server code? In this scenario, all code would be created by AI agents working autonomously, connecting with each other to gather the necessary context. While this idea sounds interesting, I don’t think we’re going to see it happen anytime soon. AI agents might be able to collaborate, but the complexity of context and judgment required for building software in a robust and scalable way is still far beyond what current AI models can handle on their own. Of course, I could be wrong, and perhaps in the future, we’ll see breakthroughs that bring us closer to this vision.
Yes, AI can help explain concepts
People are also turning to AI to explain concepts, with the belief that AI can provide reliable answers to anything. While AI can certainly assist in learning, there’s a risk of over-relying on it and losing our ability to think critically. It’s easy to fall into the trap of treating AI as the final authority, simply regurgitating explanations without truly understanding them.
In the long run, this over-dependence could erode our ability to question ideas, challenge assumptions, and develop our own insights. While AI can be a helpful tool, it’s important that we continue to foster our critical thinking and problem-solving skills. AI should be an aid, not a replacement for our ability to think deeply and independently.
So, should we stop using AI?
No, absolutely not. As I’ve shared, I am an AI user myself. But I see it as a tool for enhancing productivity, not a replacement for human judgment. I consider AI like a junior engineer, capable of handling simple tasks with clear instructions. It helps me do things faster, like generating code snippets or refining my writing to make it clearer for the reader.
So, will we replace all junior engineers with AI? Of course not. If we don’t train junior engineers to grow, who will become the experienced engineers capable of weighing the trade-offs between different solutions? Junior engineers need mentorship and experience to develop the skills needed to make the right decisions, and that’s something AI simply can’t provide at the moment.
That’s all
I didn’t write this blog to attack AI; rather, I wrote it to raise an important point about how we should think about AI and how we can best leverage it. AI is undoubtedly a powerful tool that can enhance productivity and help us solve problems more efficiently. But it’s essential to remember that it’s not a replacement for human judgment, creativity, or experience. As we continue to integrate AI into our workflows, let’s use it wisely, acknowledging its strengths while also being mindful of its limitations.
At the end of the day, AI should be a tool that works alongside us, not something that takes over the roles that require critical thinking, problem-solving, creativity, and deep expertise.