AI coding tools can be a valuable source of information and can help speed up product delivery – but only if they're used correctly. In this article, we'll discuss how to make better use of AI by asking the right questions.
WHAT YOU’LL FIND IN THIS ARTICLE:
– The Two Most Common Pitfalls When Coding with AI – From blindly trusting the output to not knowing how to ask the right questions.
– A Smarter Way to Prompt – How Prompt Engineering can help you get better results, save time, and avoid hallucinations.
– How to Stay Safe While Prompting – Tips to prevent accidental data leaks when working with code and AI.
AI is the new buzzword, and we all know it. Many companies are using AI to offer a better experience, either by making their interfaces more user-friendly or by creating new features that better meet user needs.
With all the hype surrounding it, people are diving into AI tools. But when it comes to coding, relying solely on AI is not the safest option.
Relying is the right word, because in many cases developers (especially entry-level ones) use AI tools thoughtlessly, generating code and sending it straight to the repository (or whatever version control system they use).
Such blind reliance will create more problems than solutions, so we may need to rethink the way we’re handling AI.
The 2 Biggest Mistakes We Make When Coding with AI
This article applies to the many situations where developers lean on AI. Its goal is to highlight the problems that arise from using AI ineffectively and to show how to ask it the right questions.
There are two main problems associated with using AI in coding: “blind reliance” and the “don’t know how to ask” problem. We’ll dive into the details of each in the sections below.
What is Blind Reliance?
Imagine we’re building a food delivery app, and the manager asks us to introduce a new feature called “weekend discount.” To build it, we need to check whether today is a weekend day. The app was built with Kotlin, so that’s the language we’ll use. The goal is to get the day’s name and check whether it’s Saturday or Sunday to determine if it’s a weekend.
Now, let’s assume the programmer is not familiar with Kotlin. In this case, they might rely on the AI tool to complete the task.
“How can I get today’s name and compare it to see if it’s a weekend day?” they ask.
The LLM (Large Language Model), without knowing the context, will return almost anything except what the programmer needs. So they ask again, providing the context (Kotlin), and the AI gives a response – but not necessarily the one they need. The developer doesn’t notice, because they don’t read the response carefully. But it works, anyway.
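For reference, a correct answer to that question might look like the sketch below – a minimal Kotlin example using the standard java.time API (the function name isWeekend is our own illustrative choice, not something the AI is guaranteed to produce):

```kotlin
import java.time.DayOfWeek
import java.time.LocalDate

// Returns true when the given date falls on a Saturday or Sunday.
fun isWeekend(date: LocalDate = LocalDate.now()): Boolean {
    val day: DayOfWeek = date.dayOfWeek
    return day == DayOfWeek.SATURDAY || day == DayOfWeek.SUNDAY
}
```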
As the developer delivers the feature, the manager asks them to add a new rule: verify that the day of the month is not between 10 and 17. If it is, the app can’t apply the discount.
Once again, the developer turns to AI, expecting it to deliver the best possible code. But here’s the problem: the AI has no context about the application and will likely return something that technically works, but doesn’t relate to the system’s rules. In many cases, this leads developers to copy and paste large portions of code – including core business logic – into the prompt, unintentionally exposing sensitive or proprietary information. This blind reliance isn’t just a sign of limited coding knowledge; it’s also a poor decision from an information security standpoint.
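To make the context problem concrete, here is one way the combined rule could look in Kotlin. This is a hedged sketch: it assumes “between 10 and 17” means the day of the month, inclusive, and it reuses the isWeekend function from above – exactly the kind of business detail an AI without context cannot know:

```kotlin
import java.time.LocalDate

// Hypothetical business rule: the weekend discount applies on weekends,
// except when the day of the month falls between 10 and 17 (assumed inclusive).
fun isDiscountEligible(date: LocalDate = LocalDate.now()): Boolean {
    val inBlackoutWindow = date.dayOfMonth in 10..17
    return isWeekend(date) && !inBlackoutWindow
}
```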
The Issue of Not Knowing How to Ask
“How do I calculate the average in javascript?”;
“Today’s day name in javascript”;
“Create a simple HTML portfolio page that uses my name”.
These kinds of questions clearly demonstrate a common issue: a lack of context. Vague prompts not only waste your time – because the AI can’t fully understand what you’re asking and you’ll have to keep refining and adding details – but they’re also inefficient in terms of token usage.
To make things worse, not every AI tool has context memory. This means you’ll likely need to repeat the same information multiple times with each new prompt.
To address this and reduce token waste, many companies are adopting a technique known as Prompt Engineering – a structured way to design prompts that give AI enough context to generate useful and accurate responses from the start.
The Technique of Asking Better, Also Known as “Prompt Engineering”
“Write a JavaScript function that takes an array of numbers and returns their average. Include input validation and a usage example.”;
“Using JavaScript, how can I get the current weekday name (like ‘Monday’ or ‘Tuesday’) based on the user’s local time?”;
“Generate a responsive HTML portfolio page featuring my name ‘Fred’, including sections for About Me, Projects, and Contact. Use modern semantic HTML and simple CSS”.
These are some examples of how better inputs – crafted through Prompt Engineering – can significantly improve AI responses. The key idea is to isolate each request within its own clear context. This is the core of Prompt Engineering: designing effective and precise instructions that minimize hallucinations and maximize the accuracy of the output.
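To illustrate the kind of output a well-scoped prompt tends to produce, here is the first request transposed to Kotlin, the language of our earlier example – a sketch, not the only valid answer:

```kotlin
// Computes the average of a list of numbers, with input validation.
fun average(numbers: List<Double>): Double {
    require(numbers.isNotEmpty()) { "The list must not be empty" }
    return numbers.sum() / numbers.size
}

// Usage example:
fun main() {
    println(average(listOf(2.0, 4.0, 6.0))) // prints 4.0
}
```

Notice how the prompt’s explicit requirements (input validation, a usage example) map directly onto the structure of the answer.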
When we give the AI proper context and write prompts with clarity, we unlock a series of benefits, such as:
- Higher-Quality Outputs – More relevant, coherent, and accurate answers;
- Reduced Token Waste – Saves time and lowers costs, especially important for teams using commercial APIs;
- Improved Security & Privacy – Ask the right questions without exposing proprietary or sensitive information;
- Reusability – Create structured templates that can be reused across different questions or use cases (see the sketch after this list);
- Adaptability – Prompts can be fine-tuned to fit different scenarios, users, or environments with minimal changes.
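To make the reusability point concrete, a prompt template can be as simple as a parameterized string. Below is a minimal Kotlin sketch; the fields and wording are illustrative, not a standard:

```kotlin
// A minimal, reusable prompt template (the fields are illustrative).
fun buildPrompt(language: String, task: String, constraints: String): String =
    """
    You are an experienced $language developer.
    Task: $task
    Constraints: $constraints
    Return only the code, with brief comments.
    """.trimIndent()

// Usage example:
fun main() {
    println(buildPrompt(
        language = "Kotlin",
        task = "Return true if a given LocalDate falls on a Saturday or Sunday.",
        constraints = "Use java.time; no third-party libraries."
    ))
}
```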
Bonus Tip: Preventing Data Leaks with Best Practices
There is a lot of information we should avoid giving to AI, as it could result in a data leak. When working with code, it’s especially important to secure our keys, secrets, and credentials.
Below is a list of data types we should be careful with and never input into any AI model (a redaction sketch follows the list):
- Passwords
- SSH keys
- OAuth tokens
- Cloud API keys
- User behavior logs
- Support tickets containing user details
- Any personally identifiable user information
- Non-public or confidential documents
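One practical safeguard is to scrub obvious secrets before pasting code into a prompt. The Kotlin sketch below shows the idea; the regular expressions are illustrative and by no means exhaustive, so treat this as a complement to dedicated secret-scanning tools, not a replacement:

```kotlin
// Redacts likely secrets before code is shared with an AI tool.
// The patterns below are illustrative only, not an exhaustive list.
val secretPatterns = listOf(
    // Matches assignments like apiKey = "...", SECRET: '...', password=...
    Regex("""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*["']?[^\s"']+["']?"""),
    // Matches PEM-style private key blocks
    Regex("""-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----""")
)

fun redact(code: String): String =
    secretPatterns.fold(code) { acc, pattern ->
        pattern.replace(acc, "[REDACTED]")
    }
```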
Curious to go deeper into cybersecurity? Don’t miss our article on Cybersecurity, Online Attacks, and Best Practices.
Prompt Engineering: Boost Your Productivity With AI – Final Thoughts
As developers, we’re constantly looking for ways to be more efficient, solve problems faster, and deliver better code. AI is a powerful ally in that mission – but only when used wisely. Asking vague or incomplete questions leads to wasted time, increased costs, and even potential security risks.
That’s where Prompt Engineering comes in. It’s not just a buzzword – it’s a necessary skill in the era of AI-assisted development. By isolating requests, providing clear context, and thinking carefully about how we phrase our prompts, we can turn generic AI outputs into high-value, production-ready solutions. You can explore this further in OpenAI’s official guide to Prompt Engineering.
Whether you’re calculating an average, building a portfolio page, or trying to automate part of your workflow, remember: good prompts lead to great results. Treat your prompt like an interface – be specific, be intentional, and always consider the security and reusability of what you’re asking.
The future of development isn’t about replacing developers with AI. It’s about empowering developers who know how to use AI intelligently. If that sounds like you, check our current job openings, or send us a spontaneous application and let our team find the best ones for you!