Saturday, April 5, 2025

Using GenAI as a learning tool, not a crutch

GenAI is fast changing the modus operandi of software development. But to build good code with AI, you need to combine traditional programming skills with the newer skill of coaxing code from natural language, otherwise known as prompt engineering.

In the latest episode of Leaders of Code, GitLab Field CTO Lee Faus and Stack Overflow CEO Prashanth Chandrasekar discuss why syntax is becoming less important, how to build better prompts, and why AI should be treated as a learning tool, not a crutch.

Understanding legacy code has always been challenging. Developers need to understand the context of why the code was written and what it does. But AI-assisted development shows us that syntax isn’t the hard part anymore.

In the podcast episode, Faus explains, “The syntax doesn’t matter anymore. And we’re figuring out that the real power of computer science is people’s critical thinking and problem-solving skills. When we sprinkle creativity in, we’re able to do magical things with the computers in front of us.”

Faus suggests that developers should learn languages like Rust and C++ while nurturing AI prompting skills for embedded development as tech evolves to support connected environments with edge computing.

AI development is accelerating with new models like deep research from OpenAI and DeepSeek. Chandrasekar says, “It doesn’t take a lot of imagination to see where this is heading. You have to be building for where the puck is going.” He believes super intelligent agents won’t be infallible, “but they can give 80 to 90% of what you need very effectively.”

AI offers far more than automation – it’s a tool for learning and skill growth. Faus believes the developers who excel at GenAI today are figuring out how to connect and assemble information in ways that amplify developer capabilities rather than replace them. Leveraging GenAI in this way can help drive business strategy, refine product-market fit, and prioritize features that support sustainable growth.

Until now, the industry has primarily focused on AI’s automation and productivity benefits. However, recent research indicates that AI is also being used to support critical thinking and complex problem-solving. Wharton Professor Ethan Mollick’s research with Procter and Gamble showed AI can become a teammate, not just a productivity tool. Mollick believes organizations that focus solely on AI for productivity will miss the opportunity to think and act bigger, and may face a backlash from workers concerned about job losses.

Prompt engineering is an emerging skill that professionals can develop by pooling knowledge about what works. It resembles the early days of Google, when people had to learn the art of searching to find the right information.

Precise prompting with the right information sources produces better results. Faus describes developers’ frustrations when GitLab Duo doesn’t deliver the code they need. This typically results from vague prompts that lack sufficient context, where some developers mistakenly approach AI like a search engine and expect perfect answers from simple queries.
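To make the contrast concrete, here is a hypothetical pair of prompts for the same bug. Both prompts are invented for illustration, not verbatim GitLab Duo inputs; the point is what context the second one supplies.

```python
# A vague prompt treats the AI like a search engine.
vague = "Fix my pagination."

# A precise prompt names the framework, the library, the observed
# behavior, and includes the failing code -- the context a model
# needs to give a useful answer. (Rails/Kaminari is a made-up example.)
precise = (
    "In a Rails 7 app using Kaminari, GET /articles?page=2 returns the "
    "same records as page 1. Here is the controller action:\n"
    "  def index\n"
    "    @articles = Article.page(params[:page]).per(20)\n"
    "  end\n"
    "Why might every page return identical rows, and how do I fix it?"
)

print(len(precise) > len(vague))  # True
```

The second prompt costs a minute to write but constrains the answer space dramatically, which is the trade Faus is describing.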

Understanding prompt types improves results: system prompts set AI behavior while user prompts request specific actions. Retrieval-augmented generation (RAG) enables developers to access external data from sources like Stack Overflow, providing verified, real-time information rather than relying solely on the knowledge the AI has been explicitly trained on.
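The distinction above can be sketched in a few lines. This is a minimal illustration, assuming the message format common to chat-completion APIs; the “retrieval” is a toy keyword match over two hard-coded snippets, standing in for a real vector store or the Stack Overflow API.

```python
# Toy corpus standing in for an external knowledge source.
DOCS = {
    "pandas-merge": "DataFrame.merge joins two frames on key columns; "
                    "pass how='left' to keep all rows from the left frame.",
    "pandas-concat": "concat stacks frames along an axis without key matching.",
}

def retrieve(query: str, docs: dict, k: int = 1) -> list:
    """Rank docs by naive keyword overlap with the query (RAG stand-in)."""
    scored = sorted(
        docs.values(),
        key=lambda text: -sum(word in text.lower() for word in query.lower().split()),
    )
    return scored[:k]

def build_messages(question: str) -> list:
    context = "\n".join(retrieve(question, DOCS))
    return [
        # System prompt: sets the AI's behavior for the whole conversation.
        {"role": "system", "content": "You are a concise pandas assistant. "
                                      "Answer only from the provided context."},
        # User prompt: the specific request, grounded in retrieved context.
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

messages = build_messages("How do I merge two DataFrames and keep left rows?")
print(messages[0]["role"])  # system
```

A production RAG pipeline would swap `retrieve` for embedding search over a live corpus, but the shape of the final request — behavior in the system message, grounded question in the user message — stays the same.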

Prompt engineering should work side by side with traditional programming. Prompting can structure AI interactions with roles, tasks, and constraints for consistent outputs. Unlike programming’s formal syntax and precision, it relies on natural language. This makes it more accessible to a less experienced engineer, but with a firm disclaimer that easier is not necessarily better. As AI tools improve, prompt engineering will be invaluable and traditional coding will still be essential for high-performance systems.
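Structuring an interaction with roles, tasks, and constraints can be as simple as a template function. This sketch is illustrative rather than a standard; the field names and example values are assumptions you would adapt to your own workflow.

```python
def structured_prompt(role: str, task: str, constraints: list) -> str:
    """Compose a prompt with an explicit role, task, and constraint list
    so repeated runs produce more consistent outputs."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_lines}"
    )

prompt = structured_prompt(
    role="Senior Rust reviewer",
    task="Review this function for memory-safety issues.",
    constraints=["Cite the exact line for each issue", "No style nitpicks"],
)
print(prompt)
```

Templates like this are where prompting starts to feel like programming again: the natural language stays, but the structure is versionable and reviewable.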

AI coding assistants have evolved far beyond autocomplete. They now support the full development cycle, streamlining base-level coding. Chandrasekar notes that experienced developers can automate simpler tasks to focus on complex, mentally intensive projects. GitHub Copilot’s own tests showed early adopters completed tasks 55% faster.

Yet prompt engineering is a challenge for engineers because it demands a new set of skills. Effective prompts need to take into account language, context, and strategy to shepherd AI models toward useful outputs. And to brief AI well, it helps to have experience of what good output looks like and, ideally, of briefing others to complete coding tasks.

Unlike traditional programming, prompt engineering introduces variability that can slow adoption. Teams must rethink workflows and learn new skills. Research from McKinsey shows that AI adoption often stalls when organizations underestimate the need for human oversight and adjustment in prompt design.

Beyond the technical, prompt engineering reshapes how teams work with AI. Engineers need to adopt a continuous learning mindset as AI models evolve. Chandrasekar believes many AI implementations struggle with change management and the “complexity cliff” of considering multiple variables. This needs expert knowledge to determine what’s needed before briefing an LLM. Effective content delivery throughout the developer workflow ensures “What they’re building is grounded in truth.” Although many advocate for mastery of prompting as a crucial developer skill, other experts believe the best way is to ask the LLM to write its own prompt.

Chandrasekar believes AI is a boon, but junior developers should be cautious not to use tools as a crutch. They must understand the underlying principles to troubleshoot issues.

Faus advises developers not to rely on AI for everything, like commit messages: “You should fully understand what you’re pushing.” AI also creates opportunities and challenges for enterprises in mapping developer skillsets, as GenAI’s base standard of code is now relatively high. But organizations that replace junior engineers with AI risk long-term pain by choking off the talent pipeline.

Engineer Charity Majors says software is an apprenticeship industry: it takes time and practice to mature into a seasoned engineer. Writing code is the stepping stone toward composing systems.

With natural language programming, Faus thinks that developers can embrace new ways of working to be more strategic and collaborate with teams involved in product, marketing, or documentation. Non-technical colleagues can better review technical docs and quickly prototype ideas.

Higher-level languages like SQL abstracted development further from machine code. LLMs take this abstraction a step further, with prompts replacing complex code. English, or your language of choice, is the new programming language. The low-code and no-code movement is opening up new possibilities for prototyping. Chandrasekar says, “Some of our best people [at Stack Overflow] are in a functional area but can code.” Domain experts with coding abilities can now take concepts through to implementation, like a marketing team piloting a new pricing tier. “We’ve all now got this machine-powered super intelligence.”

Greg Benson, Professor of Computer Science at the University of San Francisco and Chief Scientist at SnapLogic, says there could be a whole generation that can develop useful computer programs without any formal training. They may lack the foundational knowledge and skills that a grounded computer science education brings, but he thinks there’s a path forward by adding humans in the loop to engender trust.

AI-first development is far from flawless. GitClear’s research shows AI will suggest valid code but struggles with reuse and modification. Overreliance can weaken maintainability and test coverage, as explained by GitClear CEO Bill Harding on the Stack Overflow Podcast. Chandrasekar notes enterprise caution around AI adoption due to reliability concerns. Stack Overflow’s Developer Survey suggests developers are enthusiastic about using AI tools, yet only 43% feel confident about AI accuracy, with 31% remaining skeptical. Just half of professional developers trust the accuracy of AI-generated code.

This could be because experienced developers see that it’s easy to generate code, but it needs an overlay of expertise to develop good code. Like text generation, AI code can look plausible but rarely works in the way you expect and need it to. It’s a faster horse, not an electric supercar.

So what’s the solution? Prompt with purpose and augment AI with quality data sources to mitigate risks through proper attribution. Stack Overflow’s Enterprise Product offerings were built to power people and AI with high-quality, human-validated knowledge.

  • Stack Overflow for Teams: The most accurate and trustworthy knowledge store for your AI solutions.
  • Knowledge Solutions: Our subscription-based API service that provides continuous access to Stack Overflow’s 15+ years of community-curated data.

To summarize the conversation: Developers who embrace AI as a learning and problem-solving tool will thrive. Those who use it as a shortcut will fall behind. The future of coding isn’t about knowing the right syntax, it’s about knowing how to think.
