AI assistants in development: how artificial intelligence is changing programming

Artificial intelligence is no longer an abstract concept - it is becoming a direct participant in developers' everyday work. Tools like GitHub Copilot, Cursor, and Claude don't just complement the programming process; they radically change its rhythm and dynamics. According to GitHub, 73% of developers already use AI for generating code and documentation, and in some projects up to 30-40% of the final code is created with its involvement. The key shift, however, is not the automation of routine tasks but the fact that AI is becoming a full-fledged partner on the team - a helper that frees up thinking for architectural decisions, accelerates delivery, and keeps the focus on complex engineering challenges.

Real case: speeding up release by 4 times with the help of AI

In 2024, the fintech startup FinEdge faced the challenge of rewriting its credit risk calculation module in 2 weeks instead of the usual 2 months. The team adopted GitHub Copilot and Cursor: Copilot sped up the generation of boilerplate code and tests, while Cursor helped analyze dependencies between modules. The result: the new module was delivered in 14 days, with a 27% lower bug rate in regression testing than in previous releases. This experience became the starting point for a large-scale rollout of AI assistants across the company.

From Autocompletion to Digital Agents

AI assistants are no longer limited to syntax suggestions. Modern tools are capable of:

  • generating functions and methods based on textual descriptions;
  • predicting and eliminating errors before they occur;
  • writing unit tests and creating documentation;
  • refactoring code and suggesting architectural improvements.

Tools like OpenAI Codex can no longer be called mere assistants, but rather agents: they are capable of running tests, creating pull requests, adapting to project architecture, and interacting with cloud environments. All of this is done in autonomous mode, within the safe context of the repository.

Generations of AI Assistants:

| Generation | Examples | Key Features | Limitations |
| --- | --- | --- | --- |
| Autocomplete | Tabnine | Syntax hints, autocompletion | Does not consider the project as a whole |
| Smart assistants | GitHub Copilot, Codeium | Function generation from descriptions, autotests, documentation | Context limited to a file/module |
| AI agents | Cursor, OpenAI Codex | Understanding of the entire codebase; autonomous tasks (tests, PRs, deployment) | High cost, security risks |

Landscape of Solutions: From Copilot to Specialized Agents

The market for AI assistants is rapidly evolving and becoming increasingly diverse. GitHub Copilot, developed in collaboration with OpenAI, remains the most widely used solution today: over a million active users and more than 20,000 corporate clients. Copilot integrates with leading IDEs, including Visual Studio Code and JetBrains, offering contextual suggestions in real time.

Next-generation tools, however, are beginning to displace these universal solutions. Cursor, for example, analyzes not only the current file but also inter-file dependencies, providing a "holistic" understanding of the project. It supports a multi-model architecture* (GPT-4, Claude 3.5, Gemini), letting developers choose a model to match the task.

Alternatives are also gaining popularity: Codeium is a free alternative to Copilot, Tabnine focuses on protecting corporate data, and Amazon CodeWhisperer integrates into the AWS ecosystem. Strong players are also emerging in the Russian market, such as SourceCraft Code Assistant with its intelligent auto-completion capabilities.

The three leading tools in comparison:

| Parameter | GitHub Copilot | Cursor | Codeium |
| --- | --- | --- | --- |
| Price | $10/month (Pro) | Free | Free |
| Context | File / IDE window | Entire codebase | File |
| Models | GPT-4 / GPT-3.5 | GPT-4, Claude 3.5, Gemini | Proprietary LLM |
| Tests and documentation | Yes | Yes, advanced | Yes |
| Design analysis | No | Yes | No |
| Security | Medium | Depends on settings | High (on-prem) |

Impact on Productivity: Myths and Reality

According to GitHub, AI assistants help reduce average coding time by 55%, and 75% of developers report higher satisfaction due to reduced boilerplate and clearer task flow. On average, programmers accept about 30% of Copilot's suggestions, which allows them to free up dozens of hours each month.

However, it's not all that straightforward. The METR study showed that in a number of cases (especially when working with established codebases), the productivity of experienced developers decreases by up to 19%. The reason is the need to double-check and refine the AI code. Interestingly, the participants themselves rated their performance as having increased by 20%, which indicates a strong effect of subjective perception.

Economic potential: billions at stake

GitHub estimates that Copilot-driven increases in code velocity and test coverage could contribute $1.5 billion to global software delivery value. In companies where AI creates at least 30% of the code, an increase in activity (number of commits) of 2.4% per quarter has been noted.

Examples from practice:

  • JPMorgan Chase increased engineer productivity by 10–20% thanks to their own assistant.
  • The Jit team increased the speed of releasing new features by 3x with the help of Cursor.

However, only 6% of technical leaders report sustained delivery gains - most successful teams had formal AI usage policies, trained developers in prompt engineering, and implemented code review workflows for AI output.

Quality and safety issues: the downside of acceleration

AI helps, but it doesn't always do so flawlessly. Studies show:

  • 70% of developers in Russia are dissatisfied with the quality of AI-generated code (MTS AI);
  • In 80% of cases, the generated code requires refinement;
  • A study by GitClear on 153 million lines of code showed an increase in hasty and duplicate code after the implementation of Copilot;
  • In 2 out of 5 cases, code from Copilot contains security vulnerabilities (New York University).

Typical problems include:

  • SQL injection: AI-generated Python code builds SQL queries via string concatenation instead of parameterization.
  • Hardcoded API keys: Copilot has suggested inserting a test API key directly into the code.
  • Insecure default configs: generated Kubernetes manifests with allowPrivilegeEscalation: true and no CPU/RAM limits.
  • XSS vulnerabilities: AI suggested HTML templates that do not escape user data.

For scale: a CSET study found that 48% of AI-generated snippets contained potentially dangerous vulnerabilities. The sketch below illustrates the SQL injection, hardcoded-key, and XSS patterns alongside their fixes.
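Here is a minimal Python sketch of what these anti-patterns look like in practice; the table name, key name, and greeting snippet are illustrative assumptions, not code from the studies cited above:

```python
# A minimal sketch of three of the anti-patterns above and their fixes.
# The `users` table, API_KEY variable, and HTML snippet are illustrative.
import os
import sqlite3
from html import escape

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Anti-pattern: string concatenation lets `username` inject SQL.
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fix: a parameterized query; the driver handles escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

# Anti-pattern: a hardcoded key, e.g. API_KEY = "sk-test-1234".
# Fix: read the secret from the environment at runtime.
API_KEY = os.environ.get("API_KEY", "")

def render_greeting(user_input: str) -> str:
    # Fix for the XSS pattern: escape user data before embedding it in HTML.
    return f"<p>Hello, {escape(user_input)}</p>"
```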

It is especially risky to use AI for generating Kubernetes configurations: according to CNCF, 75% of companies have already encountered incidents due to misconfigurations.

Political risks should not be forgotten either: Chinese solutions such as Qwen3-Coder raise concerns because of local legal requirements to hand data over to the state.

How AI is transforming the profession itself

AI tools are automating more and more tasks: 43% of developers use them for test generation, 44% for documentation, and 57% for bug detection. On the horizon is vibe coding*: programming through conversational language. Software development is becoming closer to design than to manual coding.

However, this also carries risks. Over-reliance on autocomplete suggestions can lead to skill degradation - especially among beginners. Experiments show that constant reliance on AI erodes the ability to solve problems independently.

The winners in the new reality will be those who learn to work with AI, rather than instead of it. This requires new skills:

  • proficient task formulation (prompt engineering* - see the example after this list);
  • critical thinking and verification;
  • architectural thinking and system design;
  • understanding the internal mechanisms of AI models.
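To illustrate the first of these skills, compare two ways of phrasing the same request (both prompts are hypothetical examples, invented for illustration):

Vague: "Write a function to process payments."

Precise: "Write a Python function that validates a payment amount - positive, at most two decimal places, no larger than 10,000 - raises ValueError on invalid input, and returns the value as a Decimal. Add unit tests for the boundary cases."

The second version pins down types, limits, and error behavior - exactly the context the model would otherwise have to guess.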

A next-generation developer is a conductor of a technological orchestra, not just a writer of lines of code.

Systems and architectural thinking is becoming the key competency: it is these specialists who will decide on system structure while AI writes the code. AI will not correct errors at the architectural level - on the contrary, it can aggravate technical debt. For example, a poorly designed modular structure can increase build time by 40%, even if the code itself is generated instantly.

The future: agent systems, multimodality, and ethics

The technological agenda for 2025:

  • Agent AI systems: autonomous assistants that will be able to take on the full cycle - from requirements analysis to deployment;
  • Multimodality*: interaction with code through voice, visual interfaces, and images. This lowers the entry barrier and expands the audience;
  • AI TRiSM* (AI Trust, Risk, and Security Management): new standards of trust, transparency, and resilience for AI models;
  • Competition between open and closed solutions: the accuracy gap between them is narrowing rapidly - from 8% to 1.7% - which gives companies more flexibility and freedom in customization.

For example, in Russia, funding for the development of sovereign AI will continue - over 7.7 billion rubles have been allocated for flagship projects in the field of strong AI and the development of national models.

Regulatory and Ethical Framework

And of course, we must not forget regulation and the nuances of geopolitics, which directly affect the future of the technology:

  • EU AI Act - introduces risk classification of AI systems. For AI in software development, requirements for transparency and traceability of decisions are important.
  • China Cybersecurity Law - obliges AI vendors to store data in China and share it with government agencies upon request.
  • US NIST AI RMF - a framework for managing AI risks, including in programming.
  • Geopolitical risks - using AI tools from jurisdictions with different data protection laws (e.g., China or Russia) may affect GDPR or CCPA compliance.

What Businesses and Development Teams Should Do: Action Plan

  1. Start with a controlled pilot
  • Scope: Select 1–2 low-risk projects or modules.
  • Goal: Test AI assistant impact without disrupting critical systems.
  • Example KPI: Reduce average code review time from 3 days to 2 days within the first month.
  • Responsible: Team Lead - defines scope and success metrics; CTO - approves tools and governance policy.
  2. Establish measurement and feedback loops
  • Metrics to track (a computation sketch follows this plan):
    • % of AI-generated code accepted into production
    • Bugs per 1,000 lines of AI code vs manually written code
    • Time from feature request to deployment
  • Frequency: Weekly progress reports; monthly AI impact reviews.
  • Responsible: Developers - log AI prompts and outputs; Team Lead - collect data; CTO - analyze trends.
  3. Implement risk management from day one
  • Security: Mandatory static code scans (e.g., SonarQube, Snyk) for all AI-generated code; a CI gate sketch follows this plan.
  • Data protection: Strict rules on what data can be sent to external AI tools.
  • Compliance: Align with GDPR, AI Act, and local data laws before scaling.
  • Responsible: CTO - defines policies; Developers - follow protocols.
  4. Scale with structured onboarding
  • Expand AI assistant use only after KPIs are met for 2–3 consecutive sprints.
  • Train new teams on:
    • Prompt engineering* best practices
    • Reviewing AI code for security and maintainability
    • Combining multiple AI tools for complex workflows
  • Responsible: Team Lead - training plan; Developers - peer learning.
  5. Continuously optimize and evolve
  • Run quarterly tool performance reviews.
  • Experiment with new models or configurations (e.g., switching between GPT-4 and Claude for specific tasks).
  • Measure ROI on both speed (delivery time) and quality (defect rates).
  • Responsible: CTO - strategic updates; Team Lead - tactical improvements.
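To make step 2 concrete, here is a minimal sketch of the defect-density comparison (bugs per 1,000 lines of AI versus manually written code). The record format and the numbers are illustrative assumptions, not data from this article:

```python
# Minimal sketch: defect density (bugs per 1,000 lines) for AI vs manual code.
# The input records and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CodeStats:
    origin: str  # "ai" or "manual"
    lines: int   # lines merged to production
    bugs: int    # defects traced back to those lines

def defect_density(stats: list[CodeStats], origin: str) -> float:
    lines = sum(s.lines for s in stats if s.origin == origin)
    bugs = sum(s.bugs for s in stats if s.origin == origin)
    return 1000 * bugs / lines if lines else 0.0

weekly = [CodeStats("ai", 4200, 6), CodeStats("manual", 9800, 9)]
print(round(defect_density(weekly, "ai"), 2))      # 1.43 bugs per 1,000 AI lines
print(round(defect_density(weekly, "manual"), 2))  # 0.92 for manual code
```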
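And for step 3, one possible CI gate: run the Snyk CLI's static analysis and fail the build on findings. This sketch assumes the snyk CLI is installed and authenticated in the pipeline; the severity threshold is a project policy choice:

```python
# Minimal CI gate sketch: fail the pipeline if Snyk's static analysis finds
# issues at or above the chosen severity. Assumes the `snyk` CLI is installed
# and authenticated in the CI environment.
import subprocess
import sys

result = subprocess.run(
    ["snyk", "code", "test", "--severity-threshold=high"],
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode != 0:
    sys.exit("Static scan failed: review AI-generated code before merging.")
```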

Glossary: Key Terms

  • Prompt engineering - creating precise queries to AI to get accurate results.
  • Vibe coding - programming through natural-language commands without writing code manually.
  • Multi-model architecture - using multiple AI models in one tool for different tasks.
  • AI TRiSM - an approach to trust, risk, and security management for AI systems.

Conclusion

AI assistants are reshaping the very nature of programming. They free us from routine, increase speed, and open new horizons for creative engineering thinking: by automating repetitive tasks (e.g., templated tests, CRUD generation), teams save up to 12 hours per sprint, reallocating time to architectural planning and system design. But behind this progress lie not only opportunities, but also challenges - technical, ethical, organizational.

Organizations and developers who learn to use AI consciously - with an understanding of the risks, preserving fundamental skills, and focusing on quality - will gain a strategic advantage. Those who overestimate AI or underestimate the consequences will face a new wave of technical debt and vulnerabilities.

The future of programming is the cooperation between humans and AI. And it is precisely now that practices are being formed that will determine how productive, safe, and truly transformative this collaboration will be.
