
Trying to Disrupt Five Development Practices Using AI
From our comprehensive framework of 14 software development activities, we selected five areas for our initial AI disruption experiments. Our goal wasn't to measure time savings—there are already countless articles attempting to quantify AI efficiency gains. Instead, we treated generative AI as a truly disruptive technology and looked for ways it could redefine what we do, not just make us incrementally faster.
Our Five Focus Areas
We chose areas that represented different types of software development work and offered the potential for fundamental transformation:
- AI-Assisted Code Writing (“Incremental Development”) - Probably what most development organizations mean when they first say they use AI: incorporating AI assistants such as GitHub Copilot into the development workflow.
- Jira Ticket Creation and Management (“Project Scoping & Backlog Initialization”) - Making sure tickets reliably cover the product definition and are fully specified according to our operational standards.
- Architectural Documentation (“Architectural & Technical Design Documentation”) - Using AI to produce more comprehensive systems documentation and keep it up to date.
- AI Test Generation (“Quality Assurance & Testing”) - Using AI to identify untested code paths and write tests.
- Release Note Writing (“Project Closure or Transition”) - Measuring the usefulness and clarity of AI-generated release notes.
These areas span the spectrum from highly technical (coding) to highly communicative (release notes), giving us a broad view of AI's disruptive potential across different aspects of what we do.
Why We Didn't Focus On Time Savings/Efficiency
Our goal was explicitly not to measure productivity with and without AI tools. We worried about the behavioral distortions that arise when people measure themselves against machines. But more importantly, we were concerned with the disruptive aspects of generative AI, as opposed to the kinds of incremental benefits we are always working to introduce into our development processes.
As I discussed in the third article in this series, disruptive technologies tend not to produce efficiency gains out of the gate, but render certain things obsolete, thus forcing changes in overall approach. While we certainly took note of areas where we found incremental improvement (e.g. in development velocity) and incorporated them, our main goal was to identify ways these jobs will change and reposition our skills for a fully AI-saturated world.
What We Discovered
After a fairly broad set of experiments across a number of projects, using the tools in different ways, we found we were able to make major changes in how we performed three of these five activities. For Quality Assurance in particular, we found an unexpected and possibly game-changing benefit to rebuilding the process around AI.
Code Writing: Making Every Developer a Full-Stack Owner
Using AI coding tools such as GitHub Copilot provided significant velocity gains for greenfield development (in one case 40% above average) and modest improvements for extending existing code. But the more interesting discovery was how AI enabled more of our developers to function as effective full-stack developers and truly own entire feature sets or epics.
Lineate traditionally hires for a product development mindset and creative orientation rather than specific technical skills. We feel this sets us apart from commodity “find us a developer to carry out my instructions” firms. However, we still end up with specialists on projects. For example, a great data engineer might struggle to build pixel-perfect UIs, so we might bring in a front-end expert to take on part of an analytics feature.
More than we expected, AI assistants helped our developers own entire epics. This enabled most of our team to work effectively as full-stack developers without sacrificing quality on their less-strong skills. In effect, AI provided "an extra pair of hands" controlled by the developer in charge of the epic. The AI excelled at handling the specialized technical details while the human developer maintained strategic control and creative direction.
This isn't just about efficiency. It's about shifting the engineering discipline from mastering a specific set of skills to driving feature sets as part of product development.
Release Notes: Focusing Our Attention On Business Outcomes
Initially, our development teams viewed AI-generated release notes as a way to automate a task they disliked. But the real impact was behavioral.
Since the bulk of the release notes, the descriptions of the features and what they do, could be automated well with AI, our team leads could focus primarily on describing the overall purpose and impact of each release. This is something we had always coached them to do, but having the rest generated automatically freed the teams to spend more time thinking about the overall goal of each release and how well it achieved that goal. By removing the friction of release note creation, AI encouraged more thoughtful consideration of release strategy in a way we had never effectively “managed” before.
Quality Assurance: Shaping Product Definition
The results from QA were the most surprising. We expected our tests to become better and more extensive, but what we didn’t expect was the extent to which it would change how we design and document. By expressing the tests themselves as plain-English requests, the tests became acceptance criteria, and our QA engineers focused less on testing and documentation and even more on identifying edge cases and defining product behaviors.
Good QA actually does far more than just run tests and check requirements. Traditionally, QA engineers examine requirements and acceptance criteria provided by others to confirm software functions as specified. But in practice for us, especially when following Lean Innovation principles, requirements are constantly changing and most testing is automated. The real value-add for QA professionals in such a system is thinking about how software should work, identifying non-obvious failure modes, and shaping feature definition by creating acceptance criteria from relatively sparse requirements.
Generative AI turbocharges this process. Our QA team began writing automated tests using standard English. These tests (especially the UX tests) were less brittle than our usual automated tests and allowed the product design to evolve more flexibly. But what was really interesting was that by having the tests written in English, the tests themselves ended up becoming the written, evolving acceptance criteria of the product.
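To make the idea concrete, here is a minimal sketch of how a plain-English test can double as the feature's acceptance criteria. The feature, scenario wording, and `extract_criteria` helper are all invented for illustration, and the AI runner that would actually interpret and execute the steps is not shown; the point is only that the same English text serves both audiences.

```python
# Hypothetical example: a test written in plain English. An AI-backed test
# runner (not shown) would interpret each step against the live application;
# the same text is published, unchanged, as the feature's acceptance criteria.

ACCEPTANCE_TEST = """
Feature: Saved search alerts
  Scenario: User receives an alert for a new match
    Given a user has saved a search for "2-bedroom apartments"
    When a new listing matching that search is published
    Then the user receives an email alert within 5 minutes
"""

def extract_criteria(test_text: str) -> list[str]:
    """Pull the Given/When/Then steps out of a plain-English test so they
    can be surfaced directly as the feature's acceptance criteria."""
    keywords = ("Given", "When", "Then", "And")
    return [line.strip() for line in test_text.splitlines()
            if line.strip().startswith(keywords)]
```

Because the steps describe intended behavior rather than DOM selectors or API payloads, the test stays readable to product stakeholders and tends to survive UI refactors that would break a conventional automated test.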
The Most Intriguing Discovery
Of all our experiments, the QA transformation intrigues us most. When tests become acceptance criteria written in plain English, the boundary between QA and product management starts to blur, with implications for product management that we hadn't anticipated.
The ability to create living, self-documenting acceptance criteria through AI-assisted test creation could fundamentally change how products are specified, developed, and validated. It's exactly the kind of process redefinition we were hoping to discover.
What's Next
These initial experiments have given us concrete evidence that AI can indeed redefine core software development activities, not just optimize them. The impacts go beyond individual productivity to affect team composition, role boundaries, and strategic processes.
Each of these discoveries deserves deeper exploration. If any of these transformations strike you as particularly interesting—the full-stack developer evolution, the QA-product management convergence, or the behavioral impacts of automated release notes—let us know. We can dive deep into exactly what we implemented and what we learned.
This is the fifth in a series examining how AI is transforming the software development consulting industry. See the previous article.
Author: Ben Engber, CEO