Updated September 5, 2024
Reading time: 6 minutes

Should You Be Using AI to Generate Business Solutions?

Evaluating the downsides and the need for policy.

Every business is continuously on the lookout for ways to optimize spending. Technology is expensive, making cost-cutting in tech development attractive. Lately, multiple companies have announced retooled budgets and concentrated spending on AI-assisted technologies.

Should your business use AI to create and manage technical products instead of hiring more developers? 

The range of products that let you use AI to code keeps growing. You have a choice of AI designers, AI assistants embedded in your code editor that can generate new functionality from just a few prompts, AI developers, and the list goes on.

The companies that sell AI assistants are doing a good job marketing them, so we will not be concentrating on the benefits of these solutions and the best use-case scenarios.  

Instead, let’s cover the downsides and the critical planning considerations so that you can make the best decision for your business.   

Product Completeness 

In software development, the main usage scenario of a product is called “the happy path”: all assumptions hold, and everything operates as expected.

Because software attempts to automate the real world – where things rarely go as expected – it’s important to think of all probable non-standard scenarios and address them in the product.

In software engineering, these scenarios are called edge cases.  
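To make the distinction concrete, here is a minimal sketch (a hypothetical invoice-splitting function, not from any real product) of a happy path and an edge case it misses:

```python
def split_invoice(total, people):
    """Split an invoice total evenly among attendees."""
    return total / people

# Happy path: the implicit assumption (at least one person) holds.
split_invoice(100.0, 4)  # 25.0

# Edge case: zero attendees was never considered, so the call
# below would crash with ZeroDivisionError.
# split_invoice(100.0, 0)

def split_invoice_safe(total, people):
    """Same logic, with the edge case addressed explicitly."""
    if people <= 0:
        raise ValueError("need at least one person to split the invoice")
    return total / people
```

A complete product handles the second version's scenario deliberately; code generated only from happy-path requirements tends to ship the first.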

In addition to edge cases, there are properties of the product that do not strictly cover the functionality but are critical for operations. These properties are called non-functional requirements, and a few examples of those are speed of response (performance), security, and ease of maintaining the product. 

When an AI system is prompted with requirements, it will do exactly as asked, which means there is a risk of missing edge cases and non-functional requirements.

Should you care? It depends on the scale and importance of the solution you are building. Quick MVP for a demo, to try out an idea, or to gauge customer interest? If quality looks acceptable to you, the risk is low.  

Complex app for production environment? Determine if you can afford the risk of missing requirements or logic.  

Code Quality 

Have you ever heard the term “technical debt”?

It refers to suboptimal algorithms and design choices that, if left unchecked, severely slow down future improvements and defect fixes over time.

Almost every company that produces tech solutions has technical debt.

But AI helps produce that debt at breakneck speed.

There are a few reasons for this: 

  • AI is operating from the patterns scraped from the internet – which is full of good but also bad code.  
  • AI is not good at thinking through complex logic and can end up overengineering solutions (which means – coming up with significantly more complicated architecture than warranted).
  • Software development can be prescriptive – it relies on established patterns – but picking the right pattern requires reasoning beyond current AI models’ capacity.

Does this matter in your case? If you are building a temporary solution, it may be decommissioned before quality becomes an issue. Thinking of keeping the product around? It will become a “pay now or pay later” situation. Expect to pay to refactor and address code issues.
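As an illustration of overengineering (a hypothetical example, not from any specific AI tool), an assistant may produce a full class hierarchy where a small lookup table would do:

```python
# Overengineered: a strategy-pattern class hierarchy for a trivial mapping.
class DiscountStrategy:
    def discount(self, amount):
        raise NotImplementedError

class GoldDiscount(DiscountStrategy):
    def discount(self, amount):
        return amount * 0.80

class SilverDiscount(DiscountStrategy):
    def discount(self, amount):
        return amount * 0.90

STRATEGIES = {"gold": GoldDiscount(), "silver": SilverDiscount()}

def price_with_classes(tier, amount):
    return STRATEGIES[tier].discount(amount)

# Simpler, equivalent solution: a plain lookup table.
DISCOUNTS = {"gold": 0.80, "silver": 0.90}

def price_simple(tier, amount):
    return amount * DISCOUNTS[tier]
```

Both produce identical results, but the first version costs more to read, test, and maintain – exactly the kind of debt that accumulates quickly when generated code is accepted without review.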

Lack of domain expertise 

Whom do you call when something breaks or just doesn’t behave as expected? While AI can assist with fixing bugs inside the codebase, it works as a copilot. Will you have a domain expert available who can work with AI to resolve the issues?

Keep in mind that some issues may not be caused by the code itself, but by deployment infrastructure. 

Trying to decipher someone else’s code is difficult. Expect delays when experts start working with your AI-generated code.  

Accuracy  

AI is known to make up answers – a phenomenon known as hallucination.

When you ask it to design a black box that transforms provided inputs into desired outputs, you are placing a certain level of trust that the transformation inside the box will be accurate.

The risk is definitely too high for strictly regulated industries and mission-critical code.

To evaluate whether the risk is worth it, it helps to track an accuracy score so that you can measure the final solution against known use cases.
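One simple way to build such a score is a harness that runs the generated solution against known input/output pairs. This is a minimal sketch; the function names and test cases are hypothetical:

```python
def evaluate_accuracy(black_box, known_cases):
    """Score a solution against known (input, expected_output) pairs.

    Returns the fraction of cases the solution gets right.
    """
    passed = sum(1 for given, expected in known_cases
                 if black_box(given) == expected)
    return passed / len(known_cases)

# A stand-in for an AI-generated transformation under evaluation.
def normalize_phone(raw):
    return "".join(ch for ch in raw if ch.isdigit())

cases = [
    ("(555) 123-4567", "5551234567"),
    ("555.123.4567", "5551234567"),
    ("+1 555 123 4567", "15551234567"),
]
score = evaluate_accuracy(normalize_phone, cases)  # 1.0 for this set
```

Tracking this score over time – and expanding the known cases as edge cases surface – turns “trust the black box” into a measurable decision.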

Lack of time savings 

Speak to any developer, and they will tell you that writing code actually takes the least amount of time.  

There are requirements, architecture, design, debugging, building supporting systems (like authentication), deployment, analytics, monitoring and observability, optimizing, and scaling.

It’s important to establish accurate metrics to gauge whether AI-generated code produces the expected benefits once you factor in the additional time required to turn that code into a production-ready, functional product.

Security and compliance 

Security is a complex, constantly evolving field. Since AI follows patterns, you need to know what to ask for.

Without being specific, it’s easy to create a product that can expose your organization to various security risks.  

If you are building something for internal consumption only (meaning the product will not be hosted on the public internet), the security risk is somewhat reduced but not eliminated. Consider that even a protected environment can be breached. 

The issue is especially concerning if the AI-generated product can perform automated tasks. Evaluate the potential damage to your organization if unauthorized third parties ever gain access and use your product as a new attack vector.
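A classic example of a pattern an assistant can reproduce from the internet, unless you specifically ask for safe handling, is building database queries by string concatenation. The sketch below (hypothetical table and function names) contrasts it with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Common but dangerous: the input becomes part of the SQL itself.
    # An input like "x' OR '1'='1" returns every row in the table.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
# find_user_unsafe(payload) dumps the whole table;
# find_user_safe(payload) matches nothing.
```

Both functions satisfy the same happy-path prompt (“find a user by name”), which is exactly why security requirements must be stated explicitly.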

Licensing 

Coding assistants are trained on existing codebases, including open-source projects.  

Open-source software does not equal free software. It is governed by licenses – some of which are more restrictive than others – and may be restricted by patents.

You should be aware of the risk that code snippets for your project may be lifted from open-source projects and the legal exposure that can come from that.   

Approaching AI usage policies 

Every organization should have (or be working on) an acceptable AI code generation policy covering the following aspects:

  • What is allowed to be AI-generated? Complex tasks or regulated industries should require human coding only. 
  • How would the quality be maintained? Require AI-generated code to be clearly marked for future identification and set specific standards for code quality and style. 
  • What role do humans play? The code generated by AI must be reviewed by humans possessing the necessary qualifications to evaluate the code. 
  • How can we prove that AI helps to achieve goals? The reason for using AI is to increase developer productivity. Creating accurate metrics and running continuous analytics is critical to understanding the true impact.   
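A marking convention only pays off if you can audit it later. As one possible approach (the marker string and review format here are an invented in-house convention, not a standard), a small script can locate every marked file for review:

```python
from pathlib import Path

# Hypothetical in-house marker, e.g. "# AI-GENERATED (reviewed by: jdoe)"
MARKER = "AI-GENERATED"

def find_marked_code(root):
    """Return (file, line number) pairs where the AI marker appears."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if MARKER in line:
                hits.append((str(path), lineno))
    return hits
```

Feeding the resulting list into code-review dashboards or metrics reports is one way to connect the “clearly marked” and “prove the impact” policy points above.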

AI as a tool 

As with any tool you may consider adopting in your organization, AI usage should be evaluated from a business-goals and risk perspective, and the results should be recorded in the policy.

Creating standards reduces the risk introduced by the spontaneous adoption of emerging products and technologies. Making emerging-tech evaluation part of your strategic planning keeps that threat in check.