26.09.2025 / 12 min read

Where AI Can't Help You: Cases of AI Hindering Business Processes

Roman Zomko, Co-Founder and CEO

Despite the remarkable advances in artificial intelligence, business leaders are discovering a critical truth: the most impactful work still requires distinctly human capabilities. While AI excels at processing data and automating routine tasks, it falls short in areas that define competitive advantage and sustainable growth.

Understanding these limitations isn't about dismissing AI's value but about recognizing where human skills remain irreplaceable. Companies that thrive will be those that strategically combine AI's efficiency with human judgment, creativity, and connection.

Let's examine the crucial areas where AI and generative AI integration simply cannot replace human expertise, and why these skills will become even more valuable as automation advances.

 

When AI Implementation Backfires: Case Studies Across Industries

Even well-established businesses have seen AI efforts worsen their business processes. Here are sector-specific examples that illustrate the AI system, its failure, and the outcome.

 

Retail: Target's Predictive Marketing Scandal

Target's predictive marketing analytics model sought to identify major life events (such as pregnancy) for targeted marketing. The algorithm inferred a teenager's pregnancy from her shopping patterns and mailed targeted maternity ads to her home, disclosing the pregnancy to her family before she had shared it. This triggered a public privacy and ethics controversy.

Consequences: Following this incident, the corporation reevaluated its approach to using customers' data to prevent trust breaches, closely monitoring and fine-tuning its personalization efforts.

 

Finance: Apple Card's Algorithmic Bias

The Apple Card's credit underwriting algorithm (developed by Goldman Sachs) aimed to automatically set and update Apple Cardholders' credit limits. After its release, one tech employee noticed that he had received a significantly higher credit limit than his spouse, even though she had a better credit score. The AI appeared to discriminate by gender, likely due to biased training data or flawed variables.

Consequences: The algorithm came under public and legislative scrutiny for its biased lending outcomes, compounded by poor model documentation that made the decisions difficult to explain or defend.

 

Healthcare: IBM Watson at MD Anderson

MD Anderson Cancer Center partnered with IBM Watson to create the Oncology Expert Advisor, an AI tool that used natural language processing to review patient records and suggest potential cancer treatments. Despite the hype, Watson's recommendations were often not useful in practice or accepted by physicians. The system struggled with the complexity and nuance of oncology. An internal audit found it hadn't met clinical goals, and physicians were hesitant to trust a "black box" tool. IBM's powerful AI ultimately proved no match for the messy reality of healthcare, revealing a fundamental mismatch between how machines learn and how doctors work.

Consequences: After four years and $62 million spent, MD Anderson canceled the project in 2016. This high-profile failure hurt IBM's reputation in health AI and underscored that without physician buy-in and clear efficacy, AI can become a costly dead-end.

 

Manufacturing: Tesla's Over-Automation Misstep

For the Model 3, Tesla built a heavily automated assembly line, utilizing cutting-edge robotics and AI to replace human labor. The AI business process automation system became so convoluted that it started to slow production. As CEO Elon Musk admitted, "excessive automation at Tesla was a mistake… Humans are underrated." In trying to automate every part of the line, Tesla lost sight of the actual goal: completing production.

Consequences: Tesla missed production targets, resulting in late deliveries. The company scaled back the automation and reintroduced human workers to the line to prioritize output. The incident highlighted that blindly automating everything can hinder operational performance and that human skill remains crucial in many processes.

These cases illustrate how AI projects can fail by violating fundamental business realities, whether by neglecting privacy, embedding bias, lacking user trust, or overestimating technology. Each failure taught a lesson: the new technology should meet the actual needs of the business, the data and models must be aligned, and humans must remain in the loop.

 

How AI Hinders Business Processes

AI isn't always the silver bullet businesses think it is. In some instances, traditional methods involving human interaction and rule-based systems produce improved outcomes with less risk, greater reliability, and lower costs. So, there is value in recognizing when to pass on AI and knowing when to apply it.

For now, let’s focus on cases when it can do more damage than good.

 

False Positives Overwhelm True Results

One of the most critical and overlooked obstacles to AI deployment is false positives overwhelming accurate identifications. They create a cascade of inefficiencies that can paralyze operations rather than streamline them.

To illustrate this, consider a monitoring system that aims to detect customer service agents who shout at customers. The AI may misidentify legitimately loud communication as abusive: distinguishing the target speech from unwanted background sound is arguably the biggest open issue in voice AI development right now. Customer service managers then devote inordinate amounts of time to reviewing false positive alerts instead of focusing on real abuse cases, which defeats the whole point of having AI in place.

As a result, the hidden costs of false positives include:

  • Increased manual review, which negates AI's gains in operational efficiency.
  • Alert fatigue, which leads to real problems being overlooked.
  • False flags against legitimate business activity.
  • Diminished trust in automated systems across the business.

Better alternative: Apply threshold-based monitoring with random auditing to minimize false positives while maintaining the desired level of supervision.
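The threshold-plus-audit idea can be sketched as follows. This is a minimal illustration, not a real product's API: the threshold, audit rate, and routing labels are assumed values you would tune against your own false-positive budget.

```python
import random

# Illustrative sketch: only alert when the model's confidence clears a high
# threshold, and independently audit a small random sample of all calls so
# that low-confidence abuse still has a chance of being caught.
ALERT_THRESHOLD = 0.9   # assumed value; tune against your false-positive budget
AUDIT_RATE = 0.02       # fraction of calls routed to random human review

def route_call(call_id: str, abuse_score: float) -> str:
    """Decide how a scored call is handled. `abuse_score` is the model's
    estimated probability (0..1) that the call contains agent abuse."""
    if abuse_score >= ALERT_THRESHOLD:
        return "review"          # high-confidence alert goes to a human queue
    if random.random() < AUDIT_RATE:
        return "random_audit"    # spot-check keeps the model honest
    return "pass"

calls = [("c1", 0.95), ("c2", 0.40), ("c3", 0.10)]
decisions = {cid: route_call(cid, score) for cid, score in calls}
print(decisions["c1"])  # -> review
```

The random audit stream is what prevents alert fatigue from becoming blindness: it gives managers a small, steady sample of ordinary calls against which to measure the model's miss rate.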

 

Zero Error Tolerance Makes AI Unusable

In certain industries where extreme precision is needed, AI’s unpredictability becomes a liability rather than an asset. This is especially true in the preparation of legal documents.

When lawyers use AI to prepare court documents, every single citation, every regulation, and every legal precedent must be checked and approved. Courts and attorneys cannot afford to submit incorrect legal citations, fabricated case histories, or hallucinated documents. A single such mistake becomes a potential malpractice case, risking far more than the time saved.

Here are the high-stakes scenarios requiring perfect accuracy:

  • Medical diagnoses and treatment recommendations.
  • Financial regulatory compliance reporting.
  • Cybersecurity system controls.
  • Legal document preparation for court submissions.

Strategic Approach: Use AI only for research and draft generation, and always route the output through a human review and approval workflow. Use AI as a starting point, not a final product.
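A human-in-the-loop gate like the one described above can be sketched as a simple data structure. The citation format (`[1]`, `[2]`) and the class design are illustrative assumptions; the point is that the draft cannot be filed until every citation has been explicitly signed off by a person.

```python
import re
from dataclasses import dataclass, field

# Illustrative draft-then-approve workflow: the AI output is only a starting
# point, and nothing ships until every extracted citation is verified.
CITATION_PATTERN = re.compile(r"\[\d+\]")  # assumed citation format, e.g. "[2]"

@dataclass
class Draft:
    text: str
    approved_citations: set = field(default_factory=set)

    def citations(self) -> set:
        """All citation markers the AI placed in the draft."""
        return set(CITATION_PATTERN.findall(self.text))

    def approve(self, citation: str) -> None:
        self.approved_citations.add(citation)  # a human reviewer verified it

    def ready_to_file(self) -> bool:
        # The draft may leave the firm only when every citation was checked.
        return self.citations() <= self.approved_citations

draft = Draft("Per [1] and the holding in [2], the motion should be granted.")
print(draft.ready_to_file())  # -> False: nothing has been verified yet
draft.approve("[1]")
draft.approve("[2]")
print(draft.ready_to_file())  # -> True
```

The design choice worth noting is that approval is per-citation, not per-document: a reviewer who checks one reference cannot accidentally bless the rest.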

 

Data Quality Undermines AI Effectiveness

The massive effort required to achieve data quality slows down AI implementation.

[Image: gen AI maturity level]

But there's more to this. See, AI systems are only as good as the data on which they're trained. Data that is insufficient, unbalanced, or of poor quality will yield results worse than those produced by traditional analytics. 

Consider the case of a recruiting firm's AI analysis of resumes. If past discriminatory hiring practices are incorporated into the historical hiring datasets used to train the AI system, the AI will reinforce these practices. The system might systematically disadvantage qualified candidates from underrepresented groups, creating legal liability and missing valuable talent.

Here are data quality red flags:

  • A lack of sufficient historical data to use for model training.
  • Data sets with documented bias.
  • Changing business conditions that make historical data irrelevant.
  • Discrepancies in data collection methods during different time frames.

Alternative approaches: Implement statistical analysis, structured scoring systems, or blind review processes that acknowledge limitations while maintaining objectivity.
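One widely used screening heuristic for the hiring scenario above is the four-fifths (80%) rule: before training on historical hiring data, compare selection rates across groups and treat any ratio below 0.8 as a red flag. The sketch below uses assumed toy numbers, not real hiring data.

```python
# Illustrative four-fifths (80%) rule check for adverse impact in a
# historical hiring dataset, run before any model training.
def selection_rates(outcomes: dict) -> dict:
    """`outcomes` maps group -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_red_flag(outcomes: dict, threshold: float = 0.8) -> bool:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Flag if any group's selection rate falls below 80% of the best group's.
    return any(rate / best < threshold for rate in rates.values())

history = {"group_a": (30, 100), "group_b": (12, 100)}  # assumed toy numbers
print(four_fifths_red_flag(history))  # -> True: 0.12 / 0.30 = 0.4 < 0.8
```

A check like this does not prove or disprove discrimination; it simply flags datasets that deserve a structured review before they are allowed to train a model.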

 

Red Flags: Signs Your Business Isn't Ready for AI

Assessing your organization's position in the AI readiness continuum is crucial for developing a successful AI approach. To facilitate this evaluation, note the signs that your organization lacks readiness for AI.  

 

Your Data Exists in Silos  

Disconnected systems are one of the largest hurdles to adopting AI. When your CRM, finance, and operations systems do not communicate with one another, you create data silos, which are detrimental to the success of AI initiatives. In fact, 57% of organizations say their data is not "AI-ready," and those without ready data will likely fail to achieve their AI objectives.

To function properly, AI models need consolidated and easily accessible data. If customer data is kept in one system, transaction data in another, and operational metrics in a third, your AI projects will likely face challenges right from the beginning. So, data silos become the greatest barrier to businesses becoming data-driven.  

The answer lies in data integration. Organizations must dismantle data silos and connect disparate systems to create a single source of truth. Modern integration solutions such as AWS Glue, Fivetran, and Matillion are making this possible for even smaller businesses. AI systems will be unable to deliver reliable results without access to the clean, integrated, consolidated data that is essential.

 

You Are Still Dependent on Spreadsheets

Spreadsheets are widely considered a technology of the past and an inadequate foundation for any AI initiative. MarketWatch reports that as many as 88% of spreadsheets contain some form of inaccuracy. Back in 2012, JPMorgan's roughly $6 billion "London Whale" loss was traced in part to a simple copy-and-paste error in an Excel-based risk model. If important business functions are poorly automated and driven by Excel, introducing AI will only automate the inefficient processes and errors that already exist.

The Financial Conduct Authority highlighted the risks posed by uncontrolled spreadsheets in regulated industries. A single misplaced formula can lead to flawed insights at the enterprise level, especially with AI amplifying these mistakes within automated workflows.  

Organizations must identify high-value activities currently living in spreadsheets and migrate them into more robust systems. This means moving the data into structured databases, building reports on business intelligence tools like Power BI or Tableau, and teaching the business to control the data flow from source systems all the way to storage. This is the data infrastructure needed to support AI models.

 

No Clear Use Case or Strategy

If you cannot name a specific problem AI is meant to solve, it's too early to start. Pursuing AI just because “everyone else is doing it” is the surest way to waste funds. Research by PwC indicates that although 86% of executives consider AI to be a mainstream technology, fewer than 20% have adopted it at scale within their business. 

The absence of well-defined, actionable use cases is a primary reason for the gap.

[Image: gen AI use cases for businesses]

 

When organizations initiate an AI pilot, they need to anticipate what specific business outcomes they want to achieve; otherwise, they risk losing steam. This is why businesses mostly focus on localized impact.

 

[Image: gen AI for business]

Companies should conduct an AI opportunity discovery and value assessment to prioritize AI use cases. To prove business value and demonstrate ROI, organizations need to set outcome metrics before an AI implementation project starts. Defined metrics also support the incremental investment strategy needed to extend the solution to the broader use cases envisioned.

 

Cultural Resistance to Change

AI adoption comes with major people-related challenges. The lack of reskilling and upskilling is culturally one of the most pressing AI adoption hurdles. Employee resistance to AI adoption or AI training programs stems mostly from fear of job displacement or of losing control to advanced systems.

[Image: gen AI risks for businesses]

In this case, leadership buy-in means everything. Start with small AI experiments contained in an organizational sandbox, where failures never reach the outside world.

 

Lack of Basic Analytics and Automation

If a business has not automated its basic processes or mastered business intelligence, introducing AI is likely to be ineffective. Companies that pursue advanced AI without a foundation of automated processes and analytics often become “paralyzed,” according to a Harvard Business Review article. In this situation, organizations tend to become burdened by expensive black-box systems that cannot be utilized effectively. 

An organization that isn’t already using analytics to drive decisions or is still relying on manual processes will certainly struggle to capture AI’s value. A clear sign of this uncaptured value is when businesses pursue AI simply for AI’s sake, without having optimized more basic digital processes.

 

Inadequate Technology Infrastructure

High-performance AI systems require advanced computing and seamless integration with other cloud systems. For AI tools to be implemented and scaled, you need modern cloud infrastructure, robust systems free of legacy constraints that hinder foundational AI work, and adequate GPU computing resources. Dismissing weak data privacy or poor cybersecurity practices is also unwise: AI will amplify previously contained risks.

A red flag here is an IT department that already experiences friction with current data volumes or software, yet has no plan to scale its infrastructure or to secure data for AI projects.

Should one or more of these red flags seem familiar to you, take a moment to build more solid ground. Experts agree that only about a quarter of organizations have actually achieved tangible value from AI so far. The rest are still trying to figure out what value it can bring them, which often stems from the gaps we've described above.

The following section provides a roadmap to help you work on exactly that. If you feel like you need help identifying solid AI opportunities within your ecosystem, contact us, and we'll gladly help! 

 

Building AI Readiness: A Strategic Foundation

So far, many business leaders have struggled to capture the value AI has to offer. After the first wave of AI hype, however, the focus has turned to value and competitive edge, which is what matters most to any business.

We’ve researched important aspects to keep in focus when pursuing successful implementation.

 

[Image: AI readiness roadmap]

 

1. Data Must Accurately Reflect Your Process

If you want to predict parts of a business process, you need a complete dataset that reflects the whole process. Suppose you have a dataset on the performance of your best employees, and you want to use it to enhance your hiring decisions. This idea has a major flaw. Your hiring procedure involves sifting through 1,000 applications, from which you identify 100 candidates. You then assess these 100 for the high-performance criteria. 

The problem is that the high performers you analyze do not form the whole dataset. You have only analyzed candidates who passed your screening process. Your data completely ignores the 900 candidates who did not make it, some of whom might have had outstanding potential.

Solution: For any decision requiring AI implementation, strive to collect data representing your complete decision-making framework, including both selected and rejected alternatives. This applies to customer segmentation, vendor selection, geographic location planning, and hiring decisions. Never base future selection decisions solely on data from the outcomes that were selected; otherwise, you will be building on a weak and biased foundation.
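The funnel described above is easy to demonstrate in a few lines. This is a hypothetical simulation with made-up numbers: 1,000 applicants with some latent "potential," of whom only the top 100 survive screening. Any model trained only on the survivors sees a sample whose average potential sits far above the population's.

```python
import random

# Hypothetical simulation of the hiring funnel: 1,000 applicants, only the
# top 100 pass screening, and the model sees outcomes for those 100 alone.
random.seed(42)

applicants = [random.gauss(0, 1) for _ in range(1000)]  # latent "potential"
screened = sorted(applicants, reverse=True)[:100]       # top 100 survive

full_mean = sum(applicants) / len(applicants)
screened_mean = sum(screened) / len(screened)

# A model trained only on `screened` learns "what predicts success" from a
# heavily truncated sample, so its patterns do not transfer to the 900
# candidates it never observed.
print(round(full_mean, 2), round(screened_mean, 2))
```

The gap between the two means is the survivorship bias: the training data systematically excludes exactly the cases the model would most need to learn from.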

 

2. Comprehensive Data Doesn't Guarantee Meaningful Patterns

Abundant representative data may simply constitute extensive noise rather than valuable insight. Just because you have a large dataset does not mean you can apply advanced techniques and obtain actionable intelligence. Organizations are often tight-lipped about the numerous failures that accompany their successful machine learning projects. Data mining earns its name precisely because not every data source contains valuable insights.

For instance, employee evaluation programs might seem ideal for ML-based analysis. They offer potential correlations between individual characteristics and innovation success. However, a multitude of case studies have shown that no matter how rigorous, the analysis often surfaces no meaningful patterns. Success or failure often hinges on random, unobservable variables, like the mood of the contributor or evaluator.

Solution: Focus on domains where decision quality varies significantly between experts and beginners. If an experienced senior manager has repeated success with, for example, a pricing strategy, supplier selection, or hiring, this indicates a pattern that might be useful for your organization. Human decision-makers also rely on pattern recognition: here, the algorithm is the mind, and the dataset is experience.

Another option is to work with academic researchers. Most academics are not under the immediate pressure of profit. They are paid to identify curious patterns in data, and they can conduct a wider range of unprofitable searches than most businesses are willing to allow. This is often a valuable training opportunity for doctoral students and can be a win-win for both the students and the business.

 

3. Patterns Must Remain Consistent Over Time

Every machine learning algorithm assumes tomorrow's world will resemble yesterday's environment. While ML can surprise us by revealing stable patterns invisible to human perception, this stability isn't guaranteed. If there is too much volatility in a process, extensive historical data will be of no use, as it will not aid pattern detection. Consider AI in wealth management trained on the patterns of multinational corporations between 2000 and 2015. Historians and researchers might appreciate the data, but it is unreasonable to treat it as a predictor of corporate behavior in 2019, given the significant global shifts in between.

Solution: Prioritize projects in relatively stable domains or those changing in predictable ways. When uncertain about stability, rely on external exploration or segment data into smaller periods with greater consistency, frequently retraining your models.
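The segment-and-retrain idea can be sketched with a deliberately trivial "model": instead of fitting once on all history, refit on a rolling window of the most recent points so that drifting patterns are forgotten on purpose. The window size and the mean-based predictor are illustrative stand-ins for a real model.

```python
# Illustrative rolling-window retraining: refit a trivial mean-based "model"
# on only the most recent window, so old regimes stop influencing forecasts.
def rolling_forecasts(series: list, window: int = 4) -> list:
    """Predict each point as the mean of the `window` points before it."""
    preds = []
    for t in range(window, len(series)):
        recent = series[t - window : t]     # retrain on the latest window only
        preds.append(sum(recent) / window)  # trivial stand-in for a real model
    return preds

# A regime shift halfway through: a model trained once on all history would
# keep predicting ~1, while the rolling model adapts toward the new level.
data = [1, 1, 1, 1, 1, 10, 10, 10, 10, 10]
print(rolling_forecasts(data)[-1])  # -> 10.0 (mean of the last four 10s)
```

The trade-off is the classic one: a shorter window adapts faster to regime changes but is noisier; a longer window is more stable but slower to notice that the world has moved on.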

 

4. Patterns Shouldn't Perpetuate Problematic Practices

Amazon used machine learning algorithms to screen job candidates but had to abandon the approach after the system produced gender-biased recommendations. The algorithms learned and then replicated the discriminatory hiring patterns embedded in the historical data used to train them. The case highlights the need for responsible AI development: a system cannot correct itself when it is trained on the output of systematically discriminatory practices.

Solution: Strategically designed algorithms can give HR executives more accurate insights even when managers themselves are unaware of discriminatory practices. In such situations, it is more useful to improve the algorithms by adding bias assessments than to abandon them altogether. The absence of bias in ML applications using personnel data should be the industry standard, which ultimately means eliminating prejudice from the human processes that generate the data in the first place.

 

Final Word

As you evaluate AI tools for your business, ask yourself: Does this technology free me to focus on work that requires human judgment, creativity, and connection? If the answer is yes, you're on the right path toward leveraging AI while preserving the human elements that drive lasting success.

The future doesn't belong to humans or machines; it belongs to those who understand how to combine both effectively while never losing sight of what makes us irreplaceable.
 
