Itai Liptz: How to Tell if Your AI Project is a Productivity Tool or a Time Sink

by Ethan
9 months ago
in Business

The promise of AI has always been rooted in efficiency. Companies adopt these tools hoping to streamline tasks, reduce costs, and free up human time for more meaningful work. But for every success story, there are plenty of projects that stall or burden teams with more complexity than they eliminate. A tool that was meant to simplify a process ends up requiring more maintenance, training, or review than the process itself ever did.

Entrepreneur Itai Liptz has worked with companies at every stage of AI adoption, and he’s seen firsthand how easily a good idea can get lost in its own implementation. He helps teams cut through the appeal of novelty and focus instead on practical outcomes. “There’s a difference between something that’s powerful in theory and something that helps people in practice,” he said. “You don’t always see that until a few months in—when the tool is either invisible because it works, or a constant topic in meetings because it doesn’t.”

The gap between potential and performance is more common than many leaders expect. According to S&P Global, 42% of enterprises scrapped the majority of their AI initiatives in the past year—more than double the previous year’s rate. Most of those projects didn’t collapse from technical failure. They unraveled because teams couldn’t show that the tool solved a real problem.

True, there’s no universal formula, but some patterns recur. The sections that follow focus less on how the tool is built and more on how it functions in everyday work.

Table of Contents

  • Start with what’s supposed to change
  • Look at who’s using it—and how
  • Track how it’s changing collaboration
  • Weigh the costs beyond the price tag
  • Where things go off track
  • Itai Liptz: What success actually looks like

Start with what’s supposed to change

Before measuring results, it’s worth revisiting the reason the project began. Too often, teams start building or buying AI tools before identifying a meaningful problem. The technology gets treated as an end in itself. Once deployed, it might technically perform well, but without real impact, it becomes another layer of infrastructure to manage.

This tends to happen when automation targets tasks that are low effort to begin with. A tool might auto-generate summaries or sort documents faster than a human, but if those tasks were already fast and low-value, the gain is negligible. Worse, that time and budget could have gone toward a process that actually needed improvement.

Liptz recalls one company that built a natural language generator to produce weekly internal updates. “The tool was clever,” he said. “But the reports weren’t the bottleneck. They took minutes to write. Meanwhile, the team was spending hours reconciling data from different systems, something the tool couldn’t touch.” In hindsight, he says, the project solved a problem no one had.

This kind of mismatch is widespread. One deployment study found that more than 70% of AI projects never make it past the pilot stage. They don’t fail on technical grounds; rather, they simply can’t demonstrate value beyond a proof of concept. A good starting point is to name the task or workflow the AI was supposed to improve. Not just broadly (“reporting” or “customer support”) but in practical terms: who used to do it, how long it took, and what outcome they needed. If no one can answer that in detail, the project likely lacked focus from the beginning.
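As a rough illustration of what naming the task in practical terms can look like, here is a minimal sketch in Python. The field names and example values are assumptions for illustration, not a template from the article; the point is simply that if these fields can’t be filled in, the project lacked focus.

from dataclasses import dataclass

@dataclass
class TargetWorkflow:
    """One concrete task an AI project is supposed to improve."""
    name: str                     # the specific task, not the broad category
    current_owner: str            # who does it today
    hours_per_week: float         # how long it takes now
    desired_outcome: str          # the result the team actually needs
    expected_hours_saved: float   # the improvement the project is betting on

# Hypothetical example, purely illustrative.
reconciliation = TargetWorkflow(
    name="Reconcile data across billing and CRM systems",
    current_owner="Two analysts on the finance team",
    hours_per_week=6.0,
    desired_outcome="A single trusted figure for weekly revenue",
    expected_hours_saved=4.0,
)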

Look at who’s using it—and how

Adoption numbers can be misleading. A tool may be rolled out across a department or show high login rates, but those metrics don’t tell you whether it’s improving daily work. The more revealing question is whether people rely on it to get something done or if they’re still falling back on old systems when the stakes are high.

It’s common for AI tools to gain traction with a few early enthusiasts while others avoid them. That kind of split usage often signals unclear value or poor usability. If only a handful of people truly understand how to get useful output, or if the tool feels like it’s designed for a different audience than the one using it, adoption will plateau.

Liptz advises clients to observe behavior more than dashboards. “If your staff are still copy-pasting into spreadsheets or messaging each other for the same data they’re supposed to get from the tool, that tells you everything,” he said. “Workarounds are the loudest signals in quiet rooms.” They don’t always show up in analytics, but they’re a clear indicator that something’s off.

It’s also worth checking whether the tool has a designated owner or champion. In many cases, AI systems are deployed without clear responsibility for their upkeep or improvement. When something breaks or needs updating, it falls through the cracks. A tool that’s genuinely useful tends to have advocates, people who would miss it if it were gone and who push for it to get better over time.

Track how it’s changing collaboration

One of the less obvious effects of AI tools is how they alter the way teams communicate. A system that promises better insights or automation may technically succeed while also introducing confusion. People may disagree about what the tool is telling them, how to interpret results, or which version of the truth to trust.

Some of this is inevitable. AI tools often replace a single step in a larger chain, and that step rarely operates in isolation. For example, a language model that generates client emails may produce faster drafts, but if every email still requires a manager’s review, the time savings evaporate. Worse, it might delay responses if people wait longer to finalize drafts or second-guess the AI’s tone.

Liptz noted that in some organizations, collaboration actually slows down after a new system is introduced. “You get more status meetings, not fewer. People start validating outputs manually, double-checking data, or asking around for a ‘real’ answer,” he said. That is an indication that trust in the output hasn’t been earned.

The most useful test is whether conversations have become more focused and decisions faster since the tool’s rollout. If people spend less time aligning and more time executing, the change is likely helping. But if meetings grow longer or explanations more frequent, the system may be compounding uncertainty.
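One way to make that test concrete is to compare simple before-and-after numbers, such as average weekly hours spent in status and alignment meetings. A minimal sketch, using made-up figures:

# Hypothetical weekly hours spent in status/alignment meetings,
# four weeks before and four weeks after the tool's rollout.
before = [5.0, 4.5, 5.5, 5.0]
after = [6.5, 6.0, 7.0, 6.5]

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)
change = avg_after - avg_before

# If alignment time grows after rollout, the tool may be adding
# uncertainty rather than removing it.
print(f"Avg meeting hours/week: {avg_before:.1f} -> {avg_after:.1f} "
      f"({'+' if change >= 0 else ''}{change:.1f})")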

Weigh the costs beyond the price tag

It’s easy to underestimate the total cost of an AI initiative. The subscription or development fee is usually just the beginning. Training, integration, oversight, and ongoing updates often require internal labor that goes uncounted. And when that work is distributed informally, it becomes harder to measure yet still affects capacity.

This is especially true with systems that need regular tuning. Whether it’s prompt refinement, retraining on new data, or adjusting configurations, many AI tools don’t just “run” on their own. They require attention, and that attention costs time. If the same few people are constantly debugging or explaining the tool to others, the burden can grow quietly but significantly.

Liptz encourages teams to track hours, not just dollars. “Ask yourself: how many total person-hours does this tool take to maintain each month? Not just technical hours—include the time spent interpreting results, explaining workflows, and dealing with edge cases.” If the total effort isn’t decreasing over time, the return may not justify the investment.
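To put the “hours, not just dollars” framing in concrete terms, here is a minimal sketch that totals monthly person-hours spent around a tool and compares them to the hours it gives back. The categories and figures are hypothetical, not numbers from the article.

# Hypothetical monthly person-hours spent keeping an AI tool useful.
maintenance_hours = {
    "prompt and configuration tuning": 10,
    "retraining / data refreshes": 8,
    "interpreting and validating outputs": 12,
    "explaining workflows to colleagues": 6,
    "handling edge cases and escalations": 9,
}

hours_saved_per_month = 40  # time the tool actually gives back (assumed)

total_maintenance = sum(maintenance_hours.values())
net_hours = hours_saved_per_month - total_maintenance

print(f"Maintenance: {total_maintenance} h/month")
print(f"Saved:       {hours_saved_per_month} h/month")
print(f"Net benefit: {net_hours} h/month")
# If the net figure isn't improving month over month, the return
# may not justify the investment.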

This doesn’t mean every tool has to save money immediately. Some systems provide long-term value by reducing risk or improving quality. But those benefits should be specific and measurable. If the primary outcome is reputational (being able to say the company is using AI), then its operational value deserves a closer look.

Where things go off track

Misalignment often starts early. Teams may feel pressure to adopt AI quickly, especially when competitors are doing so or internal stakeholders are eager to innovate. In that rush, they might prioritize tools that demo well over those that integrate cleanly. A polished prototype can generate interest, but that doesn’t guarantee it will work within existing systems.

Another common issue is skipping the groundwork. Automation works best on clearly defined processes, but many teams try to apply it to workflows that are messy or inconsistent to begin with. The AI ends up codifying confusion. That can make things harder to fix later, since the problems become embedded in the system.

Liptz has seen this happen even in well-resourced organizations. “There was a case where a sales team implemented AI to analyze calls and generate CRM updates,” he said. “But the CRM itself was full of outdated fields, and no one agreed on what the notes were supposed to contain. So the AI pulled from bad inputs and created bad outputs faster.”

Projects are more likely to succeed when the team defines a narrow scope and tests it with real users early. That helps surface edge cases, usability issues, and integration challenges before the system becomes too entrenched. A small, well-functioning tool that solves one problem is often more valuable than a broad, glitchy platform with dozens of features.

Itai Liptz: What success actually looks like

When an AI project works, it tends to stop being the center of attention. People use it without thinking about it. Tasks get done faster, with fewer steps. Meetings shrink or disappear entirely. The tool becomes part of the background.

One of the clearest signs of success is that the system no longer needs to be explained. Employees trust it, act on its output, and no longer view it as experimental. If something changes, there’s someone accountable who can adapt it quickly and competently.

There’s also a shift in team energy. Instead of spending time validating or fixing AI output, people focus on the work the tool enables. Analysts might spend more time identifying insights than cleaning data. Customer service teams may respond more quickly because suggested replies are usable without revision. These changes are often subtle at first, but they accumulate.

Liptz put it simply: “The best AI tools feel almost boring once they’re working. They don’t call attention to themselves. They just make everything else easier.” That’s a helpful test. If the tool supports real work and its absence would slow people down, then it’s probably doing its job.

Ethan

Ethan is the founder, owner, and CEO of EntrepreneursBreak, a leading online resource for entrepreneurs and small business owners. With over a decade of experience in business and entrepreneurship, Ethan is passionate about helping others achieve their goals and reach their full potential.
