AI Practice in the Workplace: Promises and Challenges

2 min read
July 2025

AI is being seen as the promised land in many industries, offering rapid turnaround times for a wide range of tasks. From streamlining workflows to enhancing decision-making processes, AI has the potential to revolutionise how we work. Its ability to automate routine tasks and provide real-time insights makes it an invaluable tool in many sectors. However, as its adoption grows, so do the risks and complexities associated with its use. 

Despite its transformative potential, critical voices are warning against hasty and reckless adoption. A cautionary tale comes from a recent Carnegie Mellon University study called “The AgentCompany”: it found that even the more successful AI agents complete multi-step office tasks correctly only about 30% of the time [1]. And JPMorganChase's Chief Information Security Officer, Patrick Opet, openly questioned the increasing reliance on AI-driven solutions without sufficient security and oversight in an open letter to his suppliers [2]. Like many advanced technologies, AI systems often operate as black boxes, making it difficult to understand where mistakes occur or how decisions are made. This lack of transparency complicates efforts to identify and correct errors when they arise. Moreover, even when a single AI step has a 95% success rate, chaining many such steps together in agentic, autonomous applications still results in unacceptably high overall failure rates.
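To put a rough number on that: in a simplified back-of-the-envelope calculation where each step succeeds independently 95% of the time, a workflow that chains 20 such steps completes without error only about 0.95²⁰ ≈ 36% of the time, meaning roughly two out of every three runs fail somewhere along the way.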

Does this mean we should not trust AI? 

Not necessarily. The way to mitigate this risk is to take a step-by-step approach. This keeps each task you give the AI manageable. And as an added bonus, each incremental result can easily be checked. A good example of this is our recent use of the AI tool Lovable, which we used to create an internal configuration tool for our Patient Engagement solution Pathways InPatient. The goal was to enable the team to make quick, customisable adjustments to the system without constantly relying on developers.

Clinton Davelaar, a developer at LOGEX, soon noticed that compound prompts, even when they looked perfectly clear to the human eye, would often lead to unexpected results. Lovable regularly made assumptions that weren’t aligned with his expectations. For example, instead of creating the desired dropdown menus, the AI tool generated radio buttons, an unintended shortcut that undermined the efficiency of the interface.

Remembering that excruciating viral video of a dad asking his kids to write down precise instructions for making a peanut butter and jelly sandwich, only to keep finding ways to misinterpret what they wrote [3], Clinton understood what he needed to do: eat the elephant one bite at a time. By giving the tool a long series of small assignments, he made sure the system had a better grasp of the expected outcome, so it made far fewer mistakes. And because each task was small, Clinton could easily check Lovable’s work.
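In practice, and purely as a hypothetical illustration, that means replacing one compound request such as “build a settings page with dropdowns for every configuration option, plus validation and a save button” with a sequence of smaller ones: first the page skeleton, then one dropdown at a time, then the validation rules, then the save action, checking the result after each step before moving on.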

This experience reinforced an important lesson:

AI must be used with caution, especially for tasks that require precision and careful planning. Clinton's story demonstrates why AI should be applied primarily to processes that can be broken down into manageable steps that are easy to audit and adjust. The AI-powered platform he used allowed his team to rapidly develop and modify the configuration tool interface, improving flexibility and saving time by reducing the need for continuous developer intervention.

So yes, AI holds great promise. But it requires oversight, clear instructions, and a careful, step-by-step approach. This is how we minimise the risks and maximise the value of AI.

[1] https://arxiv.org/pdf/2412.14161

[2] https://www.jpmorgan.com/technology/technology-blog/open-letter-to-our-suppliers

 

 

