In recent years, enterprises have invested as much as $40 billion in Generative AI (GenAI). However, research from Nexthink indicates significant underutilisation: the average employee interaction with GenAI tools lasts under four minutes.
The analysis, which covers 4.9 million sessions daily and incorporates input from 3.4 million employees, provides insight into how frequently these technologies are used in practice. Employees average around ten interactions per day, yet total weekly engagement amounts to approximately three hours and fourteen minutes, or around thirty-nine minutes per working day. The findings suggest a pattern of short “micro sessions” rather than sustained integration into everyday workflows.
Despite limited engagement, the analysis shows users save approximately three hours and forty-seven minutes per week on average through the use of GenAI tools. However, performance varies across the four leading tools in the market:
ChatGPT: Average engagement — 2 hours 47 minutes; Net time saved — 5 hours 46 minutes
Claude: Average engagement — 2 hours 30 minutes; Net time saved — 3 hours 23 minutes
Copilot: Average engagement — 2 hours 40 minutes; Net time saved — 2 hours 45 minutes
Gemini: Average engagement — 2 hours 13 minutes; Net time saved — 4 hours 46 minutes
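One rough way to compare these figures is to express each tool's net time saved relative to the time employees spend engaged with it. The sketch below is illustrative arithmetic over the averages reported above; the ratio itself is not a metric from the Nexthink study.

```python
# Weekly figures reported above, converted to minutes:
# (average engagement, net time saved) per tool.
tools = {
    "ChatGPT": (2 * 60 + 47, 5 * 60 + 46),
    "Claude":  (2 * 60 + 30, 3 * 60 + 23),
    "Copilot": (2 * 60 + 40, 2 * 60 + 45),
    "Gemini":  (2 * 60 + 13, 4 * 60 + 46),
}

for name, (engaged, saved) in tools.items():
    # Minutes saved per minute of engagement: a crude "return" on time spent.
    ratio = saved / engaged
    print(f"{name}: {ratio:.2f} minutes saved per minute of engagement")
```

On these numbers, Gemini yields the highest return (about 2.15 minutes saved per minute of engagement) despite the lowest engagement, while Copilot yields the lowest (about 1.03).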
The differences indicate uneven adoption and efficiency outcomes across platforms, suggesting scope for more consistent deployment and optimisation.
While organisations have adopted these tools at scale, limited visibility into who is using them and for what purposes can obscure how value is derived from the investment. Nexthink’s AI Drive platform aims to address this by consolidating usage data, measurement and guidance into a more integrated view of AI activity within organisations.
To maximise returns on GenAI investment, organisations may need to move beyond providing access alone and focus on structured adoption strategies. Understanding how each tool contributes across teams and use cases can help inform targeted support and training.
Encouraging a workplace culture that supports experimentation, and that treats AI as a means of improving processes rather than a bolt-on tool, may also support deeper integration. Ongoing training and regular evaluation of tool effectiveness can further strengthen adoption and alignment with operational needs.
For organisations seeking to increase the impact of GenAI, developing clearer visibility of employee requirements and operational challenges remains important. Insights from usage analytics platforms can assist in identifying adoption gaps and improving integration into everyday workflows.
The effectiveness of GenAI in the workplace will depend largely on how it is implemented and embedded. While investment has established the foundation, outcomes will be shaped by adoption practices and governance over time.