"Just learn the new tool"
There's a growing tendency to think about generative AI at work as "just another tool," something employees can learn the same way they learn new software, then fold into their workflows easily. That framing makes sense on the surface, especially if you don't know anything about how genAI works.
Most workplace tools work that way. An individual learns the new software, applies it to their own tasks, and over time, teams align on shared practices. The tool behaves predictably, the outputs are consistent, and mistakes are usually obvious.
Generative AI doesn't work like that, though.
When someone experiments with AI in their own workflows, they can rely on their own context, judgment, and experience to evaluate the results. If something feels off, they can course-correct. The risk is mostly contained to the individual. When genAI is introduced as a shared resource (used by a team, department, or organization), it stops being a personal productivity aid and becomes part of the system of work.
Consider an AI agent designed to answer internal process or policy questions. A long-tenured employee might immediately notice when an answer reflects an outdated workflow, skips an important exception, or oversimplifies a nuanced scenario. A newer employee, however, may have no reason to question the response. The answer sounds confident, and there may be no easy way to verify it without asking a colleague.
In that moment, the AI's output becomes the default authority because the user lacks the context to know otherwise. This is where the comparison to traditional software breaks down.
With most tools, learning is about operating the interface correctly. If you click the wrong button, the mistake is visible, and the user guide might even explain why that step was wrong for your workflow. The software does exactly what it's programmed to do, every time, regardless of who is using it.
GenAI doesn't behave that way. It produces probabilistic guesses, not deterministic outputs. It can sound confident while being incomplete. It can be directionally right while missing critical nuance. And it rarely signals uncertainty in a way users can reliably detect, especially new users who are still learning the domain.
The risk isn't misuse in the traditional sense; it's unquestioning use.
Using genAI responsibly requires more than knowing how to interact with it. It also requires knowing when not to trust it, how to validate what it produces, and where to go when something doesn't align with reality. Those skills are learned through experience, mentorship, training, and access to institutional knowledge, not through tools alone.
Treating genAI as "just another tool" obscures the real work involved in building shared understanding, maintaining trust, and creating structure around judgment.
AI can improve how work gets done, but it doesn't replace the need for domain knowledge, context, or critical thinking.
Originally composed January 22, 2026, but I didn't have this blog then.