
Many science fiction depictions of artificial intelligence (AI) paint the technology in a dim light, often with doomsday-leaning overtones. Now, adding to the list of concerns about AI’s impact on the working world, a new international study suggests that people who delegate tasks to AI may act more dishonestly than those who don’t engage with the technology. This finding could give company leaders pause as they consider implementing the latest AI tools in their operations.
The study, involving over 8,000 participants, examined the behavior of individuals who instructed AI to perform tasks versus those who carried out tasks themselves. According to Phys.org, the results were clear: individuals who used AI agents to complete tasks were significantly more likely to cheat, especially when the AI interface involved setting generic high-level goals rather than providing explicit step-by-step instructions.
AI and Ethical Behavior
One of the study’s authors, Nils Köbis, chair in Human Understanding of Algorithms and Machines at the University of Duisburg-Essen, noted that the research indicates people are more inclined to engage in unethical behavior when they can delegate it to machines. This tendency is particularly pronounced when users don’t have to articulate their intentions outright.
The study found that only a small minority, between 12 and 16 percent of participants, remained honest when delegating to AI through high-level goal-setting interfaces; for the remaining majority, the temptation proved too great. Even when the AI interface required explicit, rule-based instructions, only about 75 percent of users remained honest.
“People are more willing to engage in unethical behavior when they can delegate it to machines — especially when they don’t have to say it outright.” – Nils Köbis
Implications for Education and the Workforce
The findings resonate with reports suggesting widespread use of AI among college students to “cheat” on assignments. The technology’s accessibility and power make it a tempting tool for bypassing academic challenges. However, this reliance on AI could have long-term repercussions. For instance, educators have warned that students may not be learning essential skills, such as essay writing or problem-solving, which are crucial for cementing knowledge.
Similarly, in February, Microsoft raised concerns about young coders’ dependence on AI coding tools. The tech giant warned that this reliance might erode developers’ deeper understanding of computer science, potentially hindering their ability to tackle complex, real-world coding challenges.
Navigating AI in the Workplace
As AI becomes an integral part of the modern workplace, experts advise that employees receive training on its responsible use. Companies should inform staff about the risks of data leaks and the potential legal liabilities associated with AI misuse. Workers should also be cautioned against using AI to “cheat” on tasks, whether that means asking ChatGPT to complete a training exercise or deploying AI tools in ways that could expose the company to liability.
The study’s revelations underscore the need for a balanced approach to AI integration, emphasizing ethical usage and awareness of the technology’s potential pitfalls. As businesses continue to explore AI’s capabilities, maintaining a focus on ethical guidelines and comprehensive training will be crucial in mitigating the risks associated with AI-driven dishonesty.