A new study reveals that, even as humans expect AI to be benevolent, they are unwilling to co-operate and compromise with machines, and will even exploit them. Using methods from behavioral game theory, an international team of researchers at LMU Munich and the University of London conducted large-scale online studies to see whether people would behave as co-operatively with artificial intelligence (AI) systems as they do with fellow humans.

The study, published in the journal iScience, found that, upon first encounter, people extend the same level of trust to AI as to humans: most expect to meet someone who is ready to co-operate. The difference comes afterward. People are much less ready to reciprocate with AI, and instead exploit its benevolence for their own benefit. To take a traffic example, a human driver would give way to another human but not to a self-driving car. The study identifies this unwillingness to compromise with machines as a new challenge for the future of human-AI interactions.

"We put people in the shoes of someone who interacts with an artificial agent for the first time, as it could happen on the road," explains Jurgis Karpus, a behavioral game theorist and philosopher at LMU Munich. "We modeled different types of social encounters and found a consistent pattern. People expected artificial agents to be as co-operative as fellow humans. However, they did not return their benevolence as much and exploited the AI more than humans."
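The article does not spell out which games the team used, so as a rough illustration of how behavioral game theorists model such encounters, the sketch below casts the traffic example as a game of Chicken. The game structure and payoff numbers are illustrative assumptions, not taken from the study; the point is only that a driver who believes the other party will give way maximizes their payoff by exploiting that benevolence.

```python
# Illustrative sketch: the traffic example as a one-shot game of Chicken.
# The payoff values below are made up for illustration and are NOT from
# the iScience study.

# PAYOFFS[(my_move, partner_move)] -> my payoff
PAYOFFS = {
    ("give_way", "give_way"): 3,  # both yield: safe but slow
    ("give_way", "drive_on"): 1,  # I yield, the other party exploits me
    ("drive_on", "give_way"): 5,  # I exploit a yielding partner
    ("drive_on", "drive_on"): 0,  # neither yields: collision
}

def best_response(p_partner_yields: float) -> str:
    """Payoff-maximizing move, given the belief that the partner
    (human driver or self-driving car) yields with probability p."""
    def expected(my_move: str) -> float:
        return (p_partner_yields * PAYOFFS[(my_move, "give_way")]
                + (1 - p_partner_yields) * PAYOFFS[(my_move, "drive_on")])
    return max(("give_way", "drive_on"), key=expected)

# The study's pattern, restated in these terms: people reported the same
# belief about humans and AI (say, p = 0.8 that the other party yields),
# but acted on the exploitative best response far more often when the
# other party was an AI.
print(best_response(0.8))  # -> "drive_on": exploiting a likely co-operator pays
```

In this framing, the finding is not that people distrust AI (their belief p is the same for both partner types) but that they are more willing to act on the self-interested best response when the benevolent party is a machine.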
https://techxplore.com/news/2021-06-humans-ready-advantage-benevolent-ai.html