How AI is changing our social relationships: a GDI meta-analysis

Artificial intelligence has arrived. In addition to the numerous applications for which AI is proving useful, such as automating tedious tasks and analysing complex problems, it is also increasingly taking on social functions: it supports us as an assistant, expert, moderator, friend and even educator. But how do interactions with AI affect the way we live together? There is no universal answer to this question. Initial research findings show mixed results, as demonstrated by a meta-analysis conducted by the GDI.
9 February 2024, GDI Gottlieb Duttweiler Institute

GDI researcher Davide Brunetti provides an overview of selected studies investigating the social impact of AI. By "social impact" we mean all the effects that AI has on our behaviour and on our interactions in groups or society. These effects often occur unintentionally, as "side-effects".

The studies show how AI is changing the way we live together and our social norms, and address the following questions:

How does social behaviour change when we delegate decisions to AI? What types of relationships can we develop with AI? And how will AI shape our self-image and identity in the future?

The results of the studies are based on experiments and surveys.

AI as a moderator

  • Cooperation in human groups is significantly improved by the intervention of (AI) bots that selectively adjust social connections.
  • Strategically deployed bots that isolate less cooperative network members strengthen cooperative behaviour across the entire network.
  • The study underscores the potential of AI-driven interventions to promote cooperative dynamics in social networks.

Social impact: positive (prosocial)

AI as an assistant

  • Autonomous safety systems in vehicles can affect the established norms of reciprocity between people.
  • Autonomous braking systems encourage cooperative behaviour such as swerving, but do not affect reciprocity (consideration).
  • Automatic steering systems reduce both cooperation and reciprocity, since they make decisions for the driver.
  • Driver assistance technologies can increase safety, but may affect social norms in road traffic.

Social impact: negative (anti-social)

AI as a (social) catalyst

  • Coordination problems in groups can be solved more effectively when a certain degree of randomness is introduced.
  • (AI) bots with a small degree of behavioural randomness, placed centrally in a network, accelerate problem solving by about 56%.
  • This improvement is particularly evident in complex coordination tasks.
  • The randomness in the bots' behaviour influences not only the people directly connected to them, but also interactions across the entire network, thus contributing to better global coordination.

Social impact: positive (prosocial)

AI as an advisor

  • AI is increasingly being used as an advisor in everyday life, raising concerns about its influence on ethical decisions.
  • A large-scale behavioural experiment investigates whether AI-generated advice can corrupt people. GPT-2 (a precursor to ChatGPT) is used to generate advice that promotes either honesty or dishonesty.
  • The results show that AI-generated advice corrupts people even when the source of the advice is known. The corrupting influence of AI is as strong as that of humans.

Social impact: negative (anti-social)

AI as a friend

  • The study traces the development of relationships between humans and the AI chatbot Replika, from initial curiosity to deep emotional connections.
  • Users experience an increase in wellbeing from interacting with Replika and find the conversations meaningful and supportive.
  • The results emphasise the potential of AI to play an emotionally supportive role in mental health and personal support.

Social impact: positive (prosocial)

AI as a competitor/replacement

  • The study analyses how identification with social roles influences preferences for automated products.
  • Automation offers obvious consumer benefits, but these may be undesirable in identity-driven consumption, as people tend to resist automation when it masks their own abilities.
  • The findings have important theoretical implications for the technological shaping of our identity. Accordingly, this discussion is relevant in a business context with regard to targeting, product innovation and communication.

Social impact: ambivalent

AI as an educator

  • Humans often adapt their behaviour and decisions to conform with others, even when those others are obviously wrong.
  • With the development of AI and robotics, robots are increasingly taking on a new social presence in human environments.
  • The study shows that children in particular conform to robots, which raises both opportunities and concerns regarding the use of social robots with young and vulnerable groups.

Social impact: negative (anti-social)

AI will increasingly find its way into companies over the next few years, taking on more and more functions. From an economic perspective, the benefits of AI are pretty obvious: increased efficiency through automation and expert-level assistance (think ChatGPT for the latter). From a social perspective, artificial intelligence can also increase wellbeing and play an emotionally supportive role. However, the social side-effects of AI are not exclusively beneficial and can even have harmful consequences. Research shows that the negative side-effects of AI can demotivate employees and customers and thus ultimately have a detrimental effect on corporate success.

If you want to learn more about the positive and negative social side-effects of AI, join us at the 20th European Trend Day on 13 March 2024, where Petter Brandtzaeg, Nils Köbis and Stefano Puntoni will share more fascinating insights into their research and how you can best prepare for the upcoming AI revolution.
