“New Technologies Have the Potential to Permanently Change the Fabric of Society”
Center for Adaptive Rationality & Center for Humans and Machines
The two of you teamed up for the Blurry Face project. What’s it about?
Nils Köbis: We’re investigating how a technology that’s currently in its infancy might develop in the future. Specifically, we’re looking at filters that can be applied to people’s faces. We’ve probably all been in a video call where someone has put on a silly mustache or turned into a cat. In the Blurry Face project, we’re interested in how depersonalizing filters might affect people’s social behavior. These filters make it possible to blur or obscure a face in a video conference—and in the near future they may be a feature of augmented reality glasses. This kind of technology has the potential to permanently change the fabric of society.
Why is the topic so important?
Philipp Lorenz-Spreen: I’ve been working on how technology is changing communication and social behavior in the context of social media. Although millions of us use social media, the psychological and social effects are only slowly becoming clear. Information technologies are developing at lightning pace and we need to be prepared for the consequences. Looking into the future can help. Once a technology has found its way into everyday life, it’s difficult to control.
Nils Köbis: We were inspired by the Black Mirror episode “Arkangel.” It’s about a child who is implanted with a technology that allows her mother to monitor her movements—and that automatically also blurs any distressing images the child might see. The episode shows the huge impact that the technology has on the child’s development. Filters like the ones we’ve studied can theoretically be used for the same purpose—especially in combination with augmented reality glasses, which many people expect to replace smartphones.
So the inspiration for the study was a science fiction story. Why are you interested in science fiction?
Nils Köbis: Because science fiction offers glimpses into possible future scenarios. Modern technologies are developing so fast that research can’t keep up—a technology is often already outdated by the time studies on it are published. In science fiction science, we try to stay one step ahead by looking into the future and experimentally investigating how technologies will develop.
How did your joint project come about?
Philipp Lorenz-Spreen: It started back during the pandemic. Iyad [Rahwan] invited us to a video call to brainstorm ideas for a science fiction science project investigating digital filters and communication. So the impetus for the project came from the Center for Humans and Machines. At the Center for Adaptive Rationality, we have a wealth of experience with experimental setups and place a particular focus on obtaining representative measurements, so that the situation in the lab is as realistic as possible.
Nils Köbis: We soon agreed on the research idea, who would do what, and how we would organize it. For me, that was impressive, especially as Philipp and I had only met over Zoom. It was a really cool experience.
What are the challenges of this joint project?
Philipp Lorenz-Spreen: What I sometimes find difficult in interdisciplinary collaboration isn’t so much the terminology. It’s often said that you need to start by finding a common language. But if all sides are open to it, that’s relatively easy. What’s more difficult is agreeing on a common research question. Perspectives on a topic vary from one discipline to the next. For example, our colleagues at the Center for Humans and Machines are more interested in the aspect of science fiction science. And we from the Center for Adaptive Rationality are more interested in the underlying psychological mechanisms. We decided on a version of the experiment that is somewhat more controlled, which allowed us to draw clearer conclusions about what is happening on the psychological level. But the compromise was that we’ve not been able to venture so far into the future, and haven’t yet included augmented reality in the design.
Despite the compromise, what are the benefits?
Nils Köbis: One benefit is the emergence of a group that really works well together. That’s worth a lot in science. It was never a top-down thing dictated from above. We had several meetings where we brought the general idea back to the table and had long discussions about it. Based on that, we developed a design that we all liked.
Philipp Lorenz-Spreen: It always mixes things up when people from different fields work together—that’s why I’m a firm advocate of interdisciplinarity. As a physicist, I’ve already stepped out of my comfort zone—and benefited hugely from doing so. Our research centers are inherently interdisciplinary. After all, there’s no point in social scientists thinking about the social impact of new technologies without understanding how they work. And it doesn’t help to have computer scientists developing new technologies without putting any thought into how they will affect our societies. That’s why collaboration is needed.
So what did you do in the Blurry Face project?
Nils Köbis: We conducted two studies. In both, participants played two economic games with another player, and we studied the possible positive and negative effects of depersonalizing filters. One game was a classic Dictator Game, originally designed to study altruism. Participants were given a sum of money and had to decide how much of that money to share with another person. Some participants were shown a normal photo of the potential recipient, the others were shown a blurred version of the photo.
Our working hypothesis was that participants would probably share less money when the recipient’s face was blurred; that the depersonalization filter would decrease their empathy. That would be a negative effect of such filters on human behavior. But depersonalization filters may also have positive effects in some situations. In job interviews, for example, it’s important not to let certain physical characteristics of the applicants influence the decision process. In the second game, the Money Allocation Game, players had to decide whether they would give money to an individual or an organization—in our case, the World Food Program. Again, some participants were shown a normal photo of the individual, while others saw a blurred version. We wanted to see whether depersonalization filters would lead to money being donated to an organization rather than given to an individual.
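The logic of the Dictator Game conditions described above can be sketched as a minimal simulation. Note that the endowment, the effect size, and the function names below are purely illustrative assumptions, not the study's actual parameters or results:

```python
import random

random.seed(42)

ENDOWMENT = 10  # hypothetical sum each "dictator" can split


def play_dictator_round(blurred: bool) -> int:
    """Simulate one Dictator Game decision.

    The effect size is invented for illustration: we assume that
    blurring the recipient's photo lowers the average amount shared,
    in line with the working hypothesis described in the interview.
    """
    base = random.gauss(4.0, 1.5)  # typical sharing around 40% of the endowment
    if blurred:
        base -= 1.0  # assumed depersonalization effect (hypothetical)
    return max(0, min(ENDOWMENT, round(base)))


def run_condition(blurred: bool, n: int = 500) -> float:
    """Mean amount shared across n simulated participants."""
    return sum(play_dictator_round(blurred) for _ in range(n)) / n


mean_normal = run_condition(blurred=False)
mean_blurred = run_condition(blurred=True)
print(f"normal photo:  {mean_normal:.2f} shared on average")
print(f"blurred photo: {mean_blurred:.2f} shared on average")
```

Comparing the two condition means mirrors the between-subjects design: each simulated participant sees either the normal or the blurred photo, never both.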
Philipp Lorenz-Spreen: The next study followed on from that. We used the same two games, but this time the participants saw a video of the recipient rather than a photo. We conducted the experiment live, which was quite a technological challenge. We had to program the filters and create a platform that allowed participants to interact and in which we were able to control which faces were blurred. Although we conducted the experiment without sound, it brought us one step closer to a situation that we all know from video calls, where the filter technology can already be implemented.
What did you find out? Were your concerns about this new technology confirmed?
Nils Köbis: We see support for our concerns, yes. In both experiments, participants playing the Dictator Game were less willing to share money with people whose faces were blurred. So the depersonalizing effect of the filters can result in people behaving less altruistically. In terms of whether the filters can have a positive effect, the results are less clear. In fact, we observed different effects across the experiments.
Philipp Lorenz-Spreen: To be able to draw clearer conclusions, we need to run follow-up experiments to investigate the effect further—potentially using augmented reality glasses or virtual worlds. In the future, augmented reality glasses may change reality by using filters to alter people’s perception of the environment in real time. Our studies showed that the effect of blurring people’s faces is stronger for video than for photos. Another possibility for a follow-up study would be to try out other filters—a beauty filter, for example, or one that makes it look as if you’re maintaining eye contact with the camera all the time. There are all kinds of possibilities.
What are the implications for the regulation of new media?
Nils Köbis: Tools of this kind are typically developed and launched by companies with economic interests in mind. And the mindset in Silicon Valley is to innovate first and ask for forgiveness later. ChatGPT is a prime example. It was rolled out without any form of impact assessment and people are already using it by the millions, even though some of the output it produces is absolute rubbish. So the question is, what do we do now?
Philipp Lorenz-Spreen: I’ve been looking into regulation issues in the context of the Digital Services Act—a new European Union regulation intended to control the influence of online platforms. For social media, at least, companies will have to submit risk reports—but only after the fact. They’ll be required to report what has happened on their platform, which new functionalities they’ve introduced, and the effects they have had on user behavior. What’s missing is prevention. In medicine, drugs have to be tested rigorously before being released to the market. In the same way, I argue, we need to test the potential harms of at least some technologies before allowing them to be introduced on a broader population level.
Nils Köbis: For example, there’s the question of how social media impacts the mental health of teenagers. What are the effects of a like button on the teenage mind? If we had looked ahead from the outset and done research on the topic, we could have intervened earlier.
What other science fiction science topics are you interested in?
Philipp Lorenz-Spreen: I’m interested in the algorithms used by social media to determine which content is displayed first. At the moment, I’m running experiments to try out alternative sorting algorithms. I’m taking the science fiction science approach, that is, stepping away from the status quo for a moment and considering how the algorithms could be better designed in the future.
Nils Köbis: In the Center for Humans and Machines, that’s already one of our basic principles: A lot of our research starts in the here and now and looks to the future. The Blurry Face project is part of a larger research area on AI-mediated communication. We’re looking at AI systems that are increasingly acting independently, such as online text tools. What does it do to people when they communicate through texts that they didn’t actually write themselves? What could a regulation framework look like? For example, should there be an automatic notification that the text wasn’t written by a human being?
What’s next for the Blurry Face project?
Philipp Lorenz-Spreen: We’re going to try to bring the idea to an even more realistic setting and have participants interact with each other in three-dimensional space, with filters changing their appearance.
Research Project in Brief
Topic: AI-mediated communication and its impact on interpersonal trust and cooperation
Researchers: Nils Köbis (Senior Research Scientist, Center for Humans and Machines), Philipp Lorenz-Spreen (Research Scientist, Center for Adaptive Rationality)
Funding: Max Planck Society