r/dataisbeautiful 3d ago

[OC] What risks and benefits do people attribute to various AI-related topics? Results from a survey of 1,100 people in Germany

[Image: scatterplot of average perceived risk vs. average perceived benefit for 71 AI-related topics]

Hi everyone, we recently published a peer-reviewed article exploring how people perceive artificial intelligence (AI) across different domains (e.g., autonomous driving, healthcare, politics, art, warfare). The study used a nationally representative sample in Germany (N=1,100) and asked participants to evaluate 71 AI-related scenarios in terms of expected likelihood, risks, benefits, and overall attributed value.

Main takeaway: People often see AI scenarios as likely, but this doesn’t mean they view them as beneficial. In fact, most scenarios were judged to have high risks, limited benefits, and low overall value. Interestingly, we found that people’s value judgments were almost entirely explained by risk-benefit tradeoffs (96.5% variance explained, with benefits being more important for forming value judgments than risks), while expectations of likelihood didn’t matter much.

Why does this matter? These results highlight how important it is to communicate concrete benefits while addressing public concerns. That’s relevant for policymakers, developers, and anyone working on AI ethics and governance.

What about you? What do you think about the findings and the methodological approach?

  • Are relevant AI related topics missing? Were critical topics oversampled?
  • Do you like the illustrations? What would you improve? While I like the scatterplot for illustrating the different attributions across topics, I found it very hard to make the labels readable owing to the large number of topics (71); larger fonts dislocate the labels from the data points.
  • Did you expect that risks would play a minor role in forming the overall value judgment?

Interested in details? Here’s the full article:
Mapping Public Perception of Artificial Intelligence: Expectations, Risk-Benefit Tradeoffs, and Value As Determinants for Societal Acceptance, in Technological Forecasting and Social Change (2025), https://doi.org/10.1016/j.techfore.2025.124304

20 Upvotes

21 comments

15

u/orroro1 3d ago

Wait, so some Germans believe "Destroys Humanity" has below-average but non-zero utility? Maybe something is lost in the translation, but it seems many of these labels have intrinsically positive or negative connotations, like "improve health" and "beyond human control". I don't get why anyone would see "beyond human control" as anything other than a high-risk, low-utility outcome. The data backs this up too, since it doesn't really show a tradeoff at all -- things that are beneficial are also not dangerous, while things perceived as dangerous are also seen to provide little benefit. It's like the opposite of a tradeoff.

4

u/OGS_7619 3d ago

ah, the Germans and their passion for the non-zero utility of destroying humanity...

2

u/CryptographerHot366 3d ago

Old habits don't die out so easily - a German

6

u/UDcc123 3d ago

I think the dotted line is inverted. It should go from bottom left to top right. Anything up and left of that new line is reward > risk. Anything down and right of the new line is risk > reward.

Right now the dotted line doesn’t mean anything. Technically the entire bottom right quadrant is bad (useless and risky)

4

u/Katten_elvis 3d ago

I don't understand why people think 'omniscient' is less risky than 'knows everything about me'.

2

u/Automatic_Actuator_0 3d ago

Guessing many of them didn’t know what the word meant

2

u/danielv123 2d ago

Apparently "knows everything about me" is riskier than "destroys humanity" too. The placement of these points makes almost no sense.

4

u/Mark8472 3d ago

Something is seriously wrong here.

No mention of data protection, privacy, fear of technology and change, fear in general (and of foreigners in particular \s)?

2

u/lipflip 3d ago edited 3d ago

Data is from a survey we ran using a participant pool. The sample is representative of the German population. We used R to filter and analyse the data; the illustrations were made with ggplot2, a simple geom_point() scatterplot (x-axis: average perceived risk for each topic, y-axis: average perceived benefit), and geom_text_repel() for the labels near, but not on, the data points. I find the illustration quite accessible, yet the text is too small owing to the large number of labels (the next survey will probably contain fewer topics… :).
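For anyone wanting to replicate the approach described above, here's a minimal ggplot2 sketch. The data frame `df` and its column names (`topic`, `risk`, `benefit`) are assumptions for illustration, not the actual names from the paper:

```r
library(ggplot2)
library(ggrepel)

# df: one row per topic with the mean perceived risk and benefit
# (columns `topic`, `risk`, `benefit` are assumed names)
ggplot(df, aes(x = risk, y = benefit, label = topic)) +
  geom_point() +
  # repel labels away from the points so they sit near, not on, them
  geom_text_repel(size = 2.5, max.overlaps = Inf) +
  labs(x = "Average perceived risk (%)",
       y = "Average perceived benefit (%)")
```

With ~71 labels, the tension the OP describes shows up directly here: bumping `size` makes ggrepel push labels further from their points to avoid overlaps.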

5

u/-p-e-w- 3d ago

The chart is completely unreadable, even when zooming in. Have you tried opening this on mobile? The labels are blurry blobs.

1

u/lipflip 3d ago

On mine it works (after a second click). But yes: the font sizes are too small for easy reading. The other option was to use different scales and zoom into the data, but we felt the perspective of absolute placement was more important.

2

u/HeineBOB 2d ago

Sorry but this plot is ugly

1

u/lipflip 2d ago

Thanks for the constructive comment. What are your suggestions to make it better?

"All plots are ugly but some are useful." 😜

2

u/HeineBOB 2d ago

Text too small, generally low readability. Capital letters in black text on a blue background are harder to read.

The axes... What the fuck does risk from -100% to +100% mean? What's the scale here?

The "(without risk ... risky)" is typically done with arrows on the end of each of the axes, not as extra text centered.

What's the line with the spread? Typically there'd be a legend to explain this.

I think the goal of a plot / visual aid is to help the information flow easily into the brain. It's much better than a table for sure, but for a scatter plot, this needs a ton of polish to be worthy of the Beautiful label IMO.

1

u/lipflip 2d ago

Now this was constructive. Thanks. :) That helps for the next one.

One challenge is that I am not a designer and I somehow need to get the graphs more or less automatically out of the data.

1

u/neoneye2 3d ago edited 3d ago

In the chart, "misused by criminals" sits at this coordinate:
Average Estimated Benefit = -25%
Average Estimated Risk = 66%
I read the chart as saying there is no benefit from using AI.

I disagree. As a counterexample, I can make something with AI that is useful for criminals.

1

u/majwilsonlion 20h ago

"Supports me" – the only dot in the Low Risk, High Utility, and it is so vague to even be classified as Low Risk...

-1

u/tomrichards8464 3d ago

These results highlight how important it is to communicate concrete benefits while addressing public concerns.

Or we could just all get together for a global ban on that shit. 

1

u/lipflip 3d ago

I think it's not that easy. AI in various domains has many upsides and downsides. We should carefully consider where we want to use AI and where it should be regulated or even forbidden. Many past technologies have tremendously improved our lives (fire for warmth, the printing press for learning and cultural exchange, computers for better, faster calculations, simulations, and communication, ...) and we somehow managed to handle the downsides.

My point is a different one: let's have an informed societal debate about this!

-1

u/tomrichards8464 3d ago

There's no realistic way to get the benefits without the existential risk, because both are driven by increased capability.

u/Due-Mycologist-7106 2h ago

Some industries have used AI for years and years now, to the point you can't go back. It's not as new as just chatbots and shit