Synopsis of Social Media Discussions

The discussions reflect recognition of the article's significance, referencing related work such as the design of bias studies and the reflection of societal prejudices in AI models. Words like 'classic' and 'demonstrated', along with mentions of methodology, set an analytical tone, while engaging examples such as autonomous cars and word embeddings highlight the article's real-world relevance.

Agreement
Moderate agreement

Most discussions acknowledge the validity of the publication's findings, emphasizing the existence of human-like biases in language models.

Interest
Moderate level of interest

Participants show moderate curiosity, engaging with topics like implicit bias and implications for AI systems without deep emotional investment.

Engagement
Moderate level of engagement

The discourse includes references to studies, methodologies, and personal experiences, indicating a reasonable level of thoughtful engagement.

Impact
Moderate level of impact

Multiple comments point to the article's importance in shaping the understanding of AI bias and influencing future research and applications, signaling moderate impact.

Social Mentions

YouTube: 2 Videos
Facebook: 12 Posts
Twitter: 30 Posts
Blogs: 45 Articles
News: 125 Articles
Reddit: 4 Posts

Metrics

Video Views: 8,390
Total Likes: 837
Extended Reach: 460,915
Social Features: 218

Timeline: Posts about the article

Top Social Media Posts

Posts referencing the article

Debunking Myths About Bias in Autonomous Vehicle Training Processes

Recent headlines have been filled with concerns that machine learning used in autonomous vehicles may be putting some groups of pedestrians at risk. This video explores how training methods influence perceived biases and discusses recent research findings on biases derived from language data used in machine learning.

March 24, 2023

7,259 views


Ethical and Privacy Challenges in Wearable Health Research

This lecture explores the growing role of wearable devices in health research, highlighting opportunities for real-time diagnostics and personalized care, as well as addressing data privacy, regulatory challenges, and ethical considerations in using consumer-grade wearables.

November 2, 2024

1,131 views


  • Masataka Nakayama
    @p_grin_rin (Twitter)

    https://t.co/itzEhnga5O

    November 11, 2023


  • @salumc (Twitter)

    https://t.co/l1e4DKMhIn

    September 6, 2023

  • kaveinology
    @kaveinthran (Twitter)

    RT @j2bryson: @SteveStuWill @jonathanstray I designed our 2017 #aibias study knowing #implicitBias was one of the largest effects discovere…

    September 1, 2023

    2

  • Joanna J Bryson
    @j2bryson (Twitter)

    @kokkonutter @jpskeete I don't know if this makes it less uncomfortable, but after doing my work on the implicit biases and seeing their scale, I think this is not necessarily all explicit, intentional beliefs. So double blinding helps us be our best selves. https://t.co/xShhDfDMcI

    April 10, 2023

    1

  • Rito
    @AllesistKode (Twitter)

    RT @j2bryson: @GaryMarcus I guess this is yet another thing that my paper with @aylin_cim & @random_walker showed – mining language doesn't…

    April 3, 2023

    2

  • Rebekah Wegener
    @rebekahwegener (Twitter)

    RT @j2bryson: @GaryMarcus I guess this is yet another thing that my paper with @aylin_cim & @random_walker showed – mining language doesn't…

    April 3, 2023

    2

  • Joanna J Bryson
    @j2bryson (Twitter)

    @GaryMarcus I guess this is yet another thing that my paper with @aylin_cim & @random_walker showed – mining language doesn't give you just any experience, it gives you the lived experience of those who produced it, expressed as knowledge including implicit biases. https://t.co/xShhDfDMcI

    April 3, 2023

    4

    2

  • Emiel van Miltenburg
    @evanmiltenburg (Twitter)

    @HadasKotek I was also thinking you could try using the same approach as with word embeddings but then using sentence embeddings. But that sounds so simple someone must have done it already. (So similar to https://t.co/K3OiLgn3bh)

    March 30, 2023

  • Boots Whitlock
    @BootsWH (Twitter)

    @Tesla @elonmusk This is 1st principles woke. It's a good perspective. If you care about safety you'll pay attention to this. https://t.co/8b98v47F4R

    March 24, 2023

    1

  • Amit Sharma
    @amitsharmalie (Twitter)

    Are autonomous cars putting people's lives at risk? https://t.co/oomSSibyvM via @YouTube

    March 24, 2023

  • Joanna J Bryson
    @j2bryson (Twitter)

    @marksongs22 @lizweil More history: my most cited article, allegedly on #AIBias, actually came from that semantics, cognitive science, & evolution-of-cognition research stream, NOT from my #AIEthics research stream. I was glad @ScienceMagazine spotted that & called it cog sci! https://t.co/xShhDfDMcI

    February 7, 2023


  • @4annegs (Twitter)

    RT @j2bryson: @SteveStuWill @jonathanstray I designed our 2017 #aibias study knowing #implicitBias was one of the largest effects discovere…

    December 30, 2022

    2

  • Joanna J Bryson
    @j2bryson (Twitter)

    @SteveStuWill @jonathanstray I designed our 2017 #aibias study knowing #implicitBias was one of the largest effects discovered in psychology. We now demonstrated its reality with an entirely different methodology, yet it’s STILL under attack. https://t.co/xShhDfDMcI

    December 30, 2022

    4

    2

  • Joanna J Bryson
    @j2bryson (Twitter)

    @jadelgador https://t.co/xShhDfDMcI

    August 5, 2022

    1

  • Joanna J Bryson
    @j2bryson (Twitter)

    @mlamons1 @1Br0wn No, it's just from our society. AI trained on the open Internet matches human implicit biases. https://t.co/xShhDfDMcI or for the blog version https://t.co/MOOkByetcA

    June 22, 2022

  • Joanna J Bryson
    @j2bryson (Twitter)

    @MCoeckelbergh @Google @cajundiscordian @Floridi @David_Gunkel sorry, this should read “speaking of ‘bias’” or “speaking of prejudice”, to be consistent with one of my just earlier tweets to it, and the definitions we learnt about from reviewers & then explained here: https://t.co/xShhDfDMcI https://t.co/irjj4kGj3f

    June 16, 2022

  • Rineke Verbrugge
    @RinekeV (Twitter)

    @j2bryson Yes, I was thinking of that paper too. For those who haven't read it, see https://t.co/wXgprOF8px Looking forward to @j2bryson's talk at #HHAI2022 tomorrow!

    June 14, 2022

    2

  • Deb Raji
    @rajiinio (Twitter)

    @DrDesmondPatton Yeah, I think this is a classic text on this (by @aylin_cim, @j2bryson & @random_walker): https://t.co/rwoMVjhelj

    April 21, 2022

    13

  • @timnitGebru (@dair-community.social/bsky.social)
    @timnitGebru (Twitter)

    @DrDesmondPatton @teemu_roos @emilymbender @moinnadeem https://t.co/CegvzedVPZ from 2017.

    April 21, 2022

    3

  • Tobias Martens
    @tbsmartens (Twitter)

    https://t.co/L1Xf3OrmQ9

    March 30, 2022

  • Paul Smaldino
    @psmaldino (Twitter)

    @acerbialberto OK, yeah, it looks more along the lines of this paper that found that algorithms reflect the biases in human-produced content. https://t.co/hioziA0YtE

    March 14, 2022

    1

  • Paul Smaldino
    @psmaldino (Twitter)

    @vlasceanu_mada @david_m_amodio This by @j2bryson and colleagues seems relevant. https://t.co/hioziA0YtE

    March 14, 2022

    1

  • Yuanye Ma
    @yuanye0111 (Twitter)

    Semantics derived automatically from language corpora contain human-like biases https://t.co/FzuAtTZsbT

    March 8, 2022

  • JJ Bryson 2
    @j2blather (Twitter)

    @AutoArtMachine @tweetycami @johnchavens @StanfordHAI @StanfordSML @Stanford No, just perfectly replicating them is common https://t.co/stFG5l0KDU

    February 20, 2022

    1

  • Jinhee Kim
    @TheJinheeKim (Twitter)

    @ShannonVallor Not from 2020, but it got me into AI Ethics: https://t.co/2CTZrsFIq2

    December 6, 2021

  • IPman
    @IP_ad_man (Twitter)

    RT @Andrea_ilsergio: Semantics derived automatically from language corpora contain human-like biases https://t.co/4xoe1A8jgu

    November 25, 2021

    2

  • Sir James Delon
    @SirJamesDelon (Twitter)

    RT @Andrea_ilsergio: Semantics derived automatically from language corpora contain human-like biases https://t.co/4xoe1A8jgu

    November 25, 2021

    2

  • Andrea Sergiacomi
    @Andrea_ilsergio (Twitter)

    Semantics derived automatically from language corpora contain human-like biases https://t.co/4xoe1A8jgu

    November 25, 2021

    2

  • Otto Koppius
    @OKoppius (Twitter)

    RT @EvilAICartoons: Note that using representative data is not always a good idea, as shown by @aylin_cim @j2bryson @random_walker in the…

    October 7, 2021

    1

  • Evil AI Cartoons
    @EvilAICartoons (Twitter)

    Note that using representative data is not always a good idea, as shown by @aylin_cim @j2bryson @random_walker in the context of language models—used in chatbots and language translation tools— https://t.co/1TAsNdUatC

    October 7, 2021

    1

Abstract Synopsis

  • Machine learning models that analyze language can automatically uncover human-like biases present in large text datasets from the internet (the association-test idea behind this is sketched after this list).
  • These biases include neutral ones, like those related to insects or flowers, as well as problematic prejudices related to race, gender, or societal roles.
  • The findings suggest that language data reflects historical and cultural biases, offering a way to identify and possibly address these biases in technology and society.
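
For context on the method summarized above, the following is a minimal sketch of a WEAT-style word-embedding association test, the general technique this paper is known for. It is an illustration only: the word lists and 3-dimensional random vectors below are hypothetical placeholders, not the authors' code or data; the actual study used pretrained embeddings (such as GloVe) together with its own published stimulus lists.

    # Minimal WEAT-style sketch (hypothetical): do target words from one set
    # (flowers vs. insects) sit closer to pleasant than unpleasant attribute
    # words in embedding space? Toy vectors stand in for trained embeddings.
    import numpy as np

    def cosine(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    def association(w, A, B, vec):
        # s(w, A, B): mean similarity to attribute set A minus mean to B.
        return (np.mean([cosine(vec[w], vec[a]) for a in A])
                - np.mean([cosine(vec[w], vec[b]) for b in B]))

    def weat_effect_size(X, Y, A, B, vec):
        # Cohen's-d-style effect size comparing target sets X and Y.
        sx = [association(x, A, B, vec) for x in X]
        sy = [association(y, A, B, vec) for y in Y]
        return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

    # Hypothetical 3-d vectors for illustration; a real test loads
    # pretrained embeddings and the published WEAT word lists.
    rng = np.random.default_rng(0)
    words = ["rose", "tulip", "ant", "wasp", "love", "peace", "hate", "ugly"]
    vec = {w: rng.normal(size=3) for w in words}

    d = weat_effect_size(["rose", "tulip"], ["ant", "wasp"],
                         ["love", "peace"], ["hate", "ugly"], vec)
    print(f"WEAT effect size: {d:.2f}")  # near zero: the toy vectors are random

On real embeddings trained from web-scale text, tests of this form yield large effect sizes for many such pairings, which is the central finding the discussions above refer to.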