Synopsis of Social Media Discussions
The discussions reflect a recognition of the article's significance, referencing related studies such as the design of bias research or the reflection of societal prejudices in AI models. Words like 'classic' and 'demonstrated,' along with mentions of methodology, give the discourse an analytical tone, while engaging examples like autonomous cars and word embeddings highlight its real-world relevance.
Agreement
Moderate agreement: Most discussions acknowledge the validity of the publication's findings, emphasizing the existence of human-like biases in language models.
Interest
Moderate level of interest: Participants show moderate curiosity, engaging with topics like implicit bias and implications for AI systems without deep emotional investment.
Engagement
Moderate level of engagement: The discourse includes references to studies, methodologies, and personal experiences, indicating a reasonable level of thoughtful engagement.
Impact
Moderate level of impact: Multiple comments suggest the article's importance in shaping understanding of AI biases and influencing future research or applications, signaling moderate impact.
Social Mentions
YouTube: 2 Videos
12 Posts
30 Posts
Blogs: 45 Articles
News: 125 Articles
4 Posts
Metrics
Video Views: 8,390
Total Likes: 837
Extended Reach: 460,915
Social Features: 218
[Timeline chart: posts about the article over time]
Top Social Media Posts
Posts referencing the article
Debunking Myths About Bias in Autonomous Vehicle Training Processes
Recently the headlines have been filled with concerns about whether machine learning used in autonomous vehicles may be putting some groups of pedestrians at risk. This video explores how training methods influence perceived biases and discusses recent research findings on biases derived from language data used in machine learning.
Ethical and Privacy Challenges in Wearable Health Research
This lecture explores the growing role of wearable devices in health research, highlighting opportunities for real-time diagnostics and personalized care, as well as addressing data privacy, regulatory challenges, and ethical considerations in using consumer-grade wearables.
-
https://t.co/itzEhnga5O
November 11, 2023
-
@salumc (Twitter): https://t.co/l1e4DKMhIn
September 6, 2023
-
kaveinology
@kaveinthran (Twitter): RT @j2bryson: @SteveStuWill @jonathanstray I designed our 2017 #aibias study knowing #implicitBias was one of the largest effects discovere…
September 1, 2023
2
-
Joanna J Bryson
@j2bryson (Twitter): @kokkonutter @jpskeete I don't know if this makes it less uncomfortable, but after doing my work on the implicit biases and seeing their scale, I think this is not necessarily all explicit, intentional beliefs. So double blinding helps us be our best selves. https://t.co/xShhDfDMcI
April 10, 2023
1
-
Rito
@AllesistKode (Twitter): RT @j2bryson: @GaryMarcus I guess this is yet another thing that my paper with @aylin_cim & @random_walker showed – mining language doesn't…
April 3, 2023
2
-
Rebekah Wegener
@rebekahwegener (Twitter): RT @j2bryson: @GaryMarcus I guess this is yet another thing that my paper with @aylin_cim & @random_walker showed – mining language doesn't…
April 3, 2023
2
-
Joanna J Bryson
@j2bryson (Twitter): @GaryMarcus I guess this is yet another thing that my paper with @aylin_cim & @random_walker showed – mining language doesn't give you just any experience, it gives you the lived experience of those who produced it, expressed as knowledge including implicit biases. https://t.co/xShhDfDMcI
April 3, 2023
4
2
-
Emiel van Miltenburg
@evanmiltenburg (Twitter): @HadasKotek I was also thinking you could try using the same approach as with word embeddings but then using sentence embeddings. But that sounds so simple someone must have done it already. (So similar to https://t.co/K3OiLgn3bh)
March 30, 2023
-
Boots Whitlock
@BootsWH (Twitter): @Tesla @elonmusk This is 1st principles woke. It's a good perspective. If you care about safety you'll pay attention to this. https://t.co/8b98v47F4R
March 24, 2023
1
-
Amit Sharma
@amitsharmalie (Twitter): Are autonomous cars putting people's lives at risk? https://t.co/oomSSibyvM via @YouTube
March 24, 2023
-
Joanna J Bryson
@j2bryson (Twitter): @marksongs22 @lizweil More history: my most cited article, allegedly on #AIBias, actually came from that semantics, cognitive science, & evolution-of-cognition research stream, NOT from my #AIEthics research stream. I was glad @ScienceMagazine spotted that & called it cog sci! https://t.co/xShhDfDMcI
February 7, 2023
-
@4annegs (Twitter): RT @j2bryson: @SteveStuWill @jonathanstray I designed our 2017 #aibias study knowing #implicitBias was one of the largest effects discovere…
December 30, 2022
2
-
Joanna J Bryson
@j2bryson (Twitter): @SteveStuWill @jonathanstray I designed our 2017 #aibias study knowing #implicitBias was one of the largest effects discovered in psychology. We now demonstrated its reality with an entirely different methodology, yet it’s STILL under attack. https://t.co/xShhDfDMcI
December 30, 2022
4
2
-
Joanna J Bryson
@j2bryson (Twitter): @jadelgador https://t.co/xShhDfDMcI
August 5, 2022
1
-
Joanna J Bryson
@j2bryson (Twitter): @mlamons1 @1Br0wn No, it's just from our society. AI trained on the open Internet matches human implicit biases. https://t.co/xShhDfDMcI or for the blog version https://t.co/MOOkByetcA
June 22, 2022
-
Joanna J Bryson
@j2bryson (Twitter): @MCoeckelbergh @Google @cajundiscordian @Floridi @David_Gunkel sorry, this should read “speaking of ‘bias’” or “speaking of prejudice”, to be consistent with one of my just earlier tweets to it, and the definitions we learnt about from reviewers & then explained here: https://t.co/xShhDfDMcI https://t.co/irjj4kGj3f
June 16, 2022
-
Rineke Verbrugge
@RinekeV (Twitter): @j2bryson Yes, I was thinking of that paper too. For those who haven't read it, see https://t.co/wXgprOF8px Looking forward to @j2bryson's talk at #HHAI2022 tomorrow!
June 14, 2022
2
-
Deb Raji
@rajiinio (Twitter): @DrDesmondPatton Yeah, I think this is a classic text on this (by @aylin_cim, @j2bryson & @random_walker): https://t.co/rwoMVjhelj
April 21, 2022
13
-
@timnitGebru (@dair-community.social/bsky.social)
@timnitGebru (Twitter): @DrDesmondPatton @teemu_roos @emilymbender @moinnadeem https://t.co/CegvzedVPZ from 2017.
April 21, 2022
3
-
Tobias Martens
@tbsmartens (Twitter)https://t.co/L1Xf3OrmQ9
March 30, 2022
-
Paul Smaldino
@psmaldino (Twitter): @acerbialberto OK, yeah, it looks more along the lines of this paper that found that algorithms reflect the biases in human-produced content. https://t.co/hioziA0YtE
March 14, 2022
1
-
Paul Smaldino
@psmaldino (Twitter): @vlasceanu_mada @david_m_amodio This by @j2bryson and colleagues seems relevant. https://t.co/hioziA0YtE
March 14, 2022
1
-
Yuanye Ma
@yuanye0111 (Twitter): Semantics derived automatically from language corpora contain human-like biases https://t.co/FzuAtTZsbT
March 8, 2022
-
JJ Bryson 2
@j2blather (Twitter): @AutoArtMachine @tweetycami @johnchavens @StanfordHAI @StanfordSML @Stanford No, just perfectly replicating them is common https://t.co/stFG5l0KDU
February 20, 2022
1
-
Jinhee Kim
@TheJinheeKim (Twitter): @ShannonVallor Not from 2020, but it got me into AI Ethics: https://t.co/2CTZrsFIq2
December 6, 2021
-
IPman
@IP_ad_man (Twitter): RT @Andrea_ilsergio: Semantics derived automatically from language corpora contain human-like biases https://t.co/4xoe1A8jgu
November 25, 2021
2
-
Sir James Delon
@SirJamesDelon (Twitter): RT @Andrea_ilsergio: Semantics derived automatically from language corpora contain human-like biases https://t.co/4xoe1A8jgu
November 25, 2021
2
-
Andrea Sergiacomi
@Andrea_ilsergio (Twitter): Semantics derived automatically from language corpora contain human-like biases https://t.co/4xoe1A8jgu
November 25, 2021
2
-
Otto Koppius
@OKoppius (Twitter): RT @EvilAICartoons: Note that using representative data is not always a good idea, as shown by @aylin_cim @j2bryson @random_walker in the…
October 7, 2021
1
-
Evil AI Cartoons
@EvilAICartoons (Twitter): Note that using representative data is not always a good idea, as shown by @aylin_cim @j2bryson @random_walker in the context of language models—used in chatbots and language translation tools— https://t.co/1TAsNdUatC
October 7, 2021
1
-
Masataka Nakayama
@p_grin_rin (Twitter)
Abstract Synopsis
- Machine learning models that analyze language can automatically uncover human-like biases present in large text datasets from the internet.
- These biases include neutral associations, such as those related to insects or flowers, as well as problematic prejudices related to race, gender, or societal roles.
- The findings suggest that language data reflects historical and cultural biases, offering a way to identify and possibly address these biases in technology and society.
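
The measurement behind these findings is the Word Embedding Association Test (WEAT) introduced in the paper, which compares how strongly two sets of target words associate with two sets of attribute words. The sketch below is a minimal, illustrative version: the four-dimensional vectors and tiny word lists are invented placeholders, not the study's data (the original used pretrained GloVe embeddings and much larger word sets).

```python
# Minimal sketch of a WEAT-style bias test. The embeddings below are
# made-up placeholders; the original study used pretrained GloVe vectors.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    # Mean similarity of word vector w to attribute set A minus set B.
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size over the two target sets.
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc)

# Toy vectors: flowers vs. insects against pleasant vs. unpleasant words.
emb = {
    "rose":   np.array([0.9, 0.1, 0.0, 0.2]),
    "tulip":  np.array([0.8, 0.2, 0.1, 0.1]),
    "spider": np.array([0.1, 0.9, 0.2, 0.0]),
    "roach":  np.array([0.0, 0.8, 0.1, 0.1]),
    "love":   np.array([0.9, 0.0, 0.1, 0.3]),
    "peace":  np.array([0.8, 0.1, 0.0, 0.2]),
    "hatred": np.array([0.1, 0.9, 0.0, 0.1]),
    "filth":  np.array([0.2, 0.8, 0.1, 0.0]),
}

X = [emb[w] for w in ("rose", "tulip")]    # targets: flowers
Y = [emb[w] for w in ("spider", "roach")]  # targets: insects
A = [emb[w] for w in ("love", "peace")]    # attributes: pleasant
B = [emb[w] for w in ("hatred", "filth")]  # attributes: unpleasant

print(f"WEAT effect size: {weat_effect_size(X, Y, A, B):.2f}")
```

With real embeddings, a large positive effect size in this setup would mirror the paper's finding that flower words associate with pleasantness more strongly than insect words do; the same machinery exposes the problematic race and gender associations the discussions above refer to.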