Synopsis of Social Media Discussions

The discussions highlight the article's core finding that surprisal-based models underestimate the difficulty of syntactic disambiguation, with comments emphasizing the use of RNN language models and the need for mechanisms such as reanalysis. Phrases quoted from the posts, such as 'severely underestimate', 'new article', and 'final version', underscore the commenters' engagement with the research and their acknowledgment of its importance.

Agreement
Moderate agreement

Most discussions acknowledge the article's findings, such as the underestimation of garden-path effects by surprisal models, indicating general agreement with its conclusions.

Interest
High level of interest

The posts express curiosity about and appreciation for the research, referencing specific methods and implications, which suggests strong interest.

Engagement
Moderate level of engagement

Some comments delve into technical details, such as the use of RNN language models and the linking hypothesis, reflecting thoughtful engagement.

Impact
Moderate level of impact

Participants recognize the significance of the findings, especially the need for mechanisms more complex than single-stage models, indicating a perception of moderate impact.

Social Mentions

YouTube

1 Video

Twitter

13 Posts

Metrics

Video Views

215

Total Likes

89

Extended Reach

53,766

Social Features

14

Timeline: Posts about the article

Top Social Media Posts

Posts referencing the article

The Role of Surprisal in Reading Times

This video explores how syntactic surprisal influences reading times by examining studies that reveal the cognitive processing involved. It delves into work showing how anticipatory effects shape our understanding of language during reading, highlighting the relationship between linguistic structure and processing difficulty.

September 13, 2023

215 views


  • Mao Ruihua, Mauri 毛睿华
    @Bhg51631711 (Twitter)

    RT @marty_with_an_e: New article in Cognitive Science (w/ @tallinzen)! Peaks in surprisal predict garden paths in reading, but we show tha…

    June 29, 2021

    7

  • Akira Murakami
    @mrkm_a (Twitter)

    RT @tallinzen: Great to see the final version of this work led by @marty_with_an_e! We used RNN language models to test the hypothesis that…

    June 27, 2021

    3

  • T. Florian Jaeger
    @_hlplab_ (Twitter)

    RT @tallinzen: Great to see the final version of this work led by @marty_with_an_e! We used RNN language models to test the hypothesis that…

    June 26, 2021

    3

  • Ruixiang Cui
    @ruixiangcui (Twitter)

    RT @marty_with_an_e: New article in Cognitive Science (w/ @tallinzen)! Peaks in surprisal predict garden paths in reading, but we show tha…

    June 26, 2021

    7

  • Dr. Christina Bergmann - Skies are blue...
    @chbergma (Twitter)

    RT @marty_with_an_e: New article in Cognitive Science (w/ @tallinzen)! Peaks in surprisal predict garden paths in reading, but we show tha…

    June 26, 2021

    7

  • Ted Gibson, Language Lab MIT
    @LanguageMIT (Twitter)

    RT @marty_with_an_e: New article in Cognitive Science (w/ @tallinzen)! Peaks in surprisal predict garden paths in reading, but we show tha…

    June 25, 2021

    7

  • Cory Shain
    @coryshain (Twitter)

    RT @marty_with_an_e: New article in Cognitive Science (w/ @tallinzen)! Peaks in surprisal predict garden paths in reading, but we show tha…

    June 25, 2021

    7

  • Aaron Mueller
    @amuuueller (Twitter)

    RT @marty_with_an_e: New article in Cognitive Science (w/ @tallinzen)! Peaks in surprisal predict garden paths in reading, but we show tha…

    June 25, 2021

    7

  • Marten van Schijndel
    @marty_with_an_e (Twitter)

    @tmalsburg @mariemarm (Just because I'm riding the high of actually getting this published) @tallinzen and I explore the linking hypothesis between surprisal and reading times in our new paper. Section 2.5 is almost exactly what you ask for!

    June 25, 2021

    4

    1

  • Tiago Pimentel
    @tpimentelms (Twitter)

    RT @tallinzen: Great to see the final version of this work led by @marty_with_an_e! We used RNN language models to test the hypothesis that…

    June 25, 2021

    3

  • Marten van Schijndel
    @marty_with_an_e (Twitter)

    New article in Cognitive Science (w/ @tallinzen)! Peaks in surprisal predict garden paths in reading, but we show that these peaks severely underestimate the magnitude of the effects. Suggests other repair mechanisms are used when processing garden paths. https://t.co/JTt95otxP5

    June 25, 2021

    44

    7

  • Tal Linzen
    @tallinzen (Twitter)

    Great to see the final version of this work led by @marty_with_an_e! We used RNN language models to test the hypothesis that the difficulty that people experience when reading garden-path sentences can be explained as a simple word predictability effect. https://t.co/2R0yCHwbzD

    June 25, 2021

    37

    3

  • PsyArXiv-bot
    @PsyArXivBot (Twitter)

    Single-stage prediction models do not explain the magnitude of syntactic disambiguation difficulty https://t.co/dYRVdWPRzK

    August 27, 2020

Abstract Synopsis

  • The article discusses how single-stage prediction models, which rely on surprisal (or predictability), can explain some aspects of syntactic disambiguation but fail to account for the full difficulty involved, especially in garden-path sentences.
  • While surprisal models can predict the presence of garden-path effects, they tend to underestimate how severe these effects are and do not effectively predict differences across various sentence structures.
  • The authors suggest that resolving syntactic disambiguation may require more complex mechanisms, such as reanalysis processes, beyond word predictability alone.