Synopsis of Social Media Discussions

The variety of discussions, which includes detailed explanations of the perception system, links to videos demonstrating the robot's capabilities, and praise for the research, illustrates a high level of interest and engagement. Phrases such as 'hugely impressive' and 'exciting' underscore the perceived significance and potential impact of the work.

Agreement: Moderate agreement

Most discussions affirm the significance and achievements of the research, emphasizing its success in enabling quadrupedal robots to operate robustly in challenging environments.

Interest: High level of interest

The discussions reflect strong curiosity, especially about capabilities such as the robot's hiking speed and robustness, along with enthusiasm for potential applications.

Engagement: High engagement

Many comments provide detailed insights, highlight specific developments such as the perception system, and reference related work, indicating deep engagement with the subject.

Impact: High level of impact

The widespread sharing of videos, praise for the novelty of the approach, and mentions of real-world deployment suggest a high perceived impact on the robotics and AI fields.

Social Mentions

YouTube: 3 Videos
Facebook: 2 Posts
Twitter: 228 Posts
Blogs: 3 Articles
News: 40 Articles

Metrics

Video Views: 106,857
Total Likes: 2,280
Extended Reach: 15,672,349
Social Features: 276

Timeline: Posts about the article

Top Social Media Posts

Posts referencing the article

Robust Perceptive Locomotion for Quadrupedal Robots in Challenging Environments

We present a perceptive locomotion controller for quadrupedal robots that combines fast locomotion and exceptional robustness on challenging terrain. The system integrates external and internal sensors for reliable navigation, demonstrated through extensive natural environment testing.


Enhanced Quadrupedal Robot Navigation Using Integrated Perception

Researchers improved quadrupedal robots' ability to navigate challenging environments by integrating external sensors and internal feedback. This allows the robot to plan and adapt its gait effectively in real time, as demonstrated by a successful one-hour alpine hike.

January 19, 2022

30,042 views


Robust Perception for Quadrupedal Robots in Unpredictable Environments

This video discusses enabling quadrupedal robots to navigate challenging terrain by integrating external and internal sensory data. The system enhances robustness and adaptability, as demonstrated through extensive tests in natural and urban environments, including a one-hour hike in the Alps.

January 28, 2022

938 views


  • Jim Bloom
    @jimmyroybloom (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    October 28, 2025

    69

  • C Zhang
    @ChongZitaZhang (Twitter)

    @leto__jean walking: https://t.co/rjAqXEC0go swimming: https://t.co/OJTqTQuU2B flying: https://t.co/tmU7kzw6Ev manipulating: https://t.co/Dxji1s0f7l
    view full post

    August 22, 2024

    3

  • C Zhang
    @ChongZitaZhang (Twitter)

    So they have the best motion planning. And perception part is low hanging (https://t.co/890VgJaVDq) And motion tracking is also low hanging (https://t.co/9yP19UeF3W)(don't think BD needs to all-in RL here, so https://t.co/jLJv0G7ggZ)
    view full post

    April 17, 2024

    1

  • C's Robotics Paper Notes
    @RoboReading (Twitter)

    this is about locomotion only, and also echos the removed implicit sysID module in wild anymal (DOI 10.1126/scirobotics.abk2822). In an old paper it was called TCN (DOI 10.1126/scirobotics.abc5986) the saliency analysis is similar to the one proposed in https://t.co/HOTyn4JqcF
    view full post

    March 14, 2024

  • Sarah Price
    @MicroSarahP (Twitter)

    RT @ScienceMagazine: On #InternationalMountainDay, read about ANYmal, a legged #robot that completed an hour-long hike on the Etzel mountai…
    view full post

    December 11, 2023

    5

  • syawal™ シ
    @syawal (Twitter)

    RT @ScienceMagazine: On #InternationalMountainDay, read about ANYmal, a legged #robot that completed an hour-long hike on the Etzel mountai…
    view full post

    December 11, 2023

    5

  • PsyberspaceSuperstar
    @psybrspcsuprstr (Twitter)

    RT @ScienceMagazine: On #InternationalMountainDay, read about ANYmal, a legged #robot that completed an hour-long hike on the Etzel mountai…
    view full post

    December 11, 2023

    5

  • Science Magazine
    @ScienceMagazine (Twitter)

    On #InternationalMountainDay, read about ANYmal, a legged #robot that completed an hour-long hike on the Etzel mountain in Switzerland. Learn more in @SciRobotics: https://t.co/0nPuKCgXT9 https://t.co/PoEMXdpqYn
    view full post

    December 11, 2023

    35

    5

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    RT @SciRobotics: On #InternationalMountainDay, read about ANYmal, a legged #robot that completed an hour-long hike on the Etzel mountain in…
    view full post

    December 11, 2023

    2

  • Science Robotics
    @SciRobotics (Twitter)

    On #InternationalMountainDay, read about ANYmal, a legged #robot that completed an hour-long hike on the Etzel mountain in Switzerland. Learn more in Science #Robotics: https://t.co/tQLQNISGkA https://t.co/kJqGRYxXNk
    view full post

    December 11, 2023

    12

    2

  • Emmanuel Kahembwe
    @MannyKayy (Twitter)

    @kevin_zakka https://t.co/B9U9054R2S https://t.co/NDsxRbDRBF
    view full post

    March 26, 2023

  • nayopu
    @nayopu3 (Twitter)

    @ML_deep (So it was all a dream...?) Is it perhaps this one? Learning robust perceptive locomotion for quadrupedal robots in the wild https://t.co/KDCaqVSjzD https://t.co/CN1gelabpA
    view full post

    March 25, 2023

    5

  • Nathan Lambert
    @natolambert (Twitter)

    @pathak2206 @ki_ki_ki1 led the recent work from Hutter's lab https://t.co/eMwaWFFaEd Hello!
    view full post

    November 15, 2022

    1

  • でべ
    @devemin (Twitter)

    Whenever I see footage of the ANYmal C quadruped robot moving so nimbly I'm impressed, but how is the knee motor positioned? Simply placed at the knee? I had assumed it would be better to move it closer to the body and use a linkage or belt to reduce inertia effects, so I wonder how they do it. https://t.co/ghD26V5GRK https://t.co/xungkDh69q https://t.co/gV5qMlFyd4
    view full post

    September 29, 2022

    11

  • PSDY!?
    @psdy_tunamayo (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    September 10, 2022

    68

  • VirginSlayerIncelius
    @VIncelius (Twitter)

    ScienceMagazine: RT @ScienceVisuals: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    July 7, 2022

  • syawal™ シ
    @syawal (Twitter)

    RT @ScienceVisuals: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    July 6, 2022

    7

  • Science Visuals
    @ScienceVisuals (Twitter)

    This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    July 6, 2022

    50

    7

  • Reinforcement Learning Bot
    @ReinforcementB (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    May 11, 2022

    68

  • KALALA NZENIELE
    @cniongolo (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    May 11, 2022

    68

  • 藤本圭一郎
    @bluedack (Twitter)

    RT @ki_ki_ki1: Cave exploration 2 #WildANYmal https://t.co/ezFZiUkBoz https://t.co/ZOJXSNrqKh https://t.co/BXJ6Q1UgH9
    view full post

    May 7, 2022

    1

  • Data Governance Framework
    @gdprAI (Twitter)

    RT @loretoparisi: This model by @ETH_en is... an #anymal
    view full post

    May 4, 2022

    1

  • Loreto Parisi
    @loretoparisi (Twitter)

    This model by @ETH_en is... an #anymal
    view full post

    May 4, 2022

    1

    1

  • Sudeep Pillai
    @sudeeppillai (Twitter)

    RT @erwincoumans: Exciting new research paper and video by ETH "Learning robust perceptive locomotion for quadrupedal robots in the wild" (…
    view full post

    May 4, 2022

    9

  • Alexandre Borghi
    @_Alex_Borghi_ (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    April 9, 2022

    68

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    Cave exploration 2 #WildANYmal https://t.co/ezFZiUkBoz https://t.co/ZOJXSNrqKh https://t.co/BXJ6Q1UgH9
    view full post

    March 23, 2022

    9

    1


  • @soya_moon (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    March 23, 2022

    68

  • Dar
    @Dad_H_Williams (Twitter)

    RT @erwincoumans: Exciting new research paper and video by ETH "Learning robust perceptive locomotion for quadrupedal robots in the wild" (…
    view full post

    March 22, 2022

    9

  • K.
    @UniversalFunc (Twitter)

    RT @weights_biases:
    view full post

    March 21, 2022

    4

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    RT @weights_biases:
    view full post

    March 21, 2022

    4

  • GeekyRakshit (e/mad)
    @soumikRakshit96 (Twitter)

    RT @weights_biases:
    view full post

    March 14, 2022

    4

  • Weights & Biases
    @weights_biases (Twitter)


    view full post

    March 14, 2022

    13

    4

  • Weights & Biases
    @wandb (Twitter)


    view full post

    March 14, 2022

    13

    4

  • Huda Flatah هدى فلاته
    @HudaFlatah (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    March 11, 2022

    68

  • Shuuji Kajita
    @s_kajita (Twitter)

    @shosakaino In the second step's supervised learning, the student is trained on the teacher's outputs. Please see this paper for details. https://t.co/5VXmadLSNE
    view full post

    March 5, 2022

    1

  • Mo7amed
    @Ragnar_56 (Twitter)

    RT @fihmai: As with humans, #robots use two ways to interact with the world: exteroception, such as input from external sensing systems like camer…
    view full post

    February 12, 2022

    2

  • محمد
    @jo_moe_90_AI (Twitter)

    RT @fihmai: As with humans, #robots use two ways to interact with the world: exteroception, such as input from external sensing systems like camer…
    view full post

    February 12, 2022

    2

  • fihm.ai فِهم للذكاء الاصطناعي
    @fihmai (Twitter)

    As with humans, #robots use two ways to interact with the world: exteroception, such as input from external sensing systems like cameras, and internal sensing, which includes things like touch. 1 https://t.co/4pAM0y9Lb9
    view full post

    February 12, 2022

    3

    2

  • zuka
    @denchu_tom (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 9, 2022

    20

  • panah
    @panah (Twitter)

    This robot can hike as fast as a human https://t.co/gyii5UoYTA
    view full post

    February 8, 2022

    2


  • @SRrapin (Twitter)

    RT @s_kajita: @16tons_ Recently, a group at ETH in Switzerland used machine learning to get a quadrupedal robot to hike in the Alps. If this level of rough-terrain traversal were demonstrated with ASIMO (there is no technical reason it couldn't be), the way the world sees it would change. https://t.co…
    view full post

    February 8, 2022

    4

  • まるる♪
    @nagasakisumire (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 7, 2022

    20

  • daizyuugunndann
    @daizyuugunndann (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 6, 2022

    20

  • うん、
    @wai201303 (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 6, 2022

    20

  • 鮪鯨人(ゆうげいじん)@角刈り
    @yuugeijinn (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 6, 2022

    20

  • ギーつくの森本 ⋈
    @takuzirra (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 6, 2022

    20

  • 【喪中】自転車乗ってました@PPMMP+Meで6回目
    @jitensha_nori (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 6, 2022

    20

  • |ω・`)もんぐれ
    @mongrelP (Twitter)

    RT @s_kajita: @16tons_ Recently, a group at ETH in Switzerland used machine learning to get a quadrupedal robot to hike in the Alps. If this level of rough-terrain traversal were demonstrated with ASIMO (there is no technical reason it couldn't be), the way the world sees it would change. https://t.co…
    view full post

    February 6, 2022

    4

  • Kei Nishikawa
    @KeiNishikawa3 (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • KEI
    @KEIMINKEI (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • あにめモン@2Dアニメ制作の人
    @2D_AnimeMon (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • Daichi Ogawa
    @Earth_Scream (Twitter)

    YouTube is amazing! You can learn cutting-edge science and train your English listening at the same time. Depending on how you use it, it's an incredibly cost-effective learning resource! ↓ For example, a video I came across this morning via a retweet (coincidentally, research from my alma mater, ETH Zurich) https://t.co/AEpqEsRP4n
    view full post

    February 5, 2022

    1

  • Bearia
    @Bearia (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • おばけりんご
    @amane1735 (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • まお(松岡洋)
    @kuronekodaisuki (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • 石部統久
    @mototchen (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • Masa Yamamoto予測誤差が大きい人生を楽しもう
    @mshero_y (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • rhythmsift
    @rhythmsift (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • 虹ピクミンの手動bot
    @doll_of_Misery (Twitter)

    RT @s_kajita: (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    20

  • くりうず
    @kuriuzu (Twitter)

    RT @s_kajita: @16tons_ Recently, a group at ETH in Switzerland used machine learning to get a quadrupedal robot to hike in the Alps. If this level of rough-terrain traversal were demonstrated with ASIMO (there is no technical reason it couldn't be), the way the world sees it would change. https://t.co…
    view full post

    February 5, 2022

    4

  • Shuuji Kajita
    @s_kajita (Twitter)

    (A sign that a new revolution is about to happen in robotics) https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    49

    20

  • 16tons
    @16tons_ (Twitter)

    RT @s_kajita: @16tons_ Recently, a group at ETH in Switzerland used machine learning to get a quadrupedal robot to hike in the Alps. If this level of rough-terrain traversal were demonstrated with ASIMO (there is no technical reason it couldn't be), the way the world sees it would change. https://t.co…
    view full post

    February 5, 2022

    4

  • Shuuji Kajita
    @s_kajita (Twitter)

    @16tons_ Recently, a group at ETH in Switzerland used machine learning to get a quadrupedal robot to hike in the Alps. If this level of rough-terrain traversal were demonstrated with ASIMO (there is no technical reason it couldn't be), the way the world sees it would change. https://t.co/7LBOcS6GXX
    view full post

    February 5, 2022

    7

    4

  • kenji@やわらからじお
    @uecken (Twitter)

    ( ゚Д゚) Learning robust perceptive locomotion for quadrupedal robots in the wild https://t.co/dbXcJNfqx1
    view full post

    February 5, 2022

  • MASA-chaussettes
    @Marthur5884 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 4, 2022

    26

  • 早瀬道博(はやせ みちひろ)@VR・AI他色々やってます
    @m_hayase256 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 4, 2022

    26

  • 孔明@B-SKY Lab
    @eternalfriend17 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 4, 2022

    26

  • 魔女みならい
    @witch_kazumin (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 4, 2022

    26

  • あじあんたむ
    @SC_Tomoyo (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 4, 2022

    26

  • Masahiro Ikeda
    @ikeko24 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 4, 2022

    26

  • 森山和道/ライター、書評屋
    @kmoriyama (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • M.Aono%
    @aomonoya (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • func
    @func_hs (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • 望月紅葉さんと幸せな家庭を築きたい
    @momiji_fullmoon (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • 加藤リュウイチ
    @ka10ryu1 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • Root/Roof
    @YK42356161 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • 大山英明
    @eimei0080 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • 暇人
    @safefield (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • ken 〜消費期限近傍人生再設計第一世代〜
    @twken3 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • ひつじ
    @tsuji_t1 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • 4ptacc
    @4ptacc (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • フェネックを労災から守るアライさんBOT
    @SaveFennecSafty (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • 闇ときどき豚
    @yami_buta (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26


  • @fukanju (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • yoneken
    @k_yone (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • 片岡大哉
    @hakuturu583 (Twitter)

    RT @s_kajita: The structure of the original paper on ETH's quadrupedal robot that traverses mountain trails is interesting: Results comes right after the Introduction, and the final chapter is Materials and Methods. This may be the right structure for robotics papers. https://t.…
    view full post

    February 3, 2022

    26

  • ken crichlow
    @ken_crichlow (Twitter)

    RT @NewsfromScience: After researchers gave this robot “eyes,” they found it can move about as fast as an average human’s walking speed.
    view full post

    February 2, 2022

    2

  • Dieter
    @Wuhle (Twitter)

    RT @NewsfromScience: After researchers gave this robot “eyes,” they found it can move about as fast as an average human’s walking speed.
    view full post

    February 2, 2022

    2

  • News from Science
    @NewsfromScience (Twitter)

    After researchers gave this robot “eyes,” they found it can move about as fast as an average human’s walking speed.
    view full post

    February 2, 2022

    1

    2

  • Ronald K. Phillips
    @Ariella09 (Twitter)

    RT @NewsfromScience: After researchers gave this robot “eyes,” they found it can move about as fast as an average human’s walking speed.
    view full post

    January 28, 2022

    1

  • J.E. Nieto-Domínguez
    @dasjend (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Karsten Suhre
    @ksuhre (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Euripides
    @Euriteo (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • 住田 朋久
    @sumidatomohisa (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Jason Mullikin
    @jason_mullikin (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Ciencia de la 4T
    @Ciencia4T (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Xulu
    @ngwenyathabs (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Ricardo Khouri
    @ricardo_khouri (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • VirginSlayerIncelius
    @VIncelius (Twitter)

    ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

  • Masterguy
    @Masterguy14 (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Alexander Ruf
    @rufalexan (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • HakcTehIcepol
    @goremiyon (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17


  • @bettielie (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Michael J.
    @michael99J (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Pisigomet
    @pisigomet (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Pt Chowa ⚛️ Atheist
    @ChowaPt (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Prakriti Arya
    @PrakritiArya4 (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    17

  • Science Magazine
    @ScienceMagazine (Twitter)

    This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 28, 2022

    107

    17

  • PINAKI LASKAR
    @PinakiLaskar (Twitter)

    RT @Fisheyebox: Learning robust perceptive locomotion for quadrupedal robots in the wild #ArtificialIntelligence #AI #DataScientist #BigDat…
    view full post

    January 27, 2022

    4

  • RobotConsumer
    @RobotConsumer (Twitter)

    RT @Fisheyebox: Learning robust perceptive locomotion for quadrupedal robots in the wild #ArtificialIntelligence #AI #DataScientist #BigDat…
    view full post

    January 26, 2022

    4

  • Serverless Fan
    @ServerlessFan (Twitter)

    RT @Fisheyebox: Learning robust perceptive locomotion for quadrupedal robots in the wild #ArtificialIntelligence #AI #DataScientist #BigDat…
    view full post

    January 26, 2022

    4

  • Fisheyebox AI
    @Fisheyebox (Twitter)

    RT @Fisheyebox: Learning robust perceptive locomotion for quadrupedal robots in the wild #ArtificialIntelligence #AI #DataScientist #BigDat…
    view full post

    January 26, 2022

    4

  • Fisheyebox AI
    @Fisheyebox (Twitter)

    Learning robust perceptive locomotion for quadrupedal robots in the wild #ArtificialIntelligence #AI #DataScientist #BigData #MachineLearning #Analytics #DataScience #RStats #JavaScript #Python #Serverless #Linux #Innovation #selfdrivingcars #IoT #programming #coding #fashiontech https://t.co/UDTX0oqzOm
    view full post

    January 26, 2022

    3

    4

  • Alexander Kruel
    @XiXiDu (Twitter)

    Huge Step Forward in Legged Robotics from ETH: "We present a perceptive locomotion controller for quadrupedal robots that combines fast locomotion and exceptional robustness on challenging terrain." https://t.co/FDkASiS4bw
    view full post

    January 26, 2022

    1

  • Daniel Stoddart
    @danielstoddart (Twitter)

    It's only a matter of time until one of these things kills a person. https://t.co/wveYoPanMx
    view full post

    January 25, 2022

  • Faisal S.S.
    @ImmunityForever (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 25, 2022

    17

  • Robotic Systems Lab
    @leggedrobotics (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 25, 2022

    17

  • VirginSlayerIncelius
    @VIncelius (Twitter)

    ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 25, 2022

  • Wendinh GarCalder
    @WendinhGarCalde (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 25, 2022

    17

  • Emmanuel Donald
    @Wikimanuel (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 25, 2022

    17

  • Allan Kardec Barros
    @akardecbarros (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 25, 2022

    17

  • Humans Are Obsolete
    @R0B0TAP0CALYPSE (Twitter)

    https://t.co/OMmTd5DKVY #robot #robots #hiking
    view full post

    January 25, 2022

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 25, 2022

    17

  • .
    @Vagan (Twitter)

    RT @ScienceMagazine: This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 25, 2022

    17

  • Science Magazine
    @ScienceMagazine (Twitter)

    This robot can hike as fast as a human, thanks in part to depth sensors that allow it to “see” its environment.
    view full post

    January 25, 2022

    108

    17

  • diphda
    @diphda (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 25, 2022

    68

  • 森山和道/ライター、書評屋
    @kmoriyama (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 25, 2022

    68

  • (((JReuben1)))
    @jreuben1 (Twitter)

    Learning robust perceptive locomotion for quadrupedal robots in the wild https://t.co/hiB3wu1VVi via @YouTube
    view full post

    January 25, 2022

  • JohnStrong
    @CuiStrong (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 25, 2022

    68

  • License to Live
    @saamagni (Twitter)

    Learning robust perceptive locomotion for quadrupedal robots in the wild https://t.co/gLb1vVM0el via @YouTube
    view full post

    January 24, 2022

  • Robotic Systems Lab
    @leggedrobotics (Twitter)

    RT @SciRobotics: Alpine hiking is no problem for #ANYmal, a quadrupedal #robot created by scientists from @ETH_en, @kaistpr, and @intel tha…
    view full post

    January 24, 2022

    3

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    RT @SciRobotics: Alpine hiking is no problem for #ANYmal, a quadrupedal #robot created by scientists from @ETH_en, @kaistpr, and @intel tha…
    view full post

    January 24, 2022

    3

  • Science Robotics
    @SciRobotics (Twitter)

    Alpine hiking is no problem for #ANYmal, a quadrupedal #robot created by scientists from @ETH_en, @kaistpr, and @intel that combines internal and external inputs to trek fast and efficiently. https://t.co/63UqVLn7ru https://t.co/nv2Dwim41e
    view full post

    January 24, 2022

    24

    3

  • vassilis (∎, ∆)
    @TziokasV (Twitter)

    "Here, we present a robust and general solution to integrating exteroceptive and proprioceptive perception for legged locomotion. We leverage an attention-based recurrent encoder that integrates proprioceptive and exteroceptive input." https://t.co/I9ZTw2C6gy
    view full post

    January 23, 2022

  • NaO
    @spikeoutOXY (Twitter)

    RT @NeuroscienceNew: “Learning robust perceptive locomotion for quadrupedal robots in the wild” by Marco Hutter et al. Science Robotics ht…
    view full post

    January 23, 2022

    1

  • Neuroscience News
    @NeuroscienceNew (Twitter)

    “Learning robust perceptive locomotion for quadrupedal robots in the wild” by Marco Hutter et al. Science Robotics https://t.co/ohhQjq6vcc
    view full post

    January 23, 2022

    4

    1

  • James V Stone
    @jgvfwstone (Twitter)

    Learning robust perceptive locomotion for quadrupedal robots in the wild Takahiro Miki et al https://t.co/PCsdeMvl21 Video: https://t.co/fgiK1TXUfD Overview: We train a neural network policy in simulation and then perform zeroshot sim-to-real transfer.
    view full post

    January 23, 2022

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    I have a lot of video materials for our paper, Learning robust perceptive locomotion for quadrupedal robots in the wild. I'm going to post them from time to time. There are lots of clips that didn't fit into the paper's video, so I'll post them occasionally. #WildANYmal #RL https://t.co/EnOTh3Ufxc
    view full post

    January 22, 2022

    16

  • youngjoongkwon
    @youngjoongkwon (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 22, 2022

    68

  • Klaus Lex
    @Klaus_Lex (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 22, 2022

    68

  • panah
    @panah (Twitter)

    ANYmal Robot with Perceptive Locomotion Controller for autonomous operation in challenging environments https://t.co/jdxlDLwxzX https://t.co/Gh9IwU3shI
    view full post

    January 21, 2022

    1

  • Robot Enthusiast
    @robothusiast (Twitter)

    Perceptive Locomotion Controller for Robots In Challenging Environments https://t.co/79KXawcDqD https://t.co/LIHJlLfTtS
    view full post

    January 21, 2022

  • Tatsuya Matsushima @CoRL2025
    @__tmats__ (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 21, 2022

    68

  • azuer-bot
    @AzuerBot (Twitter)

    RT @PDH_SciTechNews: #Robotics Learning robust perceptive locomotion for quadrupedal robots in the wild - Robotic Systems Lab: Legged Robot…
    view full post

    January 21, 2022

    1

  • PDH
    @PDH_Metaverse (Twitter)

    #Robotics Learning robust perceptive locomotion for quadrupedal robots in the wild - Robotic Systems Lab: Legged Robotics at ETH Zürich https://t.co/uYbRRxJsUo #robot #technology #engineering #robots #automation #tech #innovation #ai #iot #coding #programming #stem #engineer #s…
    view full post

    January 21, 2022

    1

  • Masahiro Ikeda
    @ikeko24 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • mlfeed.tech
    @mlfeedtech (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Meagan Phelan
    @MeaganPhelan (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Alex
    @_Alefram_ (Twitter)

    RT @EugeneVinitsky: Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 20, 2022

    9

  • jamilsiddiq
    @jamilsiddiq3 (Twitter)

    https://t.co/pC6d27Gszo
    view full post

    January 20, 2022

  • Luca Ambrogioni
    @LucaAmb (Twitter)

    RT @EugeneVinitsky: Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 20, 2022

    9

  • Eduardo Reis
    @edreisMD (Twitter)

    RT @EugeneVinitsky: Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 20, 2022

    9

  • ざんぬ
    @yukky_saito (Twitter)

    @ki_ki_ki1 Is that you pulling it with a rope? https://t.co/vHX2RsGqWv
    view full post

    January 20, 2022

  • flight GNC
    @flight_gnc (Twitter)

    RT @EugeneVinitsky: Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 20, 2022

    9

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    Project Page: https://t.co/ezFZiUkBoz Main Video: https://t.co/ZOJXSNrqKh
    view full post

    January 20, 2022

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    #WildANYmal My controller published at Science Robotics was used in this video. https://t.co/y7u008utn0 #spot vs #anymal https://t.co/VWWfZhtbYC
    view full post

    January 20, 2022

    4

  • おじゅげ
    @koke_ne130 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • HiNATA(3Dプリンター大好きマン)
    @HiNATA13638705 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Steve Crowe
    @SteveCrowe (Twitter)

    RT @erwincoumans: Exciting new research paper and video by ETH "Learning robust perceptive locomotion for quadrupedal robots in the wild" (…
    view full post

    January 20, 2022

    9

  • Gary Catterall
    @GaryCatterall (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Dr. THsama
    @THsama2 (Twitter)

    So this is it https://t.co/WxDKN42exE
    view full post

    January 20, 2022

  • Nikhil Barhate
    @nikhilbarhate99 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Davide Faconti
    @facontidavide (Twitter)

    RT @leggedrobotics: Also check out this great explainer video by @ScienceMagazine https://t.co/TPT8bTWxD5
    view full post

    January 20, 2022

    2

  • 冬眠中のF
    @DAMDAMF_M (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Jens Meier
    @JensMeier144 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Rapier4
    @Viper3Engage (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • 高雄
    @azTakao (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Stefanos Charalambous
    @stefanos_ch3 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • HMS_Azurlane ͏ London
    @HMS_London (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • かいとう らん
    @on_ca (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • GlobalBritainTech #WorldRobotDay 25 Jan
    @WorldRobotDay (Twitter)

    RT @erwincoumans: Exciting new research paper and video by ETH "Learning robust perceptive locomotion for quadrupedal robots in the wild" (…
    view full post

    January 20, 2022

    9

  • Thomas Godden
    @GoddenThomas (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Mustafa Khammash
    @KhammashLab (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Future Timeline
    @future_timeline (Twitter)

    "...Researchers trained ANYmal to rely solely on its proprioceptive perception when it was at odds with its height map. This iteration of the #robot could move twice as fast as its predecessor, and about as fast as an average human’s walking speed." https://t.co/SElX2ee1Mf
    view full post

    January 20, 2022

    2

  • Vassilis Vassiliades
    @v_vassiliades (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Andy Scollick
    @Andy_Scollick (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Anton Baumann
    @antonbaumann101 (Twitter)

    RT @EugeneVinitsky: Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 20, 2022

    9

  • Joël Mesot
    @Joel_Mesot (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • ΔMASH
    @shun_dmash (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • Marin Vlastelica
    @vlastelicap (Twitter)

    RT @EugeneVinitsky: Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 20, 2022

    9

  • computercom
    @computercom4 (Twitter)

    RT @PDH_SciTechNews: #Robotics This robot can hike as fast as a human - Science Magazine https://t.co/lKgsZGSY1t #robot #technology #engine…
    view full post

    January 20, 2022

    2

  • azuer-bot
    @AzuerBot (Twitter)

    RT @PDH_SciTechNews: #Robotics This robot can hike as fast as a human - Science Magazine https://t.co/lKgsZGSY1t #robot #technology #engine…
    view full post

    January 20, 2022

    2

  • PDH
    @PDH_Metaverse (Twitter)

    #Robotics This robot can hike as fast as a human - Science Magazine https://t.co/lKgsZGSY1t #robot #technology #engineering #robots #automation #tech #innovation #ai #iot #coding #programming #stem #engineer #science
    view full post

    January 20, 2022

    1

    2

  • Tarik Alafif (طارق العفيف)
    @tarik_alafif (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • ɥozɐʞıɥ
    @hikazoh (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 20, 2022

    68

  • ADIROID
    @adimatic (Twitter)

    Learning robust perceptive locomotion for quadrupedal robots in the wild https://t.co/0FPYItT3Wo via @YouTube
    view full post

    January 20, 2022

  • hex_cric
    @hex_cric (Twitter)

    RT @EugeneVinitsky: Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 20, 2022

    9

  • Jonah Philion
    @PhilionJonah (Twitter)

    RT @EugeneVinitsky: Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 20, 2022

    9

  • Hamid Eghbalzadeh
    @heghbalz (Twitter)

    RT @EugeneVinitsky: Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 19, 2022

    9

  • Dr. Tom
    @Mechapinata (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • プロニート
    @xof_Pussy_Lv999 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Eugene Vinitsky (@RLC)
    @EugeneVinitsky (Twitter)

    Sim2real is real, RL is useful for control, and this paper is awesome https://t.co/XesbkGtCzE https://t.co/XYx54np2mn
    view full post

    January 19, 2022

    66

    9

  • Nijanthan Vasudevan
    @nijanthanspace (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • 浜垣 博志
    @sadajpe1 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • como
    @ReedRoof (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • HDMI
    @BMI5100 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Jabril Jacobs
    @JabrilJacobs (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Caroline Naomi
    @Caroline_Naomi (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Akira Sasaki
    @gclue_akira (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Juan Jimeno
    @joemeno (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • JM3LGF
    @jm3lgf (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • BREAD
    @BREAD200011 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Kyle
    @KyleMorgenstein (Twitter)

    massive congrats to the team at ETH, perceptive locomotion has been on the horizon for some time now but this is the first time (to my knowledge) it's ever been achieved and deployed. hugely impressive work: https://t.co/xqvGq4Sb1T
    view full post

    January 19, 2022

    25

  • Jacky Liang
    @jackyliang42 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Naoki Akai(赤井直紀)
    @naokiakai (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • おれんじ色
    @orangeiro7 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • トランジスタ技術&Interface掲載「ECN Products」ほしい技術&製品が見つかる!
    @ECN_cqpub (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • ZYF
    @zhangyunfeng (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Namihei Adachi
    @7oei (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Yujin Tang
    @yujin_tang (Twitter)

    RT @erwincoumans: Exciting new research paper and video by ETH "Learning robust perceptive locomotion for quadrupedal robots in the wild" (…
    view full post

    January 19, 2022

    9

  • さくふわめろんぱん
    @skfwMelonpan (Twitter)

    RT @erwincoumans: Exciting new research paper and video by ETH "Learning robust perceptive locomotion for quadrupedal robots in the wild" (…
    view full post

    January 19, 2022

    9

  • A. Fukuhara
    @fukufuku_7 (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    RT @erwincoumans: Exciting new research paper and video by ETH "Learning robust perceptive locomotion for quadrupedal robots in the wild" (…
    view full post

    January 19, 2022

    9

  • Reinforcement Learning Bot
    @ReinforcementB (Twitter)

    RT @erwincoumans: Exciting new research paper and video by ETH "Learning robust perceptive locomotion for quadrupedal robots in the wild" (…
    view full post

    January 19, 2022

    9

  • Erwin Coumans
    @erwincoumans (Twitter)

    Exciting new research paper and video by ETH "Learning robust perceptive locomotion for quadrupedal robots in the wild" (arxiv preprint link will follow once public) https://t.co/CPCSDjUrHF
    view full post

    January 19, 2022

    44

    9

  • JF Shaw Co., Inc.
    @jfshawco (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Swiss Robotics
    @swissrobotics (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Open Robotics
    @OpenRoboticsOrg (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Robot Operating System (ROS)
    @rosorg (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    RT @leggedrobotics: Also check out this great explainer video by @ScienceMagazine https://t.co/TPT8bTWxD5
    view full post

    January 19, 2022

    2

  • Robotic Systems Lab
    @leggedrobotics (Twitter)

    Also check out this great explainer video by @ScienceMagazine https://t.co/TPT8bTWxD5
    view full post

    January 19, 2022

    10

    2

  • Klaus Lex
    @Klaus_Lex (Twitter)

    RT @ki_ki_ki1: Science Magazine covered my work! Thanks a lot! This robot can hike as fast as a human https://t.co/A8Fn6YEJuu https://t.co…
    view full post

    January 19, 2022

    1

  • Takahiro Miki
    @ki_ki_ki1 (Twitter)

    Science Magazine covered my work! Thanks a lot! This robot can hike as fast as a human https://t.co/A8Fn6YEJuu https://t.co/pSILe89RSM
    view full post

    January 19, 2022

    7

    1

  • Matías Mattamala @mmattamala.bsky.social
    @mmattamala (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Ishtar ✨decentralised
    @goedhart_esther (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • K.
    @UniversalFunc (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Reinforcement Learning Bot
    @ReinforcementB (Twitter)

    RT @leggedrobotics: Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our l…
    view full post

    January 19, 2022

    68

  • Robotic Systems Lab
    @leggedrobotics (Twitter)

    Wild ANYmal! We present an exceptionally robust perceptive reinforcement learning controller for legged robots in our latest @SciRobotics article. Full video: https://t.co/5uaN7u5g9X Paper: https://t.co/4wLAW01oTl @ETH_en @ETH @ScienceMagazine #robot #AI #science #WildANYmal https://t.co/L6NChMqq57
    view full post

    January 19, 2022

    306

    68

Abstract Synopsis

  • The research focuses on enabling quadrupedal robots to navigate wild and challenging environments by improving their ability to perceive the terrain using both external sensors (exteroception) and internal feedback (proprioception), addressing limitations like sensor failure and difficult lighting conditions.
  • To overcome the challenges of unreliable exteroceptive perception, the authors develop an attention-based recurrent encoder that integrates data from both perception types, allowing the robot to plan and adapt its gait more effectively and robustly in real time (see the sketch after this list).
  • The integrated perception system was tested extensively in natural and urban settings, demonstrating high robustness and speed, including completing a one-hour hike in the Alps (a task comparable to human hiking), showcasing its practical potential for autonomous exploration in complex terrain.
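
The attention-based fusion described in the second bullet can be illustrated with a minimal, hypothetical sketch. The code below (Python/PyTorch) is not the authors' implementation; the class name PerceptiveEncoder, the layer sizes, and the input dimensions are assumptions chosen only to show the idea of a recurrent encoder that learns, at each time step, how much to trust the exteroceptive stream relative to proprioception.

import torch
import torch.nn as nn

class PerceptiveEncoder(nn.Module):
    """Hypothetical attention-gated recurrent encoder fusing two sensing streams."""

    def __init__(self, proprio_dim=48, extero_dim=208, latent_dim=64, hidden_dim=128):
        super().__init__()
        # Separate embeddings for proprioceptive and exteroceptive observations.
        self.proprio_mlp = nn.Sequential(nn.Linear(proprio_dim, latent_dim), nn.ELU())
        self.extero_mlp = nn.Sequential(nn.Linear(extero_dim, latent_dim), nn.ELU())
        # Recurrent core keeps a memory of past observations, which helps when
        # exteroceptive readings drop out or become unreliable.
        self.gru = nn.GRU(2 * latent_dim, hidden_dim, batch_first=True)
        # Attention gate: per time step, how much to trust the exteroceptive stream.
        self.gate = nn.Sequential(nn.Linear(hidden_dim, latent_dim), nn.Sigmoid())
        self.out = nn.Linear(hidden_dim + latent_dim, latent_dim)

    def forward(self, proprio, extero, hidden=None):
        # proprio: (batch, time, proprio_dim); extero: (batch, time, extero_dim)
        p = self.proprio_mlp(proprio)
        e = self.extero_mlp(extero)
        h_seq, hidden = self.gru(torch.cat([p, e], dim=-1), hidden)
        alpha = self.gate(h_seq)                 # attention weights in [0, 1]
        fused = alpha * e + (1.0 - alpha) * p    # downweight unreliable exteroception
        belief = self.out(torch.cat([h_seq, fused], dim=-1))
        return belief, hidden                    # belief state fed to a locomotion policy

# Example: a batch of 4 rollouts, 50 time steps each.
encoder = PerceptiveEncoder()
belief, h = encoder(torch.randn(4, 50, 48), torch.randn(4, 50, 208))
print(belief.shape)  # torch.Size([4, 50, 64])

In the setting the posts describe, the output of such an encoder would condition a locomotion policy trained in simulation and transferred zero-shot to the robot; the sketch is included only to make the exteroceptive/proprioceptive fusion step concrete.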