Human centric vision understanding for Human Robot Interaction

Yanli Ji

Ikoma: Nara Institute of Science and Technology, 2019.2

Lecture archive

Holdings information

1 item total

No. | Location | Item ID
1 | LA-I-R - [Mobile] | M016019

(Other columns in the holdings record — printing year, call number, loan category, status, and number of reservations — are empty.)

Description

Robots, especially social robots, are increasingly employed to assist humans in a variety of settings, e.g., homes, hospitals, hotels, elderly-care centers, and industrial sites. Research on HRI involving social interaction with humans is attracting increasing attention. Our work focuses on human-centric visual understanding, including human action recognition, gesture motion estimation and recognition, and facial expression classification. For human actions, we proposed effective action feature representation methods based on RGB videos and skeletons to recognize actions in the wild and human-human interactions. Moreover, we collected a new large-scale RGB-D action database for arbitrary-view action analysis, comprising RGB videos, depth maps, and skeleton sequences. The database includes action samples captured from 8 fixed viewpoints as well as varying-view sequences covering the full 360° range of view angles. Furthermore, we proposed approaches to estimate hand gestures from a single depth image for natural HRI applications. For emotion understanding in HRI, a domain adaptation approach was designed to solve the problem of emotion perception from facial expressions.

Detailed information

Publication year

2018

Format

Digitized video (1 hr. 14 min.)

Series

Division of Information Science Colloquium; FY 2018 (Heisei 30)

Notes

Speaker affiliation: University of Electronic Science and Technology of China (UESTC), China

Lecture date: February 6, 2019 (Heisei 31)

Venue: Information Science Building, Large Lecture Hall L1

Title language

English (eng)

Text language

English (eng)

Author information

Ji, Yanli