Trajectory Generation for Plane Obstacle Avoidance of Vision-Guided Manipulators
https://ipsj.ixsq.nii.ac.jp/records/53189
Copyright (c) 1996 by the Information Processing Society of Japan
Open Access
Item type | SIG Technical Reports(1)
Publication date | 1996-05-23
Title (ja) | 2台のカメラを用いたマニピュレータの平面障害物回避軌道の生成
Title (en) | Trajectory Generation for Plane Obstacle Avoidance of Vision-Guided Manipulators
Language | jpn
Resource type | technical report (http://purl.org/coar/resource_type/c_18gh)
Author affiliation (ja) | 大阪大学工学部電子制御機械工学科
Author affiliation (en) | Dept. of Mechanical Engineering for Computer-Controlled Machinery, Faculty of Engineering, Osaka University
Author | 城井壮一郎
Author (en) | Soichiro Kii
Abstract (type: Other) | This paper deals with real-time distance measurement and real-time recognition of six basic facial expressions. To measure the distance between a human and the robot in real time, we use a transputer for parallel processing and two CCD cameras mounted in the robot's eyeballs. Using the parallax of the human images obtained from the two CCD cameras, we measure the distance between the human and the robot, and find that the average error ratio is under 4[%] in about 80[ms] per measurement. To obtain the center position of both pupils, we record the brightness along a vertical line crossing the pupil and eyebrow with a CCD camera as base data, and calculate the cross-correlation between the base data and that in the given image. We extract the positions of the right and left pupils separately. Using the transputer, about 40[ms] is needed to obtain the right and left pupil positions. As the facial information used for expression recognition, we use brightness data of 13 vertical lines (facial information), determined empirically and covering the areas of the eyes, eyebrows, and mouth. We then acquire the facial information of the 6 basic facial expressions for 30 subjects whose face images have already been obtained. Since we use a layer-type neural network for recognition of facial expressions, the facial information of some of the 30 subjects is used to train the network, and recognition tests are performed with facial information not used for training. We find that, when 15 subjects are used for network training, the correct recognition ratio reaches 85[%], and the total time for detecting the right and left pupil positions plus recognizing the facial expression is about 55[ms] per recognition cycle.
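The abstract describes two measurement steps that can be sketched in code: depth from the parallax (disparity) between two camera images, and locating a pupil by cross-correlating a stored brightness profile against a vertical line of the current image. The following is a minimal illustrative sketch, assuming an idealized pinhole stereo model; the function names, parameters, and toy data are assumptions, not taken from the report:

```python
def depth_from_parallax(disparity_px, focal_px, baseline_m):
    """Pinhole stereo triangulation: Z = f * B / d, where d is the pixel
    disparity between the two camera images, f the focal length in pixels,
    and B the baseline between the two CCD cameras in meters.
    (Idealized model -- the report's actual calibration is not given.)"""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


def correlation_peak(base, signal):
    """Slide the stored brightness profile `base` along `signal` (brightness
    samples on a vertical line) and return the offset with the highest
    zero-mean cross-correlation -- the presumed pupil position."""
    n = len(base)
    base_mean = sum(base) / n
    b = [x - base_mean for x in base]
    best_off, best_score = 0, float("-inf")
    for off in range(len(signal) - n + 1):
        window = signal[off:off + n]
        w_mean = sum(window) / n
        # subtract the window mean so overall brightness shifts do not dominate
        score = sum(bi * (wi - w_mean) for bi, wi in zip(b, window))
        if score > best_score:
            best_off, best_score = off, score
    return best_off
```

For example, with an assumed 0.1 m baseline and a 500-pixel focal length, a 50-pixel disparity corresponds to a depth of 1 m.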
Bibliographic record ID (NCID) | AA11131797
Bibliographic information | IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM), Vol. 1996, No. 47 (1996-CVIM-100), pp. 93-100, issued 1996-05-23
Notice | SIG Technical Reports are non-refereed and may therefore later appear in journals, conference proceedings, symposia, etc.
Publisher | 情報処理学会 (Information Processing Society of Japan)