Obtaining Shading Properties from Multi-Viewpoint Images
https://ipsj.ixsq.nii.ac.jp/records/88218
| Name / File | License | Price |
|---|---|---|
| Available for download from January 1, 2100. | Copyright (c) 2013 by the Institute of Electronics, Information and Communication Engineers. This SIG report is only available to those in membership of the SIG. | CVIM: Member: ¥0, DLIB: Member: ¥0 |
| Item type | SIG Technical Reports (1) |
|---|---|
| Release date | 2013-01-16 |
| Title | Obtaining Shading Properties from Multi-Viewpoint Images |
| Title language | en |
| Language | eng |
| Resource type identifier | http://purl.org/coar/resource_type/c_18gh |
| Resource type | technical report |
| Author affiliation | Georgia Institute of Technology |
| Author affiliation | NTT Media Intelligence Laboratories |
| Author affiliation | NTT Media Intelligence Laboratories |
| Author affiliation | NTT Media Intelligence Laboratories |
| Author affiliation (en) | Georgia Institute of Technology |
| Author affiliation (en) | NTT Media Intelligence Laboratories |
| Author affiliation (en) | NTT Media Intelligence Laboratories |
| Author affiliation (en) | NTT Media Intelligence Laboratories |
| Author name | Ryan, Jones |
| Author name (en) | Ryan, Jones |
| Abstract (description type) | Other |
| Abstract | Integrating virtual models into real environments requires realistic lighting and shading for believable results. Current methods of obtaining this realism require extensive effort on the part of the user or highly specialized equipment. We propose a method to obtain the shading properties of an object by using only a short film or set of pictures from the user centered on a specific object. We reconstruct the object from the images and fit spherical harmonic basis functions to the luminance values with respect to the surface normals of the object to obtain the lighting properties of the environment. The results show that our method is able to create plausible synthetic images with realistically shaded virtual models. Our method can be applied to consumer augmented reality services because it only requires images that can be captured by devices such as consumer-level cameras. |
| Abstract (en, description type) | Other |
| Abstract (en) | Integrating virtual models into real environments requires realistic lighting and shading for believable results. Current methods of obtaining this realism require extensive effort on the part of the user or highly specialized equipment. We propose a method to obtain the shading properties of an object by using only a short film or set of pictures from the user centered on a specific object. We reconstruct the object from the images and fit spherical harmonic basis functions to the luminance values with respect to the surface normals of the object to obtain the lighting properties of the environment. The results show that our method is able to create plausible synthetic images with realistically shaded virtual models. Our method can be applied to consumer augmented reality services because it only requires images that can be captured by devices such as consumer-level cameras. |
| Bibliographic record ID (NCID) | AA11131797 |
| Bibliographic information | IPSJ SIG Technical Report: Computer Vision and Image Media (CVIM), Vol. 2013-CVIM-185, No. 43, pp. 1-6, issued 2013-01-16 |
| Notice | SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc. |
| Publisher (ja) | 情報処理学会 (Information Processing Society of Japan) |
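The abstract describes fitting spherical harmonic basis functions to observed luminance values as a function of surface normals to recover the environment's lighting. A minimal sketch of that fitting step, not the authors' implementation: it assumes a second-order (9-coefficient) real SH basis and a linear least-squares solve; all function names here are illustrative.

```python
# Hedged sketch of SH lighting estimation from (normal, luminance) samples.
# Assumption: a 2nd-order real spherical-harmonic basis (9 coefficients),
# which is a common choice for diffuse lighting; the paper may differ.
import numpy as np

def sh_basis(n):
    """Real spherical-harmonic basis (bands 0-2) evaluated at unit normal n."""
    x, y, z = n
    return np.array([
        0.282095,                 # Y_0^0
        0.488603 * y,             # Y_1^-1
        0.488603 * z,             # Y_1^0
        0.488603 * x,             # Y_1^1
        1.092548 * x * y,         # Y_2^-2
        1.092548 * y * z,         # Y_2^-1
        0.315392 * (3*z*z - 1),   # Y_2^0
        1.092548 * x * z,         # Y_2^1
        0.546274 * (x*x - y*y),   # Y_2^2
    ])

def fit_lighting(normals, luminances):
    """Least-squares fit of 9 SH lighting coefficients to observed samples."""
    A = np.stack([sh_basis(n) for n in normals])   # (N, 9) design matrix
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(luminances), rcond=None)
    return coeffs

def shade(normal, coeffs):
    """Predict luminance for a virtual model's surface normal under the fit."""
    return float(sh_basis(normal) @ coeffs)
```

With the coefficients recovered, a virtual model inserted into the scene can be shaded by evaluating `shade` at each of its surface normals, which is what makes the composited result plausible.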