Item type |
SIG Technical Reports(1) |
Publication date |
2023-11-09 |
Title |
3D-aware Semantic Image Synthesis |
Language |
eng |
Resource type identifier |
http://purl.org/coar/resource_type/c_18gh |
Resource type |
technical report |
Author affiliation |
University of Tsukuba |
Author affiliation |
University of Tsukuba |
Author affiliation |
University of Tsukuba |
Author name |
Chattarin, Rodpon
Yoshihiro, Kanamori
Yuki, Endo
|
Abstract |
Description type |
Other |
Description |
We propose 3D-aware semantic image synthesis, which explicitly introduces 3D information into semantic image synthesis. Existing methods of semantic image synthesis translate a semantic mask directly to a realistic RGB image. However, semantic masks convey sufficient information on neither the 3D scene structure nor the interior shapes within the masks, making 3D-aware image synthesis a challenging task. To tackle this problem, we integrate 3D scene knowledge, in the form of depth information, into image synthesis by introducing a multi-task network that generates not only an RGB image but also a depth representation. We also introduce a wireframe parsing loss to further enforce the 3D scene structure in image generation. We demonstrate that our method outperforms baseline methods across several datasets in qualitative and quantitative evaluations. |
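As a rough illustration of the multi-task design described in the abstract (a shared encoder over the semantic mask feeding separate RGB and depth heads), the following is a minimal sketch in PyTorch. All names (MultiTaskGenerator, rgb_head, depth_head), layer sizes, and the toy architecture are hypothetical assumptions for illustration only; they are not the paper's actual network, and the wireframe parsing loss is not shown.

import torch
import torch.nn as nn

class MultiTaskGenerator(nn.Module):
    # Hypothetical sketch of the multi-task idea: one shared encoder
    # over a one-hot semantic mask, with two task-specific decoder
    # heads (3-channel RGB and 1-channel depth).
    def __init__(self, num_classes: int, base: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_classes, base, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

        def make_head(out_channels: int) -> nn.Sequential:
            # Upsample back to input resolution and predict the task output.
            return nn.Sequential(
                nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(base, out_channels, 4, stride=2, padding=1),
            )

        self.rgb_head = make_head(3)    # RGB image
        self.depth_head = make_head(1)  # depth representation

    def forward(self, mask_onehot: torch.Tensor):
        features = self.encoder(mask_onehot)
        rgb = torch.tanh(self.rgb_head(features))  # RGB in [-1, 1]
        depth = self.depth_head(features)
        return rgb, depth

# Usage: a dummy 35-class one-hot semantic mask at 256x256.
mask = torch.zeros(1, 35, 256, 256)
mask[:, 0] = 1.0  # assign every pixel to class 0
rgb, depth = MultiTaskGenerator(num_classes=35)(mask)
print(rgb.shape, depth.shape)  # (1, 3, 256, 256), (1, 1, 256, 256)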
Bibliographic record ID |
Source identifier type |
NCID |
Source identifier |
AN10100541 |
Bibliographic information |
IPSJ SIG Technical Report: Computer Graphics and Visual Informatics (CG)
Vol. 2023-CG-192,
No. 39,
pp. 1-6,
Issue date 2023-11-09
|
ISSN |
Source identifier type |
ISSN |
Source identifier |
2188-8949 |
Notice |
SIG Technical Reports are non-refereed and may therefore later appear in journals, conferences, symposia, etc. |
Publisher |
Information Processing Society of Japan |