<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-03-15T05:11:58Z</responseDate>
  <request metadataPrefix="oai_dc" verb="GetRecord" identifier="oai:ipsj.ixsq.nii.ac.jp:00229206">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00229206</identifier>
        <datestamp>2025-01-19T11:36:07Z</datestamp>
        <setSpec>1164:6757:11095:11361</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns="http://www.w3.org/2001/XMLSchema" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:title>3D-aware Semantic Image Synthesis</dc:title>
          <dc:creator>Chattarin, Rodpon</dc:creator>
          <dc:creator>Yoshihiro, Kanamori</dc:creator>
          <dc:creator>Yuki, Endo</dc:creator>
          <dc:description>We propose 3D-aware semantic image synthesis by explicitly introducing 3D information to semantic image synthesis. Existing methods of semantic image synthesis try to translate a semantic mask to a realistic RGB image directly. However, semantic masks neither convey sufficient information on the 3D scene structure nor interior shapes within the masks, making 3D-aware image synthesis a challenging task. To tackle this problem, we integrate 3D scene knowledge as depth information into image synthesis by introducing a multi-task network which not only generates an RGB image but also a depth representation. We also introduce a wireframe parsing loss to further enforce 3D scene structure in image generation. We demonstrate that our method outperforms baseline methods across several datasets via qualitative and quantitative evaluations.</dc:description>
          <dc:description>technical report</dc:description>
          <dc:publisher>Information Processing Society of Japan (IPSJ)</dc:publisher>
          <dc:date>2023-11-09</dc:date>
          <dc:format>application/pdf</dc:format>
          <dc:identifier>IPSJ SIG Technical Report: Digital Contents Creation (DCC)</dc:identifier>
          <dc:identifier>39</dc:identifier>
          <dc:identifier>2023-DCC-35</dc:identifier>
          <dc:identifier>1</dc:identifier>
          <dc:identifier>6</dc:identifier>
          <dc:identifier>2188-8868</dc:identifier>
          <dc:identifier>AA12628338</dc:identifier>
          <dc:identifier>https://ipsj.ixsq.nii.ac.jp/record/229206/files/IPSJ-DCC23035039.pdf</dc:identifier>
          <dc:language>eng</dc:language>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
