<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-03-08T10:51:44Z</responseDate>
  <request metadataPrefix="oai_dc" verb="GetRecord" identifier="oai:ipsj.ixsq.nii.ac.jp:00234833">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00234833</identifier>
        <datestamp>2025-01-19T09:41:52Z</datestamp>
        <setSpec>1164:5352:11553:11625</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:title>敵対的サンプルにおける転移性の定量的評価</dc:title>
          <dc:title>Evaluation of Transferability for Adversarial Examples</dc:title>
          <dc:creator>加藤, 駿一</dc:creator>
          <dc:creator>熊谷, 瞭</dc:creator>
          <dc:creator>竹本, 修</dc:creator>
          <dc:creator>野崎, 佑典</dc:creator>
          <dc:creator>吉川, 雅弥</dc:creator>
          <dc:creator>Shunichi, Kato</dc:creator>
          <dc:creator>Ryo, Kumagai</dc:creator>
          <dc:creator>Shu, Taketomo</dc:creator>
          <dc:creator>Yusuke, Nozaki</dc:creator>
          <dc:creator>Masaya, Yoshikawa</dc:creator>
          <dc:subject>情報論的学習理論と機械学習2</dc:subject>
          <dc:description>AI に対する脅威として敵対的サンプル（Adversarial Example：AE）が報告されている．AE は，入力画像に微小なノイズを加えることで推論結果を誤認識させる攻撃である．近年，この AE の転移性と呼ばれる性質を用いた転移攻撃も報告されている．しかし，これまでに転移攻撃への対策に関する評価はほとんど行われていない．そこで本研究では，フィルタリングや Test-Time Augmentation（TTA）を用いた対策手法を構築し，転移攻撃に対する耐性を定量的に評価する．実験結果から，これらの対策手法が転移攻撃に対する対策として有効であると示した．</dc:description>
          <dc:description>The Adversarial Example (AE) has been reported as a threat to AI. An AE is an attack that causes prediction results to be misclassified by adding small noise to the input image. Recently, the Transferable Adversarial Attack, which exploits the transferability of Adversarial Examples, has also been reported. However, countermeasures against the Transferable Adversarial Attack have rarely been evaluated. Therefore, this study constructs countermeasures based on filtering and Test-Time Augmentation (TTA), and quantitatively evaluates their resistance to the attack. Experiments show the validity of these countermeasures.</dc:description>
          <dc:description>technical report</dc:description>
          <dc:publisher>情報処理学会</dc:publisher>
          <dc:date>2024-06-13</dc:date>
          <dc:format>application/pdf</dc:format>
          <dc:identifier>研究報告バイオ情報学（BIO）</dc:identifier>
          <dc:identifier>6</dc:identifier>
          <dc:identifier>2024-BIO-78</dc:identifier>
          <dc:identifier>1</dc:identifier>
          <dc:identifier>6</dc:identifier>
          <dc:identifier>2188-8590</dc:identifier>
          <dc:identifier>AA12055912</dc:identifier>
          <dc:identifier>https://ipsj.ixsq.nii.ac.jp/record/234833/files/IPSJ-BIO24078006.pdf</dc:identifier>
          <dc:language>jpn</dc:language>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
