<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-04-18T08:02:50Z</responseDate>
  <request verb="GetRecord" metadataPrefix="oai_dc" identifier="oai:ipsj.ixsq.nii.ac.jp:00209847">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00209847</identifier>
        <datestamp>2025-01-19T18:22:17Z</datestamp>
        <setSpec>1164:4619:10416:10532</setSpec>
      </header>
      <metadata>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:title>Towards Adversarial Robustness of Learning in the Frequency Domain</dc:title>
          <dc:creator>Subhajit, Chaudhury</dc:creator>
          <dc:creator>Toshihiko, Yamasaki</dc:creator>
          <dc:subject>Session 6-2</dc:subject>
          <dc:description>Adversarial attacks study the effect of noise on the robustness of Convolutional Neural Networks (CNNs). Typically, these works have shown that CNNs can be easily fooled by adding small imperceptible noise in the RGB color space that cannot be detected by humans. In this paper, we study the effect of adversarial attacks in the frequency domain and show that such attacks are rendered weaker by frequency domain transformations. We argue that learning CNNs in the frequency domain disentangles the frequencies corresponding to semantic and adversarial features. Owing to this property, CNNs learned in the frequency domain can selectively put less focus on the adversarial features, resulting in robust performance in the presence of adversarial noise. We performed experiments on multiple datasets and show that CNNs trained on Discrete Cosine Transform (DCT) inputs exhibit significantly better robustness to many varieties of adversarial noise than standard CNNs trained on RGB/grayscale input. Based on this result, we urge the research community to explore frequency domain learning as a promising direction for improving neural network robustness to test-time noise.</dc:description>
          <dc:description>technical report</dc:description>
          <dc:publisher>情報処理学会</dc:publisher>
          <dc:date>2021-02-25</dc:date>
          <dc:format>application/pdf</dc:format>
          <dc:identifier>研究報告コンピュータビジョンとイメージメディア（CVIM）</dc:identifier>
          <dc:identifier>49</dc:identifier>
          <dc:identifier>2021-CVIM-225</dc:identifier>
          <dc:identifier>1</dc:identifier>
          <dc:identifier>5</dc:identifier>
          <dc:identifier>2188-8701</dc:identifier>
          <dc:identifier>AA11131797</dc:identifier>
          <dc:identifier>https://ipsj.ixsq.nii.ac.jp/record/209847/files/IPSJ-CVIM21225049.pdf</dc:identifier>
          <dc:language>eng</dc:language>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
