<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-04-12T04:03:07Z</responseDate>
  <request identifier="oai:ipsj.ixsq.nii.ac.jp:00231487" metadataPrefix="jpcoar_1.0" verb="GetRecord">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00231487</identifier>
        <datestamp>2025-01-19T10:44:41Z</datestamp>
        <setSpec>1164:3616:11132:11410</setSpec>
      </header>
      <metadata>
        <jpcoar:jpcoar xmlns:datacite="https://schema.datacite.org/meta/kernel-4/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcndl="http://ndl.go.jp/dcndl/terms/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:jpcoar="https://github.com/JPCOAR/schema/blob/master/1.0/" xmlns:oaire="http://namespace.openaire.eu/schema/oaire/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rioxxterms="http://www.rioxx.net/schema/v2.0/rioxxterms/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="https://github.com/JPCOAR/schema/blob/master/1.0/" xsi:schemaLocation="https://github.com/JPCOAR/schema/blob/master/1.0/jpcoar_scm.xsd">
          <dc:title>呼吸音から生成した複数画像による呼吸器疾患の自動分類</dc:title>
          <dc:title xml:lang="en">Automatic Classification of Respiratory Diseases Using Multiple Images Generated from Respiratory Sound</dc:title>
          <jpcoar:creator>
            <jpcoar:creatorName>田端, 愛美</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>陸, 慧敏</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>神谷, 亨</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>間普, 真吾</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>木戸, 尚治</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Manami, Tabata</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Huimin, Lu</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Tohru, Kamiya</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Shingo, Mabu</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Shoji, Kido</jpcoar:creatorName>
          </jpcoar:creator>
          <datacite:description descriptionType="Other">呼吸器疾患は世界の死因の上位に挙げられており，死者数は年間約 800 万人にのぼる．呼吸器疾患の診断方法である聴診はシンプルかつ低コストであるため広く用いられるが，定量的な基準がないため，診断結果は医師の技量に左右される．熟練の医師が不足した災害現場では診断が困難になるため，コンピュータ支援診断 (CAD: Computer Aided Diagnosis) システムの開発が求められている．本論文では，呼吸音に対して異なる周波数解析法により生成した 2 種類の Spectrogram を入力とした CNN (Convolutional Neural Network) を用い，呼吸音の自動分類を行う手法を提案する．提案法を ICBHI (International Conference on Biomedical and Health Informatics) 2017 Challenge dataset に適用した結果，Sensitivity 64.6%，Specificity 82.3%，Average Score 72.4%，Harmonic Score 72.4%，Accuracy 74.0%，AUC 87.1%，偽陰性率 22.0% が得られた．1 種類の Spectrogram 画像のみを入力とした場合と Accuracy を比較したところ，約 1.9~5.0% の精度向上がみられた．</datacite:description>
          <datacite:description descriptionType="Other">About 8 million people worldwide die from respiratory diseases every year. Auscultation with a stethoscope is inexpensive and places little burden on the patient; however, accurate listening requires skilled technique. A Computer Aided Diagnosis (CAD) system is therefore needed. In this study, we propose an automatic classification method using a Convolutional Neural Network (CNN) that takes two types of spectrograms as inputs. On the International Conference on Biomedical and Health Informatics (ICBHI) 2017 Challenge dataset, a sensitivity of 64.6%, specificity of 82.3%, accuracy of 74.0%, and AUC of 87.1% were obtained. These results show an accuracy improvement of approximately 1.9% to 5.0% compared with using only one type of spectrogram as input.</datacite:description>
          <dc:publisher xml:lang="ja">情報処理学会</dc:publisher>
          <datacite:date dateType="Issued">2023-12-04</datacite:date>
          <dc:language>jpn</dc:language>
          <dc:type rdf:resource="http://purl.org/coar/resource_type/c_18gh">technical report</dc:type>
          <jpcoar:identifier identifierType="URI">https://ipsj.ixsq.nii.ac.jp/records/231487</jpcoar:identifier>
          <jpcoar:sourceIdentifier identifierType="ISSN">2188-8582</jpcoar:sourceIdentifier>
          <jpcoar:sourceIdentifier identifierType="NCID">AN10438399</jpcoar:sourceIdentifier>
          <jpcoar:sourceTitle>研究報告オーディオビジュアル複合情報処理（AVM）</jpcoar:sourceTitle>
          <jpcoar:volume>2023-AVM-123</jpcoar:volume>
          <jpcoar:issue>11</jpcoar:issue>
          <jpcoar:pageStart>1</jpcoar:pageStart>
          <jpcoar:pageEnd>6</jpcoar:pageEnd>
          <jpcoar:file>
            <jpcoar:URI label="IPSJ-AVM23123011.pdf">https://ipsj.ixsq.nii.ac.jp/record/231487/files/IPSJ-AVM23123011.pdf</jpcoar:URI>
            <jpcoar:mimeType>application/pdf</jpcoar:mimeType>
            <jpcoar:extent>1.7 MB</jpcoar:extent>
          </jpcoar:file>
        </jpcoar:jpcoar>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
