<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-04-17T16:33:37Z</responseDate>
  <request verb="GetRecord" metadataPrefix="jpcoar_1.0" identifier="oai:ipsj.ixsq.nii.ac.jp:00218188">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00218188</identifier>
        <datestamp>2025-01-19T15:14:19Z</datestamp>
        <setSpec>1164:4061:10837:10917</setSpec>
      </header>
      <metadata>
        <jpcoar:jpcoar xmlns:datacite="https://schema.datacite.org/meta/kernel-4/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcndl="http://ndl.go.jp/dcndl/terms/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:jpcoar="https://github.com/JPCOAR/schema/blob/master/1.0/" xmlns:oaire="http://namespace.openaire.eu/schema/oaire/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rioxxterms="http://www.rioxx.net/schema/v2.0/rioxxterms/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="https://github.com/JPCOAR/schema/blob/master/1.0/" xsi:schemaLocation="https://github.com/JPCOAR/schema/blob/master/1.0/ jpcoar_scm.xsd">
          <dc:title>Automatic Eating Stage Classification using ASMR videos</dc:title>
          <dc:title xml:lang="en">Automatic Eating Stage Classification using ASMR videos</dc:title>
          <jpcoar:creator>
            <jpcoar:creatorName>Mari, Izumikawa</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Takafumi, Kawasaki</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Tadashi, Okoshi</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Jin, Nakazawa</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Mari, Izumikawa</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Takafumi, Kawasaki</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Tadashi, Okoshi</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Jin, Nakazawa</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:subject subjectScheme="Other">感覚・知覚</jpcoar:subject>
          <datacite:description descriptionType="Other">A balanced diet and an appropriate calorie intake are key to both preventing and treating type II diabetes. Meanwhile, widespread techniques such as manual food logs and food image capture place a burden on people with diabetes and make it difficult for diet monitoring to become part of one's routine. The ultimate aim of this study is to develop an earable device that automatically monitors the volume of food intake. However, automatic food intake volume monitoring requires detecting biting, chewing, and swallowing sounds for foods of various sizes and textures. The present research therefore attempted to classify eating sounds, collected from YouTube eating ASMR videos, into one of the following labels: bite/chew, swallow, or other. A CNN machine learning model using sound features as input achieved an accuracy of 86%.</datacite:description>
          <dc:publisher xml:lang="ja">情報処理学会</dc:publisher>
          <datacite:date dateType="Issued">2022-05-30</datacite:date>
          <dc:language>eng</dc:language>
          <dc:type rdf:resource="http://purl.org/coar/resource_type/c_18gh">technical report</dc:type>
          <jpcoar:identifier identifierType="URI">https://ipsj.ixsq.nii.ac.jp/records/218188</jpcoar:identifier>
          <jpcoar:sourceIdentifier identifierType="ISSN">2188-8698</jpcoar:sourceIdentifier>
          <jpcoar:sourceIdentifier identifierType="NCID">AA11838947</jpcoar:sourceIdentifier>
          <jpcoar:sourceTitle>研究報告ユビキタスコンピューティングシステム（UBI）</jpcoar:sourceTitle>
          <jpcoar:volume>2022-UBI-74</jpcoar:volume>
          <jpcoar:issue>9</jpcoar:issue>
          <jpcoar:pageStart>1</jpcoar:pageStart>
          <jpcoar:pageEnd>7</jpcoar:pageEnd>
          <jpcoar:file>
            <jpcoar:URI label="IPSJ-UBI22074009.pdf">https://ipsj.ixsq.nii.ac.jp/record/218188/files/IPSJ-UBI22074009.pdf</jpcoar:URI>
            <jpcoar:mimeType>application/pdf</jpcoar:mimeType>
            <jpcoar:extent>8.4 MB</jpcoar:extent>
            <datacite:date dateType="Available">2024-05-30</datacite:date>
          </jpcoar:file>
        </jpcoar:jpcoar>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
