<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-04-20T07:21:45Z</responseDate>
  <request verb="GetRecord" metadataPrefix="jpcoar_1.0" identifier="oai:ipsj.ixsq.nii.ac.jp:00237542">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00237542</identifier>
        <datestamp>2025-01-19T08:51:05Z</datestamp>
        <setSpec>934:1022:11484:11667</setSpec>
      </header>
      <metadata>
          <jpcoar:jpcoar xmlns:datacite="https://schema.datacite.org/meta/kernel-4/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcndl="http://ndl.go.jp/dcndl/terms/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:jpcoar="https://github.com/JPCOAR/schema/blob/master/1.0/" xmlns:oaire="http://namespace.openaire.eu/schema/oaire/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rioxxterms="http://www.rioxx.net/schema/v2.0/rioxxterms/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="https://github.com/JPCOAR/schema/blob/master/1.0/" xsi:schemaLocation="https://github.com/JPCOAR/schema/blob/master/1.0/ https://github.com/JPCOAR/schema/blob/master/1.0/jpcoar_scm.xsd">
          <dc:title>Acceptability Evaluation of Naturally Written Sentences</dc:title>
          <dc:title xml:lang="en">Acceptability Evaluation of Naturally Written Sentences</dc:title>
          <jpcoar:creator>
            <jpcoar:creatorName>Daultani, Vijay</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Vázquez Martínez, Héctor Javier</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Okazaki, Naoaki</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Daultani, Vijay</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Vázquez Martínez, Héctor Javier</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Okazaki, Naoaki</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:subject subjectScheme="Other">[Research Paper] acceptability, readability, grammaticality, generative text, text evaluation, syntactic knowledge, speakers' judgement</jpcoar:subject>
          <datacite:description descriptionType="Other">The success of Language Models (LMs) on a variety of NLP tasks has prompted the design and analysis of natural language benchmarks to evaluate their fitness for particular applications. In this work, we focus on the NLP task of acceptability rating, whereby a given model must rate the ‘goodness’ of a series of tokens. We find that the datasets currently in common use for benchmarking LM sentence acceptability fail to capture the distribution of naturally occurring written data. Moreover, we find that their bias toward shorter (5-8 word) sentences is a strong confounding factor that contributes positively to LMs' performance. We then introduce seven datasets collected from the NLP literature that closely follow the sentence-length distribution of naturally occurring written text. In our experiments, when sentence length is controlled by adjusting the distribution to match naturally occurring data, we observe a performance drop of up to 48 points in MCC on the currently common benchmark datasets. We conclude with a discussion of the implications for current applications and recommendations for improving the commonly used acceptability benchmarking datasets.
------------------------------
This is a preprint of an article intended for publication in the Journal of
Information Processing (JIP). This preprint should not be cited. This
article should be cited as: Journal of Information Processing Vol.32 (2024) (online)
------------------------------</datacite:description>
          <dc:publisher xml:lang="ja">情報処理学会</dc:publisher>
          <datacite:date dateType="Issued">2024-07-24</datacite:date>
          <dc:language>eng</dc:language>
          <dc:type rdf:resource="http://purl.org/coar/resource_type/c_6501">journal article</dc:type>
          <jpcoar:identifier identifierType="URI">https://ipsj.ixsq.nii.ac.jp/records/237542</jpcoar:identifier>
          <jpcoar:sourceIdentifier identifierType="ISSN">1882-7799</jpcoar:sourceIdentifier>
          <jpcoar:sourceIdentifier identifierType="NCID">AA11464847</jpcoar:sourceIdentifier>
          <jpcoar:sourceTitle>情報処理学会論文誌データベース（TOD）</jpcoar:sourceTitle>
          <jpcoar:volume>17</jpcoar:volume>
          <jpcoar:issue>3</jpcoar:issue>
          <jpcoar:file>
            <jpcoar:URI label="IPSJ-TOD1703002.pdf">https://ipsj.ixsq.nii.ac.jp/record/237542/files/IPSJ-TOD1703002.pdf</jpcoar:URI>
            <jpcoar:mimeType>application/pdf</jpcoar:mimeType>
            <jpcoar:extent>901.2 kB</jpcoar:extent>
            <datacite:date dateType="Available">2026-07-24</datacite:date>
          </jpcoar:file>
        </jpcoar:jpcoar>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
