Please use this identifier to cite or link to this item: https://open.uns.ac.rs/handle/123456789/15438
Title: Mining methodologies from NLP publications: A case study in automatic terminology recognition
Аutоri: Kovačević, Aleksandar 
Konjović Z.
Milosavljević, Branko 
Nenad, G. 
Issue Date: 1-Apr-2012
Journal: Computer Speech and Language
Abstract: The task of reviewing scientific publications and keeping up with the literature in a particular domain is extremely time-consuming. Extraction and exploration of methodological information, in particular, requires systematic understanding of the literature, but in many cases is performed within a limited context of publications that can be manually reviewed by an individual or group. Automated methodology identification could provide an opportunity for systematic retrieval of relevant documents and for exploring developments within a given discipline. In this paper we present a system for the identification of methodology mentions in scientific publications in the area of natural language processing, and in particular in automatic terminology recognition. The system comprises two major layers: the first layer is an automatic identification of methodological sentences; the second layer highlights methodological phrases (segments). Each mention is categorised into one of four semantic categories: Task, Method, Resource/Feature and Implementation. Extraction and classification of the segments is formalised as a sequence tagging problem and four separate phrase-based Conditional Random Fields are used to accomplish the task. The system has been evaluated on a manually annotated corpus comprising 45 full text articles. The results for the segment-level annotation show an F-measure of 53% for identification of Task and Method mentions (with 70% precision), whereas the F-measures for Resource/Feature and Implementation identification were 61% (with 67% precision) and 75% (with 86% precision) respectively. At the document level, F-measures of 72% (with 81% precision) for Task mentions, 60% (with 81% precision) for Method mentions, 74% (with 78% precision) for Resource/Feature and 79% (with 81% precision) for Implementation were achieved. We provide a detailed analysis of errors and explore the impact that particular groups of features have on the extraction of methodological segments. © 2011 Elsevier Ltd. All rights reserved.
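Note: The abstract formalises segment extraction and classification as a sequence tagging problem solved with phrase-based Conditional Random Fields, one per semantic category. The sketch below is a minimal illustration of that general setup (BIO-style tagging with a linear-chain CRF), assuming the third-party sklearn-crfsuite library; the feature set, the toy sentence and its labels are invented for demonstration and do not reproduce the authors' implementation.

# Illustrative sketch only: BIO sequence tagging with a linear-chain CRF,
# roughly analogous to training one tagger per semantic category
# (Task, Method, Resource/Feature, Implementation) as described above.
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features: word identity, shape, and immediate context."""
    word = tokens[i]
    return {
        'word.lower': word.lower(),
        'word.istitle': word.istitle(),
        'word.isdigit': word.isdigit(),
        'prev.word': tokens[i - 1].lower() if i > 0 else '<BOS>',
        'next.word': tokens[i + 1].lower() if i < len(tokens) - 1 else '<EOS>',
    }

# One toy "methodological" sentence with BIO labels for the Method category.
sentence = ['We', 'apply', 'conditional', 'random', 'fields', 'to', 'this', 'task']
labels   = ['O',  'O',     'B-Method',    'I-Method', 'I-Method', 'O', 'O', 'O']

X_train = [[token_features(sentence, i) for i in range(len(sentence))]]
y_train = [labels]

# Train a linear-chain CRF and tag the same sequence (demonstration only).
crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))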
URI: https://open.uns.ac.rs/handle/123456789/15438
ISSN: 0885-2308
DOI: 10.1016/j.csl.2011.09.001
Appears in Collections: PMF Publikacije/Publications
