Detecting Impact Relevant Sections in Scientific Research
Conference proceeding   Open access

Maria Becker, Kanyao Hang, Antonina Werthmann, Rezvaneh Rezapour, Haejin Lee, Jana Diesner and Andreas Witt
PROCEEDINGS OF THE 2024 JOINT INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS, LANGUAGE RESOURCES AND EVALUATION, LREC-COLING 2024, pp 4744-4749
01 Jan 2024
URL: https://aclanthology.org/2024.lrec-main.424/
Published, Version of Record (VoR) · CC BY-NC 4.0 · Open access

Abstract

Subjects: Computer Science, Artificial Intelligence; Computer Science, Interdisciplinary Applications; Language & Linguistics; Linguistics; Science & Technology; Computer Science; Social Sciences; Technology
Impact assessment is an evolving area of research that aims at measuring and predicting the potential effects of projects or programs on a variety of stakeholders. While measuring the impact of scientific research is a vibrant subdomain of impact assessment, a recurring obstacle in this specific area is the lack of an efficient framework that facilitates labeling and analysis of lengthy reports. To address this issue, we propose, implement, and evaluate a framework for automatically assessing the impact of scientific research projects by identifying pertinent sections in research reports that indicate potential impact. We leverage a mixed-method approach that combines manual annotation with supervised machine learning to extract these passages from project reports. We experiment with different machine learning algorithms, including traditional statistical models as well as pre-trained transformer language models. Our results show that our proposed method achieves accuracy scores of up to 0.81, and that our method generalizes to scientific research from different domains and different languages.
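The core task the abstract describes is a supervised passage classifier: given a section of a project report, decide whether it signals potential impact. As a minimal illustrative sketch only (the paper's actual pipeline combines manual annotation with statistical and transformer models; the toy data, tokenizer, and perceptron here are invented for demonstration):

```python
# Toy impact-relevance classifier: bag-of-words features + a perceptron.
# All training passages and labels below are made-up examples, not data
# from the paper.

def tokenize(text):
    return text.lower().split()

# Label 1 = impact-relevant passage, 0 = not impact-relevant.
train = [
    ("the project improved water quality for local communities", 1),
    ("results were disseminated to policymakers and industry partners", 1),
    ("we describe the experimental apparatus and sampling procedure", 0),
    ("the dataset was collected over a period of twelve months", 0),
]

# Build a fixed vocabulary from the training passages.
vocab = sorted({tok for text, _ in train for tok in tokenize(text)})
index = {tok: i for i, tok in enumerate(vocab)}

def featurize(text):
    # Term-count vector over the training vocabulary.
    vec = [0.0] * len(vocab)
    for tok in tokenize(text):
        if tok in index:
            vec[index[tok]] += 1.0
    return vec

weights = [0.0] * len(vocab)
bias = 0.0
for _ in range(10):  # a few perceptron epochs suffice on this toy set
    for text, label in train:
        x = featurize(text)
        score = sum(w * xi for w, xi in zip(weights, x)) + bias
        pred = 1 if score > 0 else 0
        if pred != label:
            delta = label - pred  # +1 or -1
            weights = [w + delta * xi for w, xi in zip(weights, x)]
            bias += delta

def predict(text):
    """Return 1 if the passage is classified as impact-relevant."""
    x = featurize(text)
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

print(predict("the project improved water quality for communities"))  # 1
print(predict("we describe the sampling procedure"))                  # 0
```

In the paper's setting this baseline would be replaced by the reported statistical models or fine-tuned transformer classifiers applied to annotated report sections; the sketch only shows the shape of the classification task.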
