Fine-Tuning BERT Model for Materials Named Entity Recognition
Conference proceeding


Xintong Zhao, Jane Greenberg, Yuan An and Xiaohua Tony Hu
2021 IEEE International Conference on Big Data (Big Data), pp 3717-3720
15 Dec 2021

Abstract

Keywords: adaptation models, analytical models, BERT, big data, bit error rate, materials science, named entity recognition, natural language processing, solid modeling, text recognition, transformers
Scientific literature presents a wellspring of cutting-edge knowledge for materials science, including valuable data (e.g., numerical data from experimental results, material properties, and structures). These data are critical for accelerating materials discovery through data-driven machine learning (ML) methods. The challenge is that the extensive and growing volume of publications makes it impossible for humans to manually extract and retain this knowledge. To this end, we explore a fine-tuned BERT model for extracting knowledge. Our preliminary results show that our fine-tuned BERT model reaches an F-score of 85% on the materials named entity recognition task. The paper covers background, related work, and methodology including tuning parameters, along with our overall performance evaluation. Our discussion offers insights into our results and points to directions for next steps.
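The fine-tuning setup the abstract describes — a pretrained BERT encoder topped with a per-token classification head for materials NER — can be sketched in PyTorch as below. This is a minimal illustration, not the paper's implementation: a small `TransformerEncoder` stands in for pretrained BERT, and the BIO tag set, vocabulary size, and hyperparameters are assumptions for the sake of a self-contained example.

```python
import torch
import torch.nn as nn

# Illustrative BIO tag set for materials entities; the paper's actual
# label inventory is not specified in this record.
TAGS = ["O", "B-MAT", "I-MAT"]

class TokenClassifier(nn.Module):
    """Encoder plus linear head: the standard fine-tuning setup for NER.
    A toy TransformerEncoder stands in here for pretrained BERT."""
    def __init__(self, vocab_size=1000, hidden=64, num_tags=len(TAGS)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, num_tags)  # per-token tag logits

    def forward(self, ids):
        return self.head(self.encoder(self.embed(ids)))

model = TokenClassifier()
ids = torch.randint(0, 1000, (2, 8))          # batch of 2 sentences, 8 tokens each
labels = torch.randint(0, len(TAGS), (2, 8))  # gold BIO tags (random here)
logits = model(ids)                           # shape: (2, 8, num_tags)
loss = nn.CrossEntropyLoss()(logits.view(-1, len(TAGS)), labels.view(-1))
loss.backward()                               # gradients for one fine-tuning step
```

In the actual fine-tuning regime, the encoder weights are initialized from a pretrained BERT checkpoint and updated jointly with the classification head; the reported 85% F-score would then be computed over predicted entity spans on a held-out test set.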

Metrics

20 Record Views
22 citations in Scopus

Details

UN Sustainable Development Goals (SDGs)

This publication has contributed to the advancement of the following goals:

#7 Affordable and Clean Energy

InCites Highlights

Data related to this publication, from InCites Benchmarking & Analytics tool:

Web of Science research areas
Computer Science, Artificial Intelligence
Computer Science, Information Systems
Computer Science, Theory & Methods