Exploring COVID-related relationship extraction: Contrasting data sources and analyzing misinformation

Research output: Article › Scientific › peer-reviewed

1 Citation (Scopus)
11 Downloads (Pure)

Abstract

The COVID-19 pandemic presented an unparalleled challenge to global healthcare systems. A central issue revolves around the urgent need to swiftly amass critical biological and medical knowledge concerning the disease, its treatment, and containment. Remarkably, text data remains an underutilized resource in this context. In this paper, we delve into the extraction of COVID-related relations using transformer-based language models, including Bidirectional Encoder Representations from Transformers (BERT) and DistilBERT. Our analysis scrutinizes the performance of five language models, comparing information from both PubMed and Reddit, and assessing their ability to make novel predictions, including the detection of “misinformation.” Key findings reveal that, despite inherent differences, both PubMed and Reddit data contain remarkably similar information, suggesting that Reddit can serve as a valuable resource for rapidly acquiring information during times of crisis. Furthermore, our results demonstrate that language models can unveil previously unseen entities and relations, a crucial aspect in identifying instances of misinformation.
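The paper itself does not detail its pipeline here, but a common first step in transformer-based relation extraction with models like BERT is to wrap the two candidate entities in marker tokens before classification. The sketch below is a minimal, hypothetical illustration of that formatting step; the `[E1]`/`[E2]` marker scheme, the `mark_entities` helper, and the example sentence are assumptions, not the authors' implementation.

```python
def mark_entities(sentence: str, e1: str, e2: str) -> str:
    """Wrap two entity mentions in marker tokens, a common input
    format fed to transformer-based relation classifiers."""
    marked = sentence.replace(e1, f"[E1] {e1} [/E1]")
    marked = marked.replace(e2, f"[E2] {e2} [/E2]")
    return marked

# Example: a PubMed- or Reddit-style sentence with a drug-disease pair.
text = "Remdesivir was evaluated as a treatment for COVID-19."
print(mark_entities(text, "Remdesivir", "COVID-19"))
# [E1] Remdesivir [/E1] was evaluated as a treatment for [E2] COVID-19 [/E2].
```

In practice, the marked sentence would be tokenized and passed to a fine-tuned encoder whose output is a relation label (e.g. *treats*, *causes*, or *no relation*); contrasting the predicted relations across sources is one way to surface candidate misinformation.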

Original language: English
Article number: e26973
Journal: Heliyon
Volume: 10
Issue: 5
DOIs
Status: Published - 15 Mar 2024
OKM publication type: A1 Original article in a scientific journal

Publication forum level

  • Jufo level 1

ASJC Scopus subject areas

  • General
