Distinguishing word identity and sequence context in DNA language models.
- Additional Information
- Source:
Publisher: BioMed Central Country of Publication: England NLM ID: 100965194 Publication Model: Electronic Cited Medium: Internet ISSN: 1471-2105 (Electronic) Linking ISSN: 14712105 NLM ISO Abbreviation: BMC Bioinformatics Subsets: MEDLINE
- Publication Data:
Original Publication: [London] : BioMed Central, 2000-
- Subject:
- Abstract:
Transformer-based large language models (LLMs) are well suited to biological sequence data because of analogies to natural language: tokenization generates a concept of "words", through which complex relationships can be learned. When trained with masked token prediction, the models learn both token sequence identity and larger sequence context. We developed methodology to interrogate model learning, which is relevant both for the interpretability of the model and for evaluating its potential for specific tasks. We used DNABERT, a DNA language model trained on the human genome with overlapping k-mers as tokens. To gain insight into the model's learning, we interrogated how the model performs predictions, extracted token embeddings, and defined a fine-tuning benchmarking task: predicting the next tokens of different sizes without overlaps. This task evaluates foundation models without interrogating specific genome biology; it does not depend on the tokenization strategy, vocabulary size, dictionary, or number of training parameters. Moreover, there is no leakage of information from token identity into the prediction task, which makes it particularly useful for evaluating the learning of sequence context. We discovered that the model with overlapping k-mers struggles to learn larger sequence context; instead, the learned embeddings largely represent token sequence. Still, good performance is achieved for genome-biology-inspired fine-tuning tasks. Models with overlapping tokens may be used for tasks where larger sequence context is of less relevance but the token sequence directly represents the desired learning features. This emphasizes the need to interrogate knowledge representation in biological LLMs.
(© 2024. The Author(s).)
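The abstract's finding that a model trained on overlapping k-mers mainly learns token identity follows directly from the tokenization itself, as the short sketch below illustrates. This is not code from the paper: the function name kmer_tokenize is hypothetical, and k = 6 is assumed (one of DNABERT's standard k-mer sizes).

def kmer_tokenize(seq: str, k: int = 6) -> list[str]:
    # Split a DNA sequence into overlapping k-mers with stride 1,
    # the tokenization scheme DNABERT uses.
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

tokens = kmer_tokenize("ATGGCTAAC")
print(tokens)  # ['ATGGCT', 'TGGCTA', 'GGCTAA', 'GCTAAC']

# Adjacent tokens share k - 1 = 5 bases, so a masked token such as
# 'TGGCTA' is almost fully determined by its immediate neighbors.
# Masked token prediction can thus be solved from local token identity
# alone; the paper's benchmark of predicting next tokens without
# overlaps removes this leakage and so probes sequence context directly.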
- Contributed Indexing:
Keywords: AI and Biology; DNA language models; Deep Learning; Foundation Models; Genomics; Knowledge Representation
- Registry Number:
9007-49-2 (DNA)
- Entry Date(s):
Date Created: 20240913 Date Completed: 20240914 Latest Revision: 20240916
- Update Code:
20250114
- PMCID:
PMC11395559
- DOI:
10.1186/s12859-024-05869-5
- PMID:
39272021