Open Access

BIO Web Conf. Volume 232, 2026
2026 16th International Conference on Bioscience, Biochemistry and Bioinformatics (ICBBB 2026)

| | |
|---|---|
| Article Number | 06001 |
| Number of page(s) | 9 |
| Section | AI-Driven Biomedical Text Mining and Intelligent Disease Diagnosis |
| DOI | https://doi.org/10.1051/bioconf/202623206001 |
| Published online | 24 April 2026 |
- A. Cesaro, S.C. Hoffman, P. Das, C. de la Fuente-Nunez, Challenges and applications of artificial intelligence in infectious diseases and antimicrobial resistance, npj Antimicrobials and Resistance 3, 2 (2025).
- V. Domazetoski, H. Kreft, H. Bestova, P. Wieder, R. Koynov, A. Zarei, P. Weigelt, Using large language models to extract plant functional traits from unstructured text, Applications in Plant Sciences 13, 70011 (2025). https://doi.org/10.1002/aps3.70011
- D. Domingo-Fernández, Y. Gadiya, S. Mubeen, T.J. Bollerman, M.D. Healy, S. Chanana, R.G. Sadovsky, D. Healey, V. Colluru, Modern drug discovery using ethnobotany: A large-scale cross-cultural analysis of traditional medicine reveals common therapeutic uses, iScience 26, 107729 (2023).
- R.E. Turner, An introduction to transformers, arXiv preprint arXiv:2304.10557 (2023).
- J. Devlin, M.W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (2019), pp. 4171–4186
- T. Brown, B. Mann, N. Ryder, M. Subbiah, J.D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., Language Models are Few-Shot Learners, in Advances in Neural Information Processing Systems, edited by H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin (Curran Associates, Inc., 2020), Vol. 33, pp. 1877–1901
- J. Lee, W. Yoon, S. Kim, D. Kim, S. Kim, C.H. So, J. Kang, BioBERT: a pre-trained biomedical language representation model for biomedical text mining, Bioinformatics 36, 1234 (2020).
- S. Masoumi, H. Amirkhani, N. Sadeghian, S. Shahraz, Natural language processing (NLP) to facilitate abstract review in medical research: the application of BioBERT to exploring the 20-year use of NLP in medical research, Systematic Reviews 13, 107 (2024).
- M. Guo, M. Guo, E.T. Dougherty, F. Jin, MSQ-BioBERT: Ambiguity Resolution to Enhance BioBERT Medical Question-Answering, in Proceedings of the ACM Web Conference 2023 (Association for Computing Machinery, New York, NY, USA, 2023), WWW '23, pp. 4020–4028, ISBN 9781450394161
- C.H. Wei, A. Allot, P.T. Lai, R. Leaman, S. Tian, L. Luo, Q. Jin, Z. Wang, Q. Chen, Z. Lu, PubTator 3.0: an AI-powered literature resource for unlocking biomedical knowledge, Nucleic Acids Research 52, W540 (2024). https://doi.org/10.1093/nar/gkae235
- V. Kumar, G. Shankar, Y. Akhter, Deciphering drug discovery and microbial pathogenesis research in tuberculosis during the two decades of postgenomic era using entity mining approach, Archives of Microbiology 206, 46 (2024).
- T. He, K. Kreimeyer, M. Najjar, J. Spiker, M. Fatteh, V. Anagnostou, T. Botsis, Artificial Intelligence-assisted Biomedical Literature Knowledge Synthesis to Support Decision-making in Precision Oncology, in AMIA Annual Symposium Proceedings (2025), Vol. 2024, p. 513
- H.W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma et al., Scaling instruction-finetuned language models (2022), https://arxiv.org/abs/2210.11416
- R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, T.B. Hashimoto, Stanford Alpaca: An instruction-following LLaMA model (2023), https://github.com/tatsu-lab/stanford_alpaca
- M. Wu, A. Waheed, C. Zhang, M. Abdul-Mageed, A.F. Aji, LaMini-LM: A diverse herd of distilled models from large-scale instructions, CoRR abs/2304.14402 (2023), https://arxiv.org/abs/2304.14402
- Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, V. Stoyanov, RoBERTa: A robustly optimized BERT pretraining approach, arXiv preprint arXiv:1907.11692 (2019).
- P. Ruas, F.M. Couto, NILINKER: Attention-based approach to NIL entity linking, Journal of Biomedical Informatics 132, 104137 (2022). https://doi.org/10.1016/j.jbi.2022.104137
- Á. Alonso Casero, Named entity recognition and normalization in biomedical literature: a practical case in SARS-CoV-2 literature, Ph.D. thesis, ETSI_Informatica (2021), https://oa.upm.es/67933/
- R. Ahmed, P. Berntsson, A. Skafte, S.K. Rashed, M. Klang, A. Barvesten, O. Olde, W. Lindholm, A.L. Arrizabalaga, P. Nugues et al., EasyNER: A customizable easy-to-use pipeline for deep learning- and dictionary-based named entity recognition from medical text, arXiv preprint arXiv:2304.07805 (2023).
- L.J. Farrell, R. Lo, J.J. Wanford, A. Jenkins, A. Maxwell, L.J.V. Piddock, Revitalizing the drug pipeline: AntibioticDB, an open access database to aid antibacterial research and development, Journal of Antimicrobial Chemotherapy 73, 2284 (2018). https://doi.org/10.1093/jac/dky208

