Open Access
BIO Web Conf.
Volume 41, 2021
The 4th International Conference on Bioinformatics, Biotechnology, and Biomedical Engineering (BioMIC 2021)
Article Number 04003
Number of page(s) 7
Section Bioinformatics and Data Mining
Published online 22 December 2021
  • “HUPO – What is Proteomics?” [Online]. Available: [Accessed: 24-Feb-2021].
  • A. Bateman, “UniProt: A worldwide hub of protein knowledge,” Nucleic Acids Res., vol. 47, no. D1, pp. D506–D515, Jan. 2019.
  • “Nomenclature and Symbolism for Amino Acids and Peptides: Recommendations 1983,” Eur. J. Biochem., vol. 138, no. 1, pp. 9–37, 1984.
  • D. Sakakibara et al., “Protein structure determination in living cells by in-cell NMR spectroscopy,” Nature, vol. 458, no. 7234, pp. 102–105, Mar. 2009.
  • “Home – CASP14.” [Online]. Available: [Accessed: 15-Apr-2021].
  • The Critical Assessment of protein Structure Prediction, “Artificial intelligence solution to a 50-year-old science challenge could ‘revolutionise’ medical research,” Press Release, 30-Nov-2020. [Online]. Available: [Accessed: 15-Apr-2021].
  • M. AlQuraishi, “End-to-end differentiable learning of protein structure,” bioRxiv, p. 265231, 14-Feb-2018.
  • J. M. Jumper, N. F. Faruk, K. F. Freed, and T. R. Sosnick, “Accurate calculation of side chain packing and free energy with applications to protein molecular dynamics,” PLoS Comput. Biol., vol. 14, no. 12, p. e1006342, Dec. 2018.
  • T. Lazaridis and M. Karplus, “Effective energy functions for protein structure prediction,” Curr. Opin. Struct. Biol., vol. 10, no. 2, pp. 139–145, Apr. 2000.
  • K. T. Schütt, H. E. Sauceda, P.-J. Kindermans, A. Tkatchenko, and K.-R. Müller, “SchNet – A deep learning architecture for molecules and materials,” J. Chem. Phys., vol. 148, no. 24, Dec. 2017.
  • J. Chen, J. Chen, G. Pinamonti, and C. Clementi, “Learning Effective Molecular Models from Experimental Observables,” J. Chem. Theory Comput., vol. 14, no. 7, pp. 3849–3858, Jul. 2018.
  • S. Chmiela, A. Tkatchenko, H. E. Sauceda, I. Poltavsky, K. T. Schütt, and K. R. Müller, “Machine learning of accurate energy-conserving molecular force fields,” Sci. Adv., vol. 3, no. 5, p. e1603015, May 2017.
  • J. S. Smith, O. Isayev, and A. E. Roitberg, “ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost,” Chem. Sci., vol. 8, no. 4, pp. 3192–3203, Mar. 2017.
  • J. S. Smith, B. Nebgen, N. Lubbers, O. Isayev, and A. E. Roitberg, “Less is more: Sampling chemical space with active learning,” J. Chem. Phys., vol. 148, no. 24, p. 241733, Jun. 2018.
  • J. Hermann, R. A. DiStasio, and A. Tkatchenko, “First-Principles Models for van der Waals Interactions in Molecules and Materials: Concepts, Theory, and Applications,” Chem. Rev., vol. 117, no. 6, pp. 4714–4758, Mar. 2017.
  • B. Nebgen et al., “Transferable Dynamic Molecular Charge Assignment Using Deep Neural Networks,” J. Chem. Theory Comput., vol. 14, no. 9, pp. 4687–4698, Sep. 2018.
  • S. T. John and G. Csányi, “Many-Body Coarse-Grained Interactions Using Gaussian Approximation Potentials,” J. Phys. Chem. B, vol. 121, no. 48, pp. 10934–10949, Dec. 2017.
  • M. K. Scherer, B. E. Husic, M. Hoffmann, F. Paul, H. Wu, and F. Noé, “Variational selection of features for molecular kinetics,” J. Chem. Phys., vol. 150, no. 19, p. 194108, May 2019.
  • A. J. Riesselman, J. B. Ingraham, and D. S. Marks, “Deep generative models of genetic variation capture the effects of mutations,” Nat. Methods, vol. 15, no. 10, pp. 816–822, Oct. 2018.
  • R. Rao et al., “MSA Transformer,” bioRxiv, p. 2021.02.12.430858, Feb. 2021.
  • M. AlQuraishi, “Parallelized Natural Extension Reference Frame: Parallelized Conversion from Internal to Cartesian Coordinates,” J. Comput. Chem., vol. 40, no. 7, pp. 885–892, Mar. 2019.
  • A. W. Senior et al., “Protein structure prediction using multiple deep neural networks in the 13th Critical Assessment of Protein Structure Prediction (CASP13),” Proteins Struct. Funct. Bioinforma., vol. 87, no. 12, pp. 1141–1148, Dec. 2019.
  • J. Jumper et al., “High Accuracy Protein Structure Prediction Using Deep Learning,” 2020.
  • “AlphaFold: a solution to a 50-year-old grand challenge in biology | DeepMind.” [Online]. Available: [Accessed: 07-Apr-2021].
  • A. W. Senior et al., “Improved protein structure prediction using potentials from deep learning,” Nature, vol. 577, no. 7792, pp. 706–710, Jan. 2020.
  • S. Wang, S. Sun, Z. Li, R. Zhang, and J. Xu, “Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model,” PLOS Comput. Biol., vol. 13, no. 1, p. e1005324, Jan. 2017.
  • Y. Liu, P. Palmedo, Q. Ye, B. Berger, and J. Peng, “Enhancing Evolutionary Couplings with Deep Convolutional Neural Networks,” Cell Syst., vol. 6, no. 1, pp. 65–74.e3, Jan. 2018.
  • J. Yang, I. Anishchenko, H. Park, Z. Peng, S. Ovchinnikov, and D. Baker, “Improved protein structure prediction using predicted inter-residue orientations,” bioRxiv, 18-Nov-2019.
  • B. Adhikari, “DEEPCON: Protein contact prediction using dilated convolutional neural networks with dropout,” Bioinformatics, vol. 36, no. 2, pp. 470–477, Jan. 2020.
  • C. Mirabello and B. Wallner, “rawMSA: End-to-end Deep Learning using raw Multiple Sequence Alignments,” PLoS One, vol. 14, no. 8, Aug. 2019.
  • S. M. Kandathil, J. G. Greener, A. M. Lau, and D. T. Jones, “Deep learning-based prediction of protein structure using learned representations of multiple sequence alignments,” bioRxiv, 27-Nov-2020.
  • W. Russ et al., “Evolution-based design of chorismate mutase enzymes,” bioRxiv, p. 2020.04.01.020487, Apr. 2020.
  • P. Tian, J. M. Louis, J. L. Baber, A. Aniana, and R. B. Best, “Co-Evolutionary Fitness Landscapes for Sequence Design,” Angew. Chem. Int. Ed., vol. 57, no. 20, pp. 5674–5678, May 2018.
  • T. Blazejewski, H. I. Ho, and H. H. Wang, “Synthetic sequence entanglement augments stability and containment of genetic information in cells,” Science, vol. 365, no. 6453, pp. 595–598, Aug. 2019.
  • A. Rives et al., “Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences,” bioRxiv, p. 622803, 29-Apr-2019.
  • E. C. Alley, G. Khimulya, S. Biswas, M. AlQuraishi, and G. M. Church, “Unified rational protein engineering with sequence-based deep representation learning,” Nat. Methods, vol. 16, no. 12, pp. 1315–1322, Dec. 2019.
  • R. Rao, J. Meier, T. Sercu, S. Ovchinnikov, and A. Rives, “Transformer protein language models are unsupervised structure learners,” bioRxiv, p. 2020.12.15.422761, 15-Dec-2020.
  • A. Madani et al., “ProGen: Language modeling for protein generation,” bioRxiv, p. 2020.03.07.982272, 08-Mar-2020.
  • M. Heinzinger et al., “Modeling the language of life – Deep learning protein sequences,” bioRxiv, p. 614313, 19-Apr-2019.
  • A. Vaswani et al., “Attention is all you need,” in Advances in Neural Information Processing Systems, vol. 2017-December, pp. 5999–6009, 2017.
  • J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” in Proc. NAACL-HLT 2019, vol. 1, pp. 4171–4186, Oct. 2018.
  • D. T. Jones and S. M. Kandathil, “High precision in protein contact prediction using fully convolutional neural networks and minimal sequence features,” Bioinformatics, vol. 34, no. 19, pp. 3308–3315, Oct. 2018.
  • J. Ingraham, A. Riesselman, C. Sander, and D. Marks, “Learning Protein Structure With A Differentiable Simulator,” Sep. 2018.
  • J. Xu, M. McPartlon, and J. Li, “Improved protein structure prediction by deep learning irrespective of co-evolution information,” bioRxiv, 12-Oct-2020.
  • N. Bhattacharya et al., “Single layers of attention suffice to predict protein contacts,” bioRxiv, p. 2020.12.21.423882, 22-Dec-2020.
  • A. Elnaggar et al., “ProtTrans: Towards Cracking the Language of Life’s Code Through Self-Supervised Deep Learning and High Performance Computing,” bioRxiv, Jul. 2020.
  • T. Bepler and B. Berger, “Learning protein sequence embeddings using information from structure,” arXiv, Feb. 2019.
  • J. Vig, A. Madani, L. R. Varshney, C. Xiong, R. Socher, and N. F. Rajani, “BERTology Meets Biology: Interpreting Attention in Protein Language Models,” bioRxiv, Jun. 2020.
  • R. Rao et al., “Evaluating Protein Transfer Learning with TAPE,” bioRxiv, Jun. 2019.
  • A. X. Lu, A. X. Lu, and A. Moses, “Evolution Is All You Need: Phylogenetic Augmentation for Contrastive Learning,” arXiv, Dec. 2020.
  • P. Sturmfels, J. Vig, A. Madani, and N. F. Rajani, “Profile Prediction: An Alignment-Based Pre-Training Task for Protein Sequence Models,” arXiv, Nov. 2020.
  • T. Sercu et al., “Neural Potts Model,” OpenReview, pp. 1–13, 2020.
  • S. Raman et al., “Structure prediction for CASP8 with all-atom refinement using Rosetta,” Proteins Struct. Funct. Bioinforma., vol. 77, suppl. 9, pp. 89–99, 2009.
  • M. S. I. Bhuyan and X. Gao, “A protein-dependent side-chain rotamer library,” BMC Bioinformatics, vol. 12, suppl. 14, 2011.
  • M. V. Shapovalov and R. L. Dunbrack, “A smoothed backbone-dependent rotamer library for proteins derived from adaptive kernel density estimates and regressions,” Structure, vol. 19, no. 6, pp. 844–858, Jun. 2011.
  • K. Liu et al., “Prediction of amino acid side chain conformation using a deep neural network.”
  • J. E. King and D. Ryan Koes, “SidechainNet: An All-Atom Protein Structure Dataset for Machine Learning,” 2020.
  • M. AlQuraishi, “ProteinNet: A standardized data set for machine learning of protein structure,” BMC Bioinformatics, vol. 20, no. 1, p. 311, Jun. 2019.
  • T. Wu, Z. Guo, J. Hou, and J. Cheng, “DeepDist: real-value inter-residue distance prediction with deep residual convolutional network,” BMC Bioinformatics, vol. 22, no. 1, p. 30, Dec. 2021.
  • N. Hiranuma, H. Park, M. Baek, I. Anishchenko, J. Dauparas, and D. Baker, “Improved protein structure refinement guided by deep learning based accuracy estimation,” Nat. Commun., vol. 12, no. 1, p. 1340, Dec. 2021.
