Computational biology

Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems. The field is broadly defined and includes foundations in computer science, applied mathematics, animation, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, ecology, evolution, anatomy, neuroscience, and visualization.

Computational biology is different from biological computation, which is a subfield of computer science and computer engineering using bioengineering and biology to build computers, but is similar to bioinformatics, which is an interdisciplinary science using computers to store and process biological data.

Introduction

Computational biology, a term sometimes used interchangeably with bioinformatics, is the science of using biological data to develop algorithms and to uncover relationships among various biological systems. Prior to the advent of computational biology, biologists did not have access to large amounts of data. Researchers were able to develop analytical methods for interpreting biological information, but were unable to share them quickly among colleagues.

Bioinformatics began to develop in the early 1970s. It was considered the science of analyzing informatics processes of various biological systems. At this time, research in artificial intelligence was using network models of the human brain in order to generate new algorithms. This use of biological data to develop other fields pushed biological researchers to revisit the idea of using computers to evaluate and compare large data sets. By 1982, information was being shared amongst researchers through the use of punch cards. The amount of data being shared began to grow exponentially by the end of the 1980s. This required the development of new computational methods in order to quickly analyze and interpret relevant information.

Since the late 1990s, computational biology has become an important part of developing emerging technologies for the field of biology. The terms computational biology and evolutionary computation have similar names but should not be confused. Unlike computational biology, evolutionary computation is not concerned with modeling and analyzing biological data; it instead creates algorithms based on the ideas of evolution across species. Research in this field, whose methods are sometimes referred to as genetic algorithms, can nonetheless be applied to computational biology. While evolutionary computation is not inherently a part of computational biology, computational evolutionary biology is a subfield of it.

Computational biology has been used to help sequence the human genome, create accurate models of the human brain, and assist in modeling biological systems.

Computational anatomy

Computational anatomy is a discipline focusing on the study of anatomical shape and form at the visible or gross anatomical (50–100 μm) scale of morphology. It involves the development and application of computational, mathematical and data-analytical methods for modeling and simulation of biological structures. The field is broadly defined and includes foundations in anatomy, applied mathematics and pure mathematics, machine learning, computational mechanics, computational science, medical imaging, neuroscience, physics, probability, and statistics; it also has strong connections with fluid mechanics and geometric mechanics. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. In this respect it is similar in spirit to computational linguistics, a discipline that focuses on linguistic structures rather than on the sensor acting as the transmission and communication medium. Due to the availability of dense 3D measurements via technologies such as magnetic resonance imaging (MRI), computational anatomy has emerged as a subfield of medical imaging and bioengineering for extracting anatomical coordinate systems at the morphome scale in 3D.

In computational anatomy, the diffeomorphism group is used to study different coordinate systems via coordinate transformations, generated via the Lagrangian and Eulerian velocities of flow from one anatomical configuration in ℝ³ to another. Computational anatomy intersects the study of Riemannian manifolds where groups of diffeomorphisms are the central focus, and it intersects with high-dimensional theories of shape emerging from the field of shape statistics. The metric structures in computational anatomy are related in spirit to morphometrics, with the distinction that computational anatomy focuses on an infinite-dimensional space of coordinate systems transformed by a diffeomorphism, hence the central use of the terminology diffeomorphometry, the metric-space study of coordinate systems via diffeomorphisms. At computational anatomy's heart is the comparison of shape by recognizing in one shape the other. This connects it to D'Arcy Wentworth Thompson's developments in On Growth and Form, which have led to scientific explanations of morphogenesis, the process by which patterns are formed in biology. The original formulation of computational anatomy is as a generative model of shape and form from exemplars acted upon via transformations.
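
In one common formulation (the large deformation diffeomorphic metric mapping, or LDDMM, framework, which is a standard but not the only possible choice), the coordinate transformations are generated as the endpoints of flows of time-dependent velocity fields:

\[
\frac{d\varphi_t}{dt} = v_t(\varphi_t), \qquad \varphi_0 = \mathrm{id}, \qquad t \in [0,1],
\]

where \(v_t\) is the Eulerian velocity field on \(\mathbb{R}^3\) and the endpoint \(\varphi_1\) carries one anatomical configuration onto another through the group action \(m \mapsto \varphi_1 \cdot m\).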

The spirit of this discipline shares strong overlap with areas such as computer vision and the kinematics of rigid bodies, where objects are studied by analyzing the groups responsible for the movement in question. It is a branch of the image analysis and pattern theory school at Brown University pioneered by Ulf Grenander. Making spaces of anatomical patterns in pattern theory into a metric space is one of the fundamental operations, since being able to cluster and recognize anatomical configurations often requires a metric of close and far between shapes. The diffeomorphometry metric of computational anatomy measures how far two diffeomorphic changes of coordinates are from each other, which in turn induces a metric on the shapes and images indexed to them. The models of metric pattern theory, in particular the group action on the orbit of shapes and forms, are a central tool in the formal definitions of computational anatomy.
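
Under the same flow-based formulation sketched above, the diffeomorphometry metric can be written as the minimal length of a velocity path joining two changes of coordinates, and it induces a distance on shapes through the group action:

\[
\rho(\varphi, \psi) = \inf_{v \,:\, \varphi_0 = \varphi,\ \varphi_1 = \psi} \int_0^1 \lVert v_t \rVert_V \, dt,
\qquad
d(m_1, m_2) = \inf \{\, \rho(\mathrm{id}, \varphi) : \varphi \cdot m_1 = m_2 \,\},
\]

where \(\lVert \cdot \rVert_V\) is a norm on a chosen space \(V\) of smooth velocity fields; the choice of \(V\) is a modeling assumption.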

Computational biomodeling

Computational biomodeling is a field concerned with building computer models of biological systems. Computational biomodeling aims to develop and use visual simulations in order to assess the complexity of biological systems. This is accomplished through the use of specialized algorithms and visualization software. These models allow for prediction of how systems will react under different environments, which is useful for determining whether a system is robust. A robust biological system is one that "maintain[s] its state and functions against external and internal perturbations", which is essential for a biological system to survive. Computational biomodeling generates a large archive of such data, allowing for analysis by multiple users. While current techniques focus on small biological systems, researchers are working on approaches that will allow larger networks to be analyzed and modeled. A majority of researchers believe that this will be essential in developing modern medical approaches to creating new drugs and gene therapy. A useful modeling approach is to use Petri nets, via tools such as esyN.
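
As a rough illustration of the Petri-net idea (a minimal sketch, not the esyN tool; the places, transitions and token counts below are hypothetical), a biological reaction network can be simulated by repeatedly firing enabled transitions:

```python
import random

# Minimal, illustrative Petri-net simulator (a sketch, not the esyN tool).
# Places hold token counts; a transition fires by consuming tokens from its
# input places and producing tokens in its output places.

# Hypothetical model: enzyme kinetics  E + S -> ES -> E + P
places = {"E": 5, "S": 20, "ES": 0, "P": 0}
transitions = [
    {"name": "bind",    "inputs": {"E": 1, "S": 1}, "outputs": {"ES": 1}},
    {"name": "convert", "inputs": {"ES": 1},        "outputs": {"E": 1, "P": 1}},
]

def enabled(t):
    """A transition is enabled if every input place holds enough tokens."""
    return all(places[p] >= n for p, n in t["inputs"].items())

for step in range(100):
    choices = [t for t in transitions if enabled(t)]
    if not choices:                      # deadlock: nothing can fire
        break
    t = random.choice(choices)           # fire one enabled transition at random
    for p, n in t["inputs"].items():
        places[p] -= n
    for p, n in t["outputs"].items():
        places[p] += n

print(places)   # most of S is eventually converted to P
```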

Computational genomics (Computational genetics)

Computational genomics is a field within genomics which studies the genomes of cells and organisms; it is often referred to as computational and statistical genetics. The Human Genome Project is one example of computational genomics. This project set out to sequence the entire human genome into a set of data and, once fully realized, could allow doctors to analyze the genome of an individual patient. This opens the possibility of personalized medicine, prescribing treatments based on an individual's pre-existing genetic patterns. The project has spurred many similar efforts, and researchers are now looking to sequence the genomes of animals, plants, bacteria, and all other types of life.
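
As a toy illustration of the personalized-medicine idea (the variant names and recommendations below are hypothetical placeholders, not clinical guidance), an individual's known variants can be checked against a lookup table of genotype-treatment associations:

```python
# Sketch of genotype-guided prescribing. The variant-to-recommendation table
# and the patient's variants are hypothetical, not clinical guidance.
recommendations = {
    "VARIANT_A": "use standard dose of drug X",
    "VARIANT_B": "reduce dose of drug X",
    "VARIANT_C": "avoid drug X; consider drug Y",
}

patient_variants = ["VARIANT_B", "VARIANT_D"]   # from the patient's sequenced genome

for variant in patient_variants:
    advice = recommendations.get(variant, "no known association")
    print(f"{variant}: {advice}")
```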

One of the main ways that genomes are compared is by homology. Homology is the study of biological structures and nucleotide sequences in different organisms that come from a common ancestor. Research suggests that between 80 and 90% of genes in newly sequenced prokaryotic genomes can be identified this way.
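
As a toy illustration of homology-based comparison (a sketch under simplifying assumptions, not a production aligner such as BLAST), the percent identity of two pre-aligned nucleotide sequences can be computed directly:

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences
    (gaps written as '-'). A toy stand-in for real homology searches."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b and a != "-" for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical aligned fragments from two prokaryotic genes
print(percent_identity("ATGCC-TGA", "ATGCCATGA"))  # ~88.9
```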

This field is still in development. A largely untouched problem in computational genomics is the analysis of intergenic regions; studies suggest that roughly 97% of the human genome consists of these regions. Researchers in computational genomics are working on understanding the functions of non-coding regions of the human genome through the development of computational and statistical methods and via large consortium projects such as ENCODE (the Encyclopedia of DNA Elements) and the Roadmap Epigenomics Project.
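
A back-of-the-envelope calculation in this spirit, using hypothetical gene coordinates rather than real annotation, estimates how much of a chromosome is intergenic:

```python
# Sketch: estimate the fraction of a chromosome that is intergenic,
# given gene coordinates (hypothetical numbers, half-open intervals).
chromosome_length = 1_000_000
genes = [(10_000, 25_000), (40_000, 42_500), (300_000, 380_000)]  # (start, end)

def merge(intervals):
    """Merge overlapping intervals so shared bases are not double-counted."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

genic = sum(end - start for start, end in merge(genes))
print(f"intergenic fraction: {1 - genic / chromosome_length:.1%}")  # ~90.2%
```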

Computational neuroscience

Computational neuroscience is the study of brain function in terms of the information processing properties of the structures that make up the nervous system. It is a subset of the field of neuroscience, and looks to analyze brain data to create practical applications. It looks to model the brain in order to examine specific aspects of the neurological system. Various types of models of the brain include:

  • Realistic Brain Models: These models look to represent every aspect of the brain, including as much detail at the cellular level as possible. Realistic models provide the most information about the brain, but also have the largest margin for error. More variables in a brain model create the possibility for more error to occur. These models do not account for parts of the cellular structure that scientists do not know about. Realistic brain models are the most computationally heavy and the most expensive to implement.
  • Simplifying Brain Models: These models look to limit the scope of a model in order to assess a specific physical property of the neurological system. This allows the intensive computational problems to be solved, and reduces the amount of potential error relative to a realistic brain model (a minimal sketch of such a model follows this list).
It is the work of computational neuroscientists to improve the algorithms and data structures currently used to increase the speed of such calculations.
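
As an example of a simplifying brain model, a leaky integrate-and-fire neuron reduces a cell to a single membrane-voltage equation. The sketch below uses illustrative parameter values chosen for demonstration, not values drawn from any particular study:

```python
# Leaky integrate-and-fire neuron: a classic "simplifying" brain model.
# Parameter values are illustrative, not taken from any particular study.
dt, t_max = 0.1e-3, 0.1              # time step and duration (s)
tau_m     = 20e-3                    # membrane time constant (s)
v_rest    = -70e-3                   # resting potential (V)
v_thresh  = -54e-3                   # spike threshold (V)
v_reset   = -80e-3                   # reset potential after a spike (V)
r_m       = 10e6                     # membrane resistance (ohm)
i_inject  = 1.8e-9                   # constant injected current (A)

v, spikes = v_rest, 0
for _ in range(int(t_max / dt)):
    # dv/dt = (-(v - v_rest) + R*I) / tau_m   (forward-Euler integration)
    v += dt * (-(v - v_rest) + r_m * i_inject) / tau_m
    if v >= v_thresh:                # threshold crossing counts as a spike
        spikes += 1
        v = v_reset

print(f"{spikes} spikes in {t_max * 1000:.0f} ms")
```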

Computational pharmacology

Computational pharmacology (from a computational biology perspective) is "the study of the effects of genomic data to find links between specific genotypes and diseases and then screening drug data". The pharmaceutical industry has required a shift in its methods for analyzing drug data. Pharmacologists were long able to use Microsoft Excel to compare chemical and genomic data related to the effectiveness of drugs, but the industry has reached what is referred to as the Excel barricade, which arises from the limited number of cells accessible in a spreadsheet. This development led to the need for computational pharmacology: scientists and researchers develop computational methods to analyze these massive data sets, allowing for efficient comparison of the notable data points and the development of more accurate drugs.
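
The kind of join that quickly outgrows a spreadsheet can be sketched as follows; the gene, variant and response values are hypothetical and serve only to show the shape of the analysis:

```python
import pandas as pd

# Sketch: linking genotypes to drug response. Column names and values are
# hypothetical, not drawn from any real study.
genotypes = pd.DataFrame({
    "patient": ["p1", "p2", "p3", "p4"],
    "variant": ["CYP2D6*1", "CYP2D6*4", "CYP2D6*1", "CYP2D6*4"],
})
responses = pd.DataFrame({
    "patient":  ["p1", "p2", "p3", "p4"],
    "drug":     ["drugA"] * 4,
    "response": [0.82, 0.31, 0.78, 0.25],   # e.g. a normalized response score
})

merged = genotypes.merge(responses, on="patient")
# Mean response per genotype: a first, crude genotype-to-drug-effect link.
print(merged.groupby("variant")["response"].mean())
```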

Analysts project that if major medications fail due to patents, computational biology will be necessary to replace the current drugs on the market. Doctoral students in computational biology are being encouraged to pursue careers in industry rather than take post-doctoral positions. This is a direct result of major pharmaceutical companies needing more qualified analysts of the large data sets required for producing new drugs.

Computational evolutionary biology

Computational biology has assisted the field of evolutionary biology in many capacities. These include:

  • Using DNA data to reconstruct the tree of life with computational phylogenetics
  • Fitting population genetics models (either forward time or backward time) to DNA data to make inferences about demographic or selective history
  • Building population genetics models of evolutionary systems from first principles in order to predict what is likely to evolve (a minimal forward-time sketch follows this list).
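
As a toy illustration of the last point, a minimal forward-time Wright–Fisher simulation tracks the frequency of a single allele under selection and genetic drift; the population size and selection coefficient below are arbitrary illustrative assumptions:

```python
import random

# Minimal forward-time Wright-Fisher simulation of one biallelic locus.
# N and s are arbitrary illustrative values, not fitted to any data set.
N = 1000          # number of gene copies sampled each generation
s = 0.02          # selective advantage of allele A over allele a
p = 0.05          # starting frequency of allele A

for generation in range(500):
    # Selection shifts the expected frequency, then drift resamples N copies.
    p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
    p = sum(random.random() < p_sel for _ in range(N)) / N
    if p in (0.0, 1.0):               # allele lost or fixed
        break

print(f"generation {generation}: allele A frequency = {p:.3f}")
```
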
Cancer computational biology

Cancer computational biology is a field that aims to determine the future mutations in cancer through an algorithmic approach to analyzing data. Research in this field has led to the use of high-throughput measurement, which allows for the gathering of millions of data points using robotics and other sensing devices. This data is collected from DNA, RNA, and other biological structures. Areas of focus include determining the characteristics of tumors, analyzing molecules that are deterministic in causing cancer, and understanding how the human genome relates to the causation of tumors and cancer.
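
One of the simplest analyses over such high-throughput data is counting how often each gene is mutated across tumor samples; the data below are hypothetical and merely illustrate the bookkeeping:

```python
from collections import Counter

# Sketch: count how often each gene is mutated across tumor samples.
# The sample data are hypothetical.
sample_mutations = {
    "tumor_01": ["TP53", "KRAS", "EGFR"],
    "tumor_02": ["TP53", "PIK3CA"],
    "tumor_03": ["KRAS", "TP53"],
    "tumor_04": ["BRAF"],
}

counts = Counter(gene for genes in sample_mutations.values() for gene in genes)
for gene, n in counts.most_common(3):
    print(f"{gene}: mutated in {n}/{len(sample_mutations)} samples")
```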

Software and tools

Computational biologists use a wide range of software. These range from command-line programs to graphical and web-based programs.

Open source software

Open source software provides a platform to develop computational biological methods. Specifically, open source means that anybody can access the software developed in research. PLOS cites four main reasons for the use of open source software:

  • Reproducibility: This allows for researchers to use the exact methods used to calculate the relations between biological data.
  • Faster Development: developers and researchers do not have to reinvent existing code for minor tasks. Instead they can use pre-existing programs to save time on the development and implementation of larger projects.
  • Increased quality: Having input from multiple researchers studying the same topic provides a layer of assurance that errors will not remain in the code.
  • Long-term availability: Open source programs are not tied to any business or patent. This allows them to be posted to multiple web pages and ensures that they remain available in the future.
Conferences

There are several large conferences that are concerned with computational biology. Some notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB) and Research in Computational Molecular Biology (RECOMB).

Journals

There are numerous journals dedicated to computational biology. Some notable examples include the Journal of Computational Biology and PLOS Computational Biology. PLOS Computational Biology is a peer-reviewed, open-access journal that has published many notable research projects in the field; it provides reviews of software, tutorials for open source software, and information on upcoming computational biology conferences. Its content may be openly used provided the author is cited. Recently, a new open-access journal, Computational Molecular Biology, was launched.

Relationship to other fields

Computational biology, bioinformatics and mathematical biology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science. The NIH describes computational/mathematical biology as the use of computational/mathematical approaches to address theoretical and experimental questions in biology and, by contrast, bioinformatics as the application of information science to understand complex life-sciences data.

Specifically, the NIH defines:

Computational biology: The development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems.

Bioinformatics: Research, development, or application of computational tools and approaches for expanding the use of biological, medical, behavioral or health data, including those to acquire, store, organize, archive, analyze, or visualize such data.

While each field is distinct, there may be significant overlap at their interface.
