
    The RCSB Protein Data Bank: a redesigned query system and relational database based on the mmCIF schema

    The Protein Data Bank (PDB) is the central worldwide repository for three-dimensional (3D) structure data of biological macromolecules. The Research Collaboratory for Structural Bioinformatics (RCSB) has completely redesigned its resource for the distribution and query of 3D structure data. The re-engineered site is currently in public beta test at http://pdbbeta.rcsb.org. The new site expands the functionality of the existing site by providing structure data in greater detail and uniformity, improved query capabilities, and enhanced analysis tools. A key new feature is the integration and searchability of data from over 20 other sources covering genomic, proteomic and disease relationships. The current capabilities of the re-engineered site, which will become the RCSB production site at http://www.pdb.org in late 2005, are described.
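
    The redesigned query system described above dates from 2005; present-day RCSB PDB serves structure files over HTTPS. As a minimal sketch only (assuming the current files.rcsb.org download endpoint, which is not part of the system described in the abstract), one can retrieve an entry in the mmCIF format that the schema is based on:

# Minimal sketch: download a PDB entry in mmCIF format.
# Assumes the present-day files.rcsb.org endpoint, not the
# 2005 beta site described in the abstract.
import urllib.request

def fetch_mmcif(pdb_id: str) -> str:
    """Return the mmCIF text for a four-character PDB ID."""
    url = f"https://files.rcsb.org/download/{pdb_id.upper()}.cif"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    cif = fetch_mmcif("4hhb")  # hemoglobin, a classic test entry
    # mmCIF is a line-oriented key-value/table format; show the header.
    print("\n".join(cif.splitlines()[:5]))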

    Complexity, BioComplexity, the Connectionist Conjecture and Ontology of Complexity

    This paper develops and integrates major ideas and concepts on complexity and biocomplexity: the Connectionist Conjecture, a universal ontology of complexity, the irreducible complexity of totality and inherent randomness, the perpetual evolution of information, the emergence of criticality, and the equivalence of symmetry and complexity. This paper introduces the Connectionist Conjecture, which states that the one and only representation of Totality is the connectionist one, i.e., in terms of nodes and edges. This paper also introduces the idea of a Universal Ontology of Complexity and develops concepts in that direction. The paper also develops ideas and concepts on the perpetual evolution of information and the irreducibility and computability of totality, all in the context of the Connectionist Conjecture. The paper indicates that control and communication are the prime functionals responsible for the symmetry and complexity of complex phenomena. The paper takes the stand that the phenomenon of life (including its evolution) is probably the nearest to what we can describe with the term “complexity”. The paper also assumes that signaling and communication within the living world, and of the living world with the environment, create the connectionist structure of biocomplexity. With life and its evolution as the substrate, the paper develops ideas towards the ontology of complexity. The paper introduces new complexity-theoretic interpretations of fundamental biomolecular parameters. The paper also develops ideas on the methodology to determine the complexity of “true” complex phenomena.
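
    The Connectionist Conjecture above holds that any representation of Totality reduces to nodes and edges. As an illustrative sketch only (the paper is conceptual and defines no code or data format; all names below are invented), a minimal adjacency-list graph shows what such a node-and-edge representation looks like concretely:

# Illustrative only: a minimal node-and-edge (connectionist)
# representation as an adjacency list. The paper defines no such
# data structure; names here are hypothetical.
from collections import defaultdict

class Graph:
    def __init__(self):
        self.edges = defaultdict(set)  # node -> set of neighbors

    def connect(self, a, b):
        """Add an undirected edge between nodes a and b."""
        self.edges[a].add(b)
        self.edges[b].add(a)

# Example: signaling links between a living system and its surroundings.
g = Graph()
g.connect("cell", "environment")
g.connect("cell", "genome")
print(dict(g.edges))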

    Federating structural models and data: Outcomes from a workshop on archiving integrative structures

    Structures of biomolecular systems are increasingly computed by integrative modeling. In this approach, a structural model is constructed by combining information from multiple sources, including varied experimental methods and prior models. In 2019, a Workshop was held as a Biophysical Society Satellite Meeting to assess progress and discuss further requirements for archiving integrative structures. The primary goal of the Workshop was to build consensus for addressing the challenges involved in creating common data standards, building methods for federated data exchange, and developing mechanisms for validating integrative structures. A summary of the Workshop and the recommendations that emerged are presented here.
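
    The Workshop's recommendations concern data standards and federated exchange rather than any particular software, but a sketch helps make the archiving problem concrete. Assuming an entirely hypothetical record layout (none of these field names come from the Workshop; the actual community standard is the IHM extension of the PDBx/mmCIF dictionary), a federated entry for an integrative structure might bundle the model with provenance for each input source:

# Hypothetical sketch of a federated integrative-structure record.
# Field names and accessions are invented placeholders.
import json

entry = {
    "model_id": "example-0001",          # hypothetical identifier
    "coordinates": "model.cif",          # the integrative model itself
    "input_sources": [                   # provenance, one per input
        {"type": "crosslinking-MS", "archive": "PRIDE", "accession": "PXD000000"},
        {"type": "EM map", "archive": "EMDB", "accession": "EMD-0000"},
        {"type": "prior model", "archive": "PDB", "accession": "0XXX"},
    ],
    "validation": {"status": "unvalidated"},  # placeholder
}
print(json.dumps(entry, indent=2))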

    Three-dimensional Structure Databases of Biological Macromolecules

    Databases of three-dimensional structures of proteins (and their associated molecules) provide:
    (a) Curated repositories of coordinates of experimentally determined structures, including extensive metadata; for instance, information about provenance, details about data collection and interpretation, and validation of results.
    (b) Information-retrieval tools to allow searching to identify entries of interest and provide access to them.
    (c) Links among databases, especially to databases of amino-acid and genetic sequences and of protein function, and links to software for analysis of amino-acid sequence and protein structure, and for structure prediction.
    (d) Collections of predicted three-dimensional structures of proteins. These will become more and more important after the breakthrough in structure prediction achieved by AlphaFold2.
    The single global archive of experimentally determined biomacromolecular structures is the Protein Data Bank (PDB). It is managed by the wwPDB, a consortium of five partner institutions: the Protein Data Bank in Europe (PDBe), the Research Collaboratory for Structural Bioinformatics (RCSB), the Protein Data Bank Japan (PDBj), the BioMagResBank (BMRB), and the Electron Microscopy Data Bank (EMDB). In addition to jointly managing the PDB repository, the individual wwPDB partners offer many tools for analysis of protein and nucleic acid structures and their complexes, including computer-graphic representations. Their collective and individual websites serve as hubs for the community of structural biologists, offering newsletters, reports from Task Forces, training courses, and “helpdesks,” as well as links to external software. Many specialized projects are based on the information contained in the PDB. Especially important are SCOP, CATH, and ECOD, which present classifications of protein domains.
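
    The repositories and retrieval tools listed above are typically consumed programmatically. As a small sketch (assuming Biopython, which the abstract does not mention, and a locally downloaded mmCIF file), one can load an entry and walk its chain hierarchy:

# Sketch: parse a locally downloaded mmCIF entry with Biopython
# (an assumption; the abstract names no specific toolkit) and list
# its chains and residue counts.
from Bio.PDB import MMCIFParser

parser = MMCIFParser(QUIET=True)
structure = parser.get_structure("4hhb", "4hhb.cif")  # hemoglobin

for model in structure:
    for chain in model:
        n_residues = sum(1 for _ in chain.get_residues())
        print(f"chain {chain.id}: {n_residues} residues")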

    Artificial intelligence is algorithmic mimicry: why artificial "agents" are not (and won't be) proper agents

    What is the prospect of developing artificial general intelligence (AGI)? I investigate this question by systematically comparing living and algorithmic systems, with a special focus on the notion of "agency." There are three fundamental differences to consider: (1) Living systems are autopoietic, that is, self-manufacturing, and therefore able to set their own intrinsic goals, while algorithms exist in a computational environment with target functions, both of which are provided by an external agent. (2) Living systems are embodied in the sense that there is no separation between their symbolic and physical aspects, while algorithms run on computational architectures that maximally isolate software from hardware. (3) Living systems experience a large world, in which most problems are ill-defined (and not all definable), while algorithms exist in a small world, in which all problems are well-defined. These three differences imply that living and algorithmic systems have very different capabilities and limitations. In particular, it is extremely unlikely that true AGI (beyond mere mimicry) can be developed in the current algorithmic framework of AI research. Consequently, discussions about the proper development and deployment of algorithmic tools should be shaped around the dangers and opportunities of current narrow AI, not the extremely unlikely prospect of the emergence of true agency in artificial systems.

    Genomic instantiation of consciousness in neurons through a biophoton field theory

    A theoretical framework is developed based on the premise that brains evolved into sufficiently complex adaptive systems capable of instantiating genomic consciousness through self-awareness and complex interactions that recognize qualitatively the controlling factors of biological processes. Furthermore, our hypothesis assumes that the collective interactions in neurons yield macroergic effects, which can produce sufficiently strong electric energy fields for electronic excitations to take place on the surface of endogenous structures via alpha-helical integral proteins as electro-solitons. Specifically, the process of radiative relaxation of the electro-solitons allows for the transfer of energy via interactions with deoxyribonucleic acid (DNA) molecules to induce conformational changes in DNA molecules, producing an ultra-weak, non-thermal, spontaneous emission of coherent biophotons through a quantum effect. The instantiation of coherent biophotons confined in spaces of DNA molecules guides the biophoton field to be instantaneously conducted along the axonal and neuronal arbors, between neurons, and throughout the cerebral cortex (cortico-thalamic system) and subcortical areas (e.g., midbrain and hindbrain), providing an informational character to the electric coherence of the brain, referred to as quantum coherence. The biophoton field is realized as a conscious field upon the re-absorption of biophotons by exciplex states of DNA molecules. This quantum phenomenon brings about self-awareness and enables objectivity to have access to subjectivity in the unconscious. As such, subjective experiences can be recalled to consciousness as subjective conscious experiences, or qualia, through co-operative interactions between exciplex states of DNA molecules and biophotons, leading to metabolic activity and energy transfer across proteins as a result of protein-ligand binding during protein-protein communication. The biophoton field as a conscious field is attributable to the resultant effect of specifying qualia from the metabolic energy field that is transported in macromolecular proteins throughout specific networks of neurons that are constantly transforming into more stable associable representations as molecular solitons. The metastability of subjective experiences based on resonant dynamics occurs when bottom-up patterns of neocortical excitatory activity are matched with top-down expectations as adaptive dynamic pressures. These dynamics of on-going activity patterns, influenced by the environment and selected as the preferred subjective experience in terms of a functional field through functional interactions and biological laws, are realized as subjectivity and actualized through functional integration as qualia. It is concluded that interactionism, not information processing, is the key to understanding how consciousness bridges the explanatory gap between subjective experiences and their neural correlates in the transcendental brain.

    A Formal Ontology of Subcellular Neuroanatomy

    The complexity of the nervous system requires high-resolution microscopy to resolve the detailed 3D structure of nerve cells and supracellular domains. The analysis of such imaging data to extract cellular surfaces and cell components often requires the combination of expert human knowledge with carefully engineered software tools. In an effort to make better tools to assist humans in this endeavor, to create a more accessible and permanent record of their data, and to aid the process of constructing complex and detailed computational models, we have created a core of formalized knowledge about the structure of the nervous system and have integrated that core into several software applications. In this paper, we describe the structure and content of a formal ontology whose scope is the subcellular anatomy of the nervous system (SAO), covering nerve cells, their parts, and interactions between these parts. Many applications of this ontology to image annotation, content-based retrieval of structural data, and integration of shared data across scales and researchers are also described.
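
    An ontology of this kind encodes classes of structures and relations, such as part-of, between them. As an illustrative sketch only (the class names below are simplified stand-ins, not actual SAO terms or identifiers), a tiny fragment of such a part-of hierarchy might be encoded and queried like this:

# Illustrative sketch of a part-of hierarchy like the one SAO
# formalizes. Class names are simplified stand-ins, not actual
# SAO terms or identifiers.
PART_OF = {
    "dendritic spine": "dendrite",
    "dendrite": "neuron",
    "axon": "neuron",
    "synaptic vesicle": "axon terminal",
    "axon terminal": "axon",
}

def ancestors(term: str) -> list[str]:
    """Walk the part-of chain up to the whole cell."""
    chain = []
    while term in PART_OF:
        term = PART_OF[term]
        chain.append(term)
    return chain

print(ancestors("synaptic vesicle"))  # ['axon terminal', 'axon', 'neuron']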

    A data science roadmap for open science organizations engaged in early-stage drug discovery

    The Structural Genomics Consortium is an international open science research organization with a focus on accelerating early-stage drug discovery, namely hit discovery and optimization. We, like many others, believe that artificial intelligence (AI) is poised to be a main accelerator in the field. The question is then how best to benefit from recent advances in AI and how to generate, format and disseminate data to enable future breakthroughs in AI-guided drug discovery. We present here the recommendations of a working group composed of experts from both the public and private sectors. Robust data management requires precise ontologies and standardized vocabulary, while a centralized database architecture across laboratories facilitates data integration into high-value datasets. Lab automation and opening electronic lab notebooks to data mining push the boundaries of data sharing and data modeling. Important considerations for building robust machine-learning models include transparent and reproducible data processing, choosing the most relevant data representation, defining the right training and test sets, and estimating prediction uncertainty. Beyond data sharing, cloud-based computing can be harnessed to build and disseminate machine-learning models. Important vectors of acceleration for hit and chemical probe discovery will be (1) the real-time integration of experimental data generation and modeling workflows within design-make-test-analyze (DMTA) cycles, openly and at scale, and (2) the adoption of a mindset where data scientists and experimentalists work as a unified team, and where data science is incorporated into the experimental design.
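
    The modeling considerations named above (defined train/test sets, uncertainty estimation) are generic enough to sketch. As a minimal illustration (assuming scikit-learn and synthetic data; the working group prescribes no specific library), the spread across a random forest's trees gives a crude per-sample prediction-uncertainty estimate:

# Minimal sketch of the practices named above: a defined train/test
# split and a crude uncertainty estimate from an ensemble's spread.
# Uses scikit-learn and synthetic data (assumptions; the roadmap
# names no specific library or dataset).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=20, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Per-tree predictions; their standard deviation is a rough
# per-sample uncertainty proxy.
per_tree = np.stack([tree.predict(X_test) for tree in model.estimators_])
mean_pred, uncertainty = per_tree.mean(axis=0), per_tree.std(axis=0)
print(f"test R^2: {model.score(X_test, y_test):.3f}")
print(f"mean predictive std: {uncertainty.mean():.3f}")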