
    Analysis of Three-Dimensional Protein Images

    A fundamental goal of research in molecular biology is to understand protein structure. Protein crystallography is currently the most successful method for determining the three-dimensional (3D) conformation of a protein, yet it remains labor intensive and relies on an expert's ability to derive and evaluate a protein scene model. In this paper, the problem of protein structure determination is formulated as an exercise in scene analysis. A computational methodology is presented in which a 3D image of a protein is segmented into a graph of critical points. Bayesian and certainty factor approaches are described and used to analyze critical point graphs and identify meaningful substructures, such as alpha-helices and beta-sheets. Results of applying the methodologies to protein images at low and medium resolution are reported. The research is related to approaches to representation, segmentation and classification in vision, as well as to top-down approaches to protein structure prediction.
    Comment: See http://www.jair.org/ for any accompanying file.
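The Bayesian analysis of critical-point graphs described in this abstract could, in a much simplified form, be sketched as a naive-Bayes scoring of a segment's features. The labels, features, priors, and likelihood values below are purely illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: Bayesian scoring of a critical-point segment.
# Labels, features, and all probabilities are illustrative only.

def posterior(features, priors, likelihoods):
    """Return normalized posteriors P(label | features), assuming
    conditionally independent features (naive Bayes)."""
    scores = {}
    for label, prior in priors.items():
        p = prior
        for f in features:
            # Features unseen during training get a small smoothing value.
            p *= likelihoods[label].get(f, 1e-3)
        scores[label] = p
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

priors = {"alpha-helix": 0.5, "beta-sheet": 0.5}
likelihoods = {
    "alpha-helix": {"high_density": 0.8, "linear_axis": 0.7},
    "beta-sheet": {"high_density": 0.3, "linear_axis": 0.4},
}
result = posterior(["high_density", "linear_axis"], priors, likelihoods)
```

With these made-up likelihoods, the helix label dominates (posterior roughly 0.82); a certainty-factor variant would combine evidence with a different, non-probabilistic update rule.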

    The Paleontological Stratigraphic Interval Construction and Analysis Tool

    Core description diagrams are the primary record of the cylindrical rock samples that result from the scientific drilling process. Typically, these diagrams are drawn by hand in field books and then drafted in a graphics program for publication. Very rarely are the actual data encoded in the diagrams, e.g., depth in core, grain size, and lithology, captured in a format that can be manipulated and analyzed. This thesis introduces the Paleontological Stratigraphic Interval Construction and Analysis Tool (PSICAT), an interactive, cross-platform environment for creating, viewing, and editing core description diagrams, and discusses the design and implementation of its extensible software architecture and data model, which allows it to seamlessly capture and visualize core description data. PSICAT was used to log nearly 1300 meters of sediment core drilled during the ANtarctic DRILLing (ANDRILL) project's McMurdo Ice Shelf expedition.
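An interval-based data model of the kind this abstract describes, in which depth, grain size, and lithology become queryable data rather than drawing annotations, might minimally look like the following. The field names and query helper are assumptions for illustration, not PSICAT's actual schema.

```python
from dataclasses import dataclass

# Minimal sketch of an interval record for core description data,
# in the spirit of PSICAT's data model; field names are assumptions.
@dataclass
class Interval:
    top: float        # depth to top of interval (meters below sea floor)
    base: float       # depth to base of interval
    lithology: str
    grain_size: str

def at_depth(intervals, depth):
    """Return the intervals whose depth range contains the query depth."""
    return [iv for iv in intervals if iv.top <= depth < iv.base]

core = [
    Interval(0.0, 1.2, "diamictite", "pebble"),
    Interval(1.2, 3.5, "mudstone", "clay"),
]
```

Once descriptions are stored this way, the diagram becomes a rendering of the data rather than the data itself, which is exactly what makes downstream analysis possible.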

    Human-Computer User Interface Design for Semiliterate and Illiterate Users

    Information and Communication Technology (ICT) has revolutionized people's lives. The technology is embedded in the daily lives of literate and semiliterate/illiterate users alike. However, the user interface (UI) requirements of semiliterate/illiterate users differ from those of educated users. Researchers in Human-Computer Interaction for Development (HCI4D) face challenges in improving the usability of UIs for semiliterate users. Therefore, a Systematic Literature Review (SLR) was conducted to provide a set of design factors and guidelines for UI development for semiliterate users. The study is based on extensive research gathered from the literature to understand the user-centered design (UCD) approach and to enhance the user experience (UX) of semiliterate users. This study analyses fifty-two research articles published during 2010-2020. The findings shed light on the systematization of UI design guidelines for semiliterate/illiterate users. These guidelines can help in taking advantage of ICT during the COVID-19 pandemic. The analysis shows that seventeen main design factors are indispensable for designing UIs for semiliterate users. The most frequently suggested design factors are localization and graphics, which should be incorporated into UIs for the target population. Moreover, the lag in design factors such as personalization and consistency opens an avenue for future research.

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March of 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Generating collaborative systems for digital libraries: A model-driven approach

    This is an open access article shared under a Creative Commons Attribution 3.0 Licence (http://creativecommons.org/licenses/by/3.0/). Copyright © 2010 The Authors. The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, and of the graphical user interfaces providing access to them for the final user. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, while the CRADLE environment has been evaluated using the cognitive dimensions framework.
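The core idea of model-driven generation, as this abstract describes it, is that a declarative model of a domain entity is transformed automatically into working artifacts. The following toy illustrates that transformation; the model format, entity names, and generator function are hypothetical, not CRADLE's actual metamodel or tooling.

```python
# Toy illustration of model-driven generation: a declarative model of a
# digital-library entity is turned into runnable code. Names are
# hypothetical, not CRADLE's actual metamodel.

def generate_entity(name, fields):
    """Emit Python source for a simple entity class from a model."""
    lines = [f"class {name}:"]
    lines.append("    def __init__(self, " + ", ".join(fields) + "):")
    for f in fields:
        lines.append(f"        self.{f} = {f}")
    return "\n".join(lines)

model = {"name": "Document", "fields": ["title", "author", "year"]}
source = generate_entity(model["name"], model["fields"])

namespace = {}
exec(source, namespace)            # compile and load the generated class
Document = namespace["Document"]
doc = Document("Sample Title", "A. Author", 2010)
```

In a real metamodel-based framework the source model would be a typed graph edited in a visual language, and the generated artifacts would include services and user interfaces, not just a data class; the principle is the same.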

    Relational multimedia databases.


    Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    The goal of this project was to support the scientific analysis of multi-spectral astrophysical data by means of scientific visualization. Scientific visualization offers its greatest value if it is not used as a method separate from or alternative to other data analysis methods but rather in addition to these methods. Together with quantitative analysis of data, such as offered by statistical analysis, image or signal processing, visualization attempts to explore all information inherent in astrophysical data in the most effective way. Data visualization is one aspect of data analysis. Our taxonomy as developed in Section 2 includes identification and access to existing information, preprocessing and quantitative analysis of data, visual representation, and the user interface as major components of the software environment of astrophysical data analysis. In pursuing our goal to provide methods and tools for scientific visualization of multi-spectral astrophysical data, we therefore looked at scientific data analysis as one whole process, adding visualization tools to an already existing environment and integrating the various components that define a scientific data analysis environment. As long as the software development process of each component is separate from all other components, users of data analysis software are constantly interrupted in their scientific work in order to convert from one data format to another, or to move from one storage medium to another, or to switch from one user interface to another. We also took an in-depth look at scientific visualization and its underlying concepts, current visualization systems, their contributions, and their shortcomings. The role of data visualization is to stimulate mental processes different from quantitative data analysis, such as the perception of spatial relationships or the discovery of patterns or anomalies while browsing through large data sets.
Visualization often leads to an intuitive understanding of the meaning of data values and their relationships by sacrificing accuracy in interpreting the data values. In order to be accurate in the interpretation, data values need to be measured, computed on, and compared to theoretical or empirical models (quantitative analysis). If visualization software hampers quantitative analysis (which happens with some commercial visualization products), its use is greatly diminished for astrophysical data analysis. The software system STAR (Scientific Toolkit for Astrophysical Research) was developed as a prototype during the course of the project to better understand the pragmatic concerns raised in the project. STAR led to a better understanding of the importance of collaboration between astrophysicists and computer scientists.
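The integration argument in this abstract, that visualization and quantitative analysis should share one environment rather than force format conversions between tools, can be sketched in miniature: one in-memory representation feeds both a statistical summary and a crude visual rendering. The function names and data are illustrative assumptions, not STAR's actual API.

```python
# Sketch of the integration idea (hypothetical, not STAR's actual API):
# a single in-memory representation serves both quantitative analysis
# and a crude visual summary, so no format conversion separates them.

def stats(values):
    """Mean and population variance, the quantitative-analysis side."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, var

def ascii_histogram(values, bins=4):
    """A minimal 'visualization' of the same data, as bar lengths."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    return ["#" * c for c in counts]

flux = [1.0, 2.0, 2.5, 3.0, 4.0]   # illustrative measurements
mean, var = stats(flux)
bars = ascii_histogram(flux, bins=2)
```

Because both functions consume the same list, the analyst can move between browsing (histogram) and measuring (statistics) without the conversion interruptions the project identifies as the cost of separately developed components.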