
    Computational biology in the 21st century

    Computational biologists answer biological and biomedical questions by using computation in support of, or in place of, laboratory procedures, hoping to obtain more accurate answers at a greatly reduced cost. The past two decades have seen unprecedented technological progress in generating biological data; next-generation sequencing, mass spectrometry, microarrays, cryo-electron microscopy, and other high-throughput approaches have led to an explosion of data. However, this explosion is a mixed blessing. On the one hand, the scale and scope of the data should allow new insights into genetic and infectious diseases, cancer, basic biology, and even human migration patterns. On the other hand, researchers are generating datasets so massive that it has become difficult to analyze them to discover patterns that give clues to the underlying biological processes. (National Institutes of Health (U.S.), grant GM108348; Hertz Foundation)

    Conical scan impact study. Volume 2: Small local user data processing facility

    The impact of a conical scan versus a linear scan multispectral scanner (MSS) instrument on a small local-user data processing facility was studied. User data requirements were examined to determine the unique system requirements for a low-cost ground system (LCGS) compatible with the Earth Observatory Satellite (EOS) system. Candidate concepts were defined for the LCGS, and preliminary designs were developed for selected concepts. The impact of a conical scan MSS versus a linear scan MSS was evaluated for the selected concepts. It was concluded that there are valid user requirements for the LCGS and, as a result of these requirements, the impact of the conical scanner is minimal, although some new hardware development for the LCGS is necessary to handle conical scan data.

    Southwest Research Institute assistance to NASA in biomedical areas of the technology

    Significant applications of aerospace technology were achieved. These applications include: a miniaturized, noninvasive system to telemeter electrocardiographic signals of heart transplant patients during their recuperative period as graded situations are introduced; an economical vital signs monitor for use in nursing homes and rehabilitation hospitals to indicate the onset of respiratory arrest; an implantable telemetry system to indicate the onset of the rejection phenomenon in animals undergoing cardiac transplants; an exceptionally accurate current-proportional temperature controller for pollution studies; an automatic, atraumatic blood pressure measurement device; materials for protecting burned areas in contact with joint bender splints; a detector to signal the passage of animals by a given point during ecology studies; and special cushioning for use with below-knee amputees to protect the integrity of the skin at the stump/prosthesis interface.

    FPGAs in Bioinformatics: Implementation and Evaluation of Common Bioinformatics Algorithms in Reconfigurable Logic

    Life. Much effort is devoted to granting humanity a little insight into this fascinating, complex, but fundamental topic. To understand its relationships and derive consequences, humans have begun to sequence their genomes, i.e., to determine their DNA sequences and infer information related, for example, to genetic diseases. The process of DNA sequencing, as well as the subsequent analysis, presents a computational challenge for current computing systems due to the sheer amount of data alone. Runtimes of more than one day for the analysis of simple datasets are common, even when the process is run on a CPU cluster. This thesis shows how this general problem in the area of bioinformatics can be tackled with reconfigurable hardware, especially FPGAs. Three compute-intensive problems are highlighted: sequence alignment, SNP interaction analysis, and genotype imputation. In the area of sequence alignment, the software BLASTp for protein database searches is presented as an example, implemented, and evaluated. SNP interaction analysis is presented with three applications performing an exhaustive search for interactions, including the corresponding statistical tests: BOOST, iLOCi, and the mutual information measurement. All applications are implemented in FPGA hardware and evaluated, resulting in an impressive speedup of more than three orders of magnitude when compared to standard computers. The last topic, genotype imputation, is a two-step process composed of a phasing step and the actual imputation step. The focus lies on the phasing step, which is targeted by the SHAPEIT2 application. SHAPEIT2 is discussed in detail with its underlying mathematical methods, and finally implemented and evaluated. A remarkable speedup of 46 is reached here as well.
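    The sequence-alignment kernel underlying database-search tools such as BLASTp can be sketched with the classic Smith-Waterman dynamic-programming recurrence that such heuristics approximate. The scoring parameters below are illustrative assumptions, not values from the thesis:

```python
# Minimal Smith-Waterman local alignment score. The illustrative scoring
# scheme (match=+2, mismatch=-1, gap=-2) is an assumption for this sketch.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP matrix, clamped at zero
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: a cell may restart at 0 instead of going negative.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGTACGT", "ACGT"))  # -> 8 (an exact 4-base local match)
```

    The quadratic dependence of this recurrence on sequence lengths is what makes such kernels attractive targets for FPGA acceleration.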

    A Review of Formal Methods applied to Machine Learning

    We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems. Formal methods can provide rigorous correctness guarantees on hardware and software systems. Thanks to the availability of mature tools, their use is well established in industry, in particular for checking safety-critical applications as they undergo a stringent certification process. As machine learning becomes more popular, machine-learned components are now considered for inclusion in critical systems. This raises the question of their safety and their verification. Yet, established formal methods are limited to classic, i.e., non-machine-learned, software. Applying formal methods to verify systems that include machine learning has only been considered recently and poses novel challenges in soundness, precision, and scalability. We first recall established formal methods and their current use in an exemplar safety-critical field, avionics software, with a focus on abstract-interpretation-based techniques, as they provide a high level of scalability. This provides a gold standard and sets high expectations for machine learning verification. We then provide a comprehensive and detailed review of the formal methods developed so far for machine learning, highlighting their strengths and limitations. The large majority of them verify trained neural networks and employ either SMT, optimization, or abstract interpretation techniques. We also discuss methods for support vector machines and decision tree ensembles, as well as methods targeting training and data preparation, which are critical but often neglected aspects of machine learning. Finally, we offer perspectives for future research directions towards the formal verification of machine learning systems.
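    The interval-style abstract interpretation that many neural-network verifiers build on can be sketched in a few lines: propagate per-neuron input intervals through each layer to obtain a sound over-approximation of the output range. The weights, biases, and input bounds below are invented for illustration and do not come from the review:

```python
# Toy interval bound propagation (a simple abstract interpretation) through a
# small ReLU network. All numeric values here are made-up illustrations.

def affine_bounds(lo, hi, W, b):
    """Sound per-neuron interval bounds for y = W x + b given x in [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # A positive weight attains its extreme at the same-side bound,
        # a negative weight at the opposite bound.
        out_lo.append(bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row)))
        out_hi.append(bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row)))
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Input box x in [0,1] x [0,1]; one hidden ReLU layer; one linear output.
lo, hi = [0.0, 0.0], [1.0, 1.0]
lo, hi = affine_bounds(lo, hi, W=[[1.0, -1.0], [0.5, 0.5]], b=[0.0, -0.25])
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W=[[1.0, 1.0]], b=[0.0])
print(lo, hi)  # -> [0.0] [1.75]: the output provably never exceeds 1.75
```

    The bound is sound but not exact, which is precisely the precision-versus-scalability trade-off the review discusses for abstract-interpretation-based verifiers.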

    Biomedical and Human Factors Requirements for a Manned Earth Orbiting Station

    This report is the result of a study conducted by Republic Aviation Corporation in conjunction with Spacelabs, Inc., in a team effort in which Republic Aviation Corporation was prime contractor. In order to determine the realistic engineering design requirements associated with the medical and human factors problems of a manned space station, an interdisciplinary team of personnel from the Research and Space Divisions was organized. This team included engineers, physicians, physiologists, psychologists, and physicists. Recognizing that the value of the study depends upon medical judgments as well as more quantifiable factors (such as design parameters), a group of highly qualified medical consultants participated in working sessions to determine which medical measurements are required to meet the objectives of the study. In addition, various Life Sciences personnel from NASA (Headquarters, Langley, MSC) participated in monthly review sessions. The organization, team members, consultants, and some of the part-time contributors are shown in Figure 1. This final report embodies contributions from all of these participants.

    Foundational Models in Medical Imaging: A Comprehensive Survey and Future Vision

    Foundation models, large-scale pre-trained deep-learning models adaptable to a wide range of downstream tasks, have recently gained significant interest, as various deep-learning problems are undergoing a paradigm shift with the rise of these models. Trained on large-scale datasets to bridge the gap between different modalities, foundation models facilitate contextual reasoning, generalization, and prompting capabilities at test time. The predictions of these models can be adjusted for new tasks by augmenting the model input with task-specific hints, called prompts, without requiring extensive labeled data and retraining. Capitalizing on the advances in computer vision, the medical imaging community has also shown growing interest in these models. To assist researchers in navigating this direction, this survey intends to provide a comprehensive overview of foundation models in the domain of medical imaging. Specifically, we initiate our exploration by providing an exposition of the fundamental concepts forming the basis of foundation models. Subsequently, we offer a methodical taxonomy of foundation models within the medical domain, proposing a classification system primarily structured around training strategies, while also incorporating additional facets such as application domains, imaging modalities, specific organs of interest, and the algorithms integral to these models. Furthermore, we emphasize the practical use cases of selected approaches and then discuss the opportunities, applications, and future directions of these large-scale pre-trained models for analyzing medical images. In the same vein, we address the prevailing challenges and research pathways associated with foundational models in medical imaging, encompassing interpretability, data management, computational requirements, and the nuanced issue of contextual comprehension.

    Loss of Signal: Aeromedical Lessons Learned from the STS-107 Columbia Space Shuttle Mishap

    The editors of Loss of Signal wanted to document the aeromedical lessons learned from the Space Shuttle Columbia mishap. The book is intended to be an accurate and easily understood account of the entire process of recovering and analyzing the human remains, investigating and analyzing what happened to the crew, and using the resulting information to recommend ways to prevent mishaps and provide better protection to crewmembers. Our goal is to capture the passions of those who devoted their energies to responding to the Columbia mishap. We have reunited authors who were directly involved in each of these aspects. These authors tell the story of their efforts related to the Columbia mishap from their point of view. They give the reader an honest description of their responsibilities and share their challenges, their experiences, and their lessons learned on how to enhance crew safety and survival, and how to be prepared to support space mishap investigations. As a result of this approach, a few of the chapters contain some redundant information, and the authors' opinions may differ. In no way did we or they intend to assign blame or criticize anyone's professional efforts. All those involved did their best to obtain the truth in the situations to which they were assigned.