Artificial Intelligence in Radiotherapy Treatment Planning: Present and Future.
Treatment planning is an essential step of the radiotherapy workflow. It has become more sophisticated over the past couple of decades with the help of computer science, enabling planners to design highly complex radiotherapy plans that minimize normal tissue damage while preserving sufficient tumor control. As a result, treatment planning has become more labor intensive, requiring hours or even days of planner effort to optimize an individual patient case in a trial-and-error fashion. More recently, artificial intelligence has been utilized to automate and improve various aspects of medical science. For radiotherapy treatment planning, many algorithms have been developed to better support planners. These algorithms focus on automating the planning process and/or optimizing dosimetric trade-offs, and they have already had a great impact on improving treatment planning efficiency and plan quality consistency. In this review, the smart planning tools in current clinical use are summarized in 3 main categories: automated rule implementation and reasoning, modeling of prior knowledge in clinical practice, and multicriteria optimization. Novel artificial intelligence-based treatment planning applications, such as deep learning-based algorithms, and emerging research directions are also reviewed. Finally, the challenges of artificial intelligence-based treatment planning are discussed for future work.
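As a toy illustration of the multicriteria trade-off mentioned above, the sketch below minimizes a weighted sum of a tumor-underdose penalty and an organ-at-risk dose penalty over beamlet weights. The dose-influence matrices, prescription, weights and objective form are invented for illustration and do not reproduce any specific planning system.

# Minimal sketch of a weighted-sum multicriteria objective for plan optimization.
# All quantities (dose-influence matrices, prescription, weights) are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
D_tumor = rng.uniform(0.5, 1.0, size=(50, 10))   # dose per unit beamlet weight, tumor voxels
D_oar   = rng.uniform(0.0, 0.4, size=(80, 10))   # dose per unit beamlet weight, organ-at-risk voxels
d_presc = 60.0                                   # prescribed tumor dose (illustrative)
w_tumor, w_oar = 1.0, 0.5                        # trade-off weights chosen by the planner

def objective(x):
    under = np.maximum(d_presc - D_tumor @ x, 0.0)   # tumor underdose penalty
    over = D_oar @ x                                 # organ-at-risk dose penalty
    return w_tumor * np.mean(under**2) + w_oar * np.mean(over**2)

x0 = np.full(10, d_presc)                        # initial beamlet weights
res = minimize(objective, x0, bounds=[(0, None)] * 10)
print("objective value:", res.fun)

Adjusting w_tumor and w_oar shifts the solution along the trade-off curve, which is the dosimetric balance that multicriteria optimization tools let planners explore.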
Toward a Standardized Strategy of Clinical Metabolomics for the Advancement of Precision Medicine
Despite tremendous successes, pitfalls have been observed at every step of the clinical metabolomics workflow, impeding the internal validity of studies. Furthermore, the demand for logistics, instrumentation, and computational resources for metabolic phenotyping studies has far exceeded expectations. In this conceptual review, we comprehensively cover the barriers of a metabolomics-based clinical study and suggest potential solutions in the hope of enhancing study robustness, usability, and transferability. The importance of quality assurance and quality control procedures is discussed, followed by a practical rule containing five phases, including two additional "pre-pre-" and "post-post-" analytical steps. In addition, we elucidate the potential involvement of machine learning and demonstrate that the need for automated data mining algorithms to improve the quality of future research is undeniable. Consequently, we propose a comprehensive metabolomics framework, along with an appropriate checklist refined from current guidelines and our previously published assessment, in an attempt to accurately translate achievements in metabolomics into clinical and epidemiological research. Furthermore, the integration of multifaceted multi-omics approaches with metabolomics as a central pillar is urgently needed. When combined with other social or nutritional factors, complete omics profiles can be gathered for a particular disease. Our discussion reflects the current obstacles and potential solutions toward the progressing trend of utilizing metabolomics in clinical research to create the next-generation healthcare system.
Large-scale Quality Control of Cardiac Imaging in Population Studies: Application to UK Biobank
In large population studies such as the UK Biobank (UKBB), quality control of the acquired images by visual assessment is unfeasible. In this paper, we apply a recently developed fully-automated quality control pipeline for cardiac MR (CMR) images to the first 19,265 short-axis (SA) cine stacks from the UKBB. We present the results for the three estimated quality metrics (heart coverage, inter-slice motion and image contrast in the cardiac region) as well as their potential associations with factors including acquisition details and subject-related phenotypes. Up to 14.2% of the analysed SA stacks had sub-optimal coverage (i.e. missing basal and/or apical slices); however, most of these were limited to the first year of acquisition. Up to 16% of the stacks were affected by noticeable inter-slice motion (i.e. average inter-slice misalignment greater than 3.4 mm). Inter-slice motion was positively correlated with weight and body surface area. Only 2.1% of the stacks had an average end-diastolic cardiac image contrast below 30% of the dynamic range. These findings will be highly valuable both for the scientists involved in UKBB CMR acquisition and for those who use the dataset for research purposes.
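A minimal sketch of how the three quality criteria and thresholds quoted above could be applied to per-stack metric values; the computation of the metrics themselves (coverage, misalignment, contrast) is not shown, and the argument names are illustrative rather than those of the published pipeline.

# Flag short-axis stacks against the three quality criteria described above.
# Metric values are assumed to be pre-computed; thresholds follow the figures
# quoted in the abstract.
def flag_stack(heart_coverage, inter_slice_motion_mm, ed_contrast_fraction):
    issues = []
    if heart_coverage < 1.0:                 # missing basal and/or apical slices
        issues.append("sub-optimal coverage")
    if inter_slice_motion_mm > 3.4:          # average inter-slice misalignment (mm)
        issues.append("noticeable inter-slice motion")
    if ed_contrast_fraction < 0.30:          # end-diastolic contrast below 30% of dynamic range
        issues.append("low cardiac image contrast")
    return issues

print(flag_stack(heart_coverage=0.9, inter_slice_motion_mm=4.1, ed_contrast_fraction=0.45))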
Running a distributed virtual observatory: US Virtual Astronomical Observatory operations
Operation of the US Virtual Astronomical Observatory shares some issues with
modern physical observatories, e.g., intimidating data volumes and rapid
technological change, and must also address unique concerns like the lack of
direct control of the underlying and scattered data resources, and the
distributed nature of the observatory itself. In this paper we discuss how the
VAO has addressed these challenges to provide the astronomical community with a
coherent set of science-enabling tools and services. The distributed nature of
our virtual observatory, with data and personnel spanning geographic,
institutional and regime boundaries, is simultaneously a major operational
headache and the primary science motivation for the VAO. Most astronomy today
uses data from many resources. Facilitation of matching heterogeneous datasets
is a fundamental reason for the virtual observatory. Key aspects of our
approach include continuous monitoring and validation of VAO and VO services
and the datasets provided by the community, monitoring of user requests to
optimize access, caching for large datasets, and providing distributed storage
services that allow users to collect results near large data repositories. Some
elements are now fully implemented, while others are planned for subsequent
years. The distributed nature of the VAO requires careful attention to what can
be a straightforward operation at a conventional observatory, e.g., the
organization of the web site or the collection and combined analysis of logs.
Many of these strategies use and extend protocols developed by the
international virtual observatory community.
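The continuous monitoring and validation of services mentioned above can be reduced, in its simplest form, to the kind of availability probe sketched below; the endpoint URLs and the plain HTTP-200 pass criterion are assumptions for illustration, not the VAO's actual validation suite.

# Minimal availability probe for a set of VO-style service endpoints.
# URLs and the simple HTTP-200 pass criterion are illustrative only.
import urllib.request

ENDPOINTS = [
    "https://example.org/vo/cone-search",
    "https://example.org/vo/image-access",
]

def probe(url, timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

for url in ENDPOINTS:
    print(url, "OK" if probe(url) else "FAILED")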
ImmPort, toward repurposing of open access immunological assay data for translational and clinical research
Immunology researchers are beginning to explore the possibilities of reproducibility, reuse and secondary analyses of immunology data. Open-access datasets are being applied in the validation of the methods used in the original studies, leveraging studies for meta-analysis, or generating new hypotheses. To promote these goals, the ImmPort data repository was created for the broader research community to explore the wide spectrum of clinical and basic research data and associated findings. The ImmPort ecosystem consists of four components (Private Data, Shared Data, Data Analysis, and Resources) for data archiving, dissemination, analyses, and reuse. To date, more than 300 studies have been made freely available through the ImmPort Shared Data portal, which allows research data to be repurposed to accelerate the translation of new insights into discoveries.
Multiplierz: An Extensible API Based Desktop Environment for Proteomics Data Analysis
BACKGROUND. Efficient analysis of results from mass spectrometry-based proteomics experiments requires access to disparate data types, including native mass spectrometry files, output from algorithms that assign peptide sequence to MS/MS spectra, and annotation for proteins and pathways from various database sources. Moreover, proteomics technologies and experimental methods are not yet standardized; hence a high degree of flexibility is necessary for efficient support of high- and low-throughput data analytic tasks. Development of a desktop environment that is sufficiently robust for deployment in data analytic pipelines, and simultaneously supports customization for programmers and non-programmers alike, has proven to be a significant challenge. RESULTS. We describe multiplierz, a flexible and open-source desktop environment for comprehensive proteomics data analysis. We use this framework to expose a prototype version of our recently proposed common API (mzAPI) designed for direct access to proprietary mass spectrometry files. In addition to routine data analytic tasks, multiplierz supports generation of information-rich, portable spreadsheet-based reports. Moreover, multiplierz is designed around a "zero infrastructure" philosophy, meaning that it can be deployed by end users with little or no system administration support. Finally, access to multiplierz functionality is provided via high-level Python scripts, resulting in a fully extensible data analytic environment for rapid development of custom algorithms and deployment of high-throughput data pipelines. CONCLUSION. Collectively, mzAPI and multiplierz facilitate a wide range of data analysis tasks, spanning technology development to biological annotation, for mass spectrometry-based proteomics research. Funding: Dana-Farber Cancer Institute; National Human Genome Research Institute (P50HG004233); National Science Foundation Integrative Graduate Education and Research Traineeship grant (DGE-0654108).
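To illustrate the spreadsheet-style report generation described above without relying on the actual multiplierz or mzAPI calls, the sketch below joins hypothetical peptide-spectrum-match results with protein annotations and writes a portable CSV report; all file names and column layouts are assumptions, not the multiplierz API.

# Join peptide-spectrum matches with protein annotations and emit a CSV report.
# Input/output file names and column layouts are hypothetical.
import csv

annotations = {}
with open("protein_annotations.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        annotations[row["accession"]] = row["description"]

with open("psm_results.csv", newline="") as fh_in, \
     open("report.csv", "w", newline="") as fh_out:
    reader = csv.DictReader(fh_in)
    writer = csv.DictWriter(fh_out, fieldnames=["peptide", "accession", "description", "score"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "peptide": row["peptide"],
            "accession": row["accession"],
            "description": annotations.get(row["accession"], ""),
            "score": row["score"],
        })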
Quantifying Performance of Bipedal Standing with Multi-channel EMG
Spinal cord stimulation has enabled humans with motor complete spinal cord
injury (SCI) to independently stand and recover some lost autonomic function.
Quantifying the quality of bipedal standing under spinal stimulation is
important for spinal rehabilitation therapies and for new strategies that seek
to combine spinal stimulation and rehabilitative robots (such as exoskeletons)
in real time feedback. To study the potential for automated electromyography
(EMG) analysis in SCI, we evaluated the standing quality of paralyzed patients
undergoing electrical spinal cord stimulation using both video and
multi-channel surface EMG recordings during spinal stimulation therapy
sessions. The quality of standing under different stimulation settings was
quantified manually by experienced clinicians. By correlating features of the
recorded EMG activity with the expert evaluations, we show that multi-channel
EMG recording can provide accurate, fast, and robust estimation for the quality
of bipedal standing in spinally stimulated SCI patients. Moreover, our analysis
shows that the total number of EMG channels needed to effectively predict
standing quality can be reduced while maintaining high estimation accuracy,
which provides more flexibility for rehabilitation robotic systems to
incorporate EMG recordings.
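A minimal sketch of the kind of analysis described above: a per-channel RMS feature is extracted from each EMG window and regressed against expert standing-quality scores; the synthetic data, the RMS feature and the ordinary least-squares fit are illustrative assumptions rather than the study's actual pipeline.

# Correlate simple per-channel EMG features with expert standing-quality scores.
# Synthetic data stands in for the recordings; a least-squares fit stands in for
# the feature-to-score mapping explored in the study.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 40, 16, 2000
emg = rng.normal(size=(n_trials, n_channels, n_samples))   # surface EMG windows
scores = rng.uniform(1, 5, size=n_trials)                   # expert quality ratings

rms = np.sqrt(np.mean(emg**2, axis=2))                      # one RMS feature per channel
X = np.hstack([rms, np.ones((n_trials, 1))])                # add intercept column
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)           # least-squares fit
predicted = X @ coef
print("correlation with expert scores:", np.corrcoef(predicted, scores)[0, 1])

Channel reduction can then be explored by refitting on subsets of the RMS columns and checking how far the correlation degrades.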
The LIFE2 final project report
Executive summary: The first phase of LIFE (Lifecycle Information For E-Literature) made a major contribution to
understanding the long-term costs of digital preservation, an essential step in helping
institutions plan for the future. The LIFE work models the digital lifecycle and calculates the
costs of preserving digital information for future years. Organisations can apply this process
in order to understand costs and plan effectively for the preservation of their digital
collections.
The second phase of the LIFE Project, LIFE2, has refined the LIFE Model, adding three new
exemplar Case Studies to further build upon LIFE1. LIFE2 is an 18-month JISC-funded
project between UCL (University College London) and The British Library (BL), supported
by the LIBER Access and Preservation Divisions. LIFE2 began in March 2007 and was
completed in August 2008.
The LIFE approach has been validated by a full independent economic review and has
successfully produced an updated lifecycle costing model (LIFE Model v2) and digital
preservation costing model (GPM v1.1). The LIFE Model has been tested with three further
Case Studies, including institutional repositories (SHERPA-LEAP), digital preservation
services (SHERPA DP) and a comparison of analogue and digital collections (British Library
Newspapers). These Case Studies were useful for scenario building and have fed back into
both the LIFE Model and the LIFE Methodology.
The experiences of implementing the Case Studies indicated that enhancements made to the
LIFE Methodology, Model and associated tools have simplified the costing process. Mapping
a specific lifecycle to the LIFE Model is not always a straightforward process, but the revised and
more detailed Model has reduced ambiguity. The costing templates, which were refined
throughout the process of developing the Case Studies, ensure clear articulation of both
working and cost figures, and facilitate comparative analysis between different lifecycles.
The LIFE work has been successfully disseminated throughout the digital preservation and
HE communities. Early adopters of the work include the Royal Danish Library, State
Archives and the State and University Library, Denmark, as well as the LIFE2 Project partners.
Furthermore, interest in the LIFE work has not been limited to these sectors, with interest in
LIFE expressed by local government, records offices, and private industry. LIFE has also
provided input into the LC-JISC Blue Ribbon Task Force on the Economic Sustainability of
Digital Preservation.
Moving forward, our ability to cost the digital preservation lifecycle will require further
investment in costing tools and models. Developments in estimative models will be needed to
support planning activities, both at a collection management level and at a later preservation
planning level once a collection has been acquired. To support these developments, a
greater volume of raw cost data will be required to inform and test new cost models. This
volume of data cannot be gathered via the Case Study approach, and the LIFE team would
suggest that a software tool is needed to provide the volume of costing data necessary for a
truly accurate predictive model.
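As a toy illustration of lifecycle costing in the spirit of the LIFE Model, the sketch below sums hypothetical per-stage annual costs over a planning horizon; the stage names only loosely follow LIFE terminology and the figures are invented, not outputs of the LIFE Model itself.

# Sum hypothetical per-stage annual costs over a planning horizon.
# Stage names loosely follow LIFE terminology; all figures are invented.
annual_costs = {
    "acquisition": 1200.0,
    "ingest": 800.0,
    "bit-stream preservation": 300.0,
    "content preservation": 450.0,
    "access": 250.0,
}
years = 10
lifecycle_cost = sum(cost * years for cost in annual_costs.values())
print(f"estimated {years}-year lifecycle cost: {lifecycle_cost:.2f}")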
Invest to Save: Report and Recommendations of the NSF-DELOS Working Group on Digital Archiving and Preservation
Digital archiving and preservation are important areas for research and development, but there is no agreed-upon set of priorities or coherent plan for research in this area. Research projects in this area tend to be small and driven by particular institutional problems or concerns. As a consequence, proposed solutions from experimental projects and prototypes tend not to scale to millions of digital objects, nor do the results from disparate projects readily build on each other. It is also unclear whether it is worthwhile to seek general solutions or whether different strategies are needed for different types of digital objects and collections. The lack of coordination in both research and development means that there are some areas where researchers are reinventing the wheel while other areas are neglected.
Digital archiving and preservation is an area that will benefit from an exercise in analysis, priority setting, and planning for future research. The Working Group aims to survey current research activities, identify gaps, and develop a white paper proposing future research directions in the area of digital preservation. Some of the potential areas for research include repository architectures and inter-operability among digital archives; automated tools for capture, ingest, and normalization of digital objects; and harmonization of preservation formats and metadata. There may also be opportunities for the development of commercial products in the areas of mass storage systems, repositories and repository management systems, and data management software and tools.
National Mesothelioma Virtual Bank: A standard based biospecimen and clinical data resource to enhance translational research
Background: Advances in translational research have led to the need for well-characterized biospecimens for research. The National Mesothelioma Virtual Bank is an initiative which collects annotated datasets relevant to human mesothelioma to develop an enterprising biospecimen resource to fulfill researchers' needs. Methods: The National Mesothelioma Virtual Bank architecture is based on three major components: (a) common data elements (based on the College of American Pathologists protocol and North American Association of Central Cancer Registries standards), (b) clinical and epidemiologic data annotation, and (c) data query tools. These tools work interoperably to standardize the entire process of annotation. The National Mesothelioma Virtual Bank tool is based upon the caTISSUE Clinical Annotation Engine, developed by the University of Pittsburgh in cooperation with the Cancer Biomedical Informatics Grid™ (caBIG™, see http://cabig.nci.nih.gov). This application provides a web-based system for annotating, importing and searching mesothelioma cases. The underlying information model is constructed utilizing Unified Modeling Language class diagrams, hierarchical relationships and Enterprise Architect software. Results: The database provides researchers with real-time access to richly annotated specimens and integral information related to mesothelioma. Data disclosure is tightly regulated according to users' authorization and the policies of the participating institutions, subject to local Institutional Review Board and regulatory committee review. Conclusion: The National Mesothelioma Virtual Bank currently has over 600 annotated cases available for researchers, including paraffin-embedded tissues, tissue microarrays, serum and genomic DNA. The National Mesothelioma Virtual Bank is a virtual biospecimen registry with robust translational biomedical informatics support to facilitate basic science, clinical, and translational research. Furthermore, it protects patient privacy by disclosing only de-identified datasets to assure that biospecimens can be made accessible to researchers. © 2008 Amin et al; licensee BioMed Central Ltd.
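As a small illustration of the de-identified disclosure described above, the sketch below strips direct identifiers from a case record before it is returned to a researcher; the field names and identifier list are hypothetical and do not reflect the caTISSUE or NMVB schema.

# Strip direct identifiers from a case record before disclosure to researchers.
# Field names and the identifier list are hypothetical.
DIRECT_IDENTIFIERS = {"name", "medical_record_number", "date_of_birth", "address"}

def deidentify(record):
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

case = {
    "case_id": "NMVB-0001",
    "name": "EXAMPLE ONLY",
    "medical_record_number": "123456",
    "histology": "epithelioid mesothelioma",
    "specimen_types": ["paraffin block", "tissue microarray", "serum", "genomic DNA"],
}
print(deidentify(case))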