Towards structured sharing of raw and derived neuroimaging data across existing resources
Data sharing efforts increasingly contribute to the acceleration of
scientific discovery. Neuroimaging data is accumulating in distributed,
domain-specific databases, yet there is currently neither an integrated access
mechanism nor an accepted format for the meta-data that is critical for making
use of the combined, available neuroimaging data. In this
manuscript, we present work from the Derived Data Working Group, an open-access
group sponsored by the Biomedical Informatics Research Network (BIRN) and the
International Neuroinformatics Coordinating Facility (INCF), focused on
practical tools for distributed access to neuroimaging data. The working group develops
models and tools facilitating the structured interchange of neuroimaging
meta-data and is making progress towards a unified set of tools for such data
and meta-data exchange. We report on the key components required for integrated
access to raw and derived neuroimaging data as well as associated meta-data and
provenance across neuroimaging resources. The components include (1) a
structured terminology that provides semantic context to data, (2) a formal
data model for neuroimaging with robust tracking of data provenance, (3) a web
service-based application programming interface (API) that provides a
consistent mechanism to access and query the data model, and (4) a provenance
library that can be used for the extraction of provenance data by image
analysts and imaging software developers. We believe that the framework and set
of tools outlined in this manuscript have great potential for solving many of
the issues the neuroimaging community faces when sharing raw and derived
neuroimaging data across the various existing database systems for the purpose
of accelerating scientific discovery.
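To make the components concrete, here is a minimal sketch of how a derived result might carry a terminology concept and a provenance chain. The class names, fields and tool labels are illustrative assumptions, not the working group's actual data model or API.

```python
# A minimal sketch (not the working group's actual schema) of how a derived
# neuroimaging result might carry structured terminology and provenance.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceStep:
    tool: str          # e.g. a named analysis tool (hypothetical label)
    version: str
    parameters: dict   # the settings the step was run with

@dataclass
class DerivedImage:
    subject_id: str
    term: str                 # concept from a shared terminology
    source_files: List[str]   # raw inputs the value was derived from
    provenance: List[ProvenanceStep] = field(default_factory=list)

record = DerivedImage(
    subject_id="sub-001",
    term="gray matter volume",
    source_files=["sub-001_T1w.nii.gz"],
    provenance=[ProvenanceStep("segmentation-tool", "6.0", {"segments": 3})],
)
print(record.term, "traced through", len(record.provenance), "processing step(s)")
```

A web-service API in the spirit of component (3) would serialize and query records of roughly this shape.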
The benefits of in silico modeling to identify possible small-molecule drugs and their off-target interactions
Accepted for publication in a future issue of Future Medicinal Chemistry.
Research into the use of small molecules as drugs continues to be a key driver in the development of molecular databases, computer-aided drug design software and collaborative platforms. The evolution of computational approaches is driven by the essential criteria that a drug molecule has to fulfill, from affinity to targets to minimal side effects, while having adequate absorption, distribution, metabolism, and excretion (ADME) properties. A combination of ligand- and structure-based drug development approaches is already used to obtain consensus predictions of small-molecule activities and their off-target interactions. Further integration of these methods into easy-to-use workflows informed by systems biology could realize the full potential of available data in drug discovery and reduce the attrition of drug candidates.
Peer reviewed
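As a concrete illustration of the consensus idea, the sketch below combines a ligand-based similarity score with a structure-based docking score into a single rank. The weighting scheme, score ranges and molecule names are assumptions for illustration, not taken from the paper.

```python
# A minimal sketch of consensus scoring: combine a ligand-based similarity
# score and a structure-based docking score into one rank. Both scores are
# assumed to be pre-normalized to [0, 1]; names and weights are illustrative.
def consensus_score(similarity: float, docking: float, w: float = 0.5) -> float:
    """Weighted average of two scores already normalized to [0, 1]."""
    return w * similarity + (1 - w) * docking

candidates = {
    "mol_A": (0.91, 0.72),   # (ligand-based, structure-based)
    "mol_B": (0.64, 0.88),
}
ranked = sorted(candidates, key=lambda m: consensus_score(*candidates[m]), reverse=True)
print(ranked)  # ['mol_A', 'mol_B'] with the default equal weighting
```

In practice the two scores come from different tools on different scales, so normalizing them onto a common scale before averaging is the step that makes the consensus meaningful.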
Information Outlook, October 2006
Volume 10, Issue 10
https://scholarworks.sjsu.edu/sla_io_2006/1009/thumbnail.jp
The Dark Energy Survey Data Management System
The Dark Energy Survey collaboration will study cosmic acceleration with a
5000 deg^2 grizY survey in the southern sky over 525 nights from 2011-2016. The
DES data management (DESDM) system will be used to process and archive these
data and the resulting science ready data products. The DESDM system consists
of an integrated archive, a processing framework, an ensemble of astronomy
codes and a data access framework. We are developing the DESDM system for
operation in the high performance computing (HPC) environments at NCSA and
Fermilab. Operating the DESDM system in an HPC environment offers both speed
and flexibility. We will employ it for our regular nightly processing needs,
and for more compute-intensive tasks such as large scale image coaddition
campaigns, extraction of weak lensing shear from the full survey dataset, and
massive seasonal reprocessing of the DES data. Data products will be available
to the Collaboration and later to the public through a virtual-observatory
compatible web portal. Our approach leverages investments in publicly available
HPC systems, greatly reducing hardware and maintenance costs to the project,
which must deploy and maintain only the storage, database platforms and
orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we
tested the current DESDM system on both simulated and real survey data. We used
TeraGrid to process 10 simulated DES nights (3 TB of raw data), ingesting and
calibrating approximately 250 million objects into the DES Archive database. We
also used DESDM to process and calibrate over 50 nights of survey data acquired
with the Mosaic2 camera. Comparison to truth tables in the case of the
simulated data and internal crosschecks in the case of the real data indicate
that astrometric and photometric data quality is excellent.
Comment: To be published in the proceedings of the SPIE conference on
Astronomical Instrumentation (held in Marseille in June 2008). This preprint
is made available with the permission of SPIE. Further information, together
with a preprint containing full-quality images, is available at
http://desweb.cosmology.uiuc.edu/wik
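As a rough illustration of the nightly flow described above (ingest raw exposures, calibrate, catalog objects into the archive), here is a toy sketch; the stage names, data and numbers are invented for illustration and are not the actual DESDM framework.

```python
# A toy sketch of a nightly processing pass in the spirit of DESDM
# (ingest -> calibrate -> catalog); everything here is illustrative.
def ingest(night: str) -> list:
    """Pretend to pull the night's raw exposures from the telescope feed."""
    return [f"{night}_exp{i:03d}.fits" for i in range(3)]

def calibrate(exposure: str) -> dict:
    """Pretend astrometric/photometric calibration of one exposure."""
    return {"exposure": exposure, "zeropoint": 30.0}

def catalog(calibrated: dict) -> int:
    """Pretend object extraction; returns a count loaded into the archive."""
    return 250

total = sum(catalog(calibrate(e)) for e in ingest("20071015"))
print(f"night processed: {total} objects ingested into the archive")
```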
BEAT: An Open-Source Web-Based Open-Science Platform
With the increased interest in computational sciences, machine learning (ML),
pattern recognition (PR) and big data, governmental agencies, academia and
manufacturers are overwhelmed by the constant influx of new algorithms and
techniques promising improved performance, generalization and robustness.
Sadly, result reproducibility is often an overlooked feature accompanying
original research publications, competitions and benchmark evaluations. The
main reasons behind such a gap arise from natural complications in research and
development in this area: the distribution of data may be a sensitive issue;
software frameworks are difficult to install and maintain; test protocols may
involve a potentially large set of intricate steps that are difficult to
handle. Given the rising complexity of research challenges and the constant
increase in data volume, the conditions for achieving reproducible research in
the domain are also increasingly difficult to meet.
To bridge this gap, we built an open platform for research in computational
sciences related to pattern recognition and machine learning, to help with the
development, reproducibility and certification of results obtained in the
field. By making use of such a system, academic, governmental or industrial
organizations enable users to easily and socially develop processing
toolchains, re-use data, algorithms and workflows, and compare results from
distinct algorithms and/or parameterizations with minimal effort. This article
presents such a platform and discusses some of its key features, uses and
limitations. We overview a currently operational prototype and provide design
insights.
Comment: References to papers published on the platform incorporate
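The toolchain idea lends itself to a small illustration: algorithms as interchangeable blocks wired into a reproducible pipeline. The runner and block names below are assumptions for illustration, not the BEAT platform's actual interface.

```python
# A minimal sketch of a processing toolchain: small blocks wired in sequence,
# each step logged so a run can be inspected and reproduced. Illustrative only.
from typing import Callable, List

def run_toolchain(blocks: List[Callable], data):
    """Feed the output of each block into the next, logging every step."""
    for block in blocks:
        data = block(data)
        print(f"{block.__name__}: -> {data}")
    return data

def preprocess(samples):  return [s.strip().lower() for s in samples]
def featurize(samples):   return [len(s) for s in samples]
def score(features):      return sum(features) / len(features)

run_toolchain([preprocess, featurize, score], ["  Alpha ", "Beta"])
```

Because the blocks share nothing but their input/output contract, swapping one algorithm for another (or re-running with different parameterizations) does not disturb the rest of the chain, which is what makes side-by-side comparison cheap.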
Impliance: A Next Generation Information Management Appliance
"While the database management system has been remarkably successful in
building a large market and adapting to the changes of the last three decades,
its impact on the broader market of information management
is surprisingly limited. If we were to design an information management system
from scratch, based upon today's requirements and hardware capabilities, would
it look anything like today's database systems?" In this paper, we introduce
Impliance, a next-generation information management system consisting of
hardware and software components integrated to form an easy-to-administer
appliance that can store, retrieve, and analyze all types of structured,
semi-structured, and unstructured information. We first summarize the trends
that will shape information management for the foreseeable future. Those trends
imply three major requirements for Impliance: (1) to be able to store, manage,
and uniformly query all data, not just structured records; (2) to be able to
scale out as the volume of this data grows; and (3) to be simple and robust in
operation. We then describe four key ideas that are uniquely combined in
Impliance to address these requirements, namely the ideas of: (a) integrating
software and off-the-shelf hardware into a generic information appliance; (b)
automatically discovering, organizing, and managing all data - unstructured as
well as structured - in a uniform way; (c) achieving scale-out by exploiting
simple, massively parallel processing; and (d) virtualizing compute and storage
resources to unify, simplify, and streamline the management of Impliance.
Impliance is an ambitious, long-term effort to define simpler, more robust, and
more scalable information systems for tomorrow's enterprises.
Comment: This article is published under a Creative Commons License Agreement
(http://creativecommons.org/licenses/by/2.5/). You may copy, distribute,
display, and perform the work, make derivative works and make commercial use
of the work, but you must attribute the work to the author and CIDR 2007.
3rd Biennial Conference on Innovative Data Systems Research (CIDR), January
7-10, 2007, Asilomar, California, USA
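Idea (c), scale-out through simple, massively parallel processing, can be sketched in a few lines: partition the data and process the partitions independently. The workload and partitioning below are illustrative assumptions, not Impliance's actual engine.

```python
# A toy sketch of scale-out by simple parallelism: split the records into
# independent partitions and aggregate the per-partition results. Illustrative.
from concurrent.futures import ProcessPoolExecutor

def count_matches(partition: list) -> int:
    """Per-partition work: count records mentioning a term."""
    return sum("error" in record for record in partition)

if __name__ == "__main__":
    records = [f"record {i}: {'error' if i % 7 == 0 else 'ok'}" for i in range(10_000)]
    partitions = [records[i::4] for i in range(4)]   # 4-way partitioning
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(count_matches, partitions))
    print(total)  # matches a serial count over all records
```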
The Country-specific Organizational and Information Architecture of ERP Systems at Globalised Enterprises
Competition in the market forces companies to adapt to a changing environment. Most recently, the economic and financial crisis has accelerated changes in both the business and IT models of enterprises. The forces of globalization and internationalization motivate the restructuring of business processes and, consequently, of IT processes. To depict these changes in a unified framework, we need the concept of Enterprise Architecture: a theoretical approach that deals with the various tiers, aspects and views of business processes and the different layers of application, software and hardware systems. The paper outlines a wide-ranging theoretical background for analyzing the re-engineering and re-organization of ERP systems at international or transnational companies in the middle-sized EU member states. The research carried out so far has identified the typical structural changes and the models for internal business networks, along with the modifications that reflect centralization, decentralization and hybrid approaches. Based on the results obtained recently, a future research program has been drawn up to deepen our understanding of trends within the world of ERP systems.
Keywords: Information System; ERP; Enterprise Resource Planning; Enterprise Architecture; Globalization; Centralization; Decentralization; Hybrid