High-throughput next-generation sequencing technologies foster new cutting-edge computing techniques in bioinformatics
The advent of high-throughput next-generation sequencing technologies has fostered enormous potential applications of supercomputing techniques in genome sequencing, epigenetics, metagenomics, personalized medicine, and the discovery of non-coding RNAs and protein-binding sites. To this end, the 2008 International Conference on Bioinformatics and Computational Biology (Biocomp), part of the 2008 World Congress on Computer Science, Computer Engineering and Applied Computing (Worldcomp), was designed to promote synergistic inter- and multidisciplinary research and education in response to current research trends and advances. The conference, which brought together more than two thousand scientists, medical doctors, engineers, professors and students in Las Vegas, Nevada, USA, during July 14–17, was a great success. Supported by the International Society of Intelligent Biological Medicine (ISIBM), the International Journal of Computational Biology and Drug Design (IJCBDD), the International Journal of Functional Informatics and Personalized Medicine (IJFIPM) and leading research laboratories from Harvard, M.I.T., Purdue, UIUC, UCLA, Georgia Tech, UT Austin, U. of Minnesota, U. of Iowa and elsewhere, the conference received thousands of research papers. Each submitted paper was reviewed by at least three reviewers, and accepted papers were required to address the reviewers' comments. On the basis of peer review alone, the review board and the committee selected 19 high-quality research papers for inclusion in this supplement to BMC Genomics. The conference committee was very grateful for the plenary keynote lectures given by Dr. Brian D. Athey (University of Michigan Medical School), Dr. Vladimir N. Uversky (Indiana University School of Medicine), Dr. David A. Patterson (Member of the United States National Academy of Sciences and National Academy of Engineering, University of California at Berkeley) and Anousheh Ansari (Prodea Systems, Space Ambassador).
The conference's theme of promoting synergistic research and education was achieved successfully.
Modelling and Performance analysis of a Network of Chemical Sensors with Dynamic Collaboration
The problem of environmental monitoring using a wireless network of chemical sensors with a limited energy supply is considered. Since conventional chemical sensors in active mode consume vast amounts of energy, an optimisation problem arises in balancing the energy consumption against the detection capabilities of such a network. A protocol based on "dynamic sensor collaboration" is employed: in the absence of any pollutant, the majority of sensors are in sleep (passive) mode; a sensor is invoked (activated) by wake-up messages from its neighbors only when more information is required. The paper proposes a mathematical model of a network of chemical sensors using this protocol. The model provides valuable insights into the network behavior and near-optimal capacity design (energy consumption against detection). An analytical model of the environment, using turbulent mixing to capture the chaotic fluctuations, intermittency and non-homogeneity of the pollutant distribution, is employed in the study. A binary model of a chemical sensor (a device with threshold detection) is assumed. The outcome of the study is a set of simple analytical tools for sensor network design, optimisation, and performance analysis. (21 pages, 7 figures)
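The sleep/wake-up protocol this abstract describes can be illustrated with a toy simulation. This is a minimal sketch only: the grid layout, plume model, energy costs, sentinel fraction and detection threshold below are all hypothetical choices for illustration, not parameters from the paper.

```python
import random

def concentration(x, y, src=(5, 5), peak=2.0):
    """Toy pollutant plume: peaks at a source and decays with distance."""
    d = abs(x - src[0]) + abs(y - src[1])
    return peak / (1 + d)

def simulate(n=10, p_active=0.1, threshold=0.5,
             e_sleep=0.01, e_awake=1.0, seed=1):
    rng = random.Random(seed)
    # Most sensors sleep; a sparse random set stays awake as sentinels.
    awake = {(x, y) for x in range(n) for y in range(n)
             if rng.random() < p_active}
    # Dynamic collaboration: an awake sensor whose binary detector fires
    # sends wake-up messages to its four grid neighbors.
    woken = set()
    for (x, y) in awake:
        if concentration(x, y) > threshold:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    woken.add((nx, ny))
    active = awake | woken
    # Energy balance: each active sensor pays e_awake, each sleeper e_sleep.
    energy = len(active) * e_awake + (n * n - len(active)) * e_sleep
    detections = sum(concentration(x, y) > threshold for (x, y) in active)
    return len(active), energy, detections

if __name__ == "__main__":
    print(simulate())
```

Varying `p_active` in this sketch exposes the trade-off the paper analyses: more sentinels raise the baseline energy cost but increase the chance that the plume is noticed and neighbors are woken.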
A Large Scale Analysis of Information-Theoretic Network Complexity Measures Using Chemical Structures
This paper investigates information-theoretic network complexity measures that have already been used intensively in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed, but many lack a meaningful interpretation; for example, it is often unclear what kind of structural information they detect. Our main contribution is therefore to shed light on the relatedness between selected information measures for graphs by performing a large-scale analysis using chemical networks. Starting from several sets of real and synthetic chemical structures represented as graphs, we study the relatedness between a classical (partition-based) complexity measure, the topological information content of a graph, and others inferred from a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. In general, high uniqueness is an important and desirable property when designing novel topological descriptors with the potential to be applied to large chemical databases.
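The partition-based idea behind the topological information content mentioned in this abstract can be shown in miniature: partition the vertices of a molecular graph into equivalence classes and take the Shannon entropy of the class sizes. The sketch below is an assumption-laden simplification — it uses a degree partition as a coarse stand-in for the orbit partition of the classical measure, and the example molecule is chosen for illustration only.

```python
import math
from collections import Counter

def degree_partition_entropy(edges):
    """Partition-based complexity sketch: group vertices by degree and
    return the Shannon entropy (in bits) of the class-size distribution.
    NOTE: the degree partition is a coarse stand-in for the automorphism
    orbit partition used by the classical topological information content."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)
    classes = Counter(deg.values())  # number of vertices per degree value
    return -sum((k / n) * math.log2(k / n) for k in classes.values())

# Example: the carbon skeleton of n-butane as a path graph C-C-C-C.
butane = [(0, 1), (1, 2), (2, 3)]
# Two degree classes (two terminal, two interior carbons) -> 1 bit.
print(degree_partition_entropy(butane))  # 1.0
```

A highly symmetric graph (all vertices in one class) scores 0 bits, while a graph whose vertices are all structurally distinct scores the maximum log2(n) — which is the sense in which such measures quantify structural information.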
DOE EPSCoR Initiative in Structural and computational Biology/Bioinformatics
The overall goal of the DOE EPSCoR Initiative in Structural and Computational Biology was to enhance the competitiveness of Vermont research in these scientific areas. To develop self-sustaining infrastructure, we increased the critical mass of faculty, developed shared resources that made junior researchers more competitive for federal research grants, implemented programs to train graduate and undergraduate students who participated in these research areas, and provided seed money for research projects. During the time period funded by this DOE initiative: (1) four new faculty were recruited to the University of Vermont using DOE resources, three in computational biology and one in structural biology; (2) technical support was provided for the computational and structural biology facilities; (3) twenty-two graduate students were directly funded by fellowships; (4) fifteen undergraduate students were supported during the summer; and (5) twenty-eight pilot projects were supported. Taken together, these investments resulted in numerous published papers, many in high-profile journals in the fields, and directly supported competitive extramural funding based on structural or computational biology resources, resulting in 49 million dollars awarded in grants (Appendix I), a 600% return on the investment by DOE, the State and the University.
Improving classification performance of microarray analysis by feature selection and feature extraction methods
In this study, we compared two feature extraction methods (PCA, PLS) and seven feature selection methods (mRMR and its variations, MaxRel, QPFS) on four different classifiers (SVM, RF, KNN, NN). We used ratio-comparison validation for the PCA method and 10-fold cross-validation for both the feature extraction and feature selection methods. We applied the combinations to the Leukemia and Colon data sets and measured accuracy as well as area under the ROC curve. The results illustrate that both feature selection and feature extraction methods can improve the performance of classification tasks on microarray data sets. Combinations of classifier and feature-preprocessing method that greatly improve accuracy as well as AUC are identified in this study.
Master of Science (MSc) in Computational Science
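The filter-selection-plus-classifier pipeline this abstract compares can be sketched on synthetic data. This is a minimal illustration under stated assumptions: a class-mean-difference score stands in for the relevance term of mRMR/MaxRel, plain k-NN stands in for one of the four compared classifiers, and the data are randomly generated, not the Leukemia or Colon sets.

```python
import random
import statistics

def top_k_features(X, y, k):
    """Filter-style feature selection: score each feature by the absolute
    difference of its class means (a simple relevance ranking, standing in
    for mRMR's relevance term) and keep the k best-scoring features."""
    scores = []
    for j in range(len(X[0])):
        col0 = [row[j] for row, lab in zip(X, y) if lab == 0]
        col1 = [row[j] for row, lab in zip(X, y) if lab == 1]
        scores.append(abs(statistics.mean(col0) - statistics.mean(col1)))
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-NN with squared Euclidean distance and majority vote."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(row, x)), lab)
                   for row, lab in zip(X_train, y_train))
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy "microarray": 40 samples x 50 genes; only the first 2 genes carry
# class signal, the rest are noise (hypothetical data, fixed seed).
rng = random.Random(0)
X, y = [], []
for i in range(40):
    lab = i % 2
    X.append([rng.gauss(lab * 2.0, 1.0) if j < 2 else rng.gauss(0.0, 1.0)
              for j in range(50)])
    y.append(lab)

keep = top_k_features(X, y, 2)
Xs = [[row[j] for j in keep] for row in X]
# Leave-one-out accuracy on the selected features.
hits = sum(knn_predict(Xs[:i] + Xs[i + 1:], y[:i] + y[i + 1:], Xs[i]) == y[i]
           for i in range(len(Xs)))
print(keep, hits / len(Xs))
```

Running k-NN on all 50 noisy features instead of the 2 selected ones typically lowers the leave-one-out accuracy here, which mirrors the abstract's point that feature preprocessing can improve classification on high-dimensional microarray data.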