    ILR Faculty Publications 2011-12

    Pin-Align

    To date, few tools for aligning protein-protein interaction networks have been proposed. These tools typically find conserved interaction patterns using various local or global alignment algorithms. However, improving the speed, scalability, simplicity, and accuracy of network alignment tools remains an active area of research. In this paper, we introduce Pin-Align, a new tool for local alignment of protein-protein interaction networks. Pin-Align's accuracy is tested on protein interaction networks from IntAct, DIP, and the Stanford Network Database, and the results are compared with other well-known algorithms. It is shown that Pin-Align has higher sensitivity and specificity in terms of KEGG Ortholog groups.
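    An aligned pair can be scored as correct when both proteins fall into the same KEGG Ortholog (KO) group, which yields simple sensitivity and specificity figures. The Python sketch below illustrates one such scoring convention; the pair list, the annotation dict, and the exact definitions are illustrative assumptions, not Pin-Align's published evaluation code.

    def evaluate_alignment(aligned_pairs, ko_of):
        """aligned_pairs: (protein_a, protein_b) tuples produced by the aligner.
        ko_of: dict mapping a protein ID to its KO group ID."""
        correct = 0        # aligned pairs whose proteins share a KO group
        annotated = 0      # aligned pairs where both proteins carry a KO label
        recovered = set()  # KO groups hit by at least one correct pair
        for a, b in aligned_pairs:
            ka, kb = ko_of.get(a), ko_of.get(b)
            if ka is None or kb is None:
                continue   # unannotated proteins are excluded from scoring
            annotated += 1
            if ka == kb:
                correct += 1
                recovered.add(ka)
        specificity = correct / annotated if annotated else 0.0
        sensitivity = len(recovered) / len(set(ko_of.values()))
        return sensitivity, specificity

    pairs = [("P1", "Q1"), ("P2", "Q7"), ("P3", "Q3")]
    ko = {"P1": "K001", "Q1": "K001", "P2": "K002", "Q7": "K009",
          "P3": "K003", "Q3": "K003"}
    print(evaluate_alignment(pairs, ko))  # -> (0.5, 0.666...)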

    Biological investigation and predictive modelling of foaming in anaerobic digesters

    Anaerobic digestion (AD) of waste has been identified as a leading technology for greener renewable energy generation as an alternative to fossil fuels. AD reduces waste through biochemical processes, converting it to biogas that can be used as a source of renewable energy, while the residual bio-solids can be used to enrich the soil. A problem with AD, however, is foaming and the associated biogas loss. Tackling this problem effectively requires identifying and controlling the factors that trigger and promote foaming. In this research, laboratory experiments were first carried out to distinguish the causal factors of foaming from the exacerbating ones. The impact of the identified causal factors (organic loading rate, OLR, and volatile fatty acids, VFA) on foaming occurrence was then monitored and recorded. Further analysis of foaming and non-foaming sludge samples by metabolomics techniques confirmed that OLR and VFA are the prime causes of foaming in AD. In addition, metagenomics analysis showed that the phyla Bacteroidetes and Proteobacteria were predominant, with relative abundances of 30% and 29% respectively, while the phylum Actinobacteria, which contains the most prominent filamentous foam-causing bacteria such as Nocardia amarae and Microthrix parvicella, had a very low and consistent relative abundance of 0.9%, indicating that foaming in the AD systems studied was not triggered by filamentous bacteria. Consequently, data-driven models to predict foam formation were developed from the experimental data, with OLR and VFA in the feed as inputs and foaming occurrence as output. The models were extensively validated and assessed using the mean squared error (MSE), root mean squared error (RMSE), R2, and mean absolute error (MAE). A Levenberg-Marquardt neural network proved to be the best model for foaming prediction in AD, with RMSE = 5.49, MSE = 30.19 and R2 = 0.9435. The significance of this study is the development of a parsimonious and effective modelling tool that enables AD operators to proactively avert foaming, since the two model inputs (OLR and VFA) can easily be adjusted through a simple programmable logic controller.
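    As a rough illustration of this modelling setup, the Python sketch below fits a one-hidden-layer network with OLR and VFA as inputs using SciPy's Levenberg-Marquardt solver and reports the same error metrics. The synthetic data, network size, and variable names are assumptions for demonstration, not the study's code or results.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: 60 samples of (OLR, VFA) and a foaming measure.
    X = rng.uniform([1.0, 200.0], [6.0, 4000.0], size=(60, 2))
    y = 10.0 * np.tanh(0.8 * X[:, 0] + 0.002 * X[:, 1] - 5.0) + 12.0
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)  # normalise the inputs

    H = 4  # hidden units; 4*H + 1 = 17 free parameters in total

    def unpack(p):
        return (p[:2 * H].reshape(H, 2),  # input-to-hidden weights
                p[2 * H:3 * H],           # hidden biases
                p[3 * H:4 * H],           # hidden-to-output weights
                p[4 * H])                 # output bias

    def predict(p, X):
        W1, b1, W2, b2 = unpack(p)
        return np.tanh(X @ W1.T + b1) @ W2 + b2

    fit = least_squares(lambda p: predict(p, Xn) - y,
                        rng.normal(scale=0.5, size=4 * H + 1),
                        method="lm")  # Levenberg-Marquardt

    pred = predict(fit.x, Xn)
    mse = np.mean((pred - y) ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(pred - y))
    r2 = 1.0 - mse / np.var(y)
    print(f"RMSE={rmse:.2f} MSE={mse:.2f} MAE={mae:.2f} R2={r2:.4f}")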

    PRIVACY LITERACY 2.0: A THREE-LAYERED APPROACH: A COMPREHENSIVE LITERATURE REVIEW

    With technological advancement, privacy has become a concept that is difficult to define, understand, and research. Social networking sites, as one example of such advancement, have blurred the lines between physical and virtual spaces. Sharing and self-disclosure with our networks of people, and at times with strangers, is becoming a socially accepted norm. However, the vast sharing of personal data with others on social networking sites engenders concern over data loss, concern about unintended audiences, and opportunities for mass surveillance. Through a dialectical pluralism lens and following the comprehensive literature review methodological framework, the purpose of this study was to map and define what it means to be a privacy-literate citizen, with the goal of informing privacy research and educational practices. The findings of this study revealed that placing sole responsibility on the individual user to manage their privacy is an inefficient model: users are guided by unmasked and hidden software practices that they do not fully comprehend. Another finding was a noticeable increase in citizen targeting and liquefied surveillance, which have become accepted practices in society. Liquefied surveillance takes any shape; it is both concrete and discrete; and it happens through complete profile data collection as well as raw data aggregation. Privacy management, as a research model or management approach, neither prevents data from leaking nor stops surveillance. For privacy to succeed, privacy engineering should include citizens' opinions and require high levels of data transparency prior to any data-collection software design. The implications of this study are that privacy literacy 2.0 is a combination of several interconnected skills, such as knowledge of the law, software, platform architecture, and the psychology of self-disclosure.

    Generating Reliable and Responsive Observational Evidence: Reducing Pre-analysis Bias

    A growing body of evidence generated from observational data has demonstrated the potential to influence decision-making and improve patient outcomes. For observational evidence to be actionable, however, it must be generated reliably and in a timely manner. Large distributed observational data networks enable research on diverse patient populations at scale and support the development of sound new methods to improve the reproducibility and robustness of real-world evidence. Nevertheless, the problems of generalizability, portability, and scalability persist and compound. Because analytical methods only partially address bias, reliable observational research (especially in networks) must address bias at the design stage (i.e., pre-analysis bias), including the strategies for identifying patients of interest and for defining comparators. This thesis synthesizes and enumerates a set of challenges in addressing pre-analysis bias in observational studies and presents mixed-methods approaches and informatics solutions for overcoming a number of these obstacles. We develop frameworks, methods, and tools for scalable and reliable phenotyping, including data source granularity estimation, comprehensive concept set selection, index date specification, and structured-data-based patient review for phenotype evaluation. We also cover potential bias in the unexposed comparator definition, including systematic background rate estimation and interpretation, and the definition and evaluation of the unexposed comparator. We propose that the use of the standardized approaches and methods described in this thesis not only improves the reliability but also increases the responsiveness of observational evidence. To test this hypothesis, we designed and piloted a Data Consult Service, a service that generates new on-demand evidence at the bedside. We demonstrate that it is feasible to generate reliable evidence that addresses clinicians' information needs in a robust and timely fashion, and we provide an analysis of the current limitations and of the future steps needed to scale such a service.
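    As a small, self-contained illustration of the background-rate idea mentioned above, the sketch below computes crude incidence per 1,000 person-years from a stratified cohort table. The table and field names are hypothetical, not taken from the thesis or from any existing tooling.

    from dataclasses import dataclass

    @dataclass
    class CohortStratum:
        label: str           # e.g. an age/sex stratum
        events: int          # outcome events observed during follow-up
        person_years: float  # total follow-up time in the stratum

    def incidence_per_1000py(s: CohortStratum) -> float:
        # Crude background rate: events divided by person-time at risk.
        return 1000.0 * s.events / s.person_years

    strata = [
        CohortStratum("F 40-49", events=12, person_years=8500.0),
        CohortStratum("M 40-49", events=21, person_years=7900.0),
    ]
    for s in strata:
        print(f"{s.label}: {incidence_per_1000py(s):.2f} per 1,000 person-years")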

    Defining complex rule-based models in space and over time

    Computational biology seeks to understand complex spatio-temporal phenomena across multiple levels of structural and functional organisation. However, questions raised in this context are difficult to answer without modelling methodologies that are intuitive and approachable for non-expert users. Stochastic rule-based modelling languages such as Kappa have been the focus of recent attention in developing complex biological models that are nevertheless concise, comprehensible, and easily extensible. We look at further developing Kappa in terms of how we might define complex models along both the spatial and the temporal axes. In defining complex models in space, we address the assumption that the reaction mixture of a Kappa model is homogeneous and well-mixed. We propose evolutions of the current iteration of Spatial Kappa to streamline the process of defining spatial structures for different modelling purposes. We also verify the existing implementation against established results in diffusion and narrow escape, laying the foundations for querying a wider range of spatial systems with greater confidence in the accuracy of the results. In defining complex models over time, we draw attention to how non-modelling specialists might define, verify, and analyse rules throughout a rigorous model development process. We propose structured visual methodologies for developing and maintaining knowledge base data structures that incorporate the information needed to construct a Kappa rule-based model. We further extend these methodologies to deal with biological systems defined by the activity of synthetic genetic parts, with the hope of providing tractable operations that allow multiple users to contribute to their development over time according to their areas of expertise. Throughout the thesis we pursue the aim of bridging the divide between information sources, such as literature and bioinformatics databases, and the abstracting decisions inherent in a model. We consider methodologies for automating the construction of spatial models, providing traceable links from source to model element, and updating a model via an iterative and collaborative development process. By providing frameworks for modellers from multiple domains of expertise to work with the language, we reduce the entry barrier and open the field to further questions and new research.
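    To make the stochastic semantics concrete: a Kappa binding rule such as A(x), B(y) -> A(x!1), B(y!1) @ k is executed by Gillespie-style simulation. The Python sketch below runs that dynamic for a single mass-action binding rule; the counts and rate constant are chosen purely for illustration, and this is not Spatial Kappa's implementation.

    import random

    def simulate(a=100, b=100, ab=0, k=0.001, t_end=50.0, seed=1):
        """Gillespie simulation of one mass-action binding rule A + B -> AB."""
        random.seed(seed)
        t = 0.0
        while True:
            propensity = k * a * b  # applicable rule instances times the rate
            if propensity == 0.0:
                break               # no free A or B left: the rule cannot fire
            t += random.expovariate(propensity)  # exponential waiting time
            if t > t_end:
                break               # next event falls outside the time horizon
            a, b, ab = a - 1, b - 1, ab + 1      # apply the rule once
        return a, b, ab

    print(simulate())  # final counts of free A, free B, and bound AB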