
    Acquiring symbolic design optimization problem reformulation knowledge: On computable relationships between design syntax and semantics

    This thesis presents a computational method for the inductive inference of explicit and implicit semantic design knowledge from the symbolic-mathematical syntax of design formulations using an unsupervised pattern recognition and extraction approach. Existing research shows that AI/machine-learning-based design computation approaches require either high levels of knowledge engineering or large training databases to acquire problem reformulation knowledge. The method presented in this thesis addresses these methodological limitations. The thesis develops, tests, and evaluates ways in which the method may be employed for design problem reformulation. The method is based on the linear-algebraic factorization method Singular Value Decomposition (SVD), dimensionality reduction, and similarity measurement through unsupervised clustering. It calculates linear approximations of the associative patterns of symbol co-occurrences in a design problem representation to infer induced coupling strengths between variables, constraints, and system components. Unsupervised clustering of these approximations is used to identify useful reformulations. Together, these two components automate a range of reformulation tasks that have traditionally required different solution algorithms. Example reformulation tasks include the selection of linked design variables, parameters, and constraints; design decomposition; modularity and integrative systems analysis; heuristically aiding design “case” identification; topology modeling; and layout planning. The relationship between the syntax of a design representation and the semantic meaning it encodes is an open design theory research question. Based on the results of the method, the thesis presents a set of theoretical postulates on computable relationships between design syntax and semantics. The postulates relate the performance of the method to empirical findings and theoretical insights from cognitive neuroscience and cognitive science on how the human mind engages in symbol processing and on the capacities inherent in symbolic representational systems to encode “meaning”. The performance of the method suggests that semantic “meaning” is a higher-order, global phenomenon that lies distributed in the design representation in explicit and implicit ways. A one-to-one local mapping between a design symbol and its meaning, a largely prevalent approach adopted by many AI and learning algorithms, may not be sufficient to capture and represent this meaning. By changing the theoretical standpoint on how a “symbol” is defined in design representations, it was possible to use a simple set of mathematical ideas to perform unsupervised inductive inference of knowledge in a knowledge-lean and training-lean manner, for a knowledge domain that traditionally relies on “giving” the system complex design domain and task knowledge to perform the same set of tasks.
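    A minimal sketch of the general pipeline described above (a symbol co-occurrence matrix factorized with SVD, followed by unsupervised clustering of the reduced representation) is given below. It is illustrative only: the toy variables, constraints, and clustering choices are assumptions, not the thesis's actual implementation.

```python
# Minimal, illustrative sketch (not the thesis's implementation): infer coupling
# strengths between design variables from symbol co-occurrence in constraints via
# a truncated SVD, then cluster to suggest a decomposition. Names are hypothetical.
import numpy as np
from numpy.linalg import svd
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = design variables, columns = constraints; entry = 1 if the variable
# appears in the constraint's symbolic expression (a co-occurrence matrix).
variables = ["x1", "x2", "x3", "x4"]
A = np.array([
    [1, 1, 0, 0],   # x1 appears in constraints g1, g2
    [1, 1, 0, 0],   # x2 appears in constraints g1, g2
    [0, 0, 1, 1],   # x3 appears in constraints g3, g4
    [0, 0, 1, 1],   # x4 appears in constraints g3, g4
], dtype=float)

# A rank-k approximation captures the dominant association patterns.
U, s, Vt = svd(A, full_matrices=False)
k = 2
Uk = U[:, :k] * s[:k]                      # variables in the reduced space

# Coupling strength = cosine similarity between reduced-space vectors.
unit = Uk / np.linalg.norm(Uk, axis=1, keepdims=True)
coupling = unit @ unit.T

# Unsupervised clustering of the reduced representation suggests a decomposition.
labels = fcluster(linkage(Uk, method="ward"), t=2, criterion="maxclust")
for v, lab in zip(variables, labels):
    print(v, "-> module", lab)
```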

    Proteome characterizations of microbial systems using MS-based experimental and informatics approaches to examine key metabolic pathways, proteins of unknown function, and phenotypic adaptation

    Microbes express complex phenotypes and coordinate activities to build microbial communities. Recent work has focused on understanding the ability of microbial systems to efficiently utilize cellulosic biomass to produce bioenergy-related products. In order to maximize the yield of these bioenergy-related products from a microbial system, it is necessary to understand the underlying molecular mechanisms. The ability of mass spectrometry to precisely identify thousands of proteins from a bacterial source has established mass spectrometry-based proteomics as an indispensable tool for various biological disciplines. This dissertation developed and optimized various proteomics experimental and informatic protocols, and integrated the resulting data with metabolomics, transcriptomics, and genomics in order to understand the systems biology of bioenergy-relevant organisms. Integration of these omics technologies led to an improved understanding of microbial cell-to-cell communication in response to external stimuli, microbial adaptation during deconstruction of lignocellulosic biomass, and proteome diversity when an organism is subjected to different growth conditions. Integrated omics revealed that Clostridium thermocellum accumulates long-chain, branched fatty acids over time in response to cytotoxic inhibitors released during the deconstruction and utilization of switchgrass. This striking feature implies a restructuring of C. thermocellum's cellular membrane as the culture progresses. This membrane remodeling was further examined in a study of the swarming and swimming phenotypes of Paenibacillus polymyxa. The possible roles of phospholipids, hydrolytic enzymes, surfactin, flagellar assembly, chemotaxis, and glycerol metabolism in swarming motility were investigated by integrating lipidomics with proteomics. Extracellular proteome analysis of Caldicellulosiruptor bescii revealed secretome plasticity based on the complexity (mono-/disaccharides vs. polysaccharides) and type of carbon (C5 vs. C6) available to the microorganism. This study further opened an avenue for research to characterize proteins of unknown function (PUFs) specific to growth conditions. To gain a better understanding of the possible functions of PUFs in C. thermocellum, a time-course analysis of C. thermocellum was conducted. Based on the concept of guilt-by-association, protein intensities and their co-expression were used to tease out the functional aspects of PUFs. Clustering trends and network analysis were used to infer potential functions of PUFs. Selected PUFs were further interrogated using phylogeny and structural modeling.
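    A minimal sketch of the guilt-by-association idea described above: proteins are clustered by the similarity of their intensity profiles across a time course, so a protein of unknown function (PUF) that co-clusters with annotated proteins inherits a functional hypothesis. The protein names and intensity values below are hypothetical, and this is not the dissertation's pipeline.

```python
# Illustrative guilt-by-association sketch: cluster proteins by correlation of
# their abundance profiles over a time course. All names/values are hypothetical.
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

proteins = ["CelA (cellulase)", "CelB (cellulase)", "PUF_017", "GroEL (chaperone)"]
# Rows = proteins, columns = normalized intensities at successive time points.
intensity = np.array([
    [1.0, 2.1, 3.9, 4.2],
    [0.9, 2.0, 4.1, 4.0],
    [1.1, 2.2, 3.8, 4.3],   # PUF_017 tracks the cellulases
    [3.0, 3.1, 2.9, 3.0],
])

# Distance = 1 - Pearson correlation between expression profiles.
n = len(proteins)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = 1.0 - pearsonr(intensity[i], intensity[j])[0]

# Hierarchical clustering on the condensed correlation-distance matrix.
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=2, criterion="maxclust")
for p, lab in zip(proteins, labels):
    print(p, "-> cluster", lab)
```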

    Batch and continuous production of stable dense suspensions of drug nanoparticles in a wet stirred media mill

    One way to improve the bioavailability of poorly water-soluble drugs is to reduce the particle size of drug crystals down to the nanoscale via wet stirred media milling. An increase in total surface area per mass loading of the drug and in specific surface area, as well as reduced external mass transfer resistance, allows faster dissolution of a poorly water-soluble drug from nanocrystals. To prevent aggregation of the nanoparticles, polymers and surfactants are dissolved in water and act as stabilizers via adsorption onto the drug crystals. In the last two decades, ample experimental data were generated in the area of wet stirred media milling for the production of drug nanoparticle suspensions. However, a fundamental scientific/engineering understanding of various aspects of this process is still lacking. These challenges include elucidation of the governing mechanism(s) during nanoparticle formation and physical stabilization of the nanosuspension with the use of polymers and surfactants (formulation parameters), understanding the impact of process parameters in the context of first-principle-based models, and production of truly nanosized drug particles (10-100 nm) with acceptable physical stability and minimal contamination from the media. The recirculation mode of milling operation, in which the drug suspension in a holding tank continuously circulates through the stirred media mill, has been commonly used at lab, pilot, and commercial scales. Although the recirculation is continuous, the recirculation mode is overall a batch operation, requiring a significant number of batches for a large-volume pharmaceutical product. Hence, the development and investigation of a truly continuous process should offer significant advantages. To explain the impact of some of the processing parameters, stress intensity and stress number concepts have been widely used in the literature; however, these concepts do not explicitly account for the effect of suspension viscosity. The impact of the processing parameters has not been explained in a predictive and reliable manner. In this dissertation, a comprehensive investigation of the production of griseofulvin nanosuspensions in a wet stirred media mill operating in both the recirculation and continuous modes has been conducted to address the aforementioned fundamental challenges. Griseofulvin has been selected as a model poorly water-soluble BCS Class II drug. The impact of various formulation parameters, such as stabilizer type and loading, as well as processing parameters, such as rotor speed, bead loading, bead size, suspension flow rate, and drug loading, was studied. A major novelty of the present contribution is that the impact of processing and formulation parameters has been analyzed and interpreted using a combined experimental-theoretical (microhydrodynamic model) approach. This comprehensive approach allowed the process to be intensified for the production of sub-100 nm drug particles, which had not previously been produced with top-down approaches in the literature. In addition, a multi-pass mode of continuous operation was developed, and the so-called “Rehbinder effect”, which had not been shown for the breakage of drug particles, was elucidated. The dissertation work (1) indicated the need for a minimum polymeric stabilizer-to-drug ratio for proper stabilization of drug nanosuspensions, as dictated by polymer adsorption and synergistic interactions between a polymeric stabilizer and a surfactant; (2) demonstrated the existence of an optimum polymer concentration from a breakage rate perspective in the presence of a surfactant, which results from the competing effects of viscous dampening and enhanced steric stabilization at higher polymer concentrations; (3) developed a fundamental understanding of the breakage dynamics-processing-formulation relationships and rationalized preparation of a single highly drug-loaded batch (20% or higher) instead of multiple dilute batches; (4) designed an intensified process for faster preparation of sub-100 nm particles with reduced specific energy consumption and media wear (i.e., minimal drug contamination); and (5) provided the first evidence of the Rehbinder effect during the milling of drugs. Not only do the polymers and surfactants provide proper physical stabilization of the nanoparticles in the suspensions, but they also facilitate drug particle breakage. This dissertation also discusses applications of nanosuspensions and practical issues encountered during wet media milling.
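    As a brief worked illustration of the stress-intensity concept cited from the literature above (commonly defined, for the grinding media, as SI proportional to bead diameter cubed times bead density times tip speed squared), the sketch below shows how bead size and rotor tip speed scale the energy available per stressing event. The bead properties and speeds are hypothetical, and this is not the dissertation's microhydrodynamic model.

```python
# Worked illustration (not from the dissertation) of the stress-intensity
# scaling commonly used in the stirred media milling literature:
# SI ~ d_b^3 * rho_b * v_t^2, with d_b the bead diameter, rho_b the bead
# density, and v_t the stirrer tip speed. Numbers below are hypothetical.

def stress_intensity(bead_diameter_m: float, bead_density_kg_m3: float,
                     tip_speed_m_s: float) -> float:
    """Approximate energy per bead-bead stressing event (J, up to a constant)."""
    return bead_diameter_m ** 3 * bead_density_kg_m3 * tip_speed_m_s ** 2

# Example: zirconia-type beads (~6000 kg/m^3), two bead sizes, two tip speeds.
for d_b in (400e-6, 100e-6):          # 400 um vs 100 um beads
    for v_t in (8.0, 12.0):           # tip speeds in m/s
        si = stress_intensity(d_b, 6000.0, v_t)
        print(f"d_b={d_b * 1e6:.0f} um, v_t={v_t} m/s -> SI ~ {si:.2e} J")

# Smaller beads lower the energy per stressing event but increase the number of
# bead-bead contacts, one reason finer beads can favour sub-100 nm product.
```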

    Design, Integration, and Evaluation of IoT-Based Electrochromic Building Envelopes for Visual Comfort and Energy Efficiency

    Electrochromic (EC) glazing has been identified as the next-generation high-performance glazing material for building envelopes due to its dynamic properties, which allow buildings to respond to varying climate conditions. IoT technologies have improved the sensing, communication, and interaction of building environmental data. However, few studies have synthesized the advancements in EC materials and building IoT technologies for better building performance. The challenge lies in the lack of compatible design and simulation tools, a limited understanding of integration, and a paucity of evaluation measures to support the convergence of EC building envelopes and IoT technologies. This research first explores the existing challenges of using EC building envelopes through secondary data analysis and case studies. An IoT-based EC prototype system is developed to demonstrate the feasibility of IoT and EC integration. Functionality, reliability, interoperability, and scalability are assessed through comparisons with four alternative building envelope systems. Nationwide evaluations of EC building performance are conducted to show regional differences and the trade-offs between visual comfort and energy efficiency. A machine learning approach is proposed to solve the predictive EC control problem under random weather conditions. The best prediction models achieve 91.08% mean accuracy on the 16-climate-zone data set. The importance of predictive variables is also measured in each climate zone to develop a better understanding of the effectiveness of climatic sensors. Additionally, a simulation study is conducted to investigate the relationships between design factors and EC building performance. An instantaneous daylight measure is developed to support active daylight control with IoT-based EC building envelopes.
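    As a hedged illustration of the kind of predictive control model and variable-importance analysis described above, the sketch below trains a classifier that maps climatic sensor readings to a discrete EC tint state and reports per-feature importances. The random-forest choice, feature names, and synthetic data are assumptions for illustration, not the study's actual model or data set.

```python
# Illustrative sketch: predict a discrete EC tint state from climatic sensor
# readings and report feature importances. Data and model choice are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 1000, n),    # exterior illuminance (scaled)
    rng.uniform(0, 900, n),     # solar irradiance (W/m^2)
    rng.uniform(-10, 40, n),    # outdoor temperature (C)
    rng.uniform(0, 360, n),     # solar azimuth (deg)
])
# Toy rule standing in for the "ground truth" tint state (0 = clear ... 3 = dark).
y = np.digitize(X[:, 1], [200, 450, 700])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
for name, imp in zip(["illuminance", "irradiance", "temperature", "azimuth"],
                     model.feature_importances_):
    print(f"{name:12s} importance = {imp:.2f}")
```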

    Ancient and historical systems


    Architecture aware parallel programming in Glasgow parallel Haskell (GPH)

    General-purpose computing architectures are evolving quickly to become manycore and hierarchical: i.e., a core can communicate more quickly locally than globally. To be effective on such architectures, programming models must be aware of the communications hierarchy. This thesis investigates a programming model that aims to share the responsibility for task placement, load balance, thread creation, and synchronisation between the application developer and the runtime system. The main contribution of this thesis is the development of four new architecture-aware constructs for Glasgow parallel Haskell that exploit information about task size and aim to reduce communication for small tasks, preserve data locality, or distribute large units of work. We define a semantics for the constructs that specifies the sets of PEs that each construct identifies, and we check four properties of the semantics using QuickCheck. We report a preliminary investigation of architecture-aware programming models that abstract over the new constructs; in particular, we propose architecture-aware evaluation strategies and skeletons. We investigate three common paradigms, namely data parallelism, divide-and-conquer, and nested parallelism, on hierarchical architectures with up to 224 cores. The results show that the architecture-aware programming model consistently delivers better speedup and scalability than existing constructs, together with a dramatic reduction in execution-time variability. We present a comparison of functional multicore technologies that reports some of the first multicore results for the Feedback Directed Implicit Parallelism (FDIP) and semi-explicit parallelism (GpH and Eden) languages. The comparison reflects the growing maturity of the field by systematically evaluating four parallel Haskell implementations on a common multicore architecture, contrasting the programming effort each language requires with the parallel performance delivered. We investigate the minimum thread granularity required to achieve satisfactory performance for three parallel functional language implementations on a multicore platform. The results show that GHC-GUM requires a larger thread granularity than Eden and GHC-SMP, and that the required thread granularity rises as the number of cores rises.
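    The thesis's constructs are GpH/Haskell-specific; the following language-agnostic toy model (written here in Python purely for illustration, and not GpH code) sketches the underlying idea that a construct identifies a set of processing elements (PEs) relative to the current PE and a level of the communication hierarchy. The two-level node/core hierarchy and its sizes are hypothetical.

```python
# Toy model of locality-based PE selection over a two-level communication
# hierarchy (nodes containing cores). Sizes and names are hypothetical.
from itertools import product

CORES_PER_NODE = 28
NODES = 8
ALL_PES = [(node, core) for node, core in product(range(NODES), range(CORES_PER_NODE))]

def pes_within(current, level):
    """PEs reachable cheaply from `current` at the given hierarchy level:
    level 0 = this PE only, level 1 = same node, level 2 = whole machine."""
    node, _core = current
    if level == 0:
        return {current}
    if level == 1:
        return {pe for pe in ALL_PES if pe[0] == node}
    return set(ALL_PES)

# A small task stays local; a large unit of work may be placed machine-wide.
here = (3, 5)
print([len(pes_within(here, level)) for level in (0, 1, 2)])   # -> [1, 28, 224]
```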

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    The Elements of Big Data Value

    This open access book presents the foundations of the Big Data research and innovation ecosystem and the associated enablers that facilitate delivering value from data for business and society. It provides insights into the key elements for research and innovation, technical architectures, business models, skills, and best practices to support the creation of data-driven solutions and organizations. The book is a compilation of selected high-quality chapters covering best practices, technologies, experiences, and practical recommendations on research and innovation for big data. The contributions are grouped into four parts: Part I, Ecosystem Elements of Big Data Value, focuses on establishing the big data value ecosystem using a holistic approach to make it attractive and valuable to all stakeholders; Part II, Research and Innovation Elements of Big Data Value, details the key technical and capability challenges to be addressed in delivering big data value; Part III, Business, Policy, and Societal Elements of Big Data Value, investigates the need to make more efficient use of big data and the understanding that data is an asset with significant potential for the economy and society; and Part IV, Emerging Elements of Big Data Value, explores the critical elements for maximizing the future potential of big data value. Overall, readers are provided with insights that can support them in creating data-driven solutions, organizations, and productive data ecosystems. The material represents the results of a collective effort undertaken by the European data community as part of the Big Data Value Public-Private Partnership (PPP) between the European Commission and the Big Data Value Association (BDVA) to boost data-driven digital transformation.

    System organization and operation in the context of local flexibility markets at distribution level

    9. Industry, innovation and infrastructure

    Network-based methods for biological data integration in precision medicine

    The vast and continuously increasing volume of available biomedical data produced during the last decades opens new opportunities for large-scale modelling of disease biology, facilitating a more comprehensive and integrative understanding of its processes. Nevertheless, this type of modelling requires highly efficient computational systems capable of dealing with such data volumes. Computational approaches commonly used in machine learning and data analysis, namely dimensionality reduction and network-based approaches, have been developed with the goal of effectively integrating biomedical data. Among these methods, network-based machine learning stands out due to its major advantage in terms of biomedical interpretability, providing a highly intuitive framework for the integration and modelling of biological processes. This PhD thesis aims to explore the potential of integrating complementary available biomedical knowledge with patient-specific data to provide novel computational approaches for biomedical scenarios characterized by data scarcity. The primary focus is on studying how high-order graph analysis (i.e., community detection in multiplex and multilayer networks) may help elucidate the interplay of different types of data in contexts where statistical power is heavily impacted by small sample sizes, such as rare diseases and precision oncology. The central aim of this thesis is to illustrate how network biology, among the several data integration approaches with the potential to achieve this task, can play a pivotal role in addressing this challenge given its advantages in molecular interpretability. Through its insights and methodologies, it shows how network biology, and in particular models based on multilayer networks, helps bring the vision of precision medicine to these complex scenarios, providing a natural approach for the discovery of new biomedical relationships that overcomes the difficulties of studying cohorts with limited sample sizes (data-scarce scenarios). Delving into the potential of current artificial intelligence (AI) and network biology applications to address data granularity issues in the precision medicine field, this PhD thesis presents pivotal research works, based on multilayer networks, for the analysis of two rare disease scenarios with specific data granularities, effectively overcoming the classical constraints hindering rare disease and precision oncology research. The first research article presents a personalized medicine study of the molecular determinants of severity in congenital myasthenic syndromes (CMS), a group of rare disorders of the neuromuscular junction (NMJ). The analysis of severity in rare diseases, despite its importance, is typically neglected due to limited data availability. In this study, modelling of biomedical knowledge via multilayer networks allowed an understanding of the functional implications of individual mutations in the cohort under study, as well as their relationships with the causal mutations of the disease and the different levels of severity observed. Moreover, the study presents experimental evidence of the role of a previously unsuspected gene in NMJ activity, validating the hypothetical role predicted using the newly introduced methodologies. The second research article focuses on the applicability of multilayer networks to gene prioritization. Building on concepts for the analysis of different data granularities first introduced in the previous article, the presented research provides a methodology based on the persistence of network community structures across a range of modularity resolutions, effectively providing a new framework for gene prioritization for patient stratification. In summary, this PhD thesis presents major advances in the use of multilayer network-based approaches for the application of precision medicine to data-scarce scenarios, exploring the potential of integrating extensive available biomedical knowledge with patient-specific data.
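    A hedged sketch of the multilayer-network idea described above: several relation layers over the same genes are aggregated into a weighted graph, and community detection is run across a sweep of modularity resolutions so that persistent modules can be inspected. The gene names, edges, and the simple layer aggregation are illustrative assumptions standing in for dedicated multilayer community-detection methods, not the thesis's actual approach.

```python
# Illustrative sketch: aggregate multiplex layers into a weighted graph and
# sweep the modularity resolution of a community detection. Genes/edges are
# hypothetical (drawn from well-known NMJ-related genes for flavour).
import networkx as nx
from networkx.algorithms.community import louvain_communities

layers = {
    "ppi":          [("CHRNE", "RAPSN"), ("RAPSN", "MUSK"), ("MUSK", "DOK7")],
    "coexpression": [("CHRNE", "RAPSN"), ("DOK7", "LRP4"), ("MUSK", "LRP4")],
    "pathway":      [("MUSK", "DOK7"), ("DOK7", "LRP4"), ("CHRNE", "RAPSN")],
}

# Edge weight = number of layers supporting the edge (a simple aggregation).
G = nx.Graph()
for edges in layers.values():
    for u, v in edges:
        w = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=w)

# Sweep the resolution parameter; modules that persist across resolutions are
# candidates for prioritization.
for resolution in (0.5, 1.0, 1.5, 2.0):
    communities = louvain_communities(G, weight="weight",
                                      resolution=resolution, seed=42)
    print(f"resolution={resolution}: {[sorted(c) for c in communities]}")
```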