
    Planning as Optimization: Dynamically Discovering Optimal Configurations for Runtime Situations

    The large number of possible configurations of modern software-based systems, combined with the large number of possible environmental situations of such systems, prohibits enumerating all adaptation options at design time and necessitates planning at run time to dynamically identify an appropriate configuration for a situation. While numerous planning techniques exist, they typically assume a detailed state-based model of the system and that the situations that warrant adaptations are known. Both of these assumptions can be violated in complex, real-world systems. As a result, adaptation planning must rely on simple models that capture what can be changed in the system (input parameters) and what can be observed in the system and environment (output and context parameters). We therefore propose planning as optimization: the use of optimization strategies to discover optimal system configurations at runtime for each distinct situation, where the situations themselves are also identified dynamically at runtime. We apply our approach to CrowdNav, an open-source traffic routing system with the characteristics of a real-world system. We identify situations via clustering and conduct an empirical study in CrowdNav that compares Bayesian optimization with two types of evolutionary optimization (NSGA-II and novelty search).
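    The overall loop described above, identify a situation by clustering context observations and then optimize the configuration for that situation, can be sketched in a few lines. This is a toy sketch only: the parameter, objective, and context readings are illustrative, not CrowdNav's actual interface, and plain random search stands in for the Bayesian and evolutionary optimizers compared in the paper.

```python
import random

def cluster_1d(values, k, iters=20):
    """Simple 1-D k-means over context readings; returns centroids."""
    centroids = sorted(random.sample(values, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            groups[i].append(v)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids

def optimize_config(objective, trials=200):
    """Random search over one input parameter (a stand-in for the
    Bayesian/evolutionary optimizers studied in the paper)."""
    best_x, best_y = None, float("inf")
    for _ in range(trials):
        x = random.uniform(0.0, 1.0)   # candidate configuration
        y = objective(x)               # observed output, e.g. trip overhead
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

random.seed(0)
# Synthetic context readings (e.g. car counts) forming two regimes.
contexts = [random.gauss(100, 5) for _ in range(50)] + \
           [random.gauss(400, 20) for _ in range(50)]
situations = cluster_1d(contexts, k=2)

# Per-situation objective: the optimum depends on the situation.
for s in situations:
    target = 0.2 if s < 250 else 0.8   # illustrative only
    x, y = optimize_config(lambda v: (v - target) ** 2)
    print(f"situation centroid={s:.0f} -> config={x:.2f}")
```

    The point of the sketch is the separation of concerns: clustering runs on observed context parameters, while the optimizer only ever touches input parameters and the measured output for the current situation.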

    Recon 2.2: from reconstruction to model of human metabolism.

    Introduction: The human genome-scale metabolic reconstruction details all known metabolic reactions occurring in humans, and thereby holds substantial promise for studying complex diseases and phenotypes. Capturing the whole human metabolic reconstruction is an ongoing task, and since the last community effort generated a consensus reconstruction, several updates have been developed.
    Objectives: We report a new consensus version, Recon 2.2, which integrates various alternative versions with significant additional updates. In addition to re-establishing a consensus reconstruction, further key objectives included providing more comprehensive annotation of metabolites and genes, ensuring full mass and charge balance in all reactions, and developing a model that correctly predicts ATP production on a range of carbon sources.
    Methods: Recon 2.2 has been developed through a combination of manual curation and automated error checking. Specific and significant manual updates include a respecification of fatty acid metabolism, oxidative phosphorylation, and a coupling of the electron transport chain to ATP synthase activity. All metabolites have definitive chemical formulae and charges specified, and these are used to ensure full mass and charge reaction balancing through an automated linear programming approach. Additionally, improved integration with transcriptomics and proteomics data has been facilitated by updated curation of the relationships between genes, proteins and reactions.
    Results: Recon 2.2 now represents the most predictive model of human metabolism to date, as demonstrated here. Extensive manual curation has increased the reconstruction size to 5324 metabolites, 7785 reactions and 1675 associated genes, which are now mapped to a single standard. The focus on mass and charge balancing of all reactions, along with better representation of energy generation, has produced a flux model that correctly predicts ATP yield on different carbon sources.
    Conclusion: Through these updates we have achieved the most complete and best-annotated consensus human metabolic reconstruction available, thereby increasing the ability of this resource to provide novel insights into normal and disease states in humans. The model is freely available from the BioModels database (http://identifiers.org/biomodels.db/MODEL1603150001).
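    The mass-balance criterion described in the Methods can be illustrated with a toy check: given definitive chemical formulae, a reaction is balanced when every element's atoms cancel across substrates and products. Recon 2.2 itself enforces this at scale with an automated linear programming approach; the sketch below only verifies reactions whose formulae are fully specified.

```python
import re
from collections import Counter

def parse_formula(formula):
    """'C6H12O6' -> Counter({'C': 6, 'H': 12, 'O': 6})"""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

def is_balanced(reaction, formulas):
    """reaction maps metabolite -> stoichiometric coefficient
    (negative = substrate, positive = product)."""
    net = Counter()
    for met, coeff in reaction.items():
        for elem, n in parse_formula(formulas[met]).items():
            net[elem] += coeff * n
    return all(v == 0 for v in net.values())

formulas = {"glc": "C6H12O6", "o2": "O2", "co2": "CO2", "h2o": "H2O"}
# Glucose oxidation: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
rxn = {"glc": -1, "o2": -6, "co2": 6, "h2o": 6}
print(is_balanced(rxn, formulas))  # True
```

    Charge balancing follows the same bookkeeping with a single "charge" quantity per metabolite instead of per-element atom counts.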

    Neurophysiological Responses to Different Product Experiences

    It is well known that the evaluation of a product on the shelf involves the simultaneous cerebral and emotional evaluation of the product's different qualities, such as its colour, any images shown, and the packaging's texture (hereafter all included in the term "product experience"). However, the measurement of cerebral and emotional reactions during the interaction with food products has not been investigated in depth in the specialized literature. The aim of this paper was to investigate such reactions through EEG and autonomic activity, as elicited by cross-sensory interaction (sight and touch) across several different products. In addition, we investigated whether (i) the brand (Major Brand or Private Label), (ii) the familiarity (Foreign or Local Brand), and (iii) the hedonic value of products (Comfort Food or Daily Food) influenced the reactions of a group of volunteers during their interaction with the products. Results showed a statistically significant higher tendency of cerebral approach (as indexed by EEG frontal alpha asymmetry) in response to comfort food during the visual exploration and the visual-and-tactile exploration phases. Furthermore, for the same index, a higher tendency of approach was found toward foreign food products in comparison with local food products during the visual-and-tactile exploration phase. Finally, the same comparison performed on a different index (EEG frontal theta) showed higher mental effort during the interaction with foreign products during the visual exploration and the visual-and-tactile exploration phases. Results from the present study could deepen the knowledge of the neurophysiological response to food products of different nature in terms of hedonic value and familiarity; moreover, they could have implications for food marketers and could lead to further studies on how people make food choices through interactions with commercial packaging.
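    The frontal alpha asymmetry index used above is commonly computed as the difference of log alpha power between homologous right and left frontal electrodes (e.g. F4 and F3). Because alpha power is inversely related to cortical activity, a positive index reflects relatively greater left-frontal activity, conventionally read as an "approach" tendency. The electrode pairing and power values below are illustrative, not the study's data.

```python
import math

def frontal_alpha_asymmetry(left_alpha, right_alpha):
    """FAA = ln(right alpha power) - ln(left alpha power)."""
    return math.log(right_alpha) - math.log(left_alpha)

faa = frontal_alpha_asymmetry(left_alpha=4.0, right_alpha=5.0)
print(f"FAA = {faa:.3f} ({'approach' if faa > 0 else 'withdrawal'})")
```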

    Computing Vertex Centrality Measures in Massive Real Networks with a Neural Learning Model

    Vertex centrality measures are a multi-purpose analysis tool, commonly used in many application environments to retrieve information and unveil knowledge from graphs and network structural properties. However, the algorithms for such metrics are expensive in terms of computational resources when running in real-time applications or on massive real-world networks. Thus, approximation techniques have been developed and used to compute these measures in such scenarios. In this paper, we demonstrate and analyze the use of neural network learning algorithms to tackle this task and compare their performance, in terms of solution quality and computation time, with other techniques from the literature. Our work offers several contributions. We highlight both the pros and cons of approximating centralities through neural learning. By empirical means and statistics, we then show that the regression model generated with a feedforward neural network trained by the Levenberg-Marquardt algorithm is not only the best option considering computational resources, but also achieves the best solution quality for relevant applications and large-scale networks.
    Keywords: Vertex Centrality Measures, Neural Networks, Complex Network Models, Machine Learning, Regression Model
    Comment: 8 pages, 5 tables, 2 figures, version accepted at IJCNN 2018. arXiv admin note: text overlap with arXiv:1810.1176
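    The idea of replacing an expensive centrality computation with a cheap learned regressor can be sketched briefly. This is an illustrative sketch only: the paper trains a feedforward network with the Levenberg-Marquardt algorithm on richer inputs, whereas here an ordinary least-squares fit from vertex degree approximates closeness centrality on a small random graph.

```python
import random
from collections import deque

def bfs_dists(adj, s):
    """Breadth-first shortest-path distances from s."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def closeness(adj, v):
    """Closeness over reachable vertices (the expensive target)."""
    d = bfs_dists(adj, v)
    return (len(d) - 1) / sum(d.values()) if len(d) > 1 else 0.0

def random_graph(n, p, rng):
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

rng = random.Random(1)
adj = random_graph(60, 0.1, rng)
xs = [len(adj[v]) for v in adj]        # cheap feature: degree
ys = [closeness(adj, v) for v in adj]  # expensive target

# Ordinary least squares for y ~ a*x + b (stand-in for the trained net).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
b = my - a * mx

mae = sum(abs(a * x + b - y) for x, y in zip(xs, ys)) / n
print(f"mean absolute error: {mae:.4f}")
```

    Once fitted, the model prices each vertex in constant time from local features, which is where the computational savings over exact all-pairs algorithms come from on massive networks.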