87 research outputs found
G protein-coupled receptors: A target for microbial metabolites and a mechanistic link to microbiome-immune-brain interactions
Human-microorganism interactions play a key role in human health, yet the underlying molecular mechanisms remain poorly understood. Small molecules that offer a functional readout of microbe-microbe and microbe-human relationships are of great interest for a deeper understanding of inter-kingdom crosstalk at the molecular level. Recent studies have demonstrated that small molecules from the gut microbiota act as ligands for specific human G protein-coupled receptors (GPCRs) and modulate a range of human physiological functions, offering mechanistic insight into microbe-human interactions. To this end, we focused on bacterial metabolites currently recognized to bind GPCRs and activate known downstream signaling pathways. We then mapped the distribution of these molecules across public mass spectrometry-based metabolomics data to identify their presence across body sites and their association with health status. By combining this with RNA-Seq expression profiles and the spatial localization of GPCRs from a public human protein atlas database, we inferred the most predominant GPCR-mediated microbial metabolite-human cell interactions regulating the gut-immune-brain axis. Furthermore, by evaluating the intestinal absorption properties and blood-brain barrier permeability of these small molecules, we elucidated their molecular interactions with specific human cell receptors, particularly those expressed on intestinal epithelial cells, immune cells and the nervous system, which hold much promise for clinical translation. Finally, we provide an overview of an open-source resource for simultaneous interrogation of bioactive molecules across the druggable human GPCRome, a useful framework for integrating microbiome and metabolite cataloging with mechanistic studies toward an improved understanding of gut microbiota-immune-brain molecular interactions and their potential therapeutic use.
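A minimal sketch of the kind of data integration described above, assuming a hypothetical ligand table (metabolite_gpcr.csv) and a tissue-level receptor expression table (gpcr_expression.csv); the file names, columns and thresholds are illustrative, not the study's actual resource:

# Hypothetical sketch: rank metabolite-GPCR-tissue interactions by joining
# a ligand table with tissue-level receptor expression (e.g. RNA-Seq TPM values).
import pandas as pd

# metabolite_gpcr.csv: metabolite, gpcr, ec50_nm        (assumed columns)
# gpcr_expression.csv: gpcr, tissue, tpm                (assumed columns)
ligands = pd.read_csv("metabolite_gpcr.csv")
expression = pd.read_csv("gpcr_expression.csv")

# Keep receptors that are meaningfully expressed in gut, immune or neural tissue.
tissues_of_interest = ["intestine", "immune cells", "brain"]
expressed = expression[
    expression["tissue"].isin(tissues_of_interest) & (expression["tpm"] >= 1.0)
]

# Join ligand potency with receptor expression and rank candidate interactions:
# a potent ligand (low EC50) hitting a highly expressed receptor ranks highest.
pairs = ligands.merge(expressed, on="gpcr")
pairs["priority"] = pairs["tpm"] / pairs["ec50_nm"]
print(pairs.sort_values("priority", ascending=False).head(10))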
Teaching Reading Comprehension Through The Interactive Technique
The objective of this research was to find out whether the Interactive technique could improve students' reading comprehension. This was a quasi-experimental study. The population was the eighth-grade students of SLTP Negeri 1 Kota Bengkulu, consisting of 193 students. The sample comprised class VIII.2 (34 students) as the experimental group and class VIII.3 (40 students) as the control group. The instrument was a 40-item reading comprehension test, which was tried out on students of the same level before the pre-test was administered. On the pre-test, the t-count was smaller than the t-table value (1.26 < 2.042), indicating that the two groups were initially equivalent. Overall, the results indicated that the interactive technique could improve students' reading comprehension.
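For readers unfamiliar with the t-count versus t-table comparison used above, the following sketch shows how such an independent-samples test would be computed; the score arrays are placeholders, not the study's data:

# Illustrative only: comparing an independent-samples t statistic against the
# critical (table) value, as in the pre-test comparison reported above.
from scipy import stats

experimental = [65, 70, 72, 68, 75, 71, 69, 74]   # hypothetical pre-test scores
control      = [66, 69, 73, 67, 74, 70, 68, 72]

t_count, p_value = stats.ttest_ind(experimental, control)
df = len(experimental) + len(control) - 2
t_table = stats.t.ppf(0.975, df)  # two-tailed critical value at alpha = 0.05

# t-count < t-table on the pre-test means the two groups start out equivalent,
# so a later post-test difference can be attributed to the treatment.
print(f"t-count = {t_count:.2f}, t-table = {t_table:.2f}, "
      f"equivalent groups: {abs(t_count) < t_table}")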
D-SPACE4Cloud: A Design Tool for Big Data Applications
Recent years have seen a steep rise in data generation worldwide, with the development and widespread adoption of several software projects targeting the Big Data paradigm. Many companies currently engage in Big Data analytics as part of their core business activities; nonetheless, there are no tools and techniques to support the design of the underlying hardware configuration backing such systems. In particular, the focus in this report is set on Cloud-deployed clusters, which represent a cost-effective alternative to on-premises installations. We propose a novel tool implementing a battery of optimization and prediction techniques, integrated so as to efficiently assess several alternative resource configurations and determine the minimum-cost cluster deployment satisfying QoS constraints. Further, the experimental campaign conducted on real systems shows the validity and relevance of the proposed method.
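A minimal sketch of the design-space exploration such a tool automates, under toy assumptions (three made-up VM types, illustrative prices, and a naive linear speed-up predictor); the real tool couples far more sophisticated prediction and optimization techniques:

# Minimal sketch: pick the cheapest cloud configuration whose predicted job
# duration meets a QoS deadline. All figures and the scaling model are illustrative.

VM_TYPES = {             # name: (cores, hourly price in $)
    "m4.large":   (2, 0.10),
    "m4.xlarge":  (4, 0.20),
    "m4.2xlarge": (8, 0.40),
}
BASELINE_MINUTES = 480.0   # assumed single-core execution time of the job
DEADLINE_MINUTES = 60.0    # QoS constraint

def predict_duration(total_cores: int) -> float:
    """Toy performance model: perfect linear speed-up with total cores."""
    return BASELINE_MINUTES / total_cores

best = None
for vm, (cores, price) in VM_TYPES.items():
    for n in range(1, 33):                      # candidate cluster sizes
        duration = predict_duration(n * cores)
        if duration > DEADLINE_MINUTES:
            continue                            # violates the QoS constraint
        cost = n * price * (duration / 60.0)
        if best is None or cost < best[0]:
            best = (cost, vm, n, duration)

cost, vm, n, duration = best
print(f"cheapest feasible cluster: {n} x {vm}, "
      f"predicted {duration:.0f} min, cost ${cost:.2f}")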
Performance Degradation and Cost Impact Evaluation of Privacy Preserving Mechanisms in Big Data Systems
Big Data is an emerging area concerned with managing datasets whose size is beyond the ability of commonly used software tools to capture, process, and analyze in a timely way. The Big Data software market is growing at a 32% compound annual rate, almost four times faster than the whole ICT market, and the quantity of data to be analyzed is expected to double every two years.
Security and privacy are becoming very urgent Big Data aspects that need to be tackled. Indeed, users share more and more personal data and user-generated content through their mobile devices and computers to social networks and cloud services, losing control over their data and content, with a serious impact on their own privacy. Privacy in particular has been the subject of serious debate recently, and many governments require data providers and companies to protect users' sensitive data. To mitigate these problems, many solutions have been developed to provide data privacy but, unfortunately, they introduce some computational overhead when data is processed.
The goal of this paper is to quantitatively evaluate the performance and cost impact of multiple privacy protection mechanisms. A real industry case study concerning tax fraud detection has been considered, and many experiments have been performed to analyze the performance degradation and additional cost (required to provide a given service level) of running applications in a cloud system.
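A back-of-the-envelope sketch of the cost-impact reasoning that the paper evaluates empirically; the overhead factors, node price, and baseline cluster size below are illustrative assumptions, not measured values:

# If a privacy mechanism slows processing by some factor, estimate the extra
# capacity (and hourly cost) needed to keep the same service level.
import math

NODE_PRICE = 0.40            # $/hour per worker node (assumed)
BASELINE_NODES = 10          # nodes meeting the deadline without privacy
OVERHEAD = {"none": 1.00, "anonymization": 1.15, "encryption": 1.45}

for mechanism, factor in OVERHEAD.items():
    # To absorb a slowdown factor f at the same deadline, scale capacity by f.
    nodes_needed = math.ceil(BASELINE_NODES * factor)
    extra_cost = (nodes_needed - BASELINE_NODES) * NODE_PRICE
    print(f"{mechanism:>14}: {nodes_needed} nodes "
          f"(+${extra_cost:.2f}/h over the baseline)")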
Constructing Search Spaces for Search-Based Software Testing Using Neural Networks
A central requirement for any Search-Based Software Testing (SBST) technique is a convenient and meaningful fitness landscape. Whether one follows a targeted or a diversification-driven strategy, a search landscape needs to be large, continuous, easy to construct and representative of the underlying property of interest. Constructing such a landscape is not a trivial task, often requiring significant manual effort by an expert.
We present an approach for constructing meaningful and convenient fitness landscapes using neural networks (NNs), for targeted and diversification strategies alike. We suggest that the output of an NN predictor can be interpreted as a fitness for a targeted strategy. The NN is trained on a corpus of execution traces and various properties of interest, prior to searching. During search, the trained NN is queried to predict an estimate of a property given an execution trace. The outputs of the NN form a convenient search space that is strongly representative of a number of properties. We believe that such a search space can be readily used for driving a search towards specific properties of interest.
For a diversification strategy, we propose the use of an autoencoder: a mechanism for compacting data into an n-dimensional "latent" space, in which datapoints are arranged according to the similarity of their salient features. We show that a latent space of execution traces possesses the characteristics of a convenient search landscape: it is continuous, large and, crucially, it defines a notion of similarity to arbitrary observations.
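A small sketch of the diversification idea, assuming execution traces encoded as fixed-length vectors; the network sizes and the PyTorch implementation are illustrative choices, not the authors' setup:

# Compress execution traces with an autoencoder and use distance in latent
# space as a novelty score that drives the search toward diverse behaviours.
import torch
from torch import nn

TRACE_DIM, LATENT_DIM = 128, 8

class TraceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(TRACE_DIM, 64), nn.ReLU(),
                                     nn.Linear(64, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                                     nn.Linear(64, TRACE_DIM))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TraceAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
traces = torch.rand(1024, TRACE_DIM)          # placeholder execution-trace corpus

for _ in range(100):                          # train the autoencoder offline
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(traces), traces)
    loss.backward()
    optimizer.step()

# During search, a candidate is rewarded for being far (in latent space)
# from the traces already seen, which encourages diversification.
archive = model.encoder(traces).detach()
candidate = model.encoder(torch.rand(1, TRACE_DIM)).detach()
novelty = torch.cdist(candidate, archive).min()
print(f"novelty score of candidate trace: {novelty.item():.3f}")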
A Data-driven Model of Nucleosynthesis with Chemical Tagging in a Lower-dimensional Latent Space
Chemical tagging seeks to identify unique star formation sites from present-day stellar abundances. Previous techniques have treated each abundance dimension as being statistically independent, despite theoretical expectations that many elements can be produced by more than one nucleosynthetic process. In this work, we introduce a data-driven model of nucleosynthesis, where a set of latent factors (e.g., nucleosynthetic yields) contribute to all stars with different scores and clustering (e.g., chemical tagging) is modeled by a mixture of multivariate Gaussians in a lower-dimensional latent space. We use an exact method to simultaneously estimate the factor scores for each star, the partial assignment of each star to each cluster, and the latent factors common to all stars, even in the presence of missing data entries. We use an information-theoretic Bayesian principle to estimate the number of latent factors and clusters. Using the second GALAH data release, we find that six latent factors are preferred to explain N = 2566 stars with 17 chemical abundances. We identify the rapid and slow neutron-capture processes, as well as latent factors consistent with Fe-peak and α-element production, and another where K and Zn dominate. When we consider N ~ 160,000 stars with missing abundances, we find another seven factors, as well as 16 components in latent space. Despite these components showing separation in chemistry, which is explained through different yield contributions, none show significant structure in their positions or motions. We argue that more data and joint priors on cluster membership that are constrained by dynamical models are necessary to realize chemical tagging at a galactic scale. We release accompanying software that scales well with the available data, allowing for the model's parameters to be optimized in seconds given a fixed number of latent factors, components, and ~10^7 abundance measurements.
We acknowledge support from the Australian Research Council through Discovery Project DP160100637. J.B.H. is supported by a Laureate Fellowship from the Australian Research Council. Parts of this research were supported by the Australian Research Council (ARC) Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. S.B. acknowledges funds from the Alexander von Humboldt Foundation in the framework of the Sofja Kovalevskaja Award endowed by the Federal Ministry of Education and Research. S.B. is supported by the Australian Research Council (grants DP150100250 and DP160103747). S.L.M. acknowledges the support of the UNSW Scientia Fellowship program. J.D.S., S.L.M., and D.B.Z. acknowledge the support of the Australian Research Council through Discovery Project grant DP180101791. The GALAH survey is based on observations made at the Australian Astronomical Observatory, under programmes A/2013B/13, A/2014A/25, A/2015A/19, and A/2017A/18. We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present. This research has made use of NASA's Astrophysics Data System.
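A compact sketch of the modelling idea described in the abstract above (not the authors' released software), using scikit-learn's FactorAnalysis and GaussianMixture on a synthetic abundance matrix; an exact joint treatment of factor scores, cluster assignments, and missing data, as in the paper, goes beyond this two-step approximation:

# Describe each star's abundances with a few latent factors, then cluster the
# stars with a Gaussian mixture in that latent space. The data here is synthetic.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_stars, n_abundances, n_factors, n_clusters = 2566, 17, 6, 2

abundances = rng.normal(size=(n_stars, n_abundances))   # placeholder [X/Fe] data

# Latent factors play the role of nucleosynthetic yield patterns; each star
# receives a score for every factor.
fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(abundances)

# "Chemical tagging" as a mixture of multivariate Gaussians in the latent space.
gmm = GaussianMixture(n_components=n_clusters, covariance_type="full",
                      random_state=0)
labels = gmm.fit_predict(scores)
print("stars per latent-space component:", np.bincount(labels))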
Solving the Task Variant Allocation Problem in Distributed Robotics
We consider the problem of assigning software processes (or tasks) to hardware processors in distributed robotics environments. We introduce the notion of a task variant, which supports the adaptation of software to specific hardware configurations. Task variants facilitate the trade-off of functional quality versus the requisite capacity and type of target execution processors. We formalise the problem of assigning task variants to processors as a mathematical model that incorporates typical constraints found in robotics applications; the model is a constrained form of a multi-objective, multi-dimensional, multiple-choice knapsack problem. We propose and evaluate three different solution methods to the problem: constraint programming, a constructive greedy heuristic and a local search metaheuristic. Furthermore, we demonstrate the use of task variants in a real instance of a distributed interactive multi-agent navigation system, showing that our best solution method (constraint programming) improves the system's quality of service, as compared to the local search metaheuristic, the greedy heuristic and a randomised solution, by an average of 16%, 31% and 56%, respectively.
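A toy sketch of the constructive greedy heuristic mentioned above; the task variants, processor capacities, and quality scores are illustrative, not the paper's benchmark:

# Assign each task the best-quality variant that still fits on some processor.

processors = {"cpu0": 4.0, "cpu1": 2.0}          # remaining capacity (cores)

# task -> list of (variant name, required capacity, functional quality)
tasks = {
    "mapping":    [("full", 3.0, 0.90), ("lite", 1.0, 0.60)],
    "navigation": [("full", 2.0, 0.80), ("lite", 0.5, 0.50)],
    "vision":     [("full", 2.5, 0.95), ("lite", 1.0, 0.70)],
}

assignment, total_quality = {}, 0.0
for task, variants in tasks.items():
    # Try variants from highest to lowest quality; take the first that fits.
    for variant, need, quality in sorted(variants, key=lambda v: -v[2]):
        proc = next((p for p, cap in processors.items() if cap >= need), None)
        if proc is not None:
            processors[proc] -= need
            assignment[task] = (variant, proc)
            total_quality += quality
            break

print(assignment, f"total quality = {total_quality:.2f}")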
MIBiG 3.0: a community-driven effort to annotate experimentally validated biosynthetic gene clusters
- …