Direct Measurement of Periodic Electric Forces in Liquids
The electric forces acting on an atomic force microscope tip in solution have
been measured using a microelectrochemical cell formed by two periodically
biased electrodes. The forces were measured as a function of lift height and
bias amplitude and frequency, providing insight into electrostatic interactions
in liquids. Real-space mapping of the vertical and lateral components of
electrostatic forces acting on the tip from the deflection and torsion of the
cantilever is demonstrated. This method enables direct probing of electrostatic
and convective forces involved in electrophoretic and dielectrophoretic
self-assembly and electrical tweezer operation in liquid environments.
Mitigating Gender Bias in Machine Learning Data Sets
Artificial Intelligence has the capacity to amplify and perpetuate societal
biases and presents profound ethical implications for society. Gender bias has
been identified in the context of employment advertising and recruitment tools,
due to their reliance on underlying language processing and recommendation
algorithms. Attempts to address such issues have involved testing learned
associations, integrating concepts of fairness to machine learning and
performing more rigorous analysis of training data. Mitigating bias when
algorithms are trained on textual data is particularly challenging given the
complex way gender ideology is embedded in language. This paper proposes a
framework for the identification of gender bias in training data for machine
learning. The work draws upon gender theory and sociolinguistics to
systematically indicate levels of bias in textual training data and associated
neural word embedding models, thus highlighting pathways for both removing bias
from training data and critically assessing its impact.Comment: 10 pages, 5 figures, 5 Tables, Presented as Bias2020 workshop (as
part of the ECIR Conference) - http://bias.disim.univaq.i
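The paper's own framework is not reproduced in the abstract; a common starting point for quantifying gender bias in neural word embeddings is to project word vectors onto a gender axis. The sketch below is a minimal, self-contained illustration with toy vectors - the words, dimensions, and scores are all hypothetical, not the paper's method or data.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_direction(emb):
    # A simple gender axis: the difference of the "he" and "she" vectors.
    return emb["he"] - emb["she"]

def bias_score(word, emb):
    # Projection of a word onto the gender axis: positive leans "he",
    # negative leans "she", near zero is roughly neutral.
    return cosine(emb[word], gender_direction(emb))

# Toy 3-d embeddings, purely for illustration.
emb = {
    "he":       np.array([1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "engineer": np.array([0.8, 0.5, 0.2]),
    "nurse":    np.array([-0.7, 0.6, 0.1]),
}

for w in ("engineer", "nurse"):
    print(w, round(bias_score(w, emb), 3))
```

In a real analysis the embeddings would be trained on the textual corpus under audit, and scores aggregated across word classes (e.g., occupations) to indicate levels of bias.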
Abductively Robust Inference
Inference to the Best Explanation (IBE) is widely criticized for being an unreliable form of ampliative inference – partly because the explanatory hypotheses we have considered at a given time may all be false, and partly because there is an asymmetry between the comparative judgment on which an IBE is based and the absolute verdict that IBE is meant to license. In this paper, I present a further reason to doubt the epistemic merits of IBE and argue that it motivates moving to an inferential pattern in which IBE emerges as a degenerate limiting case. Since this inferential pattern is structurally similar to an argumentative strategy known as Inferential Robustness Analysis (IRA), it effectively combines the most attractive features of IBE and IRA into a unified approach to non-deductive inference.
Capturing health and eating status through a nutritional perception screening questionnaire (NPSQ9) in a randomised internet-based personalised nutrition intervention : the Food4Me study
BACKGROUND: National guidelines emphasize healthy eating to promote wellbeing and prevention of non-communicable diseases. The perceived healthiness of food is determined by many factors affecting food intake. A positive perception of healthy eating has been shown to be associated with greater diet quality. Internet-based methodologies allow contact with large populations. Our present study aims to design and evaluate a short nutritional perception questionnaire, to be used as a screening tool for assessing nutritional status, and to predict an optimal level of personalisation in nutritional advice delivered via the Internet. METHODS: Data from all participants who were screened and then enrolled into the Food4Me proof-of-principle study (n = 2369) were used to determine the optimal items for inclusion in a novel screening tool, the Nutritional Perception Screening Questionnaire-9 (NPSQ9). Exploratory and confirmatory factor analyses were performed on anthropometric and biochemical data and on dietary indices acquired from participants who had completed the Food4Me dietary intervention (n = 1153). Baseline and intervention data were analysed using linear regression and linear mixed regression, respectively. RESULTS: A final model with 9 NPSQ items was validated against the dietary intervention data. NPSQ9 scores were inversely associated with BMI (β = -0.181, p < 0.001) and waist circumference (β = -0.155, p < 0.001), and positively associated with total carotenoids (β = 0.198, p < 0.001), omega-3 fatty acid index (β = 0.155, p < 0.001), Healthy Eating Index (HEI) (β = 0.299, p < 0.001) and Mediterranean Diet Score (MDS) (β = 0.279, p < 0.001). Findings from the longitudinal intervention study showed a greater reduction in BMI and improved dietary indices among participants with lower NPSQ9 scores.
CONCLUSIONS: Healthy eating perceptions and dietary habits captured by the NPSQ9 score, based on nine questionnaire items, were associated with reduced body weight and improved diet quality. Likewise, participants with a lower score achieved greater health improvements than those with higher scores, in response to personalised advice, suggesting that NPSQ9 may be used for early evaluation of nutritional status and to tailor nutritional advice. TRIAL REGISTRATION: NCT01530139.
Precession and Recession of the Rock'n'roller
We study the dynamics of a spherical rigid body that rocks and rolls on a
plane under the effect of gravity. The distribution of mass is non-uniform and
the centre of mass does not coincide with the geometric centre.
The symmetric case, with moments of inertia I_1=I_2, is integrable and the
motion is completely regular. Three known conservation laws are the total
energy E, Jellett's quantity Q_J and Routh's quantity Q_R.
When the inertial symmetry I_1=I_2 is broken, even slightly, the character of
the solutions is profoundly changed and new types of motion become possible. We
derive the equations governing the general motion and present analytical and
numerical evidence of the recession, or reversal of precession, that has been
observed in physical experiments.
We present an analysis of recession in terms of critical lines dividing the
(Q_R,Q_J) plane into four dynamically disjoint zones. We prove that recession
implies the lack of conservation of Jellett's and Routh's quantities, by
identifying individual reversals as crossings of the orbit (Q_R(t),Q_J(t))
through the critical lines. Consequently, a method is found to produce a large
number of initial conditions so that the system will exhibit recession.
Anti-prion drug mPPIg5 inhibits PrP(C) conversion to PrP(Sc).
Prion diseases, also known as transmissible spongiform encephalopathies, are a group of fatal neurodegenerative diseases that include scrapie in sheep, bovine spongiform encephalopathy (BSE) in cattle and Creutzfeldt-Jakob disease (CJD) in humans. The 'protein only hypothesis' advocates that PrP(Sc), an abnormal isoform of the cellular protein PrP(C), is the main and possibly sole component of prion infectious agents. Currently, no effective therapy exists for these diseases at the symptomatic phase for either humans or animals, though a number of compounds have demonstrated the ability to eliminate PrP(Sc) in cell culture models. Of particular interest are synthetic polymers known as dendrimers, which possess the unique ability to eliminate PrP(Sc) in both an intracellular and in vitro setting. The efficacy and mode of action of the novel anti-prion dendrimer mPPIg5 were investigated through the creation of a number of innovative bio-assays based upon the scrapie cell assay. These assays were used to demonstrate that mPPIg5 is a highly effective anti-prion drug which acts, at least in part, through the inhibition of PrP(C) to PrP(Sc) conversion. Understanding how a drug works is a vital component in maximising its performance. By establishing the efficacy and mode of action of mPPIg5, this study will help determine which drugs are most likely to enhance this effect and also aid the design of dendrimers with anti-prion capabilities for the future.
Hedging Effectiveness under Conditions of Asymmetry
We examine whether hedging effectiveness is affected by asymmetry in the
return distribution by applying tail specific metrics to compare the hedging
effectiveness of short and long hedgers using crude oil futures contracts. The
metrics used include Lower Partial Moments (LPM), Value at Risk (VaR) and
Conditional Value at Risk (CVAR). Comparisons are applied to a number of
hedging strategies including OLS and both Symmetric and Asymmetric GARCH
models. Our findings show that asymmetry reduces in-sample hedging performance
and that there are significant differences in hedging performance between short
and long hedgers. Thus, tail specific performance metrics should be applied in
evaluating hedging effectiveness. We also find that the Ordinary Least Squares
(OLS) model provides consistently good performance across different measures of
hedging effectiveness and estimation methods irrespective of the
characteristics of the underlying distribution.
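The tail-specific metrics named in the abstract can be illustrated with a minimal sketch on simulated returns - the OLS hedge ratio is the regression slope of spot on futures returns, and LPM, VaR and CVaR then score the downside of the hedged position. The exposure (0.9) and noise levels below are illustrative stand-ins, not estimates from crude oil data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spot and futures returns (stand-ins for crude oil data; the
# 0.9 exposure and noise scales are hypothetical).
futures = rng.normal(0.0, 0.02, 1000)
spot = 0.9 * futures + rng.normal(0.0, 0.005, 1000)

# OLS hedge ratio: slope of a regression of spot on futures returns.
h = np.cov(spot, futures)[0, 1] / np.var(futures, ddof=1)

hedged = spot - h * futures   # return of a short hedger's portfolio
unhedged = spot

def var(returns, alpha=0.05):
    # Value at Risk: loss at the alpha quantile of the return distribution.
    return -np.quantile(returns, alpha)

def cvar(returns, alpha=0.05):
    # Conditional VaR: average loss in the tail beyond the VaR threshold.
    q = np.quantile(returns, alpha)
    return -returns[returns <= q].mean()

def lpm(returns, target=0.0, order=2):
    # Lower partial moment: mean shortfall below the target, raised to `order`.
    return np.mean(np.clip(target - returns, 0.0, None) ** order)
```

Because CVaR averages losses beyond the VaR quantile, it is never smaller than VaR - which is why the tail metrics can rank short and long hedgers differently when the return distribution is asymmetric.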
Genetic Classification of Populations using Supervised Learning
There are many instances in genetics in which we wish to determine whether
two candidate populations are distinguishable on the basis of their genetic
structure. Examples include populations which are geographically separated,
case--control studies and quality control (when participants in a study have
been genotyped at different laboratories). This latter application is of
particular importance in the era of large scale genome wide association
studies, when collections of individuals genotyped at different locations are
being merged to provide increased power. The traditional method for detecting
structure within a population is some form of exploratory technique such as
principal components analysis. Such methods, which do not utilise our prior
knowledge of the membership of the candidate populations, are termed
\emph{unsupervised}. Supervised methods, on the other hand, are able to utilise
this prior knowledge when it is available.
In this paper we demonstrate that in such cases modern supervised approaches
are a more appropriate tool for detecting genetic differences between
populations. We apply two such methods (neural networks and support vector
machines) to the classification of three populations (two from Scotland and one
from Bulgaria). The sensitivity exhibited by both these methods is considerably
higher than that attained by principal components analysis and in fact
comfortably exceeds a recently conjectured theoretical limit on the sensitivity
of unsupervised methods. In particular, our methods can distinguish between the
two Scottish populations, where principal components analysis cannot. We
suggest, on the basis of our results, that a supervised learning approach should
be the method of choice when classifying individuals into pre-defined
populations, particularly in quality control for large scale genome wide
association studies.
Comment: Accepted, PLOS ONE.
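The supervised approach described above can be sketched in a few lines: train a classifier on genotypes with known population labels and check whether cross-validated accuracy exceeds chance. The simulation below uses a linear SVM (one of the paper's two methods) on hypothetical allele-frequency shifts - the sample sizes, SNP count, and shift magnitude are illustrative, not the Scottish/Bulgarian data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Simulated genotypes: two populations of 100 individuals, 500 SNPs coded
# as 0/1/2 minor-allele counts, with small allele-frequency shifts between
# the populations (all parameters are hypothetical).
n, snps = 100, 500
p_a = rng.uniform(0.1, 0.5, snps)
p_b = np.clip(p_a + rng.normal(0.0, 0.05, snps), 0.01, 0.99)

X = np.vstack([rng.binomial(2, p_a, size=(n, snps)),
               rng.binomial(2, p_b, size=(n, snps))]).astype(float)
y = np.array([0] * n + [1] * n)

# Linear SVM with 5-fold cross-validation: accuracy well above 0.5
# indicates the two candidate populations are distinguishable.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Cross-validation is the key safeguard here: it prevents the supervised model from "detecting" structure that is merely overfitting to the training individuals.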
GOexpress: an R/Bioconductor package for the identification and visualisation of robust gene ontology signatures through supervised learning of gene expression data
Background: Identification of gene expression profiles that differentiate experimental groups is critical for discovery and analysis of key molecular pathways and also for selection of robust diagnostic or prognostic biomarkers. While integration of differential expression statistics has been used to refine gene set enrichment analyses, such approaches are typically limited to single gene lists resulting from simple two-group comparisons or time-series analyses. In contrast, functional class scoring and machine learning approaches provide powerful alternative methods to leverage molecular measurements for pathway analyses, and to compare continuous and multi-level categorical factors. Results: We introduce GOexpress, a software package for scoring and summarising the capacity of gene ontology features to simultaneously classify samples from multiple experimental groups. GOexpress integrates normalised gene expression data (e.g., from microarray and RNA-seq experiments) and phenotypic information of individual samples with gene ontology annotations to derive a ranking of genes and gene ontology terms using a supervised learning approach. The default random forest algorithm allows interactions between all experimental factors, and competitive scoring of expressed genes to evaluate their relative importance in classifying predefined groups of samples. Conclusions: GOexpress enables rapid identification and visualisation of ontology-related gene panels that robustly classify groups of samples and supports both categorical (e.g., infection status, treatment) and continuous (e.g., time-series, drug concentrations) experimental factors. 
The use of standard Bioconductor extension packages and publicly available gene ontology annotations facilitates straightforward integration of GOexpress within existing computational biology pipelines.
Funding: Department of Agriculture, Food and the Marine; European Commission - Seventh Framework Programme (FP7); Science Foundation Ireland; University College Dublin.
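GOexpress itself is an R/Bioconductor package; the core idea of its default scoring step - rank genes by random forest importance for separating experimental groups, then summarise importances over gene ontology terms - can be sketched in Python. Everything below (matrix sizes, the GO term names, the membership lists) is hypothetical, for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Toy expression matrix: 60 samples x 20 genes across two experimental
# groups; genes 0-4 are shifted in group 2 to mimic differential expression.
n, genes = 30, 20
X = rng.normal(0.0, 1.0, (2 * n, genes))
X[n:, :5] += 1.5
y = np.array([0] * n + [1] * n)

# A random forest scores each gene's importance for classifying the groups,
# loosely mirroring GOexpress's default supervised scoring step.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
gene_scores = rf.feature_importances_

# Hypothetical GO annotations (term -> member gene indices); a term is
# summarised by the mean importance of its member genes.
go_terms = {"GO:signal": [0, 1, 2, 3, 4],
            "GO:background": [10, 11, 12, 13, 14]}
term_scores = {t: gene_scores[idx].mean() for t, idx in go_terms.items()}
```

Because the forest scores all genes competitively in one model, terms whose member genes jointly separate the groups rise to the top of the ranking, which is what makes the approach work for multi-level and continuous factors as well as two-group comparisons.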
Error Correction for Index Coding With Coded Side Information
Index coding is a source coding problem in which a broadcaster seeks to meet
the different demands of several users, each of whom is assumed to have some
prior information on the data held by the sender. If the sender knows its
clients' requests and their side-information sets, then the number of packet
transmissions required to satisfy all users' demands can be greatly reduced if
the data is encoded before sending. The collection of side-information indices
as well as the indices of the requested data is described as an instance of the
index coding with side-information (ICSI) problem. The encoding function is
called the index code of the instance, and the number of transmissions employed
by the code is referred to as its length. The main ICSI problem is to determine
the optimal length of an index code for an instance. As this number is hard to
compute, bounds approximating it are sought, as are algorithms to compute
efficient index codes. Two interesting generalizations of the problem that have
appeared in the literature are the subject of this work. The first of these is
the case of index coding with coded side information, in which linear
combinations of the source data are both requested by and held as users'
side-information. The second is the introduction of error-correction in the
problem, in which the broadcast channel is subject to noise.
In this paper we characterize the optimal length of a scalar or vector linear
index code with coded side information (ICCSI) over a finite field in terms of
a generalized min-rank and give bounds on this number based on constructions of
random codes for an arbitrary instance. We furthermore consider the length of
an optimal error correcting code for an instance of the ICCSI problem and
obtain bounds on this number, both for the Hamming metric and for rank-metric
errors. We describe decoding algorithms for both categories of errors.
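The generalized min-rank for coded side information is not spelled out in the abstract; as a concrete anchor, the classical min-rank - the special case where side information consists of uncoded messages - can be brute-forced over GF(2) for tiny instances. A fitting matrix has ones on the diagonal, free entries where a user holds a message, and zeros elsewhere; the minimum rank over all such matrices is the optimal scalar linear index code length. Function names below are illustrative.

```python
from itertools import product

def gf2_rank(rows):
    # Gaussian elimination over GF(2); each row is an int bit mask of columns.
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lsb = pivot & -pivot
        rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def min_rank_icsi(n, side_info):
    # side_info[i] lists the message indices user i already holds.
    # Enumerate all fitting matrices: M[i][i] = 1, M[i][j] free for
    # j in side_info[i], 0 elsewhere; return the minimum GF(2) rank.
    free = [(i, j) for i in range(n) for j in side_info[i]]
    best = n  # sending every message uncoded always suffices
    for bits in product([0, 1], repeat=len(free)):
        rows = [1 << i for i in range(n)]  # diagonal: user i decodes x_i
        for (i, j), b in zip(free, bits):
            if b:
                rows[i] |= 1 << j
        best = min(best, gf2_rank(rows))
    return best

# Directed 3-cycle: user i wants x_i and holds x_{i+1 mod 3}.
print(min_rank_icsi(3, {0: [1], 1: [2], 2: [0]}))  # → 2
```

The brute force is exponential in the number of free entries, which reflects why the paper seeks bounds and constructions instead of exact computation; the coded-side-information setting further replaces the fixed zero pattern with linear constraints from the users' coded packets.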