Photoluminescence of a quantum dot hybridized with a continuum
We calculate the intensity of photon emission from a trion in a single
quantum dot, as a function of energy and gate voltage, using the impurity
Anderson model and variational wave functions. Assuming a flat density of
conduction states and constant hybridization energy, the results agree with the
main features observed in recent experiments: non-monotonic dependence of the
energy on gate voltage, non-Lorentzian line shapes, and a line width that
increases near the regions of instability of the single electron final state to
occupations zero or two. Comment: 4 pages, 3 figures, Journal-ref added
A new method for identifying bivariate differential expression in high dimensional microarray data using quadratic discriminant analysis
<p>Abstract</p> <p>Background</p> <p>One of the drawbacks we face when analyzing gene-to-phenotype associations in genomic data is the poor performance of the designed classifier due to the small-sample, high-dimensional data structures (<it>n</it> ≪ <it>p</it>) at hand. This is known as the peaking phenomenon, a common situation in the analysis of gene expression data. Highly predictive bivariate gene interactions whose marginals are useless for discrimination are also affected by this phenomenon, so they are commonly discarded by state-of-the-art sequential search algorithms. Such patterns are known as weak marginal/strong bivariate interactions. This paper addresses the problem of uncovering them in high-dimensional settings.</p> <p>Results</p> <p>We propose a new approach that uses quadratic discriminant analysis (QDA) as a search engine to detect such signals. The choice of QDA is justified by a simulation study over a benchmark of classifiers, which reveals its appealing properties. The procedure rests on an exhaustive search that explores the feature space in a blockwise manner, dividing it into blocks and assessing the accuracy of the QDA for the predictors within each pair of blocks; the block size is determined by the resistance of the QDA to peaking. This search highlights chunks of features expected to contain the type of subtle interactions we are concerned with; a closer look at this smaller subset of features, by means of an exhaustive search guided by the QDA error rate over all pairwise input combinations within the subset, enables their final detection. The proposed method is applied both to synthetic data and to a public-domain microarray dataset. When applied to gene expression data, it leads to pairs of genes that are not univariately differentially expressed but exhibit subtle patterns of bivariate differential expression.</p> <p>Conclusions</p> <p>We have proposed a novel approach for identifying weak marginal/strong bivariate interactions. Unlike standard approaches such as the top scoring pair (TSP) and CorScor, our procedure does not assume a specific shape of phenotype separation and may enrich the types of bivariate differential expression patterns that can be uncovered in high-dimensional data.</p>
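The two-stage blockwise search described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, block parameters, and use of scikit-learn's QDA with a small ridge (`reg_param`) are all assumptions for the sketch:

```python
# Sketch of a two-stage blockwise QDA pair search: (1) score QDA accuracy on
# the combined features of every pair of blocks, (2) exhaustively test the
# individual feature pairs inside the highest-scoring block pairs.
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def blockwise_qda_search(X, y, block_size=10, top_pairs=5, cv=3):
    p = X.shape[1]
    blocks = [np.arange(i, min(i + block_size, p)) for i in range(0, p, block_size)]
    # small reg_param guards against near-singular class covariances (n << p regime)
    qda = lambda: QuadraticDiscriminantAnalysis(reg_param=0.02)

    # Stage 1: cross-validated QDA accuracy for each pair of blocks
    block_scores = []
    for a, b in combinations(range(len(blocks)), 2):
        cols = np.concatenate([blocks[a], blocks[b]])
        acc = cross_val_score(qda(), X[:, cols], y, cv=cv).mean()
        block_scores.append((acc, a, b))
    block_scores.sort(reverse=True)

    # Stage 2: exhaustive feature-pair search inside the best block pairs
    best_acc, best_pair = 0.0, None
    for _, a, b in block_scores[:top_pairs]:
        for i in blocks[a]:
            for j in blocks[b]:
                acc = cross_val_score(qda(), X[:, [i, j]], y, cv=cv).mean()
                if acc > best_acc:
                    best_acc, best_pair = acc, (i, j)
    return best_acc, best_pair
```

On synthetic data containing a weak-marginal/strong-bivariate pair (e.g. two features whose class-conditional correlation flips sign while their marginals coincide), the sketch recovers the planted pair even though neither feature discriminates on its own.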
Factorizing LambdaMART for cold start recommendations
Recommendation systems often rely on point-wise loss metrics such as the mean
squared error. However, in real recommendation settings only a few items are
presented to a user. This observation has recently encouraged the use of
rank-based metrics. LambdaMART is the state-of-the-art learning-to-rank
algorithm relying on such a metric. Despite its success, it lacks a principled
regularization mechanism, relying instead on empirical approaches to control
model complexity, which leaves it prone to overfitting.
Motivated by the fact that very often the users' and items' descriptions as
well as the preference behavior can be well summarized by a small number of
hidden factors, we propose a novel algorithm, LambdaMART Matrix Factorization
(LambdaMART-MF), that learns a low rank latent representation of users and
items using gradient boosted trees. The algorithm factorizes LambdaMART by
defining relevance scores as the inner product of the learned representations
of the users and items. The low rank essentially acts as a model complexity
controller; on top of it we propose additional regularizers that constrain the
learned latent representations to reflect the user and item manifolds as
defined by their original feature-based descriptors and the preference
behavior. Finally, we also propose a weighted variant of NDCG that reduces
the penalty for similar items with large rating discrepancies.
We experiment on two very different recommendation datasets, meta-mining and
movies-users, and evaluate the performance of LambdaMART-MF, with and without
regularization, in the cold start setting as well as in the simpler matrix
completion setting. In both cases it significantly outperforms current
state-of-the-art algorithms.
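Concretely, once the boosted trees have produced user and item latent factors, the relevance used for ranking is just an inner product, and ranking quality can be measured with NDCG. A minimal sketch follows (hypothetical names; plain NDCG is shown here, not the authors' weighted variant):

```python
# Relevance scoring and NDCG evaluation for a factorized ranking model:
# score(u, i) = <U[u], V[i]>, then rank items by score and compare to the
# true relevances via DCG / ideal DCG.
import numpy as np

def relevance_scores(U, V):
    """Relevance of every item for every user: inner products of latent factors."""
    return U @ V.T

def dcg(rels, k=10):
    rels = np.asarray(rels, dtype=float)[:k]
    return np.sum((2.0 ** rels - 1.0) / np.log2(np.arange(2, rels.size + 2)))

def ndcg(true_rels, scores, k=10):
    order = np.argsort(-np.asarray(scores))
    ideal = dcg(np.sort(true_rels)[::-1], k)
    return dcg(np.asarray(true_rels)[order], k) / ideal if ideal > 0 else 0.0
```

A weighted variant, as proposed in the paper, would rescale the per-position gains so that confusing two similarly rated items is penalized less; the exact weighting scheme is the paper's contribution and is not reproduced here.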
Adding value to natural clays as low‑cost adsorbents of methylene blue in polluted water through honeycomb monoliths manufacture
A natural Moroccan illite–smectite clay was used as an adsorbent for the removal of methylene blue (MB) from aqueous solutions. The clay was characterized by FTIR spectroscopy, TGA, SEM–EDS, X-ray fluorescence, XRD and N2 physisorption. The influence of pH, temperature and contact time on MB adsorption by the clay was investigated. The maximum equilibrium adsorption capacity was 100 mg g−1 at 45 °C. The kinetics and the isotherms were best fitted by the pseudo-second-order and Langmuir models, respectively. Clay honeycomb monoliths (50 cells cm−2) were obtained by extrusion of the starting material without any additive except water. The structured filters exhibited better performance under dynamic conditions than the powdered clay, adding value to the application of this low-cost adsorbent.
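The Langmuir and pseudo-second-order fits mentioned above are standard nonlinear least-squares problems. A short sketch with SciPy follows; the equilibrium data and parameter values below are illustrative placeholders, not the paper's measurements:

```python
# Fit the Langmuir isotherm q_e = qmax*KL*Ce / (1 + KL*Ce) to equilibrium data;
# the integrated pseudo-second-order kinetic model is included for completeness.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe (mg/g) vs equilibrium concentration Ce (mg/L)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def pseudo_second_order(t, qe, k2):
    """Integrated pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

# Hypothetical, noise-free equilibrium data (NOT the paper's measurements)
Ce = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0])   # mg/L
qe_obs = langmuir(Ce, 100.0, 0.05)                      # generated with qmax=100, KL=0.05

(qmax_fit, KL_fit), _ = curve_fit(langmuir, Ce, qe_obs, p0=[50.0, 0.01])
```

Fitting the nonlinear form directly, rather than a linearized Langmuir plot, avoids the error distortion that linearization introduces at low concentrations.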