Multi-Criteria Handover Using Modified Weighted TOPSIS Methods for Heterogeneous Networks
Ultra-dense small-cell deployment in future 5G networks is a promising solution to the ever-increasing demand for capacity and coverage. However, this deployment can lead to severe interference and a high number of handovers, which in turn cause increased signaling overhead. In order to ensure service continuity for mobile users, minimize the number of unnecessary handovers and reduce the signaling overhead in heterogeneous networks, it is important to model the handover decision problem adequately. In this paper, we model the handover decision using a multiple attribute decision making method, namely the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The base stations are considered as alternatives, and the handover metrics are considered as attributes for selecting the proper base station for handover. We propose two modified TOPSIS methods for handover management in heterogeneous networks. The first method incorporates the entropy weighting technique to weight the handover metrics. The second uses a standard deviation weighting technique to score the importance of each handover metric. Simulation results reveal that the proposed methods outperform the existing methods by reducing the number of frequent handovers and radio link failures, in addition to enhancing the achieved mean user throughput.
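As an illustration of the general approach (a minimal sketch, not the paper's exact implementation), entropy-weighted TOPSIS ranking of candidate base stations can be written in Python/NumPy; the decision matrix, metric choices and values below are hypothetical:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: attributes whose values are more dispersed
    across the alternatives receive larger weights."""
    P = X / X.sum(axis=0)                          # normalise each attribute column
    k = 1.0 / np.log(X.shape[0])
    E = -k * (P * np.log(P + 1e-12)).sum(axis=0)   # entropy per attribute
    d = 1.0 - E                                    # degree of divergence
    return d / d.sum()

def topsis(X, weights, benefit):
    """Rank alternatives (rows) by closeness to the ideal solution.

    X: decision matrix, rows = base stations, cols = handover metrics.
    benefit: boolean mask, True where larger metric values are better.
    """
    R = X / np.sqrt((X ** 2).sum(axis=0))          # vector-normalised matrix
    V = R * weights                                # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                 # closeness coefficient in [0, 1]

# Toy example: 3 candidate base stations, metrics = [signal level, load, bandwidth].
X = np.array([[0.8, 0.3, 20.0],
              [0.6, 0.7, 40.0],
              [0.9, 0.5, 10.0]])
benefit = np.array([True, False, True])            # load is a cost metric
scores = topsis(X, entropy_weights(X), benefit)
best = int(np.argmax(scores))                      # base station selected for handover
```

The standard-deviation variant would simply replace `entropy_weights` with weights proportional to each column's standard deviation.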
Detection of gene annotations and protein-protein interaction associated disorders through transitive relationships between integrated annotations
Background
Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing the gene's functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues. They are incomplete, since biological knowledge is far from definitive and evolves rapidly, and some erroneous annotations may be present. Since the curation of novel annotations is costly, in terms of both money and time, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful.
Methods
We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose improving these algorithms by weighting the annotations in the input set.
Results
We tested our methods and their weighted variants on the Gene Ontology annotation sets of three model organisms (Bos taurus, Danio rerio and Drosophila melanogaster). The methods showed their ability to predict novel gene annotations, and the weighting procedures led to a valuable improvement, although the obtained results vary according to the size of the input annotation set and the considered algorithm.
Conclusions
Of the three considered methods, the Semantic IMproved Latent Semantic Analysis provides the best results. In particular, when coupled with a proper weighting policy, it is able to predict a significant number of novel annotations, demonstrating that it can be a helpful tool in supporting scientists in the curation of gene functional annotations.
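The Latent Semantic Indexing step at the heart of these methods can be sketched in a few lines of NumPy; the toy gene-to-term annotation matrix and the 0.3 prediction threshold below are purely illustrative assumptions, not the paper's data or tuned parameters:

```python
import numpy as np

# Hypothetical toy annotation matrix: A[i, j] = 1 if gene i is annotated
# with GO term j, 0 otherwise.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Latent Semantic Indexing: truncated SVD of the annotation matrix.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                    # number of latent factors kept
A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Entries that score high in the reconstruction but are zero in A are
# candidate novel annotations to be offered to curators.
candidates = (A == 0) & (A_hat > 0.3)    # 0.3 is an arbitrary illustrative threshold
```

The weighting schemes the paper proposes would scale the entries of `A` (e.g. by annotation reliability) before the decomposition; the clustering step of the SIM-LSA variant would partition the genes and run this decomposition per cluster.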
Computational algorithms to predict Gene Ontology annotations
Perceptions of the Use of Blueprinting in a Formative Theory Assessment in Pharmacology Education
Objectives: This study aimed to assess perceptions of the use of a blueprint in a pharmacology formative theory assessment. Methods: This study took place from October 2015 to February 2016 at a medical college in Gujarat, India. Faculty from the Department of Pharmacology used an internal syllabus to prepare an assessment blueprint. A total of 12 faculty members prepared learning objectives and categorised cognitive domain levels by consensus. Learning objectives were scored according to clinical importance and marks were distributed according to proportional weighting. A three-dimensional test specification table of syllabus content, assessment tools and cognitive domains was prepared. Based on this table, a theory paper was created and administered to 126 pharmacology students. Feedback was then collected from the faculty members and students using a 5-point Likert scale. Results: The majority of faculty members agreed that using a blueprint ensured proper weighting of marks for important topics (90.00%), aligned questions with learning objectives (80.00%), distributed questions according to clinical importance (100.00%) and minimised inter-examiner variation in selecting questions (90.00%). Few faculty members believed that use of the blueprint created too many easy questions (10.00%) or too many difficult questions (10.00%). Most students felt that the paper had a uniform distribution of questions from the syllabus (90.24%), appropriately weighted important topics (77.23%), was well organised (79.67%) and tested in-depth subject knowledge (74.80%). Conclusion: These findings indicate that blueprinting should be an integral part of written assessments in pharmacology education.
Group Importance Sampling for Particle Filtering and MCMC
Bayesian methods and their implementations by means of sophisticated Monte
Carlo techniques have become very popular in signal processing over recent
years. Importance Sampling (IS) is a well-known Monte Carlo technique that
approximates integrals involving a posterior distribution by means of weighted
samples. In this work, we study the assignment of a single weighted sample
that compresses the information contained in a population of weighted samples.
Part of the theory that we present as Group Importance Sampling (GIS) has been
employed implicitly in different works in the literature. The provided analysis
yields several theoretical and practical consequences. For instance, we discuss
the application of GIS within the Sequential Importance Resampling framework and
show that Independent Multiple Try Metropolis schemes can be interpreted as a
standard Metropolis-Hastings algorithm, following the GIS approach. We also
introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS.
The first one, named Group Metropolis Sampling method, produces a Markov chain
of sets of weighted samples. All these sets are then employed for obtaining a
unique global estimator. The second one is the Distributed Particle
Metropolis-Hastings technique, where different parallel particle filters are
jointly used to drive an MCMC algorithm. Different resampled trajectories are
compared and then tested with a proper acceptance probability. The novel
schemes are tested in different numerical experiments such as learning the
hyperparameters of Gaussian Processes, two localization problems in a wireless
sensor network (with synthetic and real data) and the tracking of vegetation
parameters given satellite observations, where they are compared with several
benchmark Monte Carlo techniques. Three illustrative Matlab demos are also
provided. Comment: To appear in Digital Signal Processing. Related Matlab demos are
provided at https://github.com/lukafree/GIS.gi
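The central compression idea, replacing a population of weighted samples with a single properly weighted pair (a resampled representative carrying the group's summed weight), can be sketched as follows. The Gaussian target and proposal are toy assumptions for illustration, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def group_compress(samples, weights, rng):
    """Compress N weighted samples into one pair: resample a representative
    proportionally to the weights and assign it the summed group weight,
    so the single pair stays properly weighted for the target."""
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(len(samples), p=w / w.sum())
    return samples[idx], w.sum()

# Self-normalised IS for E[x] under N(0, 1), using a wider N(0, 2^2) proposal,
# with each group of 10 samples compressed to a single weighted pair.
def log_target(x):
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def log_prop(x):
    return -0.5 * (x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2 * np.pi))

reps, wts = [], []
for _ in range(2000):                      # 2000 groups of 10 samples each
    x = rng.normal(0.0, 2.0, size=10)
    w = np.exp(log_target(x) - log_prop(x))
    xr, wg = group_compress(x, w, rng)
    reps.append(xr)
    wts.append(wg)

reps, wts = np.array(reps), np.array(wts)
est = np.sum(wts * reps) / np.sum(wts)     # estimate of E[x] = 0 from compressed pairs
```

The compressed pairs can then be fed to downstream samplers (e.g. as proposals inside an MCMC acceptance step), which is how the paper builds its Group Metropolis Sampling and Distributed Particle Metropolis-Hastings schemes.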
Methods for many-objective optimization: an analysis
Decomposition-based methods are often cited as the
solution to problems related to many-objective optimization. Decomposition-based methods employ a scalarizing function to reduce a many-objective problem to a set of single-objective problems, which upon solution yield a good approximation of the set of optimal solutions. This set is commonly referred to as
the Pareto front. In this work we explore the implications of using decomposition-based methods over Pareto-based methods from a probabilistic point of view. Namely, we investigate whether there is an advantage to using a decomposition-based method, for example one using the Chebyshev scalarizing function, over Pareto-based methods.
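The weighted Chebyshev scalarizing function mentioned above is simple to state concretely; the bi-objective candidate vectors, ideal point and weights below are toy assumptions for illustration:

```python
import numpy as np

def chebyshev(f, weights, z_star):
    """Weighted Chebyshev scalarizing function: the worst weighted deviation
    from the ideal point z_star. Minimising it over the decision space yields
    (weakly) Pareto-optimal solutions for suitable weight vectors."""
    return np.max(weights * np.abs(f - z_star))

# Toy bi-objective (minimisation) example with ideal point (0, 0): pick the
# best of three candidate objective vectors under one weight vector of the
# decomposition.
F = np.array([[1.0, 4.0],
              [2.0, 2.0],
              [4.0, 1.0]])
w = np.array([0.5, 0.5])
scores = np.array([chebyshev(f, w, np.zeros(2)) for f in F])
best = int(np.argmin(scores))   # the balanced candidate (2, 2) wins here
```

Sweeping `w` over a set of weight vectors, each defining one single-objective subproblem, is what lets a decomposition-based method approximate the whole Pareto front.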
Nested Sequential Monte Carlo Methods
We propose nested sequential Monte Carlo (NSMC), a methodology to sample from
sequences of probability distributions, even where the random variables are
high-dimensional. NSMC generalises the SMC framework by requiring only
approximate, properly weighted samples from the SMC proposal distribution,
while still resulting in a correct SMC algorithm. Furthermore, NSMC can in
itself be used to produce such properly weighted samples. Consequently, one
NSMC sampler can be used to construct an efficient high-dimensional proposal
distribution for another NSMC sampler, and this nesting of the algorithm can be
done to an arbitrary degree. This allows us to consider complex and
high-dimensional models using SMC. We show results that motivate the efficacy
of our approach on several filtering problems with dimensions in the order of
100 to 1 000. Comment: Extended version of a paper published in Proceedings of the 32nd
International Conference on Machine Learning (ICML), Lille, France, 201
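A minimal bootstrap SMC (particle filter), the building block that NSMC nests and generalises, can be sketched as follows. The 1-D linear-Gaussian state-space model is a toy assumption for illustration, not one of the paper's high-dimensional experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: x_t = 0.9 x_{t-1} + v_t,  y_t = x_t + e_t,  v_t, e_t ~ N(0, 1).
T, N = 50, 500
xs_true = np.zeros(T)
for t in range(1, T):
    xs_true[t] = 0.9 * xs_true[t - 1] + rng.normal()
ys = xs_true + rng.normal(size=T)

x = rng.normal(size=N)                     # initial particle cloud
means = np.zeros(T)
for t in range(T):
    x = 0.9 * x + rng.normal(size=N)       # propagate through the dynamics
    logw = -0.5 * (ys[t] - x) ** 2         # Gaussian likelihood (up to a constant)
    w = np.exp(logw - logw.max())
    w /= w.sum()                           # normalised importance weights
    means[t] = np.sum(w * x)               # filtered mean estimate
    x = x[rng.choice(N, size=N, p=w)]      # multinomial resampling
```

NSMC's contribution is that the proposal step (here an exact draw from the dynamics) may itself be replaced by another SMC sampler returning only approximate, properly weighted samples, while the outer algorithm remains correct.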
Towards a Simplified Dynamic Wake Model using POD Analysis
We apply the proper orthogonal decomposition (POD) to large eddy simulation
data of a wind turbine wake in a turbulent atmospheric boundary layer. The
turbine is modeled as an actuator disk. Our analysis mainly focuses on the
question whether POD could be a useful tool to develop a simplified dynamic
wake model. The extracted POD modes are used to obtain approximate descriptions
of the velocity field. To assess the quality of these POD reconstructions, we
define simple measures which are believed to be relevant for a downstream
turbine in the wake, such as the energy flux through a disk in the wake. It is
shown that only a few modes are necessary to capture basic dynamical aspects of
these measures even though only a small part of the turbulent kinetic energy is
restored. Furthermore, we show that the importance of the individual modes
depends on the measure chosen. Therefore, the optimal choice of modes for a
possible model could in principle depend on the application of interest. We
additionally present a possible interpretation of the POD modes relating them
to specific properties of the wake. For example the first mode is related to
the horizontal large scale movement. Besides yielding a deeper understanding,
this also enables us to view our results in comparison to existing dynamic wake
models.
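The POD-by-SVD machinery behind such an analysis can be sketched compactly; the synthetic snapshot matrix below (two sinusoidal structures plus weak noise) stands in for the LES wake fields and is purely an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a synthetic snapshot matrix U: each column is one "velocity field"
# snapshot over n_points grid points, sampled at n_snap instants.
n_points, n_snap = 200, 60
t = np.linspace(0, 2 * np.pi, n_snap)
x = np.linspace(0, 1, n_points)[:, None]
U = np.sin(2 * np.pi * x) * np.cos(3 * t) + 0.5 * np.cos(4 * np.pi * x) * np.sin(5 * t)
U += 0.01 * rng.normal(size=(n_points, n_snap))   # weak "turbulent" fluctuations

# POD: SVD of the mean-subtracted snapshot matrix. Left singular vectors are
# the spatial POD modes, ordered by the energy they capture.
U_mean = U.mean(axis=1, keepdims=True)
Phi, s, Vt = np.linalg.svd(U - U_mean, full_matrices=False)

energy = s ** 2 / np.sum(s ** 2)                  # energy fraction per mode
k = 2                                             # truncate to the leading modes
U_rec = U_mean + Phi[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
captured = energy[:k].sum()                       # energy captured by k modes
```

The paper's point is precisely that a reconstruction like `U_rec`, built from only a few modes, can already reproduce wake measures such as the energy flux through a downstream disk, even when `captured` kinetic energy is modest.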
Learning to Auto Weight: Entirely Data-driven and Highly Efficient Weighting Framework
Example weighting algorithms are an effective solution to the training bias
problem; however, most typical previous methods are limited by human
knowledge and require laborious tuning of hyperparameters. In this paper, we
propose a novel example weighting framework called Learning to Auto Weight
(LAW). The proposed framework finds step-dependent weighting policies
adaptively, and can be jointly trained with target networks without any
assumptions or prior knowledge about the dataset. It consists of three key
components: Stage-based Searching Strategy (3SM) is adopted to shrink the huge
searching space in a complete training process; Duplicate Network Reward (DNR)
gives more accurate supervision by removing randomness during the searching
process; Full Data Update (FDU) further improves the updating efficiency.
Experimental results demonstrate the superiority of the weighting policies explored
by LAW over the standard training pipeline. Compared with baselines, LAW can find a
better weighting schedule that achieves much higher accuracy on both
biased CIFAR and ImageNet. Comment: Accepted by AAAI 202
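The objective that any example-weighting scheme, hand-crafted or learned as in LAW, plugs into is a per-example weighted loss; the logits, labels and weights below are toy assumptions, not LAW's learned policy:

```python
import numpy as np

def weighted_ce(logits, labels, ex_weights):
    """Cross-entropy averaged with per-example weights: down-weighted
    (e.g. noisy or biased) examples contribute less to the objective."""
    z = logits - logits.max(axis=1, keepdims=True)        # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -logp[np.arange(len(labels)), labels]           # per-example loss
    w = ex_weights / ex_weights.sum()                     # normalise the weights
    return float(np.sum(w * nll))

logits = np.array([[2.0, 0.1],
                   [0.2, 1.5],
                   [1.0, 1.0]])
labels = np.array([0, 1, 0])
uniform = weighted_ce(logits, labels, np.ones(3))
# Suppose a weighting policy flags the third (ambiguous) example as unreliable:
downweighted = weighted_ce(logits, labels, np.array([1.0, 1.0, 0.1]))
```

LAW's contribution is learning where such weights come from, searching over step-dependent weighting policies jointly with the target network rather than fixing them by hand.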