Dynamic Multi-Objective Optimization With jMetal and Spark: a Case Study
Technologies for Big Data and Data Science are receiving increasing research interest nowadays. This paper introduces the prototyping architecture of a tool aimed at solving Big Data optimization problems. Our tool combines the jMetal framework for multi-objective optimization with Apache Spark, a technology that is gaining momentum. In particular, we make use of the streaming facilities of Spark to feed an optimization problem with data from different sources. We demonstrate the use of our tool by solving a dynamic bi-objective instance of the Traveling Salesman Problem (TSP) based on near-real-time traffic data from New York City, which is updated several times per minute. Our experiment shows that jMetal and Spark can be integrated to provide a software platform for dealing with dynamic multi-objective optimization problems. (Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech)
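The core idea of the abstract above can be sketched in a few lines: a bi-objective TSP whose travel-time matrix is replaced whenever new traffic data arrives. This is a minimal illustration, not the paper's jMetal/Spark implementation; the class and method names are hypothetical, and the "stream" is reduced to a plain method call.

```python
class DynamicBiObjectiveTSP:
    """Toy sketch of a dynamic bi-objective TSP: minimize route distance
    and route travel time, where travel times change as new traffic data
    arrives (in the paper, via Spark Streaming micro-batches)."""

    def __init__(self, distances, times):
        self.distances = distances  # static road distances
        self.times = times          # travel times, refreshed from the stream

    def evaluate(self, tour):
        """Return the two objective values (distance, time) of a tour."""
        d = t = 0.0
        for i in range(len(tour)):
            a, b = tour[i], tour[(i + 1) % len(tour)]
            d += self.distances[a][b]
            t += self.times[a][b]
        return d, t

    def update_times(self, new_times):
        # In the real system this would be triggered by a streaming
        # micro-batch; here it is just a direct assignment.
        self.times = new_times
```

An optimizer would re-evaluate its population after each `update_times` call, which is what makes the problem dynamic.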
A Descent Method for Equality and Inequality Constrained Multiobjective Optimization Problems
In this article we propose a descent method for equality- and inequality-constrained multiobjective optimization problems (MOPs) which generalizes the steepest descent method for unconstrained MOPs by Fliege and Svaiter to constrained problems by using two active set strategies. Under some regularity assumptions on the problem, we show that accumulation points of our descent method satisfy a necessary condition for local Pareto optimality. Finally, we show the typical behavior of our method in a numerical example.
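For the unconstrained case that this method generalizes, the Fliege-Svaiter descent direction has a simple closed form for two objectives: take the convex combination of the two gradients with minimum norm and negate it. The sketch below shows only that bi-objective building block, not the paper's active-set extension to constraints.

```python
import numpy as np

def min_norm_direction(g1, g2):
    """Bi-objective steepest descent direction (Fliege-Svaiter style):
    d = -(lam*g1 + (1-lam)*g2), where lam in [0, 1] minimizes the norm
    of the convex combination lam*g1 + (1-lam)*g2 of the gradients."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    diff = g1 - g2
    denom = diff @ diff
    # Closed-form minimizer of ||g2 + lam*(g1 - g2)||^2, clipped to [0, 1]
    lam = 0.5 if denom == 0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return lam, -(lam * g1 + (1 - lam) * g2)
```

The returned direction satisfies d·g_i <= 0 for both gradients, so it is a descent direction for both objectives at once (and d = 0 exactly at Pareto-critical points).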
Absorption efficiency of gold nanorods determined by quantum dot fluorescence thermometry
In this work, quantum dot fluorescence thermometry, in combination with double-beam confocal microscopy, has been applied to determine the thermal loading of gold nanorods subjected to optical excitation at the longitudinal surface plasmon resonance. The absorbing/heating efficiency of low (≈3) aspect ratio gold nanorods has been experimentally determined to be close to 100%, in excellent agreement with theoretical simulations of the extinction, absorption, and scattering spectra based on the discrete dipole approximation.
PSA-Based Multi-Objective Evolutionary Algorithms
It has generally been acknowledged that both proximity to the Pareto front and a certain diversity along the front should be targeted when using evolutionary multiobjective optimization. Recently, a new partitioning mechanism, the Part and Select Algorithm (PSA), has been introduced. It was shown that this partitioning allows for the selection of a well-diversified set out of an arbitrary given set while maintaining low computational cost. When embedded into an evolutionary search (NSGA-II), the PSA significantly enhanced the exploitation of diversity. In this paper, the ability of the PSA to enhance evolutionary multiobjective algorithms (EMOAs) is further investigated. Two research directions are explored. The first deals with the integration of the PSA within an EMOA with a novel strategy. Contrary to most EMOAs, which give a higher priority to proximity over diversity, this new strategy promotes the balance between the two. The suggested algorithm allows some dominated solutions to survive if they contribute to diversity. It is shown that such an approach substantially reduces the risk that the algorithm fails to find the Pareto front. The second research direction explores the use of the PSA as an archiving selection mechanism to improve the averaged Hausdorff distance obtained by existing EMOAs. It is shown that integrating the PSA into NSGA-II and Δp-EMOA as an archiving mechanism leads to algorithms that are superior to the base EMOAs on problems with disconnected Pareto fronts. © 2014 Springer International Publishing Switzerland
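The partitioning idea behind PSA can be rendered in a simplified form: repeatedly split the subset with the largest coordinate range along its widest dimension, then keep one representative per part. This is a hedged sketch of the general "part, then select" scheme, not the published PSA; the midpoint split and centroid-based representative choice are simplifications assumed here.

```python
import numpy as np

def part_and_select(points, k):
    """Simplified part-and-select sketch: split the part with the widest
    single-coordinate range at its midpoint until k parts exist, then
    return each part's member closest to the part's centroid."""
    parts = [np.asarray(points, dtype=float)]
    while len(parts) < k:
        widths = [p.max(axis=0) - p.min(axis=0) for p in parts]
        idx = max(range(len(parts)), key=lambda i: widths[i].max())
        part, dim = parts[idx], int(np.argmax(widths[idx]))
        if widths[idx].max() == 0:          # all remaining points identical
            break
        mid = (part[:, dim].max() + part[:, dim].min()) / 2
        left, right = part[part[:, dim] <= mid], part[part[:, dim] > mid]
        if len(left) == 0 or len(right) == 0:
            break
        parts[idx:idx + 1] = [left, right]  # replace the part by its halves
    reps = []
    for p in parts:
        c = p.mean(axis=0)
        reps.append(p[np.argmin(((p - c) ** 2).sum(axis=1))])
    return np.array(reps)
```

Because each split targets the widest remaining part, the selected representatives spread out along the set's extent, which is the diversity-preserving property the abstract refers to.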
Trading-off Data Fit and Complexity in Training Gaussian Processes with Multiple Kernels
This is the author accepted manuscript; the final version is available from Springer Verlag via the DOI in this record. LOD 2019: Fifth International Conference on Machine Learning, Optimization, and Data Science, 10-13 September 2019, Siena, Italy.
Gaussian processes (GPs) belong to a class of probabilistic techniques that have been successfully used in different domains of machine learning and optimization. They are popular because they provide uncertainties in predictions, which sets them apart from other modelling methods that provide only point predictions. The uncertainty is particularly useful for decision making, as we can gauge how reliable a prediction is. One of the fundamental challenges in using GPs is that the efficacy of a model depends on selecting an appropriate kernel and the associated hyperparameter values for a given problem. Furthermore, the training of GPs, that is, optimizing the hyperparameters using a data set, is traditionally performed using a cost function that is a weighted sum of data fit and model complexity, and the underlying trade-off is completely ignored. Addressing these challenges and shortcomings, in this article we propose the following automated training scheme. Firstly, we use a weighted product of multiple kernels, with a view to relieving users from having to choose an appropriate kernel for the problem at hand without any domain-specific knowledge. Secondly, for the first time, we modify GP training by using a multi-objective optimizer to tune the hyperparameters and weights of multiple kernels and extract an approximation of the complete trade-off front between data fit and model complexity. We then propose a novel solution selection strategy based on mean standardized log loss (MSLL) to select a solution from the estimated trade-off front and finalise training of a GP model. The results on three data sets and a comparison with the standard approach clearly show the potential benefit of the proposed approach of using multi-objective optimization with multiple kernels. Natural Environment Research Council (NERC)
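The two competing training objectives the abstract mentions come from splitting the GP negative log marginal likelihood into its data-fit term, (1/2) yᵀK⁻¹y, and its complexity term, (1/2) log|K|. The sketch below evaluates both separately for a weighted product of two kernels; the kernel choices, weight parametrization, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf(X, ls):
    """Squared-exponential kernel matrix on rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def exp_kernel(X, ls):
    """Exponential (Matern-1/2) kernel matrix on rows of X."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return np.exp(-d / ls)

def gp_objectives(X, y, ls1, ls2, w1, w2, noise=1e-2):
    """Return the two GP training objectives separately:
    data fit = 0.5 * y^T K^-1 y,  complexity = 0.5 * log|K|,
    for the weighted kernel product K = k1^w1 * k2^w2 + noise*I."""
    K = rbf(X, ls1) ** w1 * exp_kernel(X, ls2) ** w2
    K += noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^-1 y
    data_fit = 0.5 * float(y @ alpha)
    complexity = float(np.log(np.diag(L)).sum())  # = 0.5 * log|K|
    return data_fit, complexity
```

A multi-objective optimizer over (ls1, ls2, w1, w2) would then return a front of (data_fit, complexity) trade-offs instead of a single weighted-sum optimum.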
Final results of the EDELWEISS-II WIMP search using a 4-kg array of cryogenic germanium detectors with interleaved electrodes
The EDELWEISS-II collaboration has completed a direct search for WIMP dark matter with an array of ten 400-g cryogenic germanium detectors in operation at the Laboratoire Souterrain de Modane. The combined use of thermal phonon sensors and charge collection electrodes with an interleaved geometry enables the efficient rejection of gamma-induced radioactivity as well as near-surface interactions. A total effective exposure of 384 kg·d has been achieved, mostly coming from fourteen months of continuous operation. Five nuclear recoil candidates are observed above 20 keV, while the estimated background is 3.0 events. The result is interpreted in terms of limits on the cross-section of spin-independent interactions of WIMPs and nucleons. A cross-section of 4.4×10⁻⁸ pb is excluded at 90% CL for a WIMP mass of 85 GeV. New constraints are also set on models where the WIMP-nucleon scattering is inelastic.
Comment: 23 pages, 5 figures; matches published version
A search for low-mass WIMPs with EDELWEISS-II heat-and-ionization detectors
We report on a search for low-energy (E < 20 keV) WIMP-induced nuclear recoils using data collected in 2009-2010 by EDELWEISS from four germanium detectors equipped with thermal sensors and an electrode design (ID) that allows the efficient rejection of several sources of background. The data indicate no evidence for an exponential distribution of low-energy nuclear recoils that could be attributed to WIMP elastic scattering after an exposure of 113 kg·d. For WIMPs of mass 10 GeV, the observation of one event in the WIMP search region results in a 90% CL limit of 1.0×10⁻⁵ pb on the spin-independent WIMP-nucleon scattering cross-section, which constrains the parameter space associated with the findings reported by the CoGeNT, DAMA and CRESST experiments.
Comment: PRD Rapid Communication, accepted
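The step from "one observed event" to a 90% CL limit can be illustrated with the classical Poisson construction: find the smallest signal mean whose probability of yielding at most the observed count drops to 10%. This is only a textbook illustration of that step; the actual EDELWEISS analysis uses a more sophisticated statistical treatment.

```python
import math

def poisson_upper_limit(n_obs, cl=0.90):
    """Classical Poisson upper limit on the mean: the value mu at which
    P(N <= n_obs | mu) = 1 - cl, found by bisection (the CDF is
    monotonically decreasing in mu)."""
    def cdf(mu):
        return sum(math.exp(-mu) * mu ** k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cdf(mid) > 1 - cl:
            lo = mid   # mu still too small: too likely to see <= n_obs
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For one observed event this gives about 3.89 signal events at 90% CL; dividing such a count limit by the exposure and the efficiency-weighted expected rate per unit cross-section turns it into a cross-section limit.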
Muon-induced background in the EDELWEISS dark matter search
A dedicated analysis of the muon-induced background in the EDELWEISS dark matter search has been performed on a data set acquired in 2009 and 2010. The total muon flux underground in the Laboratoire Souterrain de Modane (LSM) was measured to be \,muons/m/d. The modular design of the muon-veto system allows the reconstruction of the muon trajectory and hence the determination of the angular-dependent muon flux in the LSM. The results are in good agreement with both MC simulations and earlier measurements. Synchronization of the muon-veto system with the phonon and ionization signals of the Ge detector array allowed the identification of muon-induced events. Rates for all muon-induced events and for WIMP-like events were extracted. After vetoing, the remaining rate of accepted muon-induced neutrons in the EDELWEISS-II dark matter search was determined to be at 90% C.L. Based on these results, the muon-induced background expectation for an anticipated exposure of 3000 kg·d for EDELWEISS-3 is events.
Comment: 21 pages, 16 figures. Accepted for publication in Astropart. Phys.
First Measurement of Coherent Elastic Neutrino-Nucleus Scattering on Argon
We report the first measurement of coherent elastic neutrino-nucleus scattering (CEvNS) on argon using a liquid argon detector at the Oak Ridge National Laboratory Spallation Neutron Source. Two independent analyses prefer CEvNS over the background-only null hypothesis with greater than 3σ significance. The measured cross section, averaged over the incident neutrino flux, is (2.2 ± 0.7)×10⁻³⁹ cm², consistent with the standard model prediction. The neutron-number dependence of this result, together with that from our previous measurement on CsI, confirms the existence of the CEvNS process and provides improved constraints on non-standard neutrino interactions.
Comment: 8 pages, 5 figures, with 2 pages, 6 figures of supplementary material. V3: fixes to Figs. 3 and 4; V4: fix typo in Table 1; V5: replaced missing appendix; V6: fix Eq. 1, new Fig. 3; V7: final version, updated with final revision
Multi-Objective Optimization with an Adaptive Resonance Theory-Based Estimation of Distribution Algorithm: A Comparative Study
Proceedings of: 5th International Conference, LION 5, Rome, Italy, January 17-21, 2011. The introduction of learning into the search mechanisms of optimization algorithms has been put forward as one of the viable approaches to dealing with complex optimization problems, in particular multi-objective ones. One way of carrying out this hybridization is to use multi-objective optimization estimation of distribution algorithms (MOEDAs). However, it has been pointed out that current MOEDAs have an intrinsic shortcoming in their model-building algorithms that hampers their performance. In this work we argue that error-based learning, the class of learning most commonly used in MOEDAs, is responsible for current MOEDA underachievement. We present adaptive resonance theory (ART) as a suitable alternative learning paradigm and present a novel algorithm called multi-objective ART-based EDA (MARTEDA), which uses a Gaussian ART neural network for model-building and a hypervolume-based selector as described for the HypE algorithm. In order to assess the improvement obtained by combining these two cutting-edge approaches to optimization, an extensive set of experiments is carried out. These experiments also test the scalability of MARTEDA as the number of objective functions increases. This work was supported by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
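The model-building loop that all EDAs share, and that MARTEDA replaces with a Gaussian ART network, alternates between fitting a probabilistic model to the selected elite and sampling the next population from it. The sketch below is a deliberately minimal, single-objective, univariate-Gaussian version of that loop for illustration only; it is not MARTEDA, which is multi-objective and uses an ART model with hypervolume-based selection.

```python
import random
import statistics

def gaussian_eda(fitness, dim, pop=60, elite=20, gens=40, seed=1):
    """Minimal univariate-Gaussian EDA (minimization): fit an independent
    Gaussian per coordinate to the elite, sample the next population."""
    rng = random.Random(seed)
    mu, sigma = [3.0] * dim, [2.0] * dim   # arbitrary starting model
    best = None
    for _ in range(gens):
        popn = [[rng.gauss(mu[d], sigma[d]) for d in range(dim)]
                for _ in range(pop)]
        popn.sort(key=fitness)
        sel = popn[:elite]                 # truncation selection
        if best is None or fitness(sel[0]) < fitness(best):
            best = sel[0]
        for d in range(dim):               # model-building step
            col = [x[d] for x in sel]
            mu[d] = statistics.fmean(col)
            sigma[d] = max(statistics.pstdev(col), 1e-3)
    return best
```

The paper's argument is aimed precisely at this model-building step: error-based learners fitted here can misrepresent the selected set, whereas an ART network is proposed as a better-suited model class.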