Controlling the size distribution of nanoparticles through the use of physical boundaries during laser ablation in liquids
A simple, yet effective method of controlling the size and size distributions
of nanoparticles produced as a result of laser ablation of target material is
presented. The method employs physical boundaries on either side of the
ablation site. To demonstrate the potential of the method, experiments have
been conducted with copper and titanium as the target materials, placed in two
different liquid media (water and isopropyl
alcohol). The ablation of the target material immersed in the liquid medium has
been carried out using an Nd:YAG laser. Significant differences in the size and
size distributions are observed in the cases of nanoparticles produced with and
without confining boundaries. For any given liquid medium and target material,
the mean size of the nanoparticles obtained with the boundary-fitted target
surface is consistently larger than that obtained with open (flat) targets.
The observed trend has been attributed to the
plausible role(s) of the confining boundaries in prolonging the thermalisation
time of the plasma plume. In order to ascertain that the observed differences
in sizes of the nanoparticles produced with and without the presence of the
physical barriers are predominantly because of the prolonged thermalisation of
the plasma plume and not due to the possible formation of an oxide layer, select
experiments with gold as the target material in water have also been performed.
The experiments also show that, irrespective of the liquid medium, the increase
in the mean size of the copper-based nanoparticles due to the presence of
physical boundaries is relatively higher than that observed in the case of
titanium target material under similar experimental conditions.
Comment: 24 pages, 9 figures; a part of this work has been published in Photonics Prague 2017 (Proc. SPIE 10603, Photonics, Devices, and Systems VII, 1060304), titled "A novel method for fabrication of size-controlled metallic nanoparticles".
Hyperparameter Importance Across Datasets
With the advent of automated machine learning, automated hyperparameter
optimization methods are by now routinely used in data mining. However, this
progress is not yet matched by equal progress on automatic analyses that yield
information beyond performance-optimizing hyperparameter settings. In this
work, we aim to answer the following two questions: Given an algorithm, what
are generally its most important hyperparameters, and what are typically good
values for these? We present methodology and a framework to answer these
questions based on meta-learning across many datasets. We apply this
methodology using the experimental meta-data available on OpenML to determine
the most important hyperparameters of support vector machines, random forests
and Adaboost, and to infer priors for all their hyperparameters. The results,
obtained fully automatically, provide a quantitative basis to focus efforts in
both manual algorithm design and in automated hyperparameter optimization. The
conducted experiments confirm that the hyperparameters selected by the proposed
method are indeed the most important ones and that the obtained priors also
lead to statistically significant improvements in hyperparameter optimization.
Comment: © 2018. Copyright is held by the owner/author(s). Publication rights licensed to ACM. This is the author's version of the work. It is posted here for your personal use, not for redistribution. The definitive Version of Record was published in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
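In spirit, the importance analysis described above can be approximated by fitting a surrogate regression model on (configuration, performance) meta-data pooled from many datasets and inspecting how much each hyperparameter contributes to predicted performance. The sketch below is a simplified stand-in (random-forest importances rather than a full variance decomposition); the meta-data file and column names are assumptions, not the paper's or OpenML's actual schema.

```python
# Hypothetical sketch: surrogate-based hyperparameter importance from meta-data.
# The CSV file and column names are assumptions for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Meta-data: one row per evaluated SVM configuration on some dataset,
# e.g. columns ["C", "gamma", "tol", "accuracy", "dataset_id"].
meta = pd.read_csv("svm_meta_data.csv")          # assumed file
hyperparams = ["C", "gamma", "tol"]              # assumed hyperparameter columns

importances = {}
for dataset_id, runs in meta.groupby("dataset_id"):
    # Fit a surrogate mapping configurations to observed performance.
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
    surrogate.fit(runs[hyperparams], runs["accuracy"])
    importances[dataset_id] = dict(zip(hyperparams, surrogate.feature_importances_))

# Aggregate per-dataset importances to rank hyperparameters across datasets.
ranking = pd.DataFrame(importances).T.mean().sort_values(ascending=False)
print(ranking)
```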
Component-wise Analysis of Automatically Designed Multiobjective Algorithms on Constrained Problems
The performance of multiobjective algorithms varies across problems, making
it hard to develop new algorithms or apply existing ones to new problems. To
simplify the development and application of new multiobjective algorithms,
there has been an increasing interest in their automatic design from component
parts. These automatically designed metaheuristics can outperform their
human-developed counterparts. However, it is still unclear which components
contribute most to their performance improvement. This study
introduces a new methodology to investigate the effects of the final
configuration of an automatically designed algorithm. We apply this methodology
to a well-performing Multiobjective Evolutionary Algorithm Based on
Decomposition (MOEA/D) designed by the irace package on nine constrained
problems. We then contrast the impact of the algorithm components in terms of
their Search Trajectory Networks (STNs), the diversity of the population, and
the hypervolume. Our results indicate that the most influential components were
the restart and update strategies, with higher increments in performance and
more distinct metric values. Moreover, their relative influence depends on the
problem difficulty: not using the restart strategy was more influential on
problems where MOEA/D performs better, while the update strategy was more
influential on problems where MOEA/D performs worst.
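One of the metrics used in the comparison above, the hypervolume, is simple to compute exactly for two objectives. The sketch below is a minimal bi-objective (minimization) implementation; the example front and reference point are purely illustrative, not taken from the study.

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a bi-objective minimization front w.r.t. ref.

    `front` is a list of (f1, f2) points; `ref` is a reference point that
    every front point dominates (f1 < ref[0] and f2 < ref[1]).
    """
    # Keep only nondominated points, sorted by the first objective.
    nondom, best_f2 = [], float("inf")
    for f1, f2 in sorted(front):
        if f2 < best_f2:              # strictly better in f2 than all points to the left
            nondom.append((f1, f2))
            best_f2 = f2

    # Sweep along f1, adding the rectangle dominated exclusively by each point.
    hv = 0.0
    for i, (f1, f2) in enumerate(nondom):
        next_f1 = nondom[i + 1][0] if i + 1 < len(nondom) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

# Illustrative front and reference point.
print(hypervolume_2d([(1, 5), (2, 3), (4, 2)], ref=(6, 6)))   # -> 15.0
```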
Using automated algorithm configuration to improve the optimization of decentralized energy systems modeled as large-scale, two-stage stochastic programs
The optimization of decentralized energy systems is an important practical problem that can be modeled using stochastic programs and solved via their large-scale, deterministic equivalent formulations. Unfortunately, using this approach, even when leveraging a high degree of parallelism on large high-performance computing (HPC) systems, finding close-to-optimal solutions still requires long computation times. In this work, we present a procedure to reduce this computational effort substantially, using a state-of-the-art automated algorithm configuration method. We apply this procedure to a well-known example of a residential quarter with photovoltaic systems and storage, modeled as a two-stage stochastic mixed-integer linear program (MILP). We demonstrate that our procedure reduces computing time and costs by up to 50%. Our methodology can be applied to other, similarly modeled energy systems.
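The core idea behind automated algorithm configuration, searching over solver parameter settings and keeping the one that performs best on a set of training instances, can be illustrated with a toy random-search configurator. Everything in the sketch (parameter names, ranges, the solve_instance hook) is a hypothetical placeholder, not the configurator or MILP solver interface used in the paper.

```python
# Toy illustration of automated algorithm configuration: try random solver
# parameter settings and keep the one with the lowest mean solve time on a
# set of training instances. All names here are hypothetical placeholders.
import random
import statistics

def solve_instance(instance, params):
    """Placeholder: solve one training instance with the given solver
    parameters and return the wall-clock time in seconds."""
    raise NotImplementedError("hook up a real MILP solver here")

PARAM_SPACE = {                      # hypothetical solver parameters
    "mip_gap": [1e-4, 1e-3, 1e-2],
    "presolve": [0, 1, 2],
    "heuristic_effort": [0.05, 0.2, 0.5],
}

def random_config():
    return {name: random.choice(values) for name, values in PARAM_SPACE.items()}

def configure(train_instances, budget=50):
    best_config, best_time = None, float("inf")
    for _ in range(budget):
        config = random_config()
        mean_time = statistics.mean(
            solve_instance(inst, config) for inst in train_instances
        )
        if mean_time < best_time:
            best_config, best_time = config, mean_time
    return best_config, best_time
```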
Multiomics modeling of the immunome, transcriptome, microbiome, proteome and metabolome adaptations during human pregnancy.
Motivation: Multiple biological clocks govern a healthy pregnancy. These biological mechanisms produce immunologic, metabolomic, proteomic, genomic and microbiomic adaptations during the course of pregnancy. Modeling the chronology of these adaptations during full-term pregnancy provides the frameworks for future studies examining deviations implicated in pregnancy-related pathologies including preterm birth and preeclampsia.
Results: We performed a multiomics analysis of 51 samples from 17 pregnant women, delivering at term. The datasets included measurements from the immunome, transcriptome, microbiome, proteome and metabolome of samples obtained simultaneously from the same patients. Multivariate predictive modeling using the Elastic Net (EN) algorithm was used to measure the ability of each dataset to predict gestational age. Using stacked generalization, these datasets were combined into a single model. This model not only significantly increased predictive power by combining all datasets, but also revealed novel interactions between different biological modalities. Future work includes expansion of the cohort to preterm-enriched populations and in vivo analysis of immune-modulating interventions based on the mechanisms identified.
Availability and implementation: Datasets and scripts for reproduction of results are available through: https://nalab.stanford.edu/multiomics-pregnancy/.
Supplementary information: Supplementary data are available at Bioinformatics online.
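The two modeling stages described in the Results (per-modality Elastic Net models, then stacked generalization) can be sketched with scikit-learn roughly as follows; the omics feature matrices and gestational-age vector are assumed inputs, and this is not the authors' exact pipeline.

```python
# Minimal sketch of per-omics Elastic Net models combined by stacking.
# `omics` (dict of feature matrices, one per modality) and `gestational_age`
# are assumed to be provided by the caller.
import numpy as np
from sklearn.linear_model import ElasticNetCV, LinearRegression
from sklearn.model_selection import cross_val_predict

def stacked_model(omics, gestational_age, cv=5):
    # First level: one Elastic Net per omics dataset, with out-of-fold
    # predictions so the second level is not fit on leaked information.
    first_level, oof_predictions = {}, []
    for name, X in omics.items():
        model = ElasticNetCV(cv=cv)
        oof_predictions.append(cross_val_predict(model, X, gestational_age, cv=cv))
        first_level[name] = model.fit(X, gestational_age)

    # Second level: combine the per-modality predictions into one estimate.
    meta_features = np.column_stack(oof_predictions)
    second_level = LinearRegression().fit(meta_features, gestational_age)
    return first_level, second_level
```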
Deep Drone Racing: From Simulation to Reality with Domain Randomization
Dynamically changing environments, unreliable state estimation, and operation
under severe resource constraints are fundamental challenges that limit the
deployment of small autonomous drones. We address these challenges in the
context of autonomous, vision-based drone racing in dynamic environments. A
racing drone must traverse a track with possibly moving gates at high speed. We
enable this functionality by combining the performance of a state-of-the-art
planning and control system with the perceptual awareness of a convolutional
neural network (CNN). The resulting modular system is both platform- and
domain-independent: it is trained in simulation and deployed on a physical
quadrotor without any fine-tuning. The abundance of simulated data, generated
via domain randomization, makes our system robust to changes of illumination
and gate appearance. To the best of our knowledge, our approach is the first to
demonstrate zero-shot sim-to-real transfer on the task of agile drone flight.
We extensively test the precision and robustness of our system, both in
simulation and on a physical platform, and show significant improvements over
the state of the art.
Comment: Accepted as a Regular Paper to the IEEE Transactions on Robotics Journal. arXiv admin note: substantial text overlap with arXiv:1806.0854
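Domain randomization, as used above, amounts to re-sampling rendering parameters (illumination, gate appearance, backgrounds) from broad distributions each time a training image is generated, so the network never overfits to one fixed simulated appearance. The sampler below is a hedged illustration; all parameter names and ranges are made up, not the paper's simulator settings.

```python
# Illustrative domain-randomization sampler: draw new scene parameters for
# each simulated training image. Names and ranges are invented for the sketch.
import random

def sample_scene_params():
    return {
        "light_intensity": random.uniform(0.3, 1.5),      # illumination changes
        "light_azimuth_deg": random.uniform(0.0, 360.0),
        "gate_hue_shift": random.uniform(-0.2, 0.2),       # gate appearance changes
        "gate_texture_id": random.randrange(20),
        "background_id": random.randrange(50),
        "camera_noise_std": random.uniform(0.0, 0.02),
    }

# One randomized scene per training sample keeps the CNN from latching onto
# any single simulated illumination or gate texture.
for _ in range(3):
    print(sample_scene_params())
```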