Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars
The discovery of neural architectures from simple building blocks is a
long-standing goal of Neural Architecture Search (NAS). Hierarchical search
spaces are a promising step towards this goal but lack a unifying search space
design framework and typically only search over some limited aspect of
architectures. In this work, we introduce a unifying search space design
framework based on context-free grammars that can naturally and compactly
generate expressive hierarchical search spaces that are 100s of orders of
magnitude larger than common spaces from the literature. By enhancing and
exploiting the properties of these grammars, we effectively enable search over
the complete architecture and can foster regularity. Further, we propose an
efficient hierarchical kernel
design for a Bayesian Optimization search strategy to efficiently search over
such huge spaces. We demonstrate the versatility of our search space design
framework and show that our search strategy can be superior to existing NAS
approaches. Code is available at
https://github.com/automl/hierarchical_nas_construction
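As an illustration of the core idea, here is a minimal Python sketch of how a context-free grammar can derive hierarchical architectures; the toy production rules below are illustrative assumptions, not the grammars used in the paper.

import random

# Toy context-free grammar over architecture terms: nonterminals map to
# lists of alternative right-hand sides; terminals are primitive operators.
GRAMMAR = {
    "ARCH":  [["BLOCK"], ["BLOCK", "ARCH"]],                        # sequential composition
    "BLOCK": [["conv3x3"], ["conv1x1"], ["residual(", "ARCH", ")"]],  # hierarchy via nesting
}

def derive(symbol, rng, depth=0, max_depth=6):
    """Randomly expand a symbol into a string of terminal operators."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    # Force the shortest production near the depth limit to guarantee termination.
    options = GRAMMAR[symbol] if depth < max_depth else [GRAMMAR[symbol][0]]
    rhs = rng.choice(options)
    return " ".join(derive(s, rng, depth + 1, max_depth) for s in rhs)

rng = random.Random(0)
print(derive("ARCH", rng))  # e.g. "conv1x1 residual( conv3x3 )"

Even this two-rule toy grammar generates an unbounded space of nested architectures, which hints at how compact grammars can describe the very large hierarchical spaces discussed above.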
NAS-ASDet: An Adaptive Design Method for Surface Defect Detection Network using Neural Architecture Search
Deep convolutional neural networks (CNNs) have been widely used in surface
defect detection. However, no single CNN architecture is suitable for all
detection tasks, and designing an effective task-specific architecture requires
considerable effort. The
neural architecture search (NAS) technology makes it possible to automatically
generate adaptive data-driven networks. Here, we propose a new method called
NAS-ASDet to adaptively design networks for surface defect detection. First, a
refined and industry-appropriate search space that can adaptively adjust the
feature distribution is designed, which consists of repeatedly stacked basic
novel cells with searchable attention operations. Then, a progressive search
strategy with a deep supervision mechanism is used to explore the search space
faster and more effectively. This method can design high-performance and
lightweight defect detection networks under the data scarcity typical of
industrial scenarios. The
experimental results on four datasets demonstrate that the proposed method
achieves superior performance and a lighter model size compared to
other competitive methods, including both manual and NAS-based approaches.
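To make the notion of a searchable attention operation concrete, here is a minimal PyTorch sketch of a basic cell whose attention module is selected by the search strategy; the candidate set and module names are illustrative assumptions, not NAS-ASDet's actual implementation.

import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """One illustrative attention candidate: channel-wise reweighting."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))

# The search strategy picks one entry per cell; "none" lets it drop attention.
ATTENTION_CANDIDATES = {
    "none": lambda c: nn.Identity(),
    "se":   lambda c: SqueezeExcite(c),
}

def build_cell(channels, attention_choice):
    """Basic cell: conv block followed by the searched attention operation."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=1),
        nn.BatchNorm2d(channels), nn.ReLU(),
        ATTENTION_CANDIDATES[attention_choice](channels),
    )

cell = build_cell(32, "se")
print(cell(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])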
Syntactical Similarity Learning by Means of Grammatical Evolution
Several research efforts have shown that a similarity function synthesized from examples may capture an application-specific similarity criterion in a way that fits the application needs more effectively than a generic distance definition. In this work, we propose a similarity learning algorithm tailored to problems of syntax-based entity extraction from unstructured text streams. The algorithm takes as input pairs of strings along with an indication of whether or not they adhere to the same syntactic pattern. Our approach is based on Grammatical Evolution and systematically explores a space of similarity definitions including all functions that may be expressed with a specialized, simple language that we have defined for this purpose. We assessed our proposal on patterns representative of practical applications. The results suggest that the proposed approach is indeed feasible and that the learned similarity function is more effective than the Levenshtein distance and the Jaccard similarity index.
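The genotype-to-phenotype mapping at the heart of Grammatical Evolution can be sketched in a few lines of Python: each integer codon selects a production rule modulo the number of alternatives. The toy grammar of similarity expressions is an illustrative stand-in for the specialized language defined in the paper.

GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<feat>"]],
    "<feat>": [["len_diff(a,b)"], ["common_prefix(a,b)"], ["char_overlap(a,b)"]],
}

def map_genotype(codons, start="<expr>", max_wraps=2):
    """Expand the leftmost nonterminal repeatedly, consuming codons mod |rules|;
    codons wrap around, with a budget to stop runaway derivations."""
    seq, i = [start], 0
    budget = len(codons) * (max_wraps + 1)
    while any(s in GRAMMAR for s in seq) and i < budget:
        k = next(j for j, s in enumerate(seq) if s in GRAMMAR)
        rules = GRAMMAR[seq[k]]
        seq[k:k + 1] = rules[codons[i % len(codons)] % len(rules)]
        i += 1
    return " ".join(seq)

print(map_genotype([0, 1, 2, 1]))  # "char_overlap(a,b) + len_diff(a,b)"

Because the mapping always yields a syntactically valid expression, standard evolutionary operators on the integer genotype can explore the similarity-function space safely.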
POPNASv2: An Efficient Multi-Objective Neural Architecture Search Technique
Automating the search for the best neural network model is a task that has gained more and more relevance in the last few years. In this context, Neural Architecture Search (NAS) represents the most effective technique, with results that rival state-of-the-art hand-crafted architectures.
However, this approach requires substantial computational resources as well as search time, which makes its usage prohibitive in many real-world scenarios.
With its sequential model-based optimization strategy, Progressive Neural Architecture Search (PNAS) represents a possible step forward in addressing this resource issue. Despite the quality of the networks it finds, this technique is still limited by its search time.
A significant step in this direction was taken by Pareto-Optimal Progressive Neural Architecture Search (POPNAS), which expands PNAS with a time predictor to enable a trade-off between search time and accuracy, framing the search as a multi-objective optimization problem.
This paper proposes a new version of the Pareto-Optimal Progressive Neural Architecture Search, called POPNASv2.
Our approach enhances the first version of the algorithm and improves its performance.
We expanded the search space by adding new operators and improved the quality of both predictors to build more accurate Pareto fronts.
Moreover, we introduced cell equivalence checks and enriched the search strategy with an adaptive greedy exploration step.
Our efforts allow POPNASv2 to achieve PNAS-like performance with an average 4x search-time speed-up.
The official version of this tool is available at https://github.com/AndreaFalanti/popnas-v2.
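The Pareto-front step underlying this multi-objective trade-off can be sketched as follows, assuming predicted accuracy is to be maximized and predicted training time minimized; candidate names and values are illustrative.

def pareto_front(candidates):
    """candidates: list of (name, accuracy, time); keep non-dominated ones.
    A candidate is dominated if another is at least as good on both
    objectives and strictly better on one."""
    front = []
    for name, acc, t in candidates:
        dominated = any(a >= acc and tt <= t and (a > acc or tt < t)
                        for _, a, tt in candidates)
        if not dominated:
            front.append((name, acc, t))
    return front

cells = [("A", 0.92, 120.0), ("B", 0.90, 60.0),
         ("C", 0.89, 90.0), ("D", 0.93, 300.0)]
print(pareto_front(cells))  # A, B, D survive; C is dominated by B

Only the cells on the front are carried into the next expansion step, which is how the accuracy and time predictors jointly prune the search.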
FIFE: an Infrastructure-as-code based Framework for Evaluating VM instances from multiple clouds
Funding: ABC project (Adaptive Brokerage for the Cloud), funded by EPSRC EP/R010528/1.
To choose an optimal VM, Cloud users often need to step through a process of evaluating the performance of VMs by benchmarking or by running a black-box search technique such as Bayesian optimisation. To facilitate this process, we develop a generic and highly configurable Framework with Infrastructure-as-Code (IaC) support For VM Evaluation (FIFE). FIFE abstracts the process into a searcher, a selector, a deployer, and an interpreter. It allows users to specify the target VM sets and evaluation objectives in JSON to automate the process. We demonstrate the use of the framework by setting up a Bayesian optimisation VM search system. We evaluate the system with various experimental setups, i.e. different combinations of cloud-provider counts and parallel search. The results show that the search efficiency remains the same when the search space consists of VMs from multiple cloud providers, and that parallel search can significantly reduce search time when the degree of parallelisation is set properly.
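A minimal Python sketch of how such a JSON-driven evaluation loop might look, with a stubbed deployer stage; the field names and four-stage wiring follow the abstract, but the configuration schema below is an illustrative assumption rather than FIFE's actual format.

import json

CONFIG = json.loads("""
{
  "target_vms": [
    {"provider": "aws",   "type": "t3.large"},
    {"provider": "azure", "type": "D2s_v3"}
  ],
  "objective": "benchmark_runtime_seconds"
}
""")

def deploy_and_benchmark(vm, objective):
    """Stub for the deployer + interpreter stages: provision the VM with
    Infrastructure-as-Code, run the benchmark, and parse the objective."""
    print(f"would deploy {vm['provider']}/{vm['type']} and measure {objective}")
    return None  # the measured objective value would be returned here

def run(config):
    # Searcher/selector stage: here, simply iterate over all target VMs;
    # a Bayesian-optimisation searcher would instead pick the next VM to try.
    return {f"{vm['provider']}:{vm['type']}":
            deploy_and_benchmark(vm, config["objective"])
            for vm in config["target_vms"]}

run(CONFIG)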
Chemical-reaction-inspired metaheuristic for optimization
We encounter optimization problems in our daily lives and in various research domains. Some of them are so hard that we can, at best, approximate the best solutions with (meta-)heuristic methods. However, the huge number of optimization problems and the small number of generally acknowledged methods mean that more metaheuristics are needed to fill the gap. We propose a new metaheuristic, called chemical reaction optimization (CRO), to solve optimization problems. It mimics the interactions of molecules in a chemical reaction as they reach a low-energy stable state. We tested the performance of CRO on three nondeterministic polynomial-time-hard combinatorial optimization problems. Two of them were traditional benchmark problems and the other was a real-world problem. Simulation results showed that CRO is very competitive with the few existing successful metaheuristics, outperforming them in some cases, and CRO achieved the best performance on the real-world problem. Moreover, by the No-Free-Lunch theorem, CRO must have the same performance as the others when averaged over all problems, but it can outperform other metaheuristics when matched to the right problem type. Therefore, it provides a new approach for solving optimization problems. CRO may potentially solve those problems which may not be solvable with the few generally acknowledged approaches. © 2006 IEEE.
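A heavily simplified Python sketch of the CRO idea, showing only the on-wall ineffective collision operator on a toy minimization problem; real CRO also uses decomposition, inter-molecular collision, and synthesis reactions, and its energy bookkeeping is more elaborate.

import random

def objective(x):                      # toy minimization target
    return sum(xi * xi for xi in x)

def on_wall_collision(sol, pe, ke, rng, loss=0.1):
    """Perturb the solution; accept only if potential + kinetic energy
    covers the new potential energy, keeping the surplus as kinetic energy
    (minus a loss, standing in for energy drained to the central buffer)."""
    new = [xi + rng.gauss(0, 0.3) for xi in sol]
    new_pe = objective(new)
    if pe + ke >= new_pe:              # energy conservation permits the move
        return new, new_pe, (pe + ke - new_pe) * (1 - loss)
    return sol, pe, ke                 # reaction rejected

rng = random.Random(0)
sol = [rng.uniform(-2, 2) for _ in range(5)]
pe, ke = objective(sol), 10.0
for _ in range(2000):
    sol, pe, ke = on_wall_collision(sol, pe, ke, rng)
print(round(pe, 4))                    # potential energy (objective) decreases

The kinetic-energy budget plays the role that temperature plays in simulated annealing: it permits uphill moves early on and shrinks as energy is lost, so the molecule settles into a low-energy state.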
Learning Where To Look -- Generative NAS is Surprisingly Efficient
The efficient, automated search for well-performing neural architectures
(NAS) has drawn increasing attention in the recent past. Thereby, the
predominant research objective is to reduce the necessity of costly evaluations
of neural architectures while efficiently exploring large search spaces. To
this aim, surrogate models embed architectures in a latent space and predict
their performance, while generative models for neural architectures enable
optimization-based search within the latent space the generator draws from.
Both surrogate and generative models aim to facilitate query-efficient search
in a well-structured latent space. In this paper, we
further improve the trade-off between query-efficiency and promising
architecture generation by leveraging the advantages of both efficient surrogate
models and generative design. To this end, we propose a generative model,
paired with a surrogate predictor, that iteratively learns to generate samples
from increasingly promising latent subspaces. This approach leads to very
effective and efficient architecture search, while keeping the query amount
low. In addition, our approach makes it straightforward to jointly optimize
for multiple objectives such as accuracy and hardware latency. We show
the benefit of this approach not only w.r.t. the optimization of architectures
for highest classification accuracy but also in the context of hardware
constraints and outperform state-of-the-art methods on several NAS benchmarks
for single and multiple objectives. We also achieve state-of-the-art
performance on ImageNet. The code is available at
http://github.com/jovitalukasik/AG-Net.
Comment: Accepted to the European Conference on Computer Vision 2022.
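A toy PyTorch sketch of coupling a generator with a surrogate predictor in this spirit; the networks, shapes, and training objective below are illustrative assumptions, not AG-Net's actual procedure.

import torch
import torch.nn as nn

latent_dim, arch_dim = 8, 16
# Generator maps latent codes to continuous architecture encodings;
# surrogate predicts a scalar performance score for each encoding.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, arch_dim))
surrogate = nn.Sequential(nn.Linear(arch_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(generator.parameters(), lr=1e-2)

for step in range(100):
    z = torch.randn(64, latent_dim)
    archs = generator(z)
    # Push generated architectures toward higher predicted performance.
    # In the real method, a few top candidates would be evaluated and used
    # to refit the surrogate each round, keeping the query count low.
    loss = -surrogate(archs).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(surrogate(generator(torch.randn(8, latent_dim))).mean().item())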