10,974 research outputs found
Automatic machine learning: methods, systems, challenges
This open access book presents the first comprehensive overview of general methods in Automatic Machine Learning (AutoML), collects descriptions of existing systems based on these methods, and discusses the first international challenge of AutoML systems. The book serves as a point of entry into this quickly developing field for researchers and advanced students alike, as well as providing a reference for practitioners aiming to use AutoML in their work. The recent success of commercial ML applications and the rapid growth of the field have created a high demand for off-the-shelf ML methods that can be used easily and without expert knowledge. Many of the recent machine learning successes crucially rely on human experts, who select appropriate ML architectures (deep learning architectures or more traditional ML workflows) and their hyperparameters; however, the field of AutoML targets a progressive automation of machine learning, based on principles from optimization and machine learning itself.
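The core idea described here, treating the choices a human expert would otherwise make as an optimization problem, can be illustrated with a short, hedged sketch. The example below is not taken from the book; it uses scikit-learn's RandomizedSearchCV to search an assumed hyperparameter space for an assumed SVM classifier on a toy dataset, the simplest form of the hyperparameter optimization that AutoML systems generalize.

```python
# A minimal sketch (not from the book) of the kind of hyperparameter search
# that AutoML systems automate. Dataset, estimator, and search space are
# illustrative assumptions.
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Search space over model hyperparameters; a full AutoML system would also
# search over preprocessing steps and model families.
param_distributions = {
    "C": loguniform(1e-2, 1e3),
    "gamma": loguniform(1e-4, 1e-1),
}

search = RandomizedSearchCV(
    SVC(), param_distributions, n_iter=20, cv=3, random_state=0
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Full AutoML systems extend this basic loop by also searching over preprocessing pipelines, model families, and architectures, typically with more sample-efficient optimizers such as Bayesian optimization.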
Metaheuristic design of feedforward neural networks: a review of two decades of research
Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain well-generalized FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
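As a concrete, hedged illustration of the metaheuristic route this review surveys (as opposed to gradient-based backpropagation), the sketch below optimizes the weights of a tiny one-hidden-layer FNN with a simple (1+1) evolution strategy on a toy regression task. The network size, data, and mutation scale are illustrative assumptions, not choices taken from the article.

```python
# A minimal sketch of metaheuristic weight optimization for a tiny FNN:
# a (1+1) evolution strategy that perturbs the packed weight vector and
# keeps the mutant only if the loss improves.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(X)

n_hidden = 8
n_params = 1 * n_hidden + n_hidden + n_hidden * 1 + 1  # weights + biases

def forward(w, X):
    """One-hidden-layer FNN with tanh activation, weights packed in w."""
    i = 0
    W1 = w[i:i + n_hidden].reshape(1, n_hidden); i += n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
    b2 = w[i:i + 1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

w = rng.normal(scale=0.5, size=n_params)
best = loss(w)
for step in range(5000):
    mutant = w + rng.normal(scale=0.1, size=n_params)  # Gaussian mutation
    cand = loss(mutant)
    if cand < best:  # greedy (1+1)-ES selection
        w, best = mutant, cand

print(f"final MSE: {best:.4f}")
```

Because the search uses only loss evaluations, the same loop works when gradients are unavailable or unreliable, which is the motivation the review gives for exploring evolutionary and swarm-based optimizers.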
ADGym: Design Choices for Deep Anomaly Detection
Deep learning (DL) techniques have recently found success in anomaly detection (AD) across various fields such as finance, medical services, and cloud computing. However, most of the current research tends to view deep AD algorithms as a whole, without dissecting the contributions of individual design choices like loss functions and network architectures. This view tends to diminish the value of preliminary steps like data preprocessing, as more attention is given to newly designed loss functions, network architectures, and learning paradigms. In this paper, we aim to bridge this gap by asking two key questions: (i) Which design choices in deep AD methods are crucial for detecting anomalies? (ii) How can we automatically select the optimal design choices for a given AD dataset, instead of relying on generic, pre-existing solutions? To address these questions, we introduce ADGym, a platform specifically crafted for comprehensive evaluation and automatic selection of AD design elements in deep methods. Our extensive experiments reveal that relying solely on existing leading methods is not sufficient. In contrast, models developed using ADGym significantly surpass current state-of-the-art techniques.
Comment: NeurIPS 2023. The first three authors contribute equally. Code available at https://github.com/Minqi824/ADGy
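To make the notion of automatically selecting design choices concrete, here is a hedged sketch that is not ADGym's actual API: it simply enumerates a small grid of deep-AD design choices (reconstruction loss and bottleneck width of a toy autoencoder) and keeps the combination with the best validation ROC AUC. The synthetic data, the choice grid, and all names are illustrative assumptions.

```python
# A minimal sketch (not ADGym's API) of selecting deep-AD design choices
# by enumerating a grid and scoring each combination on a labelled
# validation split.
import itertools
import numpy as np
import torch
from torch import nn
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic data: inliers around 0, a few shifted anomalies in validation.
X_train = rng.normal(size=(512, 16)).astype("float32")  # assumed anomaly-free
X_val = np.vstack([rng.normal(size=(200, 16)),
                   rng.normal(loc=3.0, size=(20, 16))]).astype("float32")
y_val = np.r_[np.zeros(200), np.ones(20)]

def build_ae(width):
    """Tiny autoencoder whose bottleneck width is one design choice."""
    return nn.Sequential(nn.Linear(16, width), nn.ReLU(), nn.Linear(width, 16))

design_grid = {"loss": [nn.MSELoss(), nn.L1Loss()], "width": [2, 4, 8]}
best = (None, -1.0)

for loss_fn, width in itertools.product(design_grid["loss"], design_grid["width"]):
    torch.manual_seed(0)
    model = build_ae(width)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    xb = torch.from_numpy(X_train)
    for _ in range(200):  # short training loop for the sketch
        opt.zero_grad()
        loss = loss_fn(model(xb), xb)
        loss.backward()
        opt.step()
    with torch.no_grad():
        xv = torch.from_numpy(X_val)
        err = ((model(xv) - xv) ** 2).mean(dim=1)  # anomaly score = recon. error
    auc = roc_auc_score(y_val, err.numpy())
    if auc > best[1]:
        best = ((type(loss_fn).__name__, width), auc)

print("best design choice:", best)
```

ADGym itself covers a much larger design space, including data preprocessing, network architectures, losses, and training strategies, and selects choices per AD dataset; see the linked repository for the authors' implementation.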