Big Data - Supply Chain Management Framework for Forecasting: Data Preprocessing and Machine Learning Techniques
This article systematically identifies and comparatively analyzes state-of-the-art supply chain (SC) forecasting strategies and technologies. A novel framework is proposed that incorporates Big Data Analytics into SC Management (problem identification, data sources, exploratory data analysis, machine-learning model training, hyperparameter tuning, performance evaluation, and optimization) and traces the effects of forecasting on the workforce, inventory, and the overall SC. The article first discusses which data to collect according to the SC strategy and how to collect them, then the need for different types of forecasting according to the planning horizon or SC objective. SC KPIs and error-measurement systems are recommended for optimizing the top-performing model. The adverse effects of phantom inventory on forecasting are illustrated, as is the dependence of managerial decisions on the SC KPIs for determining model performance parameters and improving operations management, transparency, and planning efficiency. A cyclic connection within the framework feeds post-process KPIs back into preprocessing, optimizing the overall control process (inventory management, workforce determination, cost, and production and capacity planning). The contribution of this research lies in the proposed standard SC process framework, the recommended forecasting data analysis, the analysis of forecasting effects on SC performance, the machine-learning algorithm optimization followed, and in shedding light on future research.
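The error-measurement systems the article recommends for selecting the top-performing model can be illustrated with two standard forecast-error metrics. This is a minimal sketch, not the article's implementation; the demand figures and model names below are hypothetical.

```python
import numpy as np

def rmse(actual, forecast):
    """Root-mean-square error: penalizes large deviations, in demand units."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

def mape(actual, forecast):
    """Mean absolute percentage error: scale-free, undefined for zero demand."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

# Hypothetical weekly demand versus two candidate models' forecasts.
demand = [120, 135, 128, 140, 150]
model_a = [118, 130, 131, 138, 152]
model_b = [100, 145, 120, 155, 140]

# Select the top-performing model by the lower error measure.
best = min(("model_a", model_a), ("model_b", model_b),
           key=lambda m: rmse(demand, m[1]))
print(best[0], round(rmse(demand, best[1]), 2))
```

In the framework's cyclic connection, such post-process error KPIs would then steer the next round of preprocessing and hyperparameter tuning.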
Application of Saliency Maps for Optimizing Camera Positioning in Deep Learning Applications
In the fields of process control engineering and robotics, especially in automatic control, optimization challenges frequently manifest as complex problems with expensive evaluations. This thesis zeroes in on one such problem: the optimization of camera positions for Convolutional Neural Networks (CNNs). CNNs have specific attention points in images that are often not intuitive to human perception, making camera placement critical for performance.
The research is guided by two primary questions. The first investigates the role of Explainable Artificial Intelligence (XAI), specifically GradCAM++ visual explanations, in Computer Vision for aiding in the evaluation of different camera positions. Building on this, the second question assesses a novel algorithm that leverages these XAI features against traditional black-box optimization methods.
To answer these questions, the study employs a robotic auto-positioning system for data collection, CNN model training, and performance evaluation. A case study focused on classifying flow regimes in industrial-grade bioreactors validates the method. The proposed approach shows improvements over established techniques like Grid Search, Random Search, Bayesian optimization, and Simulated Annealing. Future work will focus on gathering more data and including noise for generalized conclusions.

Contents
1 Introduction
1.1 Motivation
1.2 Problem Analysis
1.3 Research Question
1.4 Structure of the Thesis
2 State of the Art
2.1 Literature Research Methodology
2.1.1 Search Strategy
2.1.2 Inclusion and Exclusion Criteria
2.2 Blackbox Optimization
2.3 Mathematical Notation
2.4 Bayesian Optimization
2.5 Simulated Annealing
2.6 Random Search
2.7 Grid Search
2.8 Explainable A.I. and Saliency Maps
2.9 Flow Regime Classification in Stirred Vessels
2.10 Performance Metrics
2.10.1 R2 Score and Polynomial Regression for Experiment Data Analysis
2.10.2 Blackbox Optimization Performance Metrics
2.10.3 CNN Performance Metrics
3 Methodology
3.1 Requirement Analysis and Research Hypothesis
3.2 Research Approach: Case Study
3.3 Data Collection
3.4 Evaluation and Justification
4 Concept
4.1 System Overview
4.2 Data Flow
4.3 Experimental Setup
4.4 Optimization Challenges and Approaches
5 Data Collection and Experimental Setup
5.1 Hardware Components
5.2 Data Recording and Design of Experiments
5.3 Data Collection
5.4 Post-Experiment
6 Implementation
6.1 Simulation Unit
6.2 Recommendation Scalar from Saliency Maps
6.3 Saliency Map Features as Guidance Mechanism
6.4 GradCam++ Enhanced Bayesian Optimization
6.5 Benchmarking Unit
6.6 Benchmarking
7 Results and Evaluation
7.1 Experiment Data Analysis
7.2 Recommendation Scalar
7.3 Benchmarking Results and Quantitative Analysis
7.3.1 Accuracy Results from the Benchmarking Process
7.3.2 Cumulative Results Interpretation
7.3.3 Analysis of Variability
7.4 Answering the Research Questions
7.5 Summary
8 Discussion
8.1 Critical Examination of Limitations
8.2 Discussion of Solutions to Limitations
8.3 Practice-Oriented Discussion of Findings
9 Summary and Outlook
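The recommendation scalar from saliency maps (Section 6.2 of the contents above) reduces a GradCAM++ heat map to a single number usable by an optimizer. The thesis's exact reduction is not given in the abstract; one plausible minimal sketch, with a hypothetical heat map and region of interest, is:

```python
import numpy as np

def recommendation_scalar(saliency, roi):
    """Reduce a 2-D saliency map to one score: the share of total
    saliency mass that falls inside the region of interest (ROI)."""
    s = np.asarray(saliency, dtype=float)
    s = s - s.min()                      # shift to non-negative
    total = s.sum()
    if total == 0:
        return 0.0                       # featureless map: no signal
    r0, r1, c0, c1 = roi                 # row/col bounds of the ROI
    return float(s[r0:r1, c0:c1].sum() / total)

# Hypothetical 4x4 saliency map with activation concentrated top-left.
heat = np.array([[0.9, 0.8, 0.1, 0.0],
                 [0.7, 0.6, 0.0, 0.1],
                 [0.1, 0.0, 0.0, 0.0],
                 [0.0, 0.1, 0.0, 0.0]])
score = recommendation_scalar(heat, roi=(0, 2, 0, 2))
print(round(score, 2))  # close to 1 when the camera view keeps the ROI salient
```

A scalar of this kind could then be blended into a Bayesian optimizer's acquisition step to bias the search toward camera positions the CNN actually attends to.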
Adaptive performance optimization for large-scale traffic control systems
In this paper, we study the problem of optimizing (fine-tuning) the design parameters of large-scale traffic control systems composed of distinct, mutually interacting modules. Because no automated, well-established systematic approach exists, this problem usually demands considerable human effort and time for the successful deployment and operation of traffic control systems. We investigate an adaptive fine-tuning algorithm for determining the design parameters of two distinct, mutually interacting modules of the traffic-responsive urban control (TUC) strategy, i.e., split and cycle, for the large-scale urban road network of the city of Chania, Greece. Simulation results demonstrate that the network performance attained by the proposed adaptive optimization methodology, in terms of daily mean speed, is significantly better than that of the original TUC system even when the aforementioned design parameters are manually fine-tuned to virtual perfection by the system operators.
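The abstract does not specify the adaptive fine-tuning algorithm itself; a minimal derivative-free sketch of the idea, with a toy analytic surrogate standing in for the traffic simulation and parameter names that are purely illustrative, might look like:

```python
import random

def tune(evaluate, params, steps=200, sigma=0.5, seed=0):
    """Derivative-free fine-tuning: perturb the design parameters,
    keep the perturbation only if the measured performance improves."""
    rng = random.Random(seed)
    best, best_score = list(params), evaluate(params)
    for _ in range(steps):
        cand = [p + rng.gauss(0, sigma) for p in best]
        score = evaluate(cand)
        if score > best_score:           # higher mean speed is better
            best, best_score = cand, score
    return best, best_score

# Toy surrogate for network daily mean speed as a function of two design
# parameters (stand-ins for split/cycle gains); its peak is at (2, -1).
def mean_speed(p):
    return 30.0 - (p[0] - 2.0) ** 2 - (p[1] + 1.0) ** 2

params, speed = tune(mean_speed, [0.0, 0.0])
print([round(p, 2) for p in params], round(speed, 2))
```

In the real setting each `evaluate` call would be a full (and expensive) traffic simulation or field measurement, which is why sample-efficient adaptive schemes matter.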
Psychological principles of successful aging technologies: A mini-review
Based on resource-oriented conceptions of successful life-span development, we propose three principles for evaluating assistive technology: (a) net resource release, (b) person specificity, and (c) proximal versus distal frames of evaluation. We discuss how these general principles can aid the design and evaluation of assistive technology in adulthood and old age, and propose two technological strategies, one targeting sensorimotor and the other cognitive functioning. The sensorimotor strategy aims at releasing cognitive resources such as attention and working memory by reducing the cognitive demands of sensory or sensorimotor aspects of performance. The cognitive strategy attempts to provide adaptive and individualized cuing structures orienting the individual in time and space by providing prompts that connect properties of the environment to the individual's action goals. We argue that intelligent assistive technology continuously adjusts the balance between 'environmental support' and 'self-initiated processing' in person-specific and aging-sensitive ways, leading to enhanced allocation of cognitive resources. Furthermore, intelligent assistive technology may foster the generation of formerly latent cognitive resources by activating developmental reserves (plasticity). We conclude that 'lifespan technology', if co-constructed by behavioral scientists, engineers, and aging individuals, offers great promise for improving both the transition from middle adulthood to old age and the degree of autonomy in old age in present and future generations. Copyright (C) 2008 S. Karger AG, Basel.
Obstacle-aware Adaptive Informative Path Planning for UAV-based Target Search
Target search with unmanned aerial vehicles (UAVs) is a problem relevant to many scenarios, e.g., search and rescue (SaR). However, a key challenge is planning paths for maximal search efficiency given flight-time constraints. To address this, we propose the Obstacle-aware Adaptive Informative Path Planning (OA-IPP) algorithm for target search in cluttered environments using UAVs. Our approach leverages a layered planning strategy using a Gaussian Process (GP)-based model of target occupancy to generate informative paths in continuous 3D space. Within this framework, we introduce an adaptive replanning scheme which allows us to trade off between information gain, field coverage, sensor performance, and collision avoidance for efficient target detection. Extensive simulations show that our OA-IPP method performs better than state-of-the-art planners, and we demonstrate its application in a realistic urban SaR scenario.

Comment: Paper accepted for the International Conference on Robotics and Automation (ICRA 2019) to be held in Montreal, Canada.
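A GP-based model of target occupancy can be sketched in a 1-D toy form: the posterior mean predicts where the target is, while the posterior variance flags unexplored regions worth flying to. The kernel choice, observations, and query points below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    """GP posterior mean and variance at the query points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    # Prior variance 1 minus the explained part: diag(Ks K^-1 Ks^T).
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# Hypothetical occupancy observations along a 1-D transect:
# a detection near x=2, clear space near x=0 and x=4.
x_obs = np.array([0.0, 2.0, 4.0])
y_obs = np.array([0.0, 1.0, 0.0])
xq = np.array([2.0, 10.0])
mean, var = gp_posterior(x_obs, y_obs, xq)
# Near the detection the model is confident; far away the variance is high,
# which is exactly the signal an informative path planner exploits.
```

An adaptive replanner would weigh this variance (information gain) against coverage, sensor performance, and collision constraints when choosing the next waypoint.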
State-of-the-art on research and applications of machine learning in the building life cycle
Fueled by big data, powerful and affordable computing resources, and advanced algorithms, machine learning has been explored and applied to buildings research for the past decades and has demonstrated its potential to enhance building performance. This study systematically surveyed how machine learning has been applied at different stages of the building life cycle. By conducting a literature search on the Web of Knowledge platform, we found 9579 papers in this field and selected 153 papers for an in-depth review. The number of published papers is increasing year by year, with a focus on building design, operation, and control. However, no study was found using machine learning in building commissioning. There are successful pilot studies on fault detection and diagnosis of HVAC equipment and systems, load prediction, energy baseline estimation, load shape clustering, occupancy prediction, and learning occupant behaviors and energy use patterns. None of the existing studies were adopted broadly by the building industry, due to common challenges including (1) lack of large-scale labeled data to train and validate the model, (2) lack of model transferability, which prevents a model trained on one data-rich building from being used in another building with limited data, (3) lack of strong justification of the costs and benefits of deploying machine learning, and (4) performance that might not be reliable and robust for the stated goals, as a method might work for some buildings but not generalize to others. Findings from the study can inform future machine learning research to improve occupant comfort, energy efficiency, demand flexibility, and resilience of buildings, as well as inspire young researchers in the field to explore multidisciplinary approaches that integrate building science, computing science, data science, and social science.
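One of the pilot applications the survey mentions, energy baseline estimation, can be sketched as a least-squares fit of whole-building load against outdoor temperature. This is a minimal illustration of the idea, not any surveyed study's method; the daily records below are hypothetical and exactly linear for clarity.

```python
import numpy as np

# Hypothetical daily records: outdoor temperature (deg C) and whole-building
# electric load (kWh) -- cooling-dominated, so load rises with temperature.
temp = np.array([18.0, 22.0, 26.0, 30.0, 34.0])
load = np.array([210.0, 250.0, 290.0, 330.0, 370.0])

# Least-squares energy baseline: load ~ a * temp + b.
A = np.column_stack([temp, np.ones_like(temp)])
(a, b), *_ = np.linalg.lstsq(A, load, rcond=None)

predicted = a * 28.0 + b   # estimate the baseline load for a 28 deg C day
print(round(a, 2), round(b, 2), round(predicted, 1))
```

Real baselines add calendar and occupancy features, and the survey's transferability challenge shows up precisely here: coefficients fitted on one building rarely transfer to another.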