AI and OR in management of operations: history and trends
The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems of increasing complexity and scale. This paper begins by setting the context for the survey through a historical perspective on OR and AI. An extensive survey of applications of AI techniques for operations management, covering over 1200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as its source; hence it may not cover all the relevant journals, but it includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control, and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized by the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested.
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services and scenarios of future wireless networks.
Bayesian learning of models for estimating uncertainty in alert systems: application to air traffic conflict avoidance
Alert systems detect critical events which can happen in the short term. Uncertainties in the data and in the models used for detection cause alert errors. In the case of air traffic control systems such as Short-Term Conflict Alert (STCA), uncertainty increases errors in alerts of separation loss. Statistical methods that are based on analytical assumptions can provide biased estimates of uncertainties. More accurate analysis can be achieved by using Bayesian Model Averaging, which provides estimates of the posterior probability distribution of a prediction. We propose a new approach to estimating the prediction uncertainty, based on the observation that the uncertainty can be quantified by the variance of predicted outcomes. In our approach, predictions for which the variance of the posterior probabilities is above a given threshold are flagged as uncertain. To verify our approach, we calculate the probability of an alert based on extrapolation of the closest point of approach. Using Heathrow airport flight data, we found that alerts are often generated under different conditions, variations in which lead to alert detection errors. Achieving 82.1% accuracy in modelling the STCA system, which is a necessary condition for evaluating the uncertainty in prediction, we found that the proposed method is capable of reducing the uncertain component. Comparison with a bootstrap aggregation method demonstrated a significant reduction of uncertainty in predictions. Realistic estimates of uncertainties will open up new approaches to improving the performance of alert systems.
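The variance-thresholding idea described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the toy posterior draws, and the threshold value are all hypothetical, and the ensemble stands in for the posterior samples that Bayesian Model Averaging would produce.

```python
import numpy as np

def flag_uncertain(posterior_samples, threshold):
    """Flag predictions whose predictive variance exceeds a threshold.

    posterior_samples: array of shape (n_models, n_cases) holding the alert
    probability predicted by each model drawn from the posterior (as in
    Bayesian Model Averaging).
    Returns the averaged alert probability per case and a boolean mask of
    cases flagged as uncertain.
    """
    mean_prob = posterior_samples.mean(axis=0)   # model-averaged probability
    variance = posterior_samples.var(axis=0)     # spread across the ensemble
    uncertain = variance > threshold             # high variance => uncertain
    return mean_prob, uncertain

# toy example: 5 posterior draws of the alert probability for 3 aircraft-pair
# cases; the models agree on cases 0 and 1 but disagree on case 2
draws = np.array([
    [0.90, 0.10, 0.40],
    [0.92, 0.12, 0.70],
    [0.88, 0.09, 0.20],
    [0.91, 0.11, 0.60],
    [0.89, 0.10, 0.30],
])
probs, uncertain = flag_uncertain(draws, threshold=0.01)
```

In this toy run, only the third case is flagged: the models disagree strongly about it, so its variance across the ensemble exceeds the threshold, while the confident near-agreement on the first two cases keeps their variance low.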
Theoretical, Measured and Subjective Responsibility in Aided Decision Making
When humans interact with intelligent systems, their causal responsibility
for outcomes becomes equivocal. We analyze the descriptive abilities of a newly
developed responsibility quantification model (ResQu) to predict actual human
responsibility and perceptions of responsibility in the interaction with
intelligent systems. In two laboratory experiments, participants performed a
classification task. They were aided by classification systems with different
capabilities. We compared the predicted theoretical responsibility values to
the actual measured responsibility participants took on and to their subjective
rankings of responsibility. The model predictions were strongly correlated with
both measured and subjective responsibility. A bias existed only when
participants with poor classification capabilities relied less-than-optimally
on a system that had superior classification capabilities and assumed
higher-than-optimal responsibility. The study implies that when humans interact
with advanced intelligent systems, with capabilities that greatly exceed their
own, their comparative causal responsibility will be small, even if formally
the human is assigned major roles. Simply putting a human into the loop does
not assure that the human will meaningfully contribute to the outcomes. The
results demonstrate the descriptive value of the ResQu model to predict
behavior and perceptions of responsibility by considering the characteristics
of the human, the intelligent system, the environment and some systematic
behavioral biases. The ResQu model is a new quantitative method that can be
used in system design and can guide policy and legal decisions regarding human
responsibility in events involving intelligent systems
A brief network analysis of Artificial Intelligence publication
In this paper, we present an illustration of the history of Artificial
Intelligence (AI) through a statistical analysis of publications since 1940. We
collected and mined the IEEE publication database to analyze the geographical
and chronological variation in the activeness of AI research. The connections
between different institutes are shown. The results show that the leading
communities of AI research are mainly in the USA, China, Europe and Japan. The
key institutes, authors and research hotspots are revealed. It is found that
the research institutes in fields such as Data Mining, Computer Vision, Pattern
Recognition and other fields of Machine Learning are quite consistent, implying
strong interaction between the communities of these fields. It is also shown
that research in Electronic Engineering and industrial or commercial
applications is very active in California, and that Japan publishes many papers
in robotics. Due to the limitations of the data source, the results might be
overly influenced by the raw number of published articles; we mitigate this by
applying network key-node analysis to the research community instead of merely
counting publications.
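Key-node analysis of a collaboration network can be sketched as below. The abstract does not specify which centrality measure the authors used, so this assumes simple degree centrality; the institute names and links are entirely hypothetical toy data.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Rank nodes in a collaboration graph by normalized degree.

    edges: iterable of (institute_a, institute_b) co-authorship links.
    Returns a dict mapping each node to degree / (n - 1), i.e. the fraction
    of other nodes it is directly connected to.
    """
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    n = len(neighbors)
    return {node: len(nbrs) / (n - 1) for node, nbrs in neighbors.items()}

# toy co-authorship links (hypothetical)
links = [("MIT", "Stanford"), ("MIT", "Tsinghua"),
         ("MIT", "Tokyo"), ("Stanford", "Tsinghua")]
centrality = degree_centrality(links)
key_node = max(centrality, key=centrality.get)
```

Ranking by a graph measure like this, rather than by raw publication counts, is what lets an institute with fewer papers but many collaborations surface as a key node.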
A computer-based product classification and component detection for demanufacturing processes
This is an Author's Accepted Manuscript of an article published in International Journal of Computer Integrated Manufacturing, 24(10), 900-914, 2011 [copyright Taylor & Francis], available online at: http://www.tandfonline.com/10.1080/0951192X.2011.579169.

The aim of this paper is to propose a novel computer-based product classification, component detection and tracking approach for demanufacturing and disassembly processes. This is achieved by introducing a series of automated and sequential steps of product scanning, component identification, image analysis and sorting, leading to the development of a bill of materials (BOM). The produced BOM can then be associated with the relevant disassembly/demanufacture proviso. The proposed integrated image sorting and product classification (ISPC) approach can be considered a step forward in the automation of demanufacturing activities. The ISPC model proposed in this paper utilises and builds on state-of-the-art technology and the current body of research in computer-integrated demanufacturing and remanufacturing (CIDR). An appraisal of the latest research and of the factors that inhibit CIDR methods in practice is presented. A novel solution for the integration of imaging and material identification techniques to overcome some of the existing shortcomings of automated recycling processes is proposed. The proposed product scanning and component detection ISPC software consists of four distinct modules: the repertory database, the search engine, the product-attributes updater, and the image sorting and classification algorithm. The software framework that integrates the four modules is presented. Finally, an overall assessment of applying ISPC at various stages of CIDR processes concludes the article.

Funding: University of Ibadan MacArthur Foundation Grant.