
    Multi criteria decision making and its applications: a literature review

    Get PDF
    This paper presents current techniques used in Multi Criteria Decision Making (MCDM) and their applications. Two basic approaches for MCDM, namely Artificial Intelligence MCDM (AIMCDM) and Classical MCDM (CMCDM) are discussed and investigated. Recent articles from international journals related to MCDM are collected and analyzed to find which approach is more common than the other in MCDM. Also, which area these techniques are applied to. Those articles are appearing in journals for the year 2008 only. This paper provides evidence that currently, both AIMCDM and CMCDM are equally common in MCDM
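A common classical MCDM technique is the weighted-sum (Simple Additive Weighting) method: normalize each criterion, weight it by importance, and rank the alternatives. The sketch below is illustrative only; the alternatives, criteria, and weights are invented, not drawn from the survey.

```python
import numpy as np

# Simple Additive Weighting (SAW), a classical MCDM method.
# Rows are alternatives, columns are criteria; hypothetical data.
decision_matrix = np.array([
    [7.0, 300.0, 4.0],   # alternative A: quality, cost, delivery time
    [9.0, 450.0, 2.0],   # alternative B
    [6.0, 250.0, 5.0],   # alternative C
])
weights = np.array([0.5, 0.3, 0.2])       # importance of each criterion
benefit = np.array([True, False, False])  # cost and time: lower is better

# Normalize each column to [0, 1]; invert cost-type criteria.
norm = decision_matrix / decision_matrix.max(axis=0)
norm[:, ~benefit] = decision_matrix[:, ~benefit].min(axis=0) / decision_matrix[:, ~benefit]

scores = norm @ weights
best = int(np.argmax(scores))
print(scores, best)  # alternative B wins under these weights
```

AI-based MCDM approaches replace the fixed weighting step with a learned model, but the same decision-matrix structure usually underlies both families.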

    Personalized Automatic Estimation of Self-reported Pain Intensity from Facial Expressions

    Full text link
    Pain is a personal, subjective experience that is commonly evaluated through visual analog scales (VAS). While this is often convenient and useful, automatic pain detection systems can reduce pain score acquisition efforts in large-scale studies by estimating it directly from the participants' facial expressions. In this paper, we propose a novel two-stage learning approach for VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs) to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels from face images. The estimated scores are then fed into personalized Hidden Conditional Random Fields (HCRFs), used to estimate the VAS provided by each person. Personalization of the model is performed using a newly introduced facial expressiveness score, unique to each person. To the best of our knowledge, this is the first approach to automatically estimate VAS from face images. We show the benefits of the proposed personalized approach over a traditional non-personalized approach on a benchmark dataset for pain analysis from face images. (Comment: Computer Vision and Pattern Recognition Conference, The 1st International Workshop on Deep Affective Learning and Context Modeling)
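The two-stage structure can be sketched in miniature: stage one produces frame-level PSPI estimates (stood in for here by a trivial function in place of the paper's RNN), and stage two maps a sequence-level summary to a VAS score scaled by a per-person expressiveness factor. The linear stage-two form and all numbers are illustrative assumptions, not the authors' HCRF model.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_pspi(frames):
    """Stand-in for the stage-1 RNN: one PSPI estimate per frame (PSPI is 0-16)."""
    return np.clip(frames.mean(axis=1), 0.0, 16.0)

def stage2_vas(pspi_seq, expressiveness):
    """Map summarized PSPI to a VAS score (0-10), personalized by expressiveness.

    A less expressive person (smaller factor) gets a larger VAS for the same
    observed PSPI -- a hypothetical calibration, not the paper's HCRF.
    """
    summary = pspi_seq.mean()
    vas = summary / 16.0 * 10.0 / expressiveness
    return float(np.clip(vas, 0.0, 10.0))

frames = rng.uniform(0.0, 16.0, size=(30, 5))  # 30 frames x 5 fake face features
pspi = stage1_pspi(frames)
vas = stage2_vas(pspi, expressiveness=0.8)
print(vas)
```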

    Bridging Systems: Open Problems for Countering Destructive Divisiveness across Ranking, Recommenders, and Governance

    Full text link
    Divisiveness appears to be increasing in much of the world, leading to concern about political violence and a decreasing capacity to collaboratively address large-scale societal challenges. In this working paper we aim to articulate an interdisciplinary research and practice area focused on what we call bridging systems: systems which increase mutual understanding and trust across divides, creating space for productive conflict, deliberation, or cooperation. We give examples of bridging systems across three domains: recommender systems on social media, collective response systems, and human-facilitated group deliberation. We argue that these examples can be more meaningfully understood as processes for attention-allocation (as opposed to "content distribution" or "amplification") and develop a corresponding framework to explore similarities - and opportunities for bridging - across these seemingly disparate domains. We focus particularly on the potential of bridging-based ranking to bring the benefits of offline bridging into spaces which are already governed by algorithms. Throughout, we suggest research directions that could improve our capacity to incorporate bridging into a world increasingly mediated by algorithms and artificial intelligence. (Comment: 40 pages, 11 figures. See https://bridging.systems for more about this work)
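The contrast between engagement-based and bridging-based ranking can be illustrated with a toy scoring rule. Assuming two audience groups with known per-item approval rates (real systems must estimate these from behavior), an engagement-style score rewards total approval, while a bridging score (here the geometric mean across groups, an illustrative choice rather than the paper's proposal) rewards items approved on both sides of a divide.

```python
import math

# (approval in group A, approval in group B) -- hypothetical values
items = {
    "partisan_a":    (0.95, 0.20),
    "partisan_b":    (0.15, 0.90),
    "common_ground": (0.55, 0.55),
}

def engagement_score(a, b):
    """Total approval, regardless of which group it comes from."""
    return a + b

def bridging_score(a, b):
    """Geometric mean: high only when BOTH groups approve."""
    return math.sqrt(a * b)

by_engagement = max(items, key=lambda k: engagement_score(*items[k]))
by_bridging = max(items, key=lambda k: bridging_score(*items[k]))
print(by_engagement, by_bridging)  # the two rules promote different items
```

The point of the sketch is the attention-allocation framing: the same inventory of items, allocated differently depending on what the scoring rule treats as valuable.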

    Autotuning Stencil Computations with Structural Ordinal Regression Learning

    Get PDF
    Stencil computations expose a large and complex space of equivalent implementations. These computations often rely on autotuning techniques, based on iterative compilation or machine learning (ML), to achieve high performance. Iterative compilation autotuning is a challenging and time-consuming task that may be unaffordable in many scenarios. Meanwhile, traditional ML autotuning approaches exploiting classification algorithms (such as neural networks and support vector machines) face difficulties in capturing all features of large search spaces. This paper proposes a new way of automatically tuning stencil computations based on structural learning. By organizing the training data into a set of partially sorted samples (i.e., rankings), the problem is formulated as a ranking prediction model, which translates to an ordinal regression problem. Our approach can be coupled with an iterative compilation method or used as a standalone autotuner. We demonstrate its potential by comparing it with state-of-the-art iterative compilation methods on a set of nine stencil codes and by analyzing the quality of the obtained rankings in terms of Kendall rank correlation coefficients.
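The ranking-quality metric mentioned above, the Kendall rank correlation coefficient, measures how often two rankings agree on pairwise order. A small self-contained version (tau-a, which ignores ties), applied to made-up predicted versus measured orderings of stencil variants:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau-a: (concordant - discordant) / total pairs."""
    pairs = list(combinations(range(len(x)), 2))
    concordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) > 0)
    discordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) < 0)
    return (concordant - discordant) / len(pairs)

measured_rank  = [1, 2, 3, 4, 5]   # true performance order of 5 variants
predicted_rank = [1, 3, 2, 4, 5]   # model's predicted order (one swap)
print(kendall_tau(measured_rank, predicted_rank))  # 0.8
```

A coefficient of 1.0 means the predicted ranking orders every pair of variants correctly; for autotuning, what matters most is that the top of the ranking is reliable.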

    Demand-Responsive Airspace Sectorization and Air Traffic Controller Staffing

    Get PDF
    This dissertation optimizes the problem of designing sector boundaries and assigning air traffic controllers to sectors while considering demand variation over time. For long-term planning purposes, an optimization problem of clean-sheet sectorization is defined to generate a set of sector boundaries that accommodates traffic variation across the planning horizon while minimizing staffing. The resulting boundaries should best accommodate traffic over space and time and be the most efficient in terms of controller shifts. Two integer program formulations are proposed to address the defined problem, and their equivalency is proven. The performance of both formulations is examined with randomly generated numerical examples. Then, a real-world application confirms that the proposed model can save 10%-16% controller-hours, depending on the degree of demand variation over time, in comparison with the sectorization model with a strategy that does not take demand variation into account. Due to the size of realistic sectorization problems, a heuristic based on mathematical programming is developed for a large-scale neighborhood search and implemented in a parallel computing framework in order to obtain quality solutions within time limits. The impact of neighborhood definition and initial solution on heuristic performance has been examined. Numerical results show that the heuristic and the proposed neighborhood selection schemes can find significant improvements beyond the best solutions that are found exclusively from the Mixed Integer Program solver's global search. For operational purposes, under given sector boundaries, an optimization model is proposed to create an operational plan for dynamically combining or splitting sectors and determining controller staffing. In particular, the relation between traffic condition and the staffing decisions is no longer treated as a deterministic, step-wise function but a probabilistic, nonlinear one. 
Ordinal regression analysis is applied to estimate a set of sector-specific models for predicting sector staffing decisions. The statistical results are then incorporated into the proposed sector combination model. With realistic traffic and staffing data, the proposed model demonstrates the potential saving in controller staffing achievable by optimizing the combination schemes, depending on how freely sectors can combine and split. To address concerns about workload increases resulting from frequent changes of sector combinations, the proposed model is then expanded to a time-dependent one by including a minimum duration for a sector combination scheme. Numerical examples suggest a strong tradeoff between combination stability and controller staffing.
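The probabilistic, nonlinear staffing relation described above can be sketched as an ordered-logit (cumulative-logit) model mapping a traffic measure to probabilities over ordered staffing levels. The coefficient and cutpoints below are invented for illustration; the dissertation estimates sector-specific models from data.

```python
import math

def staffing_probs(traffic, beta=0.15, cutpoints=(3.0, 6.0)):
    """Ordered logit: P(level <= k) = logistic(cut_k - beta * traffic).

    Returns [P(1 controller), P(2 controllers), P(3 controllers)] --
    a probabilistic alternative to a deterministic step function.
    """
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - beta * traffic) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

low = staffing_probs(10.0)   # light traffic: mostly one controller
high = staffing_probs(50.0)  # heavy traffic: mostly three controllers
print(low, high)
```

Feeding probabilities rather than a fixed threshold into the combination model lets the optimizer weigh how likely each staffing level is under a given traffic forecast.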

    An Empirical Study of Meta- and Hyper-Heuristic Search for Multi-Objective Release Planning

    Get PDF
    A variety of meta-heuristic search algorithms have been introduced for optimising software release planning. However, there has been no comprehensive empirical study of different search algorithms across multiple different real-world datasets. In this article, we present an empirical study of global, local, and hybrid meta- and hyper-heuristic search-based algorithms on 10 real-world datasets. We find that the hyper-heuristics are particularly effective. For example, the hyper-heuristic genetic algorithm significantly outperformed the other six approaches (and with high effect size) for solution quality 85% of the time, and was also faster than all others 70% of the time. Furthermore, correlation analysis reveals that it scales well as the number of requirements increases.
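The release-planning problem behind this study can be sketched as a tiny genetic algorithm that selects requirements to maximize value under a cost budget. This is a simplified single-objective toy; the article's setting is multi-objective and compares several meta- and hyper-heuristics. All costs, values, and GA settings are illustrative.

```python
import random

random.seed(42)
costs  = [4, 3, 5, 2, 6, 1]   # effort per requirement (hypothetical)
values = [7, 5, 8, 3, 9, 2]   # business value per requirement (hypothetical)
BUDGET = 10

def fitness(bits):
    """Total value of selected requirements; 0 if over budget."""
    cost = sum(c for c, b in zip(costs, bits) if b)
    return sum(v for v, b in zip(values, bits) if b) if cost <= BUDGET else 0

def mutate(bits, rate=0.2):
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Elitist GA: keep the best half, refill with mutated offspring.
pop = [[random.randint(0, 1) for _ in costs] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

A hyper-heuristic, by contrast, searches over the heuristics themselves (e.g. which mutation or selection operator to apply) rather than only over candidate release plans.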

    Retail Demand Management: Forecasting, Assortment Planning and Pricing

    Get PDF
    In the first part of the dissertation, we focus on the retailer's problem of forecasting demand for products in a category (including those that they have never carried before), optimizing the selected assortment, and customizing the assortment by store to maximize chain-wide revenues or profits. We develop algorithms for demand forecasting and assortment optimization, and demonstrate their use in practical applications. In the second part, we study the sensitivity of the optimal assortment to the underlying assumptions made about demand, substitution and inventory. In particular, we explore the impact of choice model mis-specification and of ignoring stock-outs on the optimal profits. We develop bounds on the optimality gap in terms of demand variability, in-stock rate and consumer heterogeneity. Understanding this sensitivity is key to developing more robust approaches to assortment optimization. In the third and final part of the dissertation, we study how the seat value perceived by consumers attending an event in a stadium depends on the location of their seat relative to the field. We develop a measure of seat value, called the Seat Value Index (SVI), and relate it to seat location and consumer characteristics. We apply our methodology to a proprietary dataset collected by a professional baseball franchise in Japan. Based on the observed heterogeneity in SVI, we provide segment-specific pricing recommendations to achieve a service level objective.
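The assortment-optimization step can be sketched under a standard multinomial logit (MNL) choice model, where the expected revenue of an assortment has a closed form and a classical result says some revenue-ordered assortment is optimal. The prices and preference weights below are invented for illustration; the dissertation's models and data are richer.

```python
def mnl_revenue(assortment, prices, weights, w0=1.0):
    """Expected revenue per customer under MNL:
    sum(p_i * w_i) / (w0 + sum(w_i)), with w0 the no-purchase weight."""
    num = sum(prices[i] * weights[i] for i in assortment)
    den = w0 + sum(weights[i] for i in assortment)
    return num / den

prices  = [10.0, 8.0, 6.0, 4.0]   # products sorted by price, descending
weights = [0.5, 1.0, 1.5, 2.0]    # MNL preference weights (hypothetical)

# Under MNL it suffices to search revenue-ordered sets {0}, {0,1}, ...
candidates = [list(range(k + 1)) for k in range(len(prices))]
best = max(candidates, key=lambda s: mnl_revenue(s, prices, weights))
print(best, round(mnl_revenue(best, prices, weights), 3))
```

Here the optimizer drops the cheapest product: its low margin dilutes expected revenue more than its extra demand adds, which is exactly the kind of tradeoff the sensitivity analysis in the second part probes when the choice model is mis-specified.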