
    MOOCs Meet Measurement Theory: A Topic-Modelling Approach

    This paper adapts topic models to the psychometric testing of MOOC students based on their online forum postings. Measurement theory from education and psychology provides statistical models for quantifying a person's attainment of intangible attributes such as attitudes, abilities or intelligence. Such models infer latent skill levels by relating them to individuals' observed responses on a series of items such as quiz questions. A set of items can be used to measure a latent skill if individuals' responses to them conform to a Guttman scale. Such well-scaled items differentiate between individuals, and the inferred levels span the entire range from the most basic to the most advanced. In practice, education researchers manually devise items (quiz questions) while optimising for conformance to the scale. Because this process is costly and requires expert input, psychometric testing has found limited use in everyday teaching. We aim to develop usable measurement models for highly-instrumented MOOC delivery platforms by using participation in automatically-extracted online forum topics as items. The challenge is to formalise the Guttman-scale educational constraint and incorporate it into topic models. To favour topics that automatically conform to a Guttman scale, we introduce a novel regularisation into non-negative matrix factorisation-based topic modelling. We demonstrate the suitability of our approach both with quantitative experiments on three Coursera MOOCs and with a qualitative survey of topic interpretability on two MOOCs through domain-expert interviews.
    Comment: 12 pages, 9 figures; accepted into AAAI'201
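    To make the scaling constraint concrete, here is a minimal sketch (not the paper's regulariser) of checking how closely a binary response matrix conforms to a Guttman scale, via Guttman's coefficient of reproducibility:

```python
import numpy as np

def guttman_reproducibility(responses):
    """Coefficient of reproducibility for a binary response matrix.

    responses: (n_people, n_items) 0/1 array. Items are ordered from
    easiest to hardest by overall pass rate; under a perfect Guttman
    scale each person's row is a block of 1s followed by 0s.
    """
    R = np.asarray(responses)
    # Order items from easiest (highest pass rate) to hardest.
    order = np.argsort(-R.mean(axis=0))
    R = R[:, order]
    errors = 0
    for row in R:
        k = int(row.sum())  # person's inferred skill level
        # Ideal Guttman pattern: first k items passed, the rest failed.
        ideal = np.zeros_like(row)
        ideal[:k] = 1
        errors += int(np.sum(row != ideal))
    return 1.0 - errors / R.size

# Perfectly scaled data: triangular response pattern.
perfect = np.tril(np.ones((4, 4), dtype=int))
print(guttman_reproducibility(perfect))  # 1.0
```

    A reproducibility near 1 indicates well-scaled items; the paper's contribution is to push extracted topics toward this regime automatically rather than measure it after the fact.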

    Biblio-Analysis of Cohort Intelligence (CI) Algorithm and its allied applications from Scopus and Web of Science Perspective

    Cohort Intelligence (CI) is a novel optimization algorithm. Since its inception it has, in a short span, been applied successfully in various domains, and its results have proven effective in comparison with algorithms of its kind. To date, no bibliometric analysis of CI and its applications has been carried out, so this paper can serve as an ice-breaker for those who want to take CI to a new level. In this paper, the CI publications indexed in Scopus are analyzed through graphs and network diagrams of authors, source titles, keywords over the years, and journals over time. In this way, the paper showcases CI and its applications and details a systematic review of its bibliometric profile.

    Applied (Meta)-Heuristic in Intelligent Systems

    Engineering and business problems are becoming increasingly difficult to solve due to the new economics triggered by big data, artificial intelligence, and the internet of things. Exact algorithms and heuristics are insufficient for solving such large and unstructured problems; instead, metaheuristic algorithms have emerged as the prevailing methods. A generic metaheuristic framework guides the course of search trajectories beyond local optimality, thus overcoming the limitations of traditional computation methods. The applications of modern metaheuristics range from unmanned aerial and ground surface vehicles, unmanned factories, resource-constrained production, and humanoids to green logistics, renewable energy, circular economy, agricultural technology, environmental protection, finance technology, and the entertainment industry. This Special Issue presents high-quality papers proposing modern metaheuristics in intelligent systems.
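    As an illustration of how a generic metaheuristic guides search beyond local optimality, here is a minimal simulated-annealing sketch (one classic metaheuristic; the objective function and parameters are arbitrary toy choices, not from any paper in the issue):

```python
import math
import random

def simulated_annealing(f, x0, neighbor, T0=1.0, cooling=0.995, steps=5000, seed=0):
    """Single-solution metaheuristic: accept worse moves with a
    temperature-controlled probability to escape local optima."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        # Always accept improvements; accept worse moves with prob exp(-delta/T).
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling  # geometric cooling schedule
    return best, fbest

# Multimodal toy objective with global minimum at x = 0.
f = lambda x: x * x + 10 * (1 - math.cos(x))
neighbor = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x, fx = simulated_annealing(f, x0=6.0, neighbor=neighbor)
```

    The acceptance rule is the key idea: early on (high temperature) uphill moves are tolerated, so the trajectory can leave a local basin; as the temperature decays the search settles into the best basin found.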

    A Multi–Objective Gaining–Sharing Knowledge-Based Optimization Algorithm for Solving Engineering Problems

    Metaheuristics have proven their effectiveness in recent years; however, robust algorithms that can solve real-world problems are always needed. In this paper, we propose the first extended version of the recently introduced gaining–sharing knowledge optimization (GSK) algorithm, named multiobjective gaining–sharing knowledge optimization (MOGSK), to deal with multiobjective optimization problems (MOPs). MOGSK employs an external archive population to store the nondominated solutions generated thus far, with the aim of guiding the solutions during the exploration process. Furthermore, fast nondominated sorting with crowding distance is incorporated to sustain the diversity of the solutions and ensure convergence towards the Pareto optimal set, while the ε-dominance relation is used to update the archive population solutions. ε-dominance provides a good boost to diversity, coverage, and convergence overall. The proposed MOGSK was validated on five biobjective (ZDT) and seven three-objective (DTLZ) test problems, along with the recently introduced CEC 2021 suite, for fifty-five test problems in total, including power electronics, process design and synthesis, mechanical design, chemical engineering, and power system optimization. MOGSK was compared with seven existing optimization algorithms: MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA. The experimental findings show the good behavior of our proposed MOGSK against the comparative algorithms, in particular on real-world optimization problems.
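    The ε-dominance archive update mentioned above can be sketched as follows (a generic minimisation version, not MOGSK's exact implementation; the eps value is illustrative):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimisation): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_dominates(a, b, eps):
    """Additive ε-dominance: a dominates b once each of a's objectives is
    relaxed by eps. Coarsening dominance this way bounds the archive size
    and trades a little precision for better spread."""
    return all(x - eps <= y for x, y in zip(a, b))

def update_archive(archive, candidate, eps=0.05):
    """Keep only mutually ε-nondominated solutions (minimisation)."""
    if any(eps_dominates(a, candidate, eps) for a in archive):
        return archive  # candidate is ε-dominated by the archive; discard it
    # Otherwise drop archive members the candidate ε-dominates, then add it.
    return [a for a in archive if not eps_dominates(candidate, a, eps)] + [candidate]
```

    Because ε-dominance absorbs near-duplicates, each archive entry represents a small box of the objective space, which is the diversity/convergence boost the abstract refers to.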

    Optimization and Machine Learning Methods for Diagnostic Testing of Prostate Cancer

    Technological advances in biomarkers and imaging tests are creating new avenues to advance precision health for early detection of cancer. These advances have resulted in multiple layers of information that can be used to make clinical decisions, but how to best use these multiple sources of information is a challenging engineering problem due to the high cost and imperfect sensitivity and specificity of these tests. Questions that need to be addressed include which diagnostic tests to choose and how to best integrate them, in order to optimally balance the competing goals of early disease detection and minimal cost and harm from unnecessary testing. To study these research questions, we present new optimization-based models and data-driven analytic methods in three parts to improve early detection of prostate cancer (PCa). In the first part, we develop and validate predictive models to assess individual PCa risk using known clinical risk factors. Because not all men with newly-diagnosed PCa received imaging at diagnosis, we use an established method to correct for verification bias to evaluate the accuracy of published imaging guidelines. In addition to the published guidelines, we implement advanced classification modeling techniques to develop accurate classification rules identifying which patients should receive imaging. We propose a new algorithm for a classification model that considers information of patients with unverified disease and the high cost of misclassifying a metastatic patient. We summarize our development and implementation of state-wide, evidence-based imaging criteria that weigh the benefits and harms of radiological imaging for detection of metastatic PCa. In the second part of this thesis, we combine optimization and machine learning approaches into a robust optimization framework to design imaging guidelines that can account for imperfect calibration of predictions. 
We investigate efficient and effective ways to combine multiple medical diagnostic tests where the result of one test may be used to predict the outcome of another. We analyze the properties of the proposed optimization models from the perspectives of multiple stakeholders, and we present the results of fast approximation methods that we show can be used to solve large-scale models. In the third and final part of this thesis, we investigate the optimal design of composite multi-biomarker tests to achieve early detection of prostate cancer. Biomarker tests vary significantly in cost and can produce false positive and false negative results, with serious health implications for patients. Since no single biomarker on its own is considered satisfactory, we utilize simulation and statistical methods to develop the optimal diagnosis procedure for early detection of PCa consisting of a sequence of biomarker tests, balancing the benefits of early detection, such as increased survival, with the harms of testing, such as unnecessary prostate biopsies. In this dissertation, we identify new principles and methods to guide the design of early detection protocols for PCa using new diagnostic technologies. We provide important clinical evidence that can be used to improve health outcomes of patients while reducing wasteful application of diagnostic tests to patients for whom they are not effective. Moreover, some of the findings of this dissertation have been implemented directly into clinical practice in the state of Michigan. The models and methodologies we present in this thesis are not limited to PCa, and can be applied to a broad range of chronic diseases for which diagnostic tests are available.
PhD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/143976/1/smerdan_1.pd
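To illustrate the kind of cost/accuracy trade-off analysed in the sequential-testing part, here is a minimal sketch of the expected cost and overall accuracy of a serial protocol; the prevalence, costs, sensitivities and specificities below are made-up numbers, not values from the thesis:

```python
def sequential_protocol(prev, tests):
    """Expected per-patient cost and overall sensitivity/specificity of a
    serial protocol in which each test is run only if all previous tests
    were positive, and a patient is flagged only if every test is positive.

    prev: disease prevalence; tests: list of (cost, sensitivity, specificity).
    """
    p_reach_d = 1.0  # P(patient reaches this test | diseased)
    p_reach_h = 1.0  # P(patient reaches this test | healthy)
    exp_cost = 0.0
    for cost, sens, spec in tests:
        # Pay for the test only for patients who got this far.
        exp_cost += cost * (prev * p_reach_d + (1 - prev) * p_reach_h)
        p_reach_d *= sens        # diseased patients continue on true positives
        p_reach_h *= (1 - spec)  # healthy patients continue on false positives
    return exp_cost, p_reach_d, 1 - p_reach_h  # cost, sensitivity, specificity

# Hypothetical two-stage protocol: a cheap screen, then an expensive panel.
cost, sens, spec = sequential_protocol(
    prev=0.1,
    tests=[(30, 0.90, 0.70), (500, 0.85, 0.95)],
)
```

Serial composition multiplies sensitivities but sharply raises specificity, which is why ordering cheap, specific-enough screens first cuts expected cost: most healthy patients never reach the expensive test.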

    Efficient and Modular Implicit Differentiation

    Automatic differentiation (autodiff) has revolutionized machine learning. It allows expressing complex computations by composing elementary ones in creative ways and removes the burden of computing their derivatives by hand. More recently, differentiation of optimization problem solutions has attracted widespread attention, with applications such as optimization as a layer, and in bi-level problems such as hyper-parameter optimization and meta-learning. However, the formulas for these derivatives often involve tedious case-by-case mathematical derivations. In this paper, we propose a unified, efficient and modular approach for implicit differentiation of optimization problems. In our approach, the user defines (in Python, in the case of our implementation) a function F capturing the optimality conditions of the problem to be differentiated. Once this is done, we leverage autodiff of F and implicit differentiation to automatically differentiate the optimization problem. Our approach thus combines the benefits of implicit differentiation and autodiff. It is efficient, as it can be added on top of any state-of-the-art solver, and modular, as the optimality condition specification is decoupled from the implicit differentiation mechanism. We show that seemingly simple principles allow us to recover many recently proposed implicit differentiation methods and to create new ones easily. We demonstrate the ease of formulating and solving bi-level optimization problems using our framework. We also showcase an application to the sensitivity analysis of molecular dynamics.
    Comment: V2: some corrections and link to software
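    The core recipe can be seen on a problem whose optimality condition has a closed form. The sketch below (plain NumPy, not the paper's library) implicitly differentiates the ridge-regression solution with respect to its regularisation strength and checks the result against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)

def solve(lam):
    """Ridge solution: argmin_w 0.5*||Xw - y||^2 + 0.5*lam*||w||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

def dsolve_dlam(lam):
    """Implicit differentiation. The optimality condition is
    F(w, lam) = (X^T X + lam*I) w - X^T y = 0, so the implicit function
    theorem gives dw/dlam = -(dF/dw)^{-1} (dF/dlam) = -(X^T X + lam*I)^{-1} w*."""
    w = solve(lam)
    return -np.linalg.solve(X.T @ X + lam * np.eye(3), w)

lam, h = 0.5, 1e-6
fd = (solve(lam + h) - solve(lam - h)) / (2 * h)  # finite-difference check
assert np.allclose(dsolve_dlam(lam), fd, atol=1e-5)
```

    The point of the paper's framework is that the user supplies only F; the linear-system step above is then assembled automatically from autodiff of F, for any solver that produces w*.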

    Optimizing resource allocation in computational sustainability: Models, algorithms and tools

    The 17 Sustainable Development Goals laid out by the United Nations include numerous targets as well as indicators of progress towards sustainable development. Decision-makers tasked with meeting these targets must frequently propose upfront plans or policies made up of many discrete actions, such as choosing a subset of locations where management actions must be taken to maximize the utility of the actions. These types of resource allocation problems involve combinatorial choices and tradeoffs between multiple outcomes of interest, all in the context of complex, dynamic systems and environments. The computational requirements for solving these problems bring together elements of discrete optimization, large-scale spatiotemporal modeling and prediction, and stochastic models. This dissertation leverages network models as a flexible family of computational tools for building prediction and optimization models in three sustainability-related domain areas: 1) minimizing stochastic network cascades in the context of invasive species management; 2) maximizing deterministic demand-weighted pairwise reachability in the context of flood resilient road infrastructure planning; and 3) maximizing vertex-weighted and edge-weighted connectivity in wildlife reserve design. We use spatially explicit network models to capture the underlying system dynamics of interest in each setting, and contribute discrete optimization problem formulations for maximizing sustainability objectives with finite resources. While there is a long history of research on optimizing flows, cascades and connectivity in networks, these decision problems in the emerging field of computational sustainability involve novel objectives, new combinatorial structure, or new types of intervention actions. In particular, we formulate a new type of discrete intervention in stochastic network cascades modeled with multivariate Hawkes processes. 
In conjunction, we derive an exact optimization approach for the proposed intervention based on closed-form expressions of the objective functions, which is applicable in a broad swath of domains beyond invasive species, such as social networks and disease contagion. We also formulate a new variant of Steiner Forest network design, called the budget-constrained prize-collecting Steiner forest, and prove that this optimization problem possesses a specific combinatorial structure, restricted supermodularity, that allows us to design highly effective algorithms. In each of the domains, the optimization problem is defined over aspects that need to be predicted, hence we also demonstrate improved machine learning approaches for each.
Ph.D.
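As a flavour of the budget-constrained selection problems discussed above, here is a generic cost-benefit greedy sketch (a common baseline for budgeted monotone objectives, not the dissertation's algorithm; the toy coverage instance is invented):

```python
def greedy_budgeted(items, value, budget):
    """Cost-benefit greedy for budgeted selection: repeatedly add the item
    with the best marginal-value-to-cost ratio that still fits the budget.

    items: dict name -> cost; value: monotone set function on frozensets/sets.
    """
    chosen, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, cost in items.items():
            if name in chosen or spent + cost > budget:
                continue
            gain = value(chosen | {name}) - value(chosen)
            if cost > 0 and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            return chosen  # no affordable item improves the objective
        chosen.add(best)
        spent += items[best]

# Toy coverage objective: value = number of distinct locations protected.
covers = {"a": {1, 2}, "b": {2, 3, 4}, "c": {5}}
items = {"a": 1.0, "b": 2.0, "c": 1.5}
value = lambda S: len(set().union(*(covers[s] for s in S))) if S else 0
selected = greedy_budgeted(items, value, budget=3.0)
```

Structural results such as the restricted supermodularity proved in the dissertation are what turn heuristics of this flavour into algorithms with provable quality guarantees for the specific network objectives involved.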