
    A Risk-Based Model Predictive Control Approach to Adaptive Interventions in Behavioral Health

    This brief examines how control engineering and risk management techniques can be applied in the field of behavioral health through their use in the design and implementation of adaptive behavioral interventions. Adaptive interventions are gaining increasing acceptance as a means to improve prevention and treatment of chronic, relapsing disorders, such as abuse of alcohol, tobacco, and other drugs, mental illness, and obesity. A risk-based model predictive control (MPC) algorithm is developed for a hypothetical intervention inspired by Fast Track, a real-life program whose long-term goal is the prevention of conduct disorders in at-risk children. The MPC-based algorithm decides on the appropriate frequency of counselor home visits, mentoring sessions, and the availability of after-school recreation activities by relying on a model that includes identifiable risks, their costs, and the cost/benefit assessment of mitigating actions. MPC is particularly suited to the problem because of its constraint-handling capabilities and its ability to scale to interventions involving multiple tailoring variables. By systematically accounting for risks and adapting treatment components over time, an MPC approach as described in this brief can increase intervention effectiveness and adherence while reducing waste, resulting in advantages over conventional fixed treatment. A series of simulations is conducted under varying conditions to demonstrate the effectiveness of the algorithm.
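    To make the receding-horizon idea concrete, here is a minimal Python sketch: it enumerates discrete dosage sequences over a short horizon for a single scalar risk state and, as MPC does, applies only the first decision of the best sequence. The risk dynamics, cost weights, horizon, and dosage levels are invented placeholders, not the model developed in the brief.

```python
# Illustrative receding-horizon (MPC) sketch for an adaptive intervention.
# All model parameters below are hypothetical placeholders.
import itertools

A, B = 0.9, 0.15          # risk persistence and per-unit treatment effect (assumed)
HORIZON = 4               # prediction horizon in decision periods
DOSES = [0, 1, 2, 3]      # discrete intensity levels, e.g. weekly home visits
RISK_WEIGHT, COST_WEIGHT = 1.0, 0.2

def predict(risk, doses):
    """Roll the assumed linear risk model forward under a candidate dosage sequence."""
    cost, x = 0.0, risk
    for u in doses:
        x = max(A * x - B * u, 0.0)
        cost += RISK_WEIGHT * x**2 + COST_WEIGHT * u   # risk penalty + resource cost
    return cost

def mpc_step(risk):
    """Enumerate dosage sequences, keep the best, apply only its first move."""
    best = min(itertools.product(DOSES, repeat=HORIZON),
               key=lambda seq: predict(risk, seq))
    return best[0]

risk = 5.0
for week in range(8):
    u = mpc_step(risk)
    risk = max(A * risk - B * u, 0.0)
    print(f"week {week}: dose={u}, risk={risk:.2f}")
```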

    Competency Implications of Changing Human Resource Roles

    [Excerpt] The present study examines which competencies will be necessary to perform key human resource roles over the next decade at Eastman Kodak Company. This project was a critical component of an ongoing quality process to improve organizational capability. The results establish a platform that will enable Kodak to better assess, plan, develop, and measure the capability of human resource staff.

    A new and efficient intelligent collaboration scheme for fashion design

    Technology-mediated collaboration processes have been studied extensively for over a decade. Most applications with collaboration concepts reported in the literature focus on enhancing the efficiency and effectiveness of decision-making processes in objective, well-structured workflows. However, relatively few previous studies have investigated the application of collaboration schemes to problems of a subjective and unstructured nature. In this paper, we explore a new intelligent collaboration scheme for fashion design, which by nature relies heavily on human judgment and creativity. Techniques such as multicriteria decision making, fuzzy logic, and artificial neural network (ANN) models are employed, and industrial data sets are used for the analysis. Our experimental results suggest that the proposed scheme exhibits significant improvement over the traditional method in terms of time–cost effectiveness, and a company interview with design professionals confirmed its effectiveness and significance.
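    As a rough illustration of the multicriteria and fuzzy-logic ingredients only (the ANN component is omitted), the sketch below ranks hypothetical designs by a weighted fuzzy "good" membership. The criteria, weights, and membership breakpoints are invented for illustration, not taken from the paper's scheme.

```python
# Minimal sketch of fuzzy multicriteria scoring for candidate designs.
# Criteria, weights, and breakpoints are invented placeholders.

def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

weights = {"novelty": 0.6, "manufacturability": 0.4}   # assumed criterion weights
designs = {
    "A": {"novelty": 8, "manufacturability": 4},
    "B": {"novelty": 5, "manufacturability": 9},
}

def fuzzy_score(scores):
    """Weighted membership of each criterion score in the fuzzy set 'good'."""
    return sum(weights[c] * triangular(v, 0, 10, 12) for c, v in scores.items())

ranking = sorted(designs, key=lambda d: fuzzy_score(designs[d]), reverse=True)
print(ranking)   # higher aggregate fuzzy score ranks first
```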

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase ordering of optimizations. The survey highlights the approaches taken so far, the results obtained, a fine-grained classification of the different approaches, and, finally, the influential papers of the field.
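    A toy example of the first problem, optimization selection: predict which flag set to use for a new program from static code features. The feature vectors, labels, and the 1-nearest-neighbour model below are fabricated for illustration; the approaches surveyed use far richer features, models, and search spaces.

```python
# Toy sketch of ML-based optimization selection.
# Training features and "best flag" labels are invented placeholders.
import numpy as np

# rows: programs; columns: assumed static features (loop count, memory ops, branches)
X_train = np.array([[12, 340, 25], [2, 40, 90], [30, 800, 10], [5, 100, 60]], dtype=float)
y_train = ["-O3 -funroll-loops", "-O2", "-O3 -funroll-loops", "-Os"]  # best flags found offline

def select_flags(features):
    """1-nearest-neighbour choice of flag set for an unseen program."""
    d = np.linalg.norm(X_train - features, axis=1)
    return y_train[int(np.argmin(d))]

print(select_flags(np.array([25, 700, 15], dtype=float)))  # -> "-O3 -funroll-loops"
```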

    Perfectionism Search Algorithm (PSA): An Efficient Meta-Heuristic Optimization Approach

    This paper proposes a novel population-based meta-heuristic optimization algorithm, called the Perfectionism Search Algorithm (PSA), which is based on the psychological aspects of perfectionism. The PSA algorithm takes inspiration from one of the most popular models of perfectionism, proposed by Hewitt and Flett. During each iteration of the PSA algorithm, new solutions are generated by mimicking different types and aspects of perfectionistic behavior. To give a complete perspective on the performance of PSA, the proposed algorithm is tested on various nonlinear optimization problems through a selection of 35 benchmark functions from the literature. The generated solutions for these problems were also compared with those of 11 well-known meta-heuristics that have been applied to many complex and practical engineering optimization problems. The obtained results confirm the high performance of the proposed algorithm in comparison to the other well-known algorithms.
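    For readers unfamiliar with the genre, the skeleton below shows the generic population-based loop such algorithms share, evaluated on the classic sphere benchmark. The Gaussian move-toward-best update is a generic stand-in; PSA's actual update rules, which mimic perfectionistic behaviors from the Hewitt-Flett model, are defined in the paper.

```python
# Generic population-based meta-heuristic skeleton on the sphere benchmark.
# The update rule is a placeholder, not PSA's actual rule.
import random

def sphere(x):                      # classic benchmark: global minimum 0 at the origin
    return sum(v * v for v in x)

DIM, POP, ITERS = 10, 30, 200

population = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
for _ in range(ITERS):
    best = min(population, key=sphere)
    # each candidate moves toward the current best solution with Gaussian noise
    population = [
        [v + 0.5 * (b - v) + random.gauss(0, 0.1) for v, b in zip(ind, best)]
        for ind in population
    ]

print(f"best objective after {ITERS} iterations: {sphere(min(population, key=sphere)):.4f}")
```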

    Decoding the Encoding of Functional Brain Networks: an fMRI Classification Comparison of Non-negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Sparse Coding Algorithms

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet mathematical constraints such as sparse coding and positivity both provide alternate biologically plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1-regularized learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks for different constraints are used as basis functions to encode the observed functional activity at a given time point. These encodings are decoded using machine learning to compare both the algorithms and their assumptions, using the time-series weights to predict whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. For classifying cognitive activity, the sparse coding algorithm of L1-regularized learning consistently outperformed 4 variations of ICA across different numbers of networks and noise levels (p < 0.001). The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy. Within each algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p < 0.001). The success of sparse coding algorithms may suggest that algorithms which enforce sparsity, discourage multitasking, and promote local specialization capture the underlying source processes better than those which allow inexhaustible local processes, such as ICA.
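    The comparison pipeline can be sketched with scikit-learn's NMF, FastICA, and DictionaryLearning: factor a (time x voxel) matrix with each method, then feed the per-timepoint weights to a classifier. The synthetic data, component counts, and classifier below are placeholders standing in for the paper's 304-scan dataset and evaluation protocol.

```python
# Sketch of the factor-then-classify pipeline on synthetic data.
import numpy as np
from sklearn.decomposition import NMF, FastICA, DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 200))            # 120 timepoints x 200 voxels, non-negative for NMF
y = rng.integers(0, 3, size=120)      # placeholder condition labels (video/audio/rest)

models = {
    "NMF": NMF(n_components=8, max_iter=500, random_state=0),
    "ICA": FastICA(n_components=8, random_state=0),
    "SparseCoding": DictionaryLearning(n_components=8, alpha=1.0, random_state=0),
}
for name, model in models.items():
    weights = model.fit_transform(X)   # timepoint-by-network encoding
    acc = cross_val_score(LogisticRegression(max_iter=1000), weights, y, cv=3).mean()
    print(f"{name}: cross-validated accuracy {acc:.2f}")
```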

    A Novel Engineering Approach to Modelling and Optimizing Smoking Cessation Interventions

    Cigarette smoking remains a major global public health issue. This is partially due to the chronic and relapsing nature of tobacco use, which contributes to the approximately 90% quit-attempt failure rate. The recent rise of mobile technologies has led to an increased ability to frequently measure smoking behaviors and related constructs over time, i.e., to obtain intensive longitudinal data (ILD). Dynamical systems modeling and system identification methods from engineering offer a means to leverage ILD in order to better model dynamic smoking behaviors. In this dissertation, two sets of dynamical systems models are estimated using ILD from a smoking cessation clinical trial: one set describes cessation as a craving-mediated process; a second set was reverse-engineered and describes a psychological self-regulation process in which smoking activity regulates craving levels. The estimated expressions suggest that self-regulation more accurately describes cessation behavior change, and that the psychological self-regulator resembles a proportional-with-filter controller. In contrast to current clinical practice, adaptive smoking cessation interventions seek to personalize cessation treatment over time. An intervention of this nature generally reflects a control system with feedback and feedforward components, suggesting its design could benefit from a control systems engineering perspective. An adaptive intervention is designed in this dissertation in the form of a Hybrid Model Predictive Control (HMPC) decision algorithm. This algorithm assigns counseling, bupropion, and nicotine lozenges each day to promote tracking of target smoking and craving levels. As demonstrated through a diverse series of simulations, this HMPC-based intervention can aid a successful cessation attempt. Objective function weights and three-degree-of-freedom tuning parameters can be sensibly selected to achieve intervention performance goals despite strict clinical and operational constraints. Such tuning largely affects the rate at which peak bupropion and lozenge dosages are assigned; total post-quit smoking levels, craving offset, and other performance metrics are consequently affected. Overall, the interconnected nature of the smoking and craving controlled variables facilitates the controller's robust decision-making capabilities, even in the presence of noise or plant-model mismatch. Altogether, this dissertation lays the conceptual and computational groundwork for future efforts to utilize engineering concepts to further study smoking behaviors and to optimize smoking cessation interventions.
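    The "hybrid" in HMPC refers to discrete decisions (here, integer dosage levels) inside the predictive controller. The sketch below enumerates discrete dosage plans for the three treatment components against a two-state smoking/craving model; the dynamics, effect sizes, and weights are invented placeholders, not the models identified in the dissertation.

```python
# Hybrid-MPC style decision step with discrete dosages, on assumed dynamics.
import itertools

# assumed per-unit effects of (counseling, bupropion, lozenge) on each state
EFFECT_SMOKE = (0.6, 0.4, 0.3)
EFFECT_CRAVE = (0.3, 0.5, 0.6)
LEVELS = [0, 1, 2]        # discrete dosage levels for each component
HORIZON = 2

def step(smoke, crave, u):
    """Assumed linear smoking/craving dynamics under dosage triple u."""
    new_smoke = max(0.85 * smoke + 0.10 * crave - sum(e * d for e, d in zip(EFFECT_SMOKE, u)), 0.0)
    new_crave = max(0.90 * crave - sum(e * d for e, d in zip(EFFECT_CRAVE, u)), 0.0)
    return new_smoke, new_crave

def hmpc_step(smoke, crave):
    """Enumerate discrete dosage plans; apply the first move of the best one."""
    def plan_cost(plan):
        s, c, cost = smoke, crave, 0.0
        for u in plan:
            s, c = step(s, c, u)
            cost += s**2 + 0.5 * c**2 + 0.1 * sum(u)   # track targets, penalize dosing
        return cost
    moves = list(itertools.product(LEVELS, repeat=3))
    best = min(itertools.product(moves, repeat=HORIZON), key=plan_cost)
    return best[0]

print(hmpc_step(smoke=10.0, crave=8.0))   # e.g. (2, 2, 2) when both states are high
```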

    Placing the poor while keeping the rich in their place

    A central objective of modern US housing policy is deconcentrating poverty through "housing mobility programs" that move poor families into middle-class neighborhoods. Pursuing these policies too aggressively risks inducing middle-class flight, but being too cautious squanders the opportunity to help more poor families. This paper presents a stylized dynamic optimization model that captures this tension. With base-case parameter values, cost considerations limit mobility programs before flight becomes excessive. However, for modest departures reflecting stronger flight tendencies and/or weaker destination neighborhoods, other outcomes emerge. In particular, we find state dependence and multiple equilibria, including both depopulated and oversized outcomes. For certain sets of parameters there exists a Skiba point that separates initial conditions for which the optimal strategy leads to substantial flight and depopulation from those for which the optimal strategy retains or even expands the middle-class population. These results suggest the value of estimating middle-class neighborhoods' "carrying capacity" for absorbing mobility program placements and of further modeling of dynamic response.
    Keywords: housing policy, multiple equilibria, negative externality, optimal control, segregation, separation, Skiba point
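    The state dependence can be illustrated with a toy forward simulation: the same placement policy plays out very differently depending on the initial middle-class stock. The functional forms and parameters below are invented to convey the intuition, not the paper's calibrated model or its optimal-control solution.

```python
# Toy simulation of placement-induced flight; all parameters are invented.
def simulate(middle_class, placements_per_year, years=30, tolerance=0.10, flight_rate=0.4):
    placed = 0.0
    for _ in range(years):
        placed += placements_per_year
        share = placed / (placed + middle_class)
        if share > tolerance:                      # flight once placements exceed tolerance
            middle_class *= 1 - flight_rate * (share - tolerance)
    return middle_class

# the same policy leaves very different neighborhoods depending on the
# initial middle-class stock -- a Skiba-like sensitivity to initial conditions
for start in (500.0, 2000.0):
    print(start, "->", round(simulate(start, placements_per_year=30), 1))
```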