243 research outputs found

    Scheduling Algorithms: Challenges Towards Smart Manufacturing

    Get PDF
    Collecting, processing, analyzing, and deriving knowledge from large-scale real-time data is now realized with the emergence of Artificial Intelligence (AI) and Deep Learning (DL). The breakthrough of Industry 4.0 lays a foundation for intelligent manufacturing. However, the implementation challenges of scheduling algorithms in the context of smart manufacturing have not yet been comprehensively studied. The purpose of this study is to show the scheduling issues that need to be considered in the smart manufacturing paradigm. To attain this objective, a literature review is conducted in five stages using the Publish or Perish tool across sources such as Scopus, PubMed, Crossref, and Google Scholar. As a result, the first contribution of this study is a critical analysis of existing production scheduling algorithms' characteristics and limitations from the viewpoint of smart manufacturing. The other contribution is to suggest the best strategies for selecting scheduling algorithms in a real-world scenario.

    Analysis and Synthesis of Expressive Theatrical Movements

    Get PDF
    This thesis addresses the analysis and generation of expressive movements for virtual human characters. Based on previous results from three different research areas (perception of emotions and biological motion, automatic recognition of affect, and computer character animation), a low-dimensional motion representation is proposed. This representation consists of the spatio-temporal trajectories of the end-effectors (head, hands, and feet) and the pelvis. We argue that this representation is both suitable and sufficient for characterizing the underlying expressive content in human motion and for controlling the generation of expressive whole-body movements. In support of these claims, this thesis proposes: (i) a new motion capture database inspired by physical theatre theory, containing several categories of motion (periodic, functional, spontaneous, and theatrical movement sequences) performed with distinct emotional states (joy, sadness, relaxation, stress, and neutral) by several actors; (ii) a perceptual study and an automatic classification framework designed to qualitatively and quantitatively assess the amount of emotion-related information encoded in the proposed representation, showing that it preserves most of the motion cues salient to the expression of affect and emotions, with only slight performance differences compared to using the whole body; (iii) a motion generation system capable of both reconstructing whole-body movements from the low-dimensional representation and producing novel expressive end-effector trajectories (including the pelvis trajectory). A quantitative and qualitative evaluation of the generated whole-body motions shows that they are as expressive as movements recorded from human actors.
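The low-dimensional representation described above amounts to keeping only a handful of joint trajectories out of a full-body capture. A minimal sketch of that reduction, assuming a hypothetical 20-joint skeleton whose joint indices are invented here for illustration:

```python
import numpy as np

# Hypothetical skeleton: 20 joints, 3D positions per frame. These joint
# indices are assumptions for illustration; real mocap skeletons define
# their own joint ordering.
END_EFFECTORS = {"head": 3, "l_hand": 7, "r_hand": 11,
                 "l_foot": 15, "r_foot": 19, "pelvis": 0}

def to_low_dim(full_body, joint_ids=END_EFFECTORS):
    """Reduce (frames, joints, 3) pose data to the six end-effector/pelvis
    trajectories, i.e. a (frames, 6, 3) array."""
    idx = list(joint_ids.values())
    return full_body[:, idx, :]

frames = np.random.rand(100, 20, 3)   # 100 frames of a 20-joint skeleton
low_dim = to_low_dim(frames)
print(low_dim.shape)                   # (100, 6, 3)
```

The generation system then works in the opposite direction, reconstructing the remaining joints from these six trajectories.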

    A novel solution approach with ML-based pseudo-cuts for the Flight and Maintenance Planning problem

    Get PDF
    This paper deals with the long-term Military Flight and Maintenance Planning problem. In order to solve this problem efficiently, we propose a new solution approach based on a new Mixed Integer Program and the use of both valid cuts generated from initial conditions and learned cuts based on the prediction of certain characteristics of optimal or near-optimal solutions. These learned cuts are generated by training a Machine Learning model on the input data and results of 5000 instances. This approach helps to reduce the solution time with little loss in optimality and feasibility in comparison with alternative matheuristic methods. The obtained experimental results show the benefit of this new way of adding learned cuts to problems based on predicting specific characteristics of solutions.
    French Defense Procurement Agency of the French Ministry of Defense (DGA)
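One simple way to turn model predictions into constraints, sketched below under assumptions: a trained model outputs, for each binary variable, the probability that it equals 1 in near-optimal solutions, and confident predictions become pseudo-cuts that shrink the search space. The thresholds and the fixing rule are illustrative stand-ins, not the paper's actual features or ML pipeline.

```python
def learned_cuts(probabilities, lo=0.05, hi=0.95):
    """Return a list of (var_index, fixed_value) pseudo-cuts for variables
    whose predicted probability of being 1 is very low or very high."""
    cuts = []
    for j, p in enumerate(probabilities):
        if p >= hi:
            cuts.append((j, 1))   # cut: x_j = 1
        elif p <= lo:
            cuts.append((j, 0))   # cut: x_j = 0
    return cuts

preds = [0.99, 0.50, 0.02, 0.85]   # made-up model outputs
print(learned_cuts(preds))          # [(0, 1), (2, 0)]
```

Each returned pair would be added to the MIP as an equality constraint before solving; uncertain variables (here indices 1 and 3) are left free, which is what keeps the optimality loss small.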

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Get PDF
    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework for 12 activities in three different spatial environments using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches an accuracy of 98.54%, 94.25%, and 95.09% across those same environments.
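The attention layer that gives ABiLSTM its name can be sketched in a few lines: each time step's BiLSTM hidden state is scored, the scores are softmax-normalized over time, and the CSI sequence collapses into one attention-weighted vector before classification. The random weights and dimensions below are placeholders, not trained parameters from the paper.

```python
import numpy as np

def attention_pool(hidden, w):
    """hidden: (timesteps, dim) BiLSTM outputs; w: (dim,) scoring vector."""
    scores = hidden @ w                    # one relevance score per time step
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()      # softmax over time steps
    return weights @ hidden                # weighted sum: (dim,) context vector

rng = np.random.default_rng(0)
h = rng.normal(size=(50, 128))   # e.g. 50 CSI frames, 128-dim BiLSTM states
context = attention_pool(h, rng.normal(size=128))
print(context.shape)             # (128,)
```

In the full model this context vector feeds a dense softmax classifier over the 12 activities; the CNN-ABiLSTM variant additionally extracts convolutional features from the raw CSI before the BiLSTM.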

    Enhancing Mesh Deformation Realism: Dynamic Mesostructure Detailing and Procedural Microstructure Synthesis

    Get PDF
    We propose a solution for generating dynamic heightmap data to simulate deformations of soft surfaces, with a focus on human skin. The solution incorporates mesostructure-level wrinkles and utilizes procedural textures to add static microstructure details. It offers flexibility beyond human skin, enabling the generation of patterns mimicking deformations in other soft materials, such as leather, during animation.
    Existing solutions for simulating wrinkles and deformation cues often rely on specialized hardware, which is costly and not easily accessible. Moreover, relying solely on captured data limits artistic direction and hinders adaptability to changes. In contrast, our proposed solution provides dynamic texture synthesis that adapts to underlying mesh deformations in a physically plausible way. Various methods have been explored to synthesize wrinkles directly on the geometry, but they suffer from limitations such as self-intersections and increased storage requirements. Manual intervention by artists using wrinkle maps and tension maps provides control, but may be limited for complex deformations or where greater realism is needed. Our research presents the potential of procedural methods to enhance the generation of dynamic deformation patterns, including wrinkles, with greater creative control and without reliance on captured data. Incorporating static procedural patterns improves realism, and the approach can be extended to other soft materials beyond skin.
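The layering described above can be sketched as a toy heightmap: a dynamic mesostructure term whose wrinkle amplitude scales with the local tension of the mesh, added to a static procedural microstructure term. The sine-based ridge pattern and the random-noise microstructure are illustrative stand-ins for the procedural textures the work describes.

```python
import numpy as np

def heightmap(tension, freq=8.0, micro_scale=0.05, seed=0):
    """tension: (H, W) array in [0, 1]; returns an (H, W) heightmap."""
    h, w = tension.shape
    ys, xs = np.mgrid[0:h, 0:w] / max(h, w)
    # Mesostructure: wrinkle ridges, scaled by local tension so they
    # appear only where the underlying mesh actually deforms.
    meso = tension * np.sin(2 * np.pi * freq * xs)
    # Static microstructure: small-amplitude random detail (e.g. pores).
    rng = np.random.default_rng(seed)
    micro = micro_scale * rng.random((h, w))
    return meso + micro

t = np.zeros((64, 64)); t[:, 32:] = 1.0   # right half of the patch under tension
hm = heightmap(t)
print(hm.shape)   # (64, 64)
```

The undeformed half of the patch keeps only the faint microstructure, while the stretched half gains full-amplitude wrinkles, which is the behaviour an animation would drive per frame from a tension map.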

    A Study of Accommodation of Prosodic and Temporal Features in Spoken Dialogues in View of Speech Technology Applications

    Get PDF
    Inter-speaker accommodation is a well-known property of human speech and human interaction in general. Broadly, it refers to the behavioural patterns of two (or more) interactants and the effect of the (verbal and non-verbal) behaviour of each on that of the other(s). Implementation of this behaviour in spoken dialogue systems is desirable as an improvement on the naturalness of human-machine interaction. However, traditional qualitative descriptions of accommodation phenomena do not provide sufficient information for such an implementation. Therefore, a quantitative description of inter-speaker accommodation is required. This thesis proposes a methodology for monitoring accommodation during a human or human-computer dialogue, which utilizes a moving average filter over sequential frames for each speaker. These frames are time-aligned across the speakers, hence the name Time Aligned Moving Average (TAMA). Analysis of spontaneous human dialogue recordings by means of the TAMA methodology reveals ubiquitous accommodation of prosodic features (pitch, intensity and speech rate) across interlocutors, and allows for statistical (time series) modelling of the behaviour, in a way which is meaningful for implementation in spoken dialogue system (SDS) environments.
    In addition, a novel dialogue representation is proposed that provides an additional point of view to that of TAMA in monitoring accommodation of temporal features (inter-speaker pause length and overlap frequency). This representation is a percentage turn distribution of individual speaker contributions in a dialogue frame, which circumvents strict attribution of speaker turns by considering both interlocutors as synchronously active. Both TAMA and turn distribution metrics indicate that correlation of average pause length and overlap frequency between speakers can be attributed to accommodation (a debated issue), and point to possible improvements in SDS "turn-taking" behaviour.
    Although the findings of the prosodic and temporal analyses can directly inform SDS implementations, further work is required in order to describe inter-speaker accommodation sufficiently, as well as to develop an adequate testing platform for evaluating the magnitude of perceived improvement in human-machine interaction. Therefore, this thesis constitutes a first step towards a convincingly useful implementation of accommodation in spoken dialogue systems.
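The TAMA idea reduces to two steps: smooth each speaker's prosodic feature with a moving average over time-aligned frames, then look for correlation between the two smoothed series as evidence of accommodation. A minimal sketch, with made-up per-frame pitch values purely for illustration:

```python
def tama(series, window=3):
    """Moving average over sequential time-aligned frames for one speaker."""
    return [sum(series[max(0, i - window + 1):i + 1]) /
            len(series[max(0, i - window + 1):i + 1])
            for i in range(len(series))]

def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

speaker_a = [210, 215, 220, 230, 228, 235, 240]   # mean pitch per frame (Hz)
speaker_b = [180, 184, 190, 198, 197, 205, 210]

r = pearson(tama(speaker_a), tama(speaker_b))
print(round(r, 2))
```

A high correlation between the smoothed series (as with these converging toy values) is the quantitative signature of accommodation that an SDS could monitor and reproduce online.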

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    Get PDF
    No abstract available

    Knowledge-based Methods for Integrating Carbon Footprint Prediction Techniques into New Product Designs and Engineering Changes.

    Full text link
    This dissertation presents research focusing on the development of knowledge-based techniques for assessing the carbon footprint during new product creation. This research aims to transform the current time-consuming, off-line and reactive approach into an integrated proactive approach that relies on fast estimates of sustainability generated from past computations on similar products. The developed methods address multiple challenges by leveraging the latest advancements in open standards and software capabilities from machine learning and data mining to support integration and early decision-making using generic knowledge of the product development field. Life-Cycle Assessment (LCA)-based carbon footprint calculation typically starts by analyzing the product functions. However, the lack of a semantically correct formal representation of product functions is a barrier to their effective capture and reuse. We first identified the advanced semantics that must be captured to ensure appropriate usability for reasoning with product functions. We captured them in a Function Semantics Representation that relies on the Semantic Web Rule Language, a proposed Semantic Web standard, to overcome limitations of the commonly used Web Ontology Language. Several products are developed as Engineering Changes (ECs) of previous products, but not enough data is available before their implementation to predict the carbon footprint. In order to use past EC knowledge for this prediction, we proposed an approach to compute similarity between ECs that overcomes the challenge posed by the hierarchical nature of product knowledge, by integrating an approach inspired by research in psychology with semantics specific to product development. We embedded this in a parallelized Ant Colony-based clustering algorithm for faster retrieval of groups of similar ECs.
    We are not aware of approaches to predict the carbon footprint of an EC or a proposed design right after the proposal. In order to reuse carbon footprint information from past designs and engineering changes, key parameters were determined to represent lifecycle attributes. The carbon footprint is predicted through a surrogate LCA technique developed using case-based reasoning and boosted learning.
    Ph.D. Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/78846/1/scyang_1.pd
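The case-based reasoning core of such a surrogate LCA can be sketched simply: retrieve the most similar past cases by distance over the lifecycle parameters and average their known footprints. The parameters and footprint values below are invented for illustration; the dissertation's actual features, hierarchical similarity measure, and boosted-learning step are more elaborate.

```python
def estimate_footprint(query, cases, k=2):
    """cases: list of (parameter_vector, footprint_kgCO2e) pairs.
    Returns the mean footprint of the k nearest past cases."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(cases, key=lambda c: dist(query, c[0]))[:k]
    return sum(f for _, f in nearest) / k

past_cases = [
    ([1.0, 0.20, 3.0], 12.0),   # (mass, recycled fraction, transport, footprint)
    ([1.1, 0.25, 2.8], 13.0),
    ([5.0, 0.00, 9.0], 48.0),
]
print(estimate_footprint([1.05, 0.22, 2.9], past_cases))   # 12.5
```

The speed of this retrieval step, rather than a full LCA per proposal, is what makes early, proactive footprint feedback possible during design and EC review.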

    Large Scale Kernel Methods for Fun and Profit

    Get PDF
    Kernel methods are among the most flexible classes of machine learning models with strong theoretical guarantees. Wide classes of functions can be approximated arbitrarily well with kernels, while fast convergence and learning rates have been formally shown to hold. Exact kernel methods are known to scale poorly with increasing dataset size, and we believe that one of the factors limiting their usage in modern machine learning is the lack of scalable and easy-to-use algorithms and software. The main goal of this thesis is to study kernel methods from the point of view of efficient learning, with particular emphasis on large-scale data, but also on low-latency training and user efficiency. We improve the state of the art for scaling kernel solvers to datasets with billions of points using the Falkon algorithm, which combines random projections with fast optimization. Running it on GPUs, we show how to fully utilize available computing power for training kernel machines. To boost the ease of use of approximate kernel solvers, we propose an algorithm for automated hyperparameter tuning. By minimizing a penalized loss function, a model can be learned together with its hyperparameters, reducing the time needed for user-driven experimentation. In the setting of multi-class learning, we show that – under stringent but realistic assumptions on the separation between classes – a wide set of algorithms needs much fewer data points than in the more general setting (without assumptions on class separation) to reach the same accuracy. The first part of the thesis develops a framework for efficient and scalable kernel machines. This raises the question of whether our approaches can be used successfully in real-world applications, especially compared to alternatives based on deep learning which are often deemed hard to beat. The second part aims to investigate this question on two main applications, chosen because of the paramount importance of having an efficient algorithm. 
    First, we consider the problem of instance segmentation of images taken from the iCub robot. Here Falkon is used as part of a larger pipeline, but the efficiency afforded by our solver is essential to ensure smooth human-robot interactions. In the second instance, we consider time-series forecasting of wind speed, analysing the relevance of different physical variables to the predictions themselves. We investigate different schemes to adapt i.i.d. learning to the time-series setting. Overall, this work aims to demonstrate, through novel algorithms and examples, that kernel methods are up to computationally demanding tasks, and that there are concrete applications in which their use is warranted and more efficient than that of other, more complex, and less theoretically grounded models.
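The scaling trick behind solvers in this family can be illustrated with a plain Nyström-style kernel ridge regression: pick m "centers" from the n training points and solve only against those, turning the O(n²) kernel matrix into n-by-m and m-by-m pieces. This is a direct least-squares sketch for illustration, not the preconditioned iterative solver the thesis develops.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix between row-vector sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_krr(X, y, m=20, lam=1e-3, seed=0):
    """Fit kernel ridge regression restricted to m randomly chosen centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=m, replace=False)]
    Knm = gaussian_kernel(X, centers)            # (n, m) instead of (n, n)
    Kmm = gaussian_kernel(centers, centers)      # (m, m)
    alpha = np.linalg.solve(Knm.T @ Knm + lam * Kmm, Knm.T @ y)
    return centers, alpha

def predict(Xnew, centers, alpha):
    return gaussian_kernel(Xnew, centers) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0])
centers, alpha = nystrom_krr(X, y)
err = np.abs(predict(X, centers, alpha) - y).mean()
print(round(err, 3))
```

With 20 centers out of 500 points the fit to a smooth target is already accurate; the thesis's contribution is making this kind of approximation work at billion-point scale on GPUs with fast iterative optimization.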