
    Worker scheduling with induced learning in a semi-on-line setting

    Scheduling is a widely researched area with many interesting subfields. The presented research deals with a maintenance setting in which preventive maintenance and emergency jobs enter the system. Each job has a varying processing time and must be scheduled. Through learning, the operators expand their knowledge, which enables them to accomplish more tasks in a limited time. Two MINLP models are presented: one for preventive maintenance jobs alone, and another that also includes emergency jobs. The emergency model is semi-on-line, as the arrival times are unknown. A corresponding heuristic method has also been developed to reduce the computational time of the MINLP models. The models and the heuristic were tested in several settings to determine their flexibility. It is demonstrated that the inclusion of learning greatly improves the efficiency of the workers and of the system.
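The abstract does not give the thesis' actual MINLP formulation, but the learning effect it describes is commonly modeled with a position-based log-linear learning curve. The sketch below is a minimal illustration under that assumption; the function names and the 80%-curve learning index are illustrative, not taken from the thesis.

```python
# Hypothetical sketch of a position-based learning effect in scheduling,
# assuming a log-linear learning curve (not the thesis' actual MINLP model).

def actual_time(base_time, position, learning_index=-0.322):
    """Processing time of a job placed at `position` (1-based) in an
    operator's sequence, shrunk by a log-linear learning curve.
    learning_index = log2(0.8) ~ -0.322 corresponds to an 80% curve."""
    return base_time * position ** learning_index

def sequence_makespan(base_times):
    """Total time for one operator processing jobs in the given order."""
    return sum(actual_time(p, r) for r, p in enumerate(base_times, start=1))

# Learning makes later jobs faster, so the total falls below the naive sum.
jobs = [10.0, 10.0, 10.0, 10.0]
print(sequence_makespan(jobs) < sum(jobs))  # True
```

Under such a model, the order in which jobs are assigned changes their effective durations, which is what makes the scheduling problem nonlinear.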

    Cognitive Decay And Memory Recall During Long Duration Spaceflight

    This dissertation aims to advance the efficacy of Long-Duration Space Flight (LDSF) pre-flight and in-flight training programs, acknowledging existing knowledge gaps in NASA's methodologies. The research's objective is to optimize the cognitive workload of LDSF crew members, enhance their neurocognitive functionality, and provide more meaningful work experiences, particularly for Mars missions. The study addresses identified shortcomings in current training and learning strategies and simulation-based training systems, focusing on areas requiring quantitative measures for astronaut proficiency and training effectiveness assessment. The project centers on understanding cognitive decay and memory loss under LDSF-related stressors, seeking to establish when such cognitive decline exceeds acceptable performance levels throughout mission phases. The research acknowledges the limitations of creating a near-orbit environment due to resource constraints and the need to develop engaging tasks for test subjects. Nevertheless, it underscores the potential impact on future space mission training and other high-risk professions. The study further explores astronaut training complexities, the challenges encountered in LDSF missions, and the cognitive processes involved in such demanding environments. The research employs various cognitive and memory testing events, integrating neuroimaging techniques to understand the neural mechanisms of cognition and memory. It also explores Rasmussen's S-R-K behaviors and Brain Network Theory's (BNT) potential for measuring forgetting and cognition and predicting training needs. The multidisciplinary approach of the study reinforces the importance of integrating insights from cognitive psychology, behavior analysis, and brain connectivity research.
Research experiments were conducted at the University of North Dakota's Integrated Lunar Mars Analog Habitat (ILMAH), gathering data from selected subjects via cognitive neuroscience tools and Electroencephalography (EEG) recordings to evaluate neurocognitive performance. The data analysis aimed to assess brain network activations during mentally demanding activities and compare EEG power spectra across various frequencies, latencies, and scalp locations. Despite facing certain challenges, including inadequacies of the current adapter boards leading to analysis failure, the study provides crucial lessons for future research endeavors. It highlights the need for swift adaptation, continual process refinement, and innovative solutions, like the redesign of adapter boards for high radio frequency noise environments, for the collection of high-quality EEG data. In conclusion, while the research did not reveal statistically significant differences between the experimental and control groups, it furnished valuable insights and underscored the need to optimize astronaut performance, well-being, and mission success. The study contributes to the ongoing evolution of training methodologies, with implications for future space exploration endeavors.
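The dissertation's EEG pipeline is not described in detail here, but the kind of quantity it compares, power in a frequency band of an EEG channel, can be illustrated with a simple periodogram. This is only a sketch: the sampling rate, band edges, and synthetic signal below are assumptions, not the study's data or method.

```python
import numpy as np

# Illustrative band-power estimate from a single EEG channel via a
# periodogram (not the dissertation's actual analysis pipeline).

def band_power(signal, fs, f_lo, f_hi):
    """Average power of `signal` in the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

fs = 256  # Hz, a typical EEG sampling rate (assumed)
t = np.arange(fs * 4) / fs
eeg = np.sin(2 * np.pi * 10 * t)  # synthetic 10 Hz alpha-band oscillation
# Alpha band (8-12 Hz) should dominate beta (13-30 Hz) for this signal.
print(band_power(eeg, fs, 8, 12) > band_power(eeg, fs, 13, 30))  # True
```

In practice a windowed estimator such as Welch's method would be preferred over a raw periodogram for noisy recordings.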

    Development of a Human Reliability Analysis (HRA) model for break scheduling management in human-intensive working activities

    2016 - 2017. Human factors play an inevitable role in working contexts, and the occurrence of human errors impacts system reliability and safety, equipment performance, and economic results. While human fallibility contributes to the majority of incidents and accidents in high-risk systems, it mainly affects quality and productivity in low-risk systems. Due to the prevalence of human error and its huge and often costly consequences, considerable effort has been made in the field of Human Reliability Analysis (HRA), leading to methods with the common purpose of predicting the human error probability (HEP) and enabling safer and more productive designs. The purpose of each HRA method should be HEP quantification, in order to reduce and prevent possible error conditions in a working context. However, existing HRA methods do not always pursue this aim efficiently, focusing instead on qualitative error evaluation and on high-risk contexts. Moreover, several working aspects have been considered to prevent accidents and improve human performance in human-intensive working contexts, such as the selection of adequate work-rest policies. It is well known that introducing breaks is a key intervention to provide recovery after fatiguing physical work, prevent the growth of accident risks, and improve human reliability and productivity for individuals engaged in either mental or physical tasks. This is a very efficient approach, even if it is not widely applied. ... [edited by Author]
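The abstract centers on quantifying HEP. One well-known way established HRA methods of the SPAR-H family do this is by adjusting a nominal HEP with a composite performance shaping factor; the sketch below illustrates that general idea only, and is not the model developed in this thesis.

```python
# SPAR-H style HEP adjustment, shown only to illustrate HEP quantification
# (not this thesis' break-scheduling HRA model; numbers are illustrative).

def adjusted_hep(nhep, psf_composite):
    """Adjust a nominal human error probability (NHEP) by a composite
    performance shaping factor (PSF), bounded so the result stays a
    valid probability even for very large PSF values."""
    return (nhep * psf_composite) / (nhep * (psf_composite - 1) + 1)

# Degraded conditions (PSF > 1) raise the error probability,
# but never above 1.
print(adjusted_hep(0.01, 10))        # ~0.0917
print(adjusted_hep(0.01, 10) < 1.0)  # True
```

A break-scheduling model in this spirit could treat fatigue as one of the shaping factors that grows between rest periods.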

    Bee Hive Monitoring System - Solutions for the automated health monitoring

    Up to one third of the global food production depends on the pollination of honey bees, making them vital for the world economy. However, between forest fires, human-induced stress, poor nutrition, pollution, biodiversity loss, intensive agriculture, and predators such as Asian hornets, there are plenty of threats to the honey bees' survival. From these problems, a rising need for automated solutions that can aid with remote monitoring of bee hives can be observed. The goal of this thesis is to develop Machine Learning based solutions to problems that can be identified in beekeeping and apiculture, using Deep Learning, Computer Vision and Signal Processing techniques and concepts. The current document describes the master's thesis work, motivated by the above problem statement, including the literature review, value analysis, design, testing and validation planning, and the development and computational study of the solutions. Specifically, this master's thesis work consisted in developing solutions to three problems: bee health classification through images and through audio, and bee and Asian wasp detection. The results obtained for bee health classification through images were significantly satisfactory, exceeding those reported by the baseline work found during the literature review.
In the case of bee health classification through audio and of bee and Asian wasp detection, the results obtained were modest and showcase the potential applicability of the respective developed methodologies to the target problems. It is expected that stakeholders of this thesis obtain adequate information, methodologies and insights into the development of AI solutions that can be integrated into a bee health monitoring system, including the inherent costs and challenges that arise with the implementation of the solutions. Future work of this master's thesis consists in improving the results of the bee health classification through audio and the object detection models, including publishing papers to seek validation by the scientific community.
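The thesis' actual models are deep networks, which the abstract does not specify. As a stand-in, the image classification task can be illustrated with a deliberately minimal nearest-centroid baseline, assuming each hive image has already been reduced to a small feature vector; all labels and numbers below are hypothetical.

```python
import math

# Minimal nearest-centroid baseline illustrating image-based bee health
# classification (not the thesis' deep-learning models; features,
# labels, and values are hypothetical).

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(x, centroids):
    """Return the label whose class centroid is closest to x."""
    return min(centroids, key=lambda lbl: math.dist(x, centroids[lbl]))

train = {
    "healthy":  [[0.9, 0.1], [0.8, 0.2]],
    "diseased": [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {lbl: centroid(vs) for lbl, vs in train.items()}
print(classify([0.85, 0.15], centroids))  # healthy
```

A convolutional network replaces the hand-made feature vectors with learned ones, but the final decision step is conceptually similar.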

    Generalizability of Predictive Performance Optimizer Predictions across Learning Task Type

    The purpose of my study is to understand the relationship between learning and forgetting rates estimated by a cognitive model at the level of the individual and overall task performance across similar learning tasks. Cognitive computational models are formal representations of theories that enable better understanding and prediction of dynamic human behavior in complex environments (Adner, Polos, Ryall, & Sorenson, 2009). The Predictive Performance Optimizer (PPO) is a cognitive model and training aid, grounded in learning theory, that tracks quantitative performance data and makes predictions of future performance. It does so by estimating learning and decay rates for specific tasks and trainees. In this study, I used three learning tasks to assess individual performance and the model's potential to generalize parameters and retention interval predictions at the level of the individual and across similar-type tasks. The similar-type tasks were memory recall tasks, and the different-type task was a spatial learning task. I hypothesized that the raw performance scores, PPO optimized parameter estimates, and PPO predictions for each individual would be similar for the two learning tasks of the same type and different for the different-type learning task. Fifty-eight participants completed four training sessions, each consisting of the three tasks. I used the PPO to assess performance on task, knowledge acquisition, learning, forgetting, and retention over time. Additionally, I tested PPO generalizability by assessing fit when PPO optimized parameters for one task were applied to another. Results showed similarities in performance, PPO optimization trends, and predicted performance trends across similar task types, and differences for the different-type task. As hypothesized, the results for PPO parameter generalizability and overall performance predictions were less distinct.
Outcomes of this study suggest potential differences in learning and retention based on task-type designation and potential generalizability of the PPO by accounting for these differences. This decreases the requirement for individual performance data on a specific task to determine training optimization scheduling.
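The PPO's actual equations are not reproduced in the abstract; models in this family typically combine power-law learning over practice with power-law forgetting over the retention interval. The sketch below illustrates that general shape only, with illustrative parameter names, and is not the PPO itself.

```python
# Hedged sketch of a power-law learning-and-decay prediction, in the
# spirit of models like the PPO (the PPO's real equations and parameter
# names are not reproduced here; everything below is illustrative).

def predicted_performance(trials, learning_rate, decay_rate, elapsed):
    """Power-law acquisition over `trials` practice events, discounted
    by power-law forgetting over `elapsed` time since the last session."""
    learned = trials ** learning_rate        # grows with practice
    retained = (1 + elapsed) ** -decay_rate  # shrinks with delay
    return learned * retained

# More practice raises predicted performance; longer delay lowers it.
print(predicted_performance(16, 0.5, 0.2, 0)
      > predicted_performance(4, 0.5, 0.2, 0))   # True
print(predicted_performance(16, 0.5, 0.2, 24)
      < predicted_performance(16, 0.5, 0.2, 0))  # True
```

Fitting the two rate parameters per trainee and task, then reusing them across tasks, is the kind of generalization the study evaluates.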

    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on its main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers' strong community by recognizing highly deserving authors.

    Coarse-grained Localization of In-body Energy-harvesting Nanonodes

    Nanoscale devices with Terahertz (THz) wireless communication capabilities are envisioned for sensing and actuation-based applications within human bloodstreams. These devices detect biomarkers, enable targeted drug delivery, and improve precision diagnostics. Flow-guided nanoscale localization utilizes THz-based communication between nanonodes and anchors. This approach is envisaged to accurately locate the regions where events occur by using the nanodevice's circulation time through the bloodstream. This enables precise identification of disease biomarkers, viruses, and bacteria, facilitating targeted intervention and early detection of health conditions. To avoid the pitfalls encountered in benchmarking and standardizing traditional indoor localization, this work presents a workflow for standardized performance evaluation of flow-guided nanoscale localization. The workflow is implemented in the form of an open source simulator that accounts for nanodevice mobility, in-body THz communication with on-body anchors, and energy-related constraints. The simulator generates raw data that can be used to streamline different flow-guided localization solutions and establish standardized performance benchmarks. The evaluation is performed as a design space exploration. The results indicate that the proposed workflow and simulator can capture the performance of flow-guided localization approaches in a way that allows objective comparison with other approaches, serving as the foundation for standardized evaluation of future solutions.
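The circulation-time idea behind flow-guided localization can be sketched very coarsely: if the mean loop time through each candidate body region is known, an observed time between two anchor contacts points to a region. This is an illustration of the concept only, not the simulator; the region names and times below are hypothetical.

```python
# Illustrative coarse region estimation from a nanonode's circulation
# time between two contacts with an on-body anchor (not the thesis'
# simulator; regions and mean loop times are hypothetical).

expected_time = {"head": 50.0, "torso": 60.0, "legs": 75.0}  # seconds

def estimate_region(observed_time):
    """Pick the region whose expected loop time best matches the
    observed inter-contact interval."""
    return min(expected_time,
               key=lambda r: abs(expected_time[r] - observed_time))

print(estimate_region(72.0))  # legs
```

Real evaluations must additionally model missed anchor contacts due to energy intermittency and THz channel loss, which is what makes a standardized simulator valuable.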

    Methods for Reducing the Spread of Misinformation on the Web

    The significant growth of the internet over the past thirty years has reduced the cost of access to information for anyone with unfettered access to it. During this period, internet users were also empowered to create new content that could instantly reach millions of people via social media platforms like Facebook and Twitter. This transformation broke down the traditional ways mass-consumed content was distributed and ultimately ushered in the era of citizen journalism and freeform content. The unrestricted ability to create and distribute information was considered a significant triumph of freedom and liberty. However, the new modes of information exchange posed new challenges for modern societies, namely trust, integrity, and the spread of misinformation. Before the emergence of the internet, newsrooms and editorial procedures imposed minimum standards on published information; today, no such requirements apply when posting content on social media platforms. This change led to the proliferation of information that attracts attention but lacks integrity and reliability. There are currently two broad approaches to solving the problem of information integrity on the internet: first, the revival of trusted and reliable sources of information; second, the creation of new mechanisms for increasing the quality of information published and spread on major social media platforms. These approaches are still in their infancy, each having its pros and cons. In this thesis, we explore the latter and develop modern machine learning methods that can help identify (un)reliable information and its sources, efficiently prioritize content requiring human fact-checking at scale, and ultimately minimize its harm to end-users by improving the quality of the news feeds that users access. This thesis leverages the collaborative dynamics of content creation on Wikipedia to extract a grounded measure of information and source reliability.
We also develop a method capable of modifying the ranking algorithms widely used on social media platforms such as Facebook and Twitter to minimize the long-term harm posed by the spread of misinformation.
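The general idea of a reliability-aware modification to an engagement-driven ranker can be sketched as down-weighting each item's engagement score by its source's estimated reliability. This is a hedged illustration of the concept only; the field names, weight parameter, and scores below are hypothetical, not the thesis' actual method.

```python
# Hedged sketch of reliability-aware feed ranking (not the thesis'
# method): down-weight engagement by estimated source reliability.

def rerank(posts, weight=1.0):
    """Order posts by engagement * reliability^weight, both in [0, 1];
    raising `weight` penalizes unreliable sources more strongly."""
    return sorted(posts,
                  key=lambda p: p["engagement"] * p["reliability"] ** weight,
                  reverse=True)

feed = [
    {"id": "a", "engagement": 0.9, "reliability": 0.2},  # viral, dubious
    {"id": "b", "engagement": 0.6, "reliability": 0.9},  # solid source
]
print([p["id"] for p in rerank(feed)])  # ['b', 'a']
```

A grounded reliability measure, such as one derived from Wikipedia's sourcing dynamics, would supply the `reliability` values.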

    Risk-based decision support system for life cycle management of industrial plants

    Dissertation for obtaining the degree of Doctor in Electrical and Computer Engineering. The objective of this thesis is to contribute to a better understanding of the decision making process in industrial plants, specifically in situations with impact on the long-term performance of the plant. The way decisions are made, and especially the motivations that lead to the selection of a specific course of action, are sometimes unclear and lack justification. This is particularly critical in cases where inappropriate decisions lead to an increase in production costs. Industrial plants are among these cases, specifically the ones that still lack enhanced monitoring technologies and associated decision support systems. Maintenance has been identified as one of the critical areas regarding impact on performance, because maintenance costs still represent a considerable slice of production costs. Thus, understanding the way maintenance procedures are executed and, more importantly, the methods used to decide when and how maintenance should be carried out, has been a concern of decision makers in industrial plants. This thesis proposes a methodology to efficiently transform the existing information on plant behaviour into knowledge that may be used to support the decision process in maintenance activities. The development of an appropriate knowledge model relating the core aspects of the process enables the extraction of new knowledge based on past experience. This thesis also proposes a methodology to calculate the risk associated with each maintenance situation and, based on the possible actions and the impact they may have on plant cost performance, suggests the most appropriate course. The suggestion aims at minimizing life cycle costs. Results have been validated in test cases performed both in simulation and in real industrial environments.
The results obtained in the test cases demonstrated the feasibility of the developed methodology, as well as its adequacy and applicability in the domain of interest.
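The risk-based selection idea can be sketched in its simplest textbook form: score each candidate maintenance action by its direct cost plus risk, where risk is failure probability times consequence cost, and choose the minimum. This is an illustration of the principle only; the action names and numbers are hypothetical, not the thesis' knowledge model.

```python
# Illustrative risk-based action selection (not the thesis' model):
# expected cost = direct cost + failure probability * consequence cost.

def expected_cost(action):
    """Expected life cycle cost contribution of one maintenance action."""
    return action["cost"] + action["p_fail"] * action["consequence"]

actions = [
    {"name": "run_to_failure", "cost": 0.0,  "p_fail": 0.30, "consequence": 500.0},
    {"name": "preventive",     "cost": 80.0, "p_fail": 0.05, "consequence": 500.0},
]
best = min(actions, key=expected_cost)
print(best["name"])  # preventive
```

The thesis' contribution lies in deriving the probabilities and consequence estimates from a knowledge model of past plant behaviour, rather than assuming them as fixed inputs.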