
    Deep Learning-Based Robotic Perception for Adaptive Facility Disinfection

    Hospitals, schools, airports, and other environments built for mass gatherings can become hot spots for microbial pathogen colonization, transmission, and exposure, greatly accelerating the spread of infectious diseases across communities, cities, nations, and the world. Outbreaks of infectious diseases impose huge burdens on society. Mitigating the spread of infectious pathogens within mass-gathering facilities requires routine cleaning and disinfection, which under current practice are performed primarily by cleaning staff. However, manual disinfection is limited in both effectiveness and efficiency, as it is labor-intensive, time-consuming, and hazardous to workers' health. While existing studies have developed a variety of robotic systems for disinfecting contaminated surfaces, those systems are not adequate for intelligent, precise, and environmentally adaptive disinfection. They are also difficult to deploy in mass-gathering infrastructure facilities, given the high volume of occupants. Therefore, there is a critical need for an adaptive robotic system capable of complete and efficient indoor disinfection. The overarching goal of this research is to develop an artificial intelligence (AI)-enabled robotic system that adapts to ambient environments and social contexts for precise and efficient disinfection, thereby maintaining environmental hygiene and health, reducing unnecessary labor costs for cleaning, and mitigating opportunity costs incurred from infections. To these ends, this dissertation first develops a multi-classifier decision fusion method, which integrates scene-graph and visual information to recognize patterns of human activity in infrastructure facilities. Next, a deep-learning-based method is proposed for detecting and classifying indoor objects, and a new mechanism is developed to map detected objects into 3D maps. A novel framework is then developed to detect and segment object affordances and to project them into a 3D semantic map for precise disinfection. Subsequently, a novel deep-learning network that integrates multi-scale and multi-level features with an encoder network is developed to recognize the materials of surfaces requiring disinfection. Finally, a novel computational method is developed to link the recognized object surface information to robot disinfection actions with optimal disinfection parameters.
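    The dissertation's fusion method is not spelled out in this summary; purely as an illustration of multi-classifier decision fusion, the Python sketch below combines class-probability vectors from two hypothetical classifiers (a visual model and a scene-graph model) by weighted averaging. The activity names and weights are invented for the example, not taken from the dissertation.

```python
import numpy as np

# Minimal sketch of multi-classifier decision fusion (hypothetical classes
# and weights; the dissertation's actual fusion rule may differ).
ACTIVITIES = ["queueing", "dining", "walking"]

def fuse_decisions(p_visual: np.ndarray,
                   p_scene_graph: np.ndarray,
                   w_visual: float = 0.6,
                   w_scene_graph: float = 0.4) -> str:
    """Combine per-class probabilities from two classifiers by weighted sum."""
    fused = w_visual * p_visual + w_scene_graph * p_scene_graph
    fused /= fused.sum()                      # renormalize to a distribution
    return ACTIVITIES[int(np.argmax(fused))]  # pick the top fused class

# Example: the visual stream favors "dining", the scene graph "queueing".
p_vis = np.array([0.30, 0.50, 0.20])
p_sg  = np.array([0.55, 0.25, 0.20])
print(fuse_decisions(p_vis, p_sg))
```

    The weights would in practice be learned or tuned on validation data, so that whichever classifier is more reliable in a given facility context dominates the fused decision.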

    Efficient deep neural network inference for embedded systems: A mixture of experts approach

    Deep neural networks (DNNs) have become one of the dominant machine learning approaches in recent years for many application domains. Unfortunately, DNNs are not well suited to addressing the challenges of embedded systems, where on-device inference on battery-powered, resource-constrained devices is often infeasible due to prohibitively long inferencing time and resource requirements. Furthermore, offloading computation into the cloud is often infeasible due to a lack of connectivity, high latency, or privacy concerns. While compression algorithms often succeed in reducing inferencing times, they come at the cost of reduced accuracy. The key insight here is that multiple DNNs, of varying runtimes and prediction capabilities, are capable of correctly making a prediction on the same input. By choosing the fastest capable DNN for each input, the average runtime can be reduced. Furthermore, the fastest capable DNN changes depending on the evaluation criterion. This thesis presents a new, alternative approach to enable efficient execution of DNN inference on embedded devices; the aim is to reduce average DNN inferencing times without a loss in accuracy. Central to the approach is a Model Selector, which dynamically determines which DNN to use for a given input, by considering the desired evaluation metric and inference time. It employs statistical machine learning to develop a low-cost predictive model (a "premodel") that quickly selects a DNN to use for a given input and optimisation constraint. First, the approach is shown to work effectively with off-the-shelf pre-trained DNNs. The approach is then extended by combining typical DNN pruning techniques with statistical machine learning in order to create a set of specialised DNNs designed specifically for use with a Model Selector. Two typical DNN application domains are used during evaluation: image classification and machine translation. Evaluation is reported on an NVIDIA Jetson TX2 embedded deep learning platform, and a range of influential DNN models including convolutional and recurrent neural networks are considered. In the first instance, utilising off-the-shelf pre-trained DNNs, a 44.45% reduction in inference time with a 7.52% improvement in accuracy, over the most-capable single DNN model, is achieved for image classification. For machine translation, inference time is reduced by 25.37% over the most-capable model with little impact on the quality of the translation. Further evaluation utilising specialised DNNs did not yield an accurate premodel and produced poor results; however, analysis of a perfect premodel shows the potential for faster inference times and reduced resource requirements compared with utilising off-the-shelf DNNs.
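    The thesis's premodel is not reproduced in this abstract; the sketch below only illustrates the general idea, with assumed names throughout: hand-picked cheap input features, an oracle label per training input recording the index of the fastest DNN that classified it correctly, and a shallow decision tree standing in for the statistical premodel.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Candidate "DNNs", ordered fastest to slowest (placeholders here; in the
# thesis these are real networks of varying capacity and runtime).
def run_dnn(index: int, x: np.ndarray) -> int:
    """Pretend to run the index-th DNN and return its predicted label."""
    return int(x.sum()) % 10  # placeholder inference

# Offline: train the premodel on cheap-to-compute input features, labelled
# with the fastest DNN that got each training input right (oracle labels
# gathered in an offline profiling pass).
rng = np.random.default_rng(0)
features = rng.random((500, 7))          # e.g. brightness, edge counts, ...
best_dnn = rng.integers(0, 3, size=500)  # assumed oracle labels

premodel = DecisionTreeClassifier(max_depth=4).fit(features, best_dnn)

# Online: pay the tiny premodel cost, then run only the selected DNN
# instead of always running the most capable (slowest) one.
def classify(x_features: np.ndarray, x_raw: np.ndarray) -> int:
    choice = int(premodel.predict(x_features.reshape(1, -1))[0])
    return run_dnn(choice, x_raw)

print(classify(rng.random(7), rng.random(224 * 224)))
```

    The design works because the selector's cost (a few comparisons in a shallow tree) is negligible next to even the fastest DNN, so every input routed away from the slowest model is a net saving.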

    MULTIDISCIPLINARY TECHNIQUES FOR THE SIMULATION OF THE CONTACT BETWEEN THE FOOT AND THE SHOE UPPER IN GAIT: VIRTUAL REALITY, COMPUTATIONAL BIOMECHANICS, AND ARTIFICIAL NEURAL NETWORKS

    This thesis proposes the use of multidisciplinary techniques as a viable alternative to current footwear-evaluation procedures, which typically consume considerable human and technical resources. These techniques are Virtual Reality, Computational Biomechanics, and Artificial Neural Networks. The framework of this thesis is the virtual analysis of mechanical comfort in footwear, that is, the analysis of comfort pressures in footwear, and its main objective is to predict the pressures exerted by the shoe on the foot surface during gait by simulating the contact at this interface. In particular, a software application was developed that uses the Finite Element Method to simulate the deformation of the footwear. A preliminary model describing the behaviour of the shoe upper was developed, an automatic process for foot-shoe adjustment was implemented, and a methodology for obtaining a generic animation of each individual's step was presented. In addition, with the aim of improving the developed application, new models were proposed to simulate the behaviour of the shoe upper during gait. Artificial Neural Networks were also applied in this thesis to predict the force exerted by a sphere that, simulating a bone, pushes against a material sample, as well as to predict the pressures exerted by the shoe upper on the foot surface (dorsal pressures) over a complete step. The main contributions of this thesis are: the development of an innovative simulator that will allow footwear manufacturers to carry out virtual evaluations of the characteristics of their designs without having to build the physical prototype, and the development of an equally innovative tool that will allow them to predict the dorsal pressures exerted by the footwear on the foot surface during gait. Rupérez Moreno, MJ. (2011). MULTIDISCIPLINARY TECHNIQUES FOR THE SIMULATION OF THE CONTACT BETWEEN THE FOOT AND THE SHOE UPPER IN GAIT: VIRTUAL REALITY, COMPUTATIONAL BIOMECHANICS, AND ARTIFICIAL NEURAL NETWORKS [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11235
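    As a minimal illustration of the sphere-indentation experiment described above, the Python sketch below trains a small ANN on synthetic data; the Hertz-like contact law, feature ranges, and network size are all assumptions for the example, not the thesis's actual model or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the experiment: an ANN maps indentation depth and
# a material stiffness parameter to the contact force of a sphere pushed
# into a material sample.
rng = np.random.default_rng(42)
depth = rng.uniform(0.0, 5.0, 1000)        # indentation depth (mm, assumed)
stiffness = rng.uniform(0.1, 2.0, 1000)    # assumed material parameter
force = stiffness * depth**1.5             # Hertz-like toy contact law
force += rng.normal(0.0, 0.05, 1000)       # measurement noise

X = np.column_stack([depth, stiffness])
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(X, force)

print(ann.predict([[3.0, 1.0]]))  # predicted force for a new indentation
```

    The thesis's second ANN application, predicting dorsal pressures over a full step, would follow the same regression pattern with gait-dependent inputs and a pressure map as the target.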

    Management, Technology and Learning for Individuals, Organisations and Society in Turbulent Environments

    This book presents the collection of fifty papers which were presented at the Second International Conference on BUSINESS SUSTAINABILITY 2011 - Management, Technology and Learning for Individuals, Organisations and Society in Turbulent Environments, held in Póvoa de Varzim, Portugal, from 22nd to 24th of June, 2011. The main motive for the meeting was the growing awareness of the importance of the sustainability issue. This importance has emerged from the growing uncertainty of market behaviour, which leads to the characterization of the market, i.e. the environment, as turbulent. Indeed, characterizing the environment as uncertain and turbulent reflects the fact that traditional technocratic and/or socio-technical approaches cannot effectively and efficiently deal with the present situation. In other words, the rise of the sustainability issue means the quest for new instruments to deal with uncertainty and/or turbulence. The sustainability issue has a complex nature, and solutions are sought across a wide range of domains and instruments to achieve and manage it. The domains range from environmental sustainability (referring to the natural environment) through organisational and business sustainability to social sustainability. The instruments for sustainability range from traditional engineering and management methodologies to "soft" instruments such as knowledge, learning, and creativity. The papers in this book address virtually the whole sustainability problem space to a greater or lesser extent. However, although uncertainty and/or turbulence, in other words the dynamic properties, come from the coupling of management, technology, learning, individuals, organisations and society, meaning that everything is at the same time effect and cause, we wanted to put the emphasis on business, with the intention of addressing primarily companies and their businesses. For this reason, the main title of the book is "Business Sustainability 2.0", with the approach of coupling Management, Technology and Learning for individuals, organisations and society in Turbulent Environments. The notation "2.0" also promotes the publication as a step beyond our previous publication, "Business Sustainability I", as would be the case for a new version of software. A particularity of the Second International Conference on BUSINESS SUSTAINABILITY was that it served primarily as a learning environment, in which the papers published in this book were the ground for further individual and collective growth in the understanding and perception of sustainability and in the capacity for building new instruments for business sustainability. In that respect, the methodology of the conference was basically dialogical, promoting dialogue on the papers while also including formal paper presentations. In this way, the conference presented a rich space for satisfying different authors' and participants' needs. Additionally, to promote the widest and most global learning environment and participation, and in accordance with the Conference's assumed mission to promote Proactive Generative Collaborative Learning, the Conference Organisation shares with the community the papers presented in this book, as well as the papers presented at the previous Conference(s). These papers can be accessed from the conference webpage (http://labve.dps.uminho.pt/bs11).
    In these terms, this book can also be understood as a complementary instrument for the Conference authors and participants, as well as for the wider readership interested in sustainability issues. The book brought together 107 authors from 11 countries, namely Australia, Belgium, Brazil, Canada, France, Germany, Italy, Portugal, Serbia, Switzerland, and the United States of America. The authors ranged from senior and renowned scientists to young researchers, providing a rich learning environment. Finally, the editors hope that this book will be useful, meeting the expectations of the authors and the wider readership, serving to enhance individual and collective learning, and providing an incentive for further scientific development and the creation of new papers. The editors would also like to use this opportunity to announce their intention to continue with new editions of the conference and subsequent editions of the accompanying books on the subject of BUSINESS SUSTAINABILITY, the third of which is planned for the year 2013.

    Biometric authentication and identification through electrocardiogram signals

    Integrated Master's thesis, Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), 2021, Universidade de Lisboa, Faculdade de Ciências. Biometric recognition has been the subject of numerous investigations over the years, with the fingerprint, face, and iris being the most explored biometric traits. Despite their high potential for technological applications, some studies point out limitations of these traits, namely a lack of reliability and practicality in a biometric system. Recently, several studies have explored the potential of the electrocardiogram (ECG) as a biometric trait, since it is unique and singular to each individual and, being a physiological signal, difficult for another person to steal. This dissertation investigated the possibility of using ECG signals as a biometric trait for biometric identification and authentication systems. For this purpose, a public database called the Check Your Biosignals Here initiative (CYBHi), created to foster biometric research, was used. The acquisition sessions involved 63 participants and took place at two distinct moments separated by three months, in an "off-the-person" modality, using an electrode on the palm of the hand and conductive lycras on the fingers. In a biometric system, the signals from the first acquisition correspond to the data stored in the database, while the signals from the second acquisition correspond to the data to be identified or authenticated by the system. The biometric identification and authentication systems proposed in this dissertation comprise different phases: pre-processing, processing, and classification. Pre-processing consisted of applying a 4th-order IIR band-pass filter to remove noise and artifacts arising from muscle activity and from the electrical impedance of the acquisition equipment. The processing phase consisted of extracting and generating the biometric templates that serve as inputs to the classification algorithms. First, the cardiac cycles were extracted using NeuroKit2, available in Python. To this end, the R peaks of the ECG signals were located, and the signals were then segmented into cardiac cycles with 200 samples before and 400 samples after each peak. To remove the noisiest segments, the cardiac cycles were submitted to a segment-elimination algorithm that consisted of finding, for each subject, the 20 and 60 cycles closest to one another, designated Set 1 and Set 2, respectively. From these two sets of cycles, two types of templates were created: 1) the cardiac cycles themselves, and 2) scalograms generated from the cycles through the continuous wavelet transform, with two distinct sizes, 56x56 and 224x224, denoted Size 56 and Size 224, respectively. Due to the large size of the scalograms, independent component analysis was used to reduce their dimensionality. The biometric systems proposed in this investigation were thus tested with the sets of 20 and 60 templates, both for cycles and for scalograms, in order to evaluate the system's performance when more or fewer templates are used for the identification and authentication processes. The templates were also tested with and without normalization, so that the benefits of this process could be analysed.
    Classification was performed using different methods, tested in an "across-session" modality, that is, the data from the 2nd acquisition, considered the test data, were compared with the data from the 1st acquisition, considered the training data, in order to be classified. For the identification system with cardiac cycles, different classifiers were tested, namely LDA, kNN, DT, and SVM. For kNN and SVM, an optimization was carried out to find the value of "k" and the values of γ and C, respectively, that allow the system to achieve the best possible performance. The best performance was obtained with LDA, reaching an identification rate of 79.37% for the best configuration, that is, using 60 normalized cycles. The scalogram-based templates were tested as inputs to two distinct methods: 1) neural networks, and 2) a distance-based algorithm. The best performance was an identification rate of 69.84%, obtained when using 60 not-normalized scalograms of Size 224. The identification results thus proved that using more templates (60) to identify an individual optimizes the performance of the biometric system, regardless of the type of template used. Moreover, normalization proved to be an essential step for identification with cardiac cycles, whereas this did not hold for scalograms. This study demonstrated that using cycles has more potential to make a biometric identification system efficient than using scalograms. Regarding the biometric authentication system, a distance-based algorithm was used, tested with the two types of templates in a concatenated configuration, that is, a configuration in which each subject is represented by a signal containing a sequence of all of their templates, one after the other. The performance of the system was evaluated on the basis of the authentication rate and the impostor rate, which indicate, respectively, the number of individuals correctly authenticated relative to the total number of individuals, and the number of impostors authenticated relative to the total number of individuals. The cardiac cycles were tested with and without dimensionality reduction, and the best performance was obtained using 60 not-normalized cycles without dimensionality reduction. For this configuration, an authentication rate of 90.48% and an impostor rate of 13.06% were obtained. It was therefore concluded that reducing the dimensionality of the cardiac cycles harms the performance of the system, since some characteristics indispensable for distinguishing between subjects are lost. For the scalograms, the best configuration, corresponding to the use of 60 normalized scalograms of Size 56, reached an authentication rate of 98.42% and an impostor rate of 14.34%. Since the dimensionality of the scalograms was reduced using ICA, the performance of the system was also evaluated when the number of independent components was reduced. The results showed that a number of components equal to the number of subjects optimizes the performance of the system, since a decrease in the authentication rate was observed when the number of components was reduced. It was thus concluded that 63 independent components are needed to correctly distinguish the 63 subjects.
    For authentication with cardiac cycles, normalization and dimensionality reduction are two processes that degrade the performance of the system, whereas, when scalograms are used, normalization is advantageous. The results also proved that, contrary to what happens in identification, using scalograms is a more efficient and effective approach for authenticating individuals than using cycles. This investigation confirmed the potential of the ECG as a biometric trait for the identification and authentication of individuals, performing a comparative analysis between different templates extracted from the ECG signals and different methodologies in the classification phase, and evaluating the performance of the system in each of the tested configurations. Previous studies presented some limitations, namely the use of "on-the-person" acquisitions, which have little potential for integration into biometric systems due to their low practicality, and classification in an "intra-session" modality, in which the classified data and the stored data were acquired in a single session. This study fills those gaps, since it used data acquired "off-the-person", tested in an "across-session" modality. Although "off-the-person" acquisitions are subject to more noise and consequently make identification and authentication processes harder, these approaches are the most suitable for biometric systems, given their possible integration into the most diverse technological applications. The "across-session" modality also results in worse performance compared with using signals from a single session. However, it makes it possible to verify the stability of the ECG over time, an indispensable factor for the proper functioning of a biometric system, since the system will have to compare, many times over, the ECG presented at the moment of identification or authentication with the ECG stored a single time in the database. Despite the good results presented in this dissertation, future work should explore databases containing more participants, with a wider age range, including participants with diverse health conditions, and with acquisitions separated by a longer period of time, in order to simulate the reality of a biometric system as closely as possible. Biometrics is a rapidly growing field with applications in personal identification and authentication. Over recent years, several studies have demonstrated the potential of the Electrocardiogram (ECG) to be used as a physiological signature for biometric systems. In this dissertation, the possibility of using the ECG signal as an unequivocal biometric trait for identification and authentication purposes has been presented. The ECG data used was from a publicly available database, the Check Your Biosignals Here initiative (CYBHi) database, developed for biometric purposes, containing records of 63 participants. Data was collected through an off-the-person approach, in two different moments, separated by three months, resulting in two acquisitions per subject. Signals from the first acquisition represent, in a biometric system, the data stored in the database, whereas signals from the second acquisition represent the data to be authenticated or identified. The proposed identification and authentication systems included several steps: signal pre-processing, signal processing, and classification.
    In the pre-processing phase, signals were filtered in order to remove noise, while the signal processing consisted of extracting and generating the biometric templates. For that, firstly, the cardiac cycles were extracted from the ECG signals, and segment elimination was performed to find the segments most similar to one another, resulting in two sets of templates, with 20 and 60 templates per participant, respectively. After that, two types of templates were generated: 1) templates based on cardiac cycles, and 2) templates based on scalograms generated from the cardiac cycles, with two different sizes, 56x56 and 224x224. Due to the large size of the scalograms, ICA was applied to reduce their dimensionality. Thus, the biometric systems were evaluated with two sets of each type of template in order to analyze the advantages of using more or fewer templates per subject, and the templates were also tested with and without normalization. For the identification system using cardiac cycles, LDA, kNN, DT, and SVM were tested as classifiers in an "across-session" modality, reaching an accuracy of 79.37% for the best model (LDA) in the best configuration (60 normalized cardiac cycles). When using scalograms, two different methodologies were tested: 1) a neural network, and 2) a distance-based algorithm. The best accuracy was 69.84% for 60 not-normalized scalograms of Size 224, using the neural network. Thus, the results suggested that templates based on cardiac cycles are a more promising approach for identification tasks. For authentication, a distance-based algorithm was used for both template types. Cardiac cycles were tested with and without dimensionality reduction, and the best configuration (60 not-normalized cardiac cycles without dimensionality reduction) reached an accuracy of 90.48% and an impostor score of 13.06%. For the scalograms, the best configuration (60 normalized scalograms of Size 56) reached an accuracy of 98.42% and an impostor score of 14.34%. Therefore, using scalograms for the authentication task proved to be a more efficient and accurate approach. The results from this work support the claim that ECG-based biometrics can be successfully used for personal identification and authentication. This study brings novelty by exploring different templates and methodologies in order to perform a comparative analysis and find the approaches that optimize the performance of the biometric system. Moreover, it represents a step forward towards a real-world application of an ECG-based biometric system, mainly due to the use of data from off-the-person acquisitions in an across-session modality.
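    As a minimal sketch of the cycle-extraction step described above, assuming a single-lead signal and an illustrative sampling rate (the CYBHi recordings have their own), the NeuroKit2-based segmentation might look like this:

```python
import numpy as np
import neurokit2 as nk

FS = 1000  # sampling rate in Hz (an assumption for this example)

# A simulated single-lead ECG stands in for a CYBHi recording here.
ecg = nk.ecg_simulate(duration=30, sampling_rate=FS)

# Locate the R peaks, then cut one template per cardiac cycle:
# 200 samples before and 400 samples after each peak, as in the thesis.
cleaned = nk.ecg_clean(ecg, sampling_rate=FS)
_, info = nk.ecg_peaks(cleaned, sampling_rate=FS)

cycles = [
    cleaned[r - 200 : r + 400]
    for r in info["ECG_R_Peaks"]
    if r >= 200 and r + 400 <= len(cleaned)  # skip truncated edge cycles
]
templates = np.vstack(cycles)  # shape: (n_cycles, 600)
print(templates.shape)
```

    The scalogram templates would then be obtained by applying a continuous wavelet transform to each row of `templates` and resizing the result to 56x56 or 224x224 before ICA-based dimensionality reduction.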

    Radial Basis Function Neural Network in Identifying The Types of Mangoes

    The mango (Mangifera indica L.) is a fruit species whose varieties differ in the color and texture characteristics that indicate their type. Under current practice, the identification of mango types is a manual process of direct visual observation, and the subjectivity of human judgment causes inconsistencies in the classification. With information technology, it is possible to classify mangoes based on their texture using a computerized system. In this work, image acquisition is carried out with a camera, and the recorded images are processed to determine patterns in the mango data: texture features are extracted from samples of various mango types using Gabor filters, and the resulting feature values are fed into an artificial neural network (ANN). The Radial Basis Function method produces weight values that are then used to classify the types of mangoes. The accuracy obtained in the test results, using these extraction and learning methods, is 100%.
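    The paper's exact pipeline and parameters are not given in this abstract; as a hedged sketch, Gabor texture features and a small Radial Basis Function network could be combined in Python as follows. The filter frequencies, the placeholder image, and the network details are illustrative assumptions.

```python
import numpy as np
from skimage.filters import gabor
from skimage.data import camera  # placeholder image; the paper uses mango photos

# --- Feature extraction: mean and variance of Gabor filter responses ---
def gabor_features(image, frequencies=(0.1, 0.2, 0.3)):
    feats = []
    for f in frequencies:  # illustrative frequencies, not the paper's values
        real, imag = gabor(image, frequency=f)
        magnitude = np.hypot(real, imag)
        feats += [magnitude.mean(), magnitude.var()]
    return np.array(feats)

print(gabor_features(camera().astype(float)))  # one 6-dim texture vector

# --- A tiny RBF network: Gaussian hidden layer + linear output weights ---
class RBFNet:
    def __init__(self, centers, sigma=1.0):
        self.centers, self.sigma = centers, sigma

    def _hidden(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centers[None, :, :], axis=2)
        return np.exp(-(d ** 2) / (2 * self.sigma ** 2))

    def fit(self, X, y_onehot):
        H = self._hidden(X)
        self.W = np.linalg.pinv(H) @ y_onehot  # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.W, axis=1)

# Toy demonstration with synthetic feature vectors for three mango types.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6)) + np.repeat(np.arange(3), 10)[:, None]
y = np.eye(3)[np.repeat(np.arange(3), 10)]
net = RBFNet(centers=X[::5], sigma=2.0).fit(X, y)
print((net.predict(X) == np.repeat(np.arange(3), 10)).mean())
```

    In practice the RBF centers would be chosen from, or clustered over, the training feature vectors of real mango images rather than sampled synthetically.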

    MeshDiffusion: Score-based Generative 3D Mesh Modeling

    We consider the task of generating realistic 3D shapes, which is useful for a variety of applications such as automatic scene generation and physical simulation. Compared to other 3D representations like voxels and point clouds, meshes are more desirable in practice, because (1) they enable easy and arbitrary manipulation of shapes for relighting and simulation, and (2) they can fully leverage the power of modern graphics pipelines which are mostly optimized for meshes. Previous scalable methods for generating meshes typically rely on sub-optimal post-processing, and they tend to produce overly-smooth or noisy surfaces without fine-grained geometric details. To overcome these shortcomings, we take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes. Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization. We demonstrate the effectiveness of our model on multiple generative tasks. Comment: Published in ICLR 2023 (Spotlight, Notable top 25%).
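    The paper's network and grid resolution are not reproduced in this abstract; as a rough sketch of training a diffusion model on a direct parametrization (assuming each mesh is already flattened into a fixed-size tensor of tetrahedral-grid deformations, and using a toy MLP where the paper uses a 3D network), one DDPM-style training step could look like this:

```python
import torch
import torch.nn as nn

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # standard DDPM noise schedule
alpha_bar = torch.cumprod(1.0 - betas, 0)

# Placeholder denoiser; MeshDiffusion uses a 3D network over the
# tetrahedral-grid parametrization rather than this toy MLP.
D = 2048  # assumed size of the flattened grid parametrization
denoiser = nn.Sequential(nn.Linear(D + 1, 256), nn.SiLU(), nn.Linear(256, D))

def training_step(x0: torch.Tensor) -> torch.Tensor:
    """One denoising-score-matching step on a batch of grid tensors x0."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a = alpha_bar[t].unsqueeze(1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise      # forward diffusion
    t_feat = (t.float() / T).unsqueeze(1)            # crude timestep embedding
    pred = denoiser(torch.cat([xt, t_feat], dim=1))  # predict the noise
    return ((pred - noise) ** 2).mean()              # epsilon-prediction loss

print(training_step(torch.randn(8, D)))
```

    Sampling then runs the learned denoiser backwards from Gaussian noise, and the resulting grid tensor is decoded back into a tetrahedral mesh.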