228 research outputs found

    Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications

    Get PDF
    With the advent of dedicated Deep Learning (DL) accelerators and neuromorphic processors, new opportunities are emerging for applying deep and Spiking Neural Network (SNN) algorithms to healthcare and biomedical applications at the edge. This can facilitate the advancement of medical Internet of Things (IoT) systems and Point of Care (PoC) devices. In this paper, we provide a tutorial describing how various technologies, ranging from emerging memristive devices to established Field Programmable Gate Arrays (FPGAs) and mature Complementary Metal Oxide Semiconductor (CMOS) technology, can be used to develop efficient DL accelerators for a wide variety of diagnostic, pattern recognition, and signal processing problems in healthcare. Furthermore, we explore how spiking neuromorphic processors can complement their DL counterparts for processing biomedical signals. After providing the required background, we unify the sparsely distributed research on neural network and neuromorphic hardware implementations as applied to the healthcare domain. In addition, we benchmark various hardware platforms on a biomedical electromyography (EMG) signal processing task and compare them in terms of inference delay and energy. Finally, we provide our analysis of the field and share a perspective on the advantages, disadvantages, challenges, and opportunities that different accelerators and neuromorphic processors introduce to the healthcare and biomedical domains. This paper can serve a broad audience, from nanoelectronics researchers to biomedical and healthcare practitioners, in grasping the fundamental interplay between hardware, algorithms, and clinical adoption of these tools, as we shed light on the future of deep networks and spiking neuromorphic processing systems as drivers of biomedical circuits and systems. Comment: Submitted to IEEE Transactions on Biomedical Circuits and Systems (21 pages, 10 figures, 5 tables).
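
    As a rough, self-contained illustration of the kind of measurement behind the EMG benchmark described above, the sketch below times the inference delay of a tiny fully connected network over pre-extracted feature windows in Python. It is a hypothetical stand-in, not the paper's benchmark pipeline: the feature dimension, network size, and synthetic data are assumptions, and energy measurement (which needs platform-specific instrumentation) is not shown.

        # Hypothetical sketch: timing per-window inference delay of a tiny
        # dense network on EMG-like feature vectors, the kind of quantity
        # compared across hardware platforms.
        import time
        import numpy as np

        rng = np.random.default_rng(0)
        n_windows, n_features, n_hidden, n_classes = 1000, 16, 32, 5

        # Pre-extracted features for 1000 EMG windows (synthetic stand-in).
        X = rng.standard_normal((n_windows, n_features)).astype(np.float32)
        W1 = rng.standard_normal((n_features, n_hidden)).astype(np.float32)
        W2 = rng.standard_normal((n_hidden, n_classes)).astype(np.float32)

        def infer(x):
            h = np.maximum(x @ W1, 0.0)        # ReLU hidden layer
            return (h @ W2).argmax(axis=-1)    # predicted gesture class

        start = time.perf_counter()
        for x in X:                            # one window at a time, as at the edge
            infer(x[None, :])
        elapsed = time.perf_counter() - start
        print(f"mean inference delay: {elapsed / n_windows * 1e6:.1f} microseconds per window")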

    Cyber-Physical Embedded Systems with Transient Supervisory Command and Control: A Framework for Validating Safety Response in Automated Collision Avoidance Systems

    Get PDF
    The ability to design and engineer complex and dynamical Cyber-Physical Systems (CPS) requires a systematic view, including a definition of the intended level of automation for the system. Since CPS covers a diverse range of systemized implementations of smart and intelligent technologies networked within a system of systems (SoS), the terms “smart” and “intelligent” are frequently used to describe systems that perform complex operations with a reduced need for a human agent. The difference between this research and most published work on CPS is that most other research focuses on the performance of the CPS rather than on the correctness of its design. When both human and machine agency are used at different levels of automation, or autonomy, the chosen level of automation has profound implications for the reliability and safety of the CPS. The human agent and the machine agent are locked together in decision-making, using both feedforward and feedback information flows in similar processes, so a transient shift in the level of automation while the CPS is operating can have undesired consequences. As CPS become more common, and higher levels of autonomy are embedded within them, the relationship between human agent and machine agent becomes more complex, and the testing methodologies for verification and validation of performance and correctness likewise become more complex and less clear. A framework is therefore developed to help the practitioner understand the difficulties and pitfalls of CPS design and to provide guidance for the test engineering of soft computational systems using combinations of modeling, simulation, and prototyping.

    Synaptic Learning for Neuromorphic Vision - Processing Address Events with Spiking Neural Networks

    Get PDF
    The brain outperforms conventional computer architectures in terms of energy efficiency, robustness, and adaptability. These aspects are also important for emerging technologies. It is therefore worth investigating which biological processes enable the brain to compute and how they can be realized in silicon. Drawing inspiration from how the brain performs computations requires a paradigm shift compared to conventional computer architectures. Indeed, the brain consists of nerve cells, called neurons, that are connected via synapses and form self-organized networks. Neurons and synapses are complex dynamical systems governed by biochemical and electrical reactions, and as a consequence they can base their computations only on local information. In addition, neurons communicate with each other using short electrical pulses, so-called spikes, that travel across synapses. Computational neuroscientists attempt to model these computations with spiking neural networks. When implemented on dedicated neuromorphic hardware, spiking neural networks can, like the brain, perform fast, energy-efficient computations. Until recently, the benefits of this technology were limited by the lack of functional methods for programming spiking neural networks. Learning is a paradigm for programming spiking neural networks in which neurons organize themselves into functional networks. As in the brain, learning in neuromorphic hardware is based on synaptic plasticity. Synaptic plasticity rules characterize weight updates in terms of information available locally at the synapse. Learning therefore happens continuously and online while sensory input is streamed into the network. Conventional deep neural networks are usually trained by gradient descent; however, the constraints imposed by the biological learning dynamics prevent the use of conventional backpropagation to compute the gradients. For example, continuous updates hinder the synchronous alternation between forward and backward phases. Moreover, memory constraints prevent the history of neural activity from being stored in the neuron, so that methods such as backpropagation-through-time are not possible. Novel solutions to these problems were proposed by computational neuroscientists within the time frame of this thesis. In this thesis, spiking neural networks are developed to solve visuomotor tasks in neurorobotics. Indeed, biological neural networks originally evolved to control the body; robotics thus provides the artificial body for the artificial brain. On the one hand, this work contributes to current efforts to understand the brain by providing challenging closed-loop benchmarks, similar to what the biological brain faces. On the other hand, new ways of solving traditional robotics problems are presented, based on brain-inspired paradigms. The research is carried out in two steps. First, promising synaptic plasticity rules are identified and compared on real-world event-based vision benchmarks. Second, novel methods for mapping visual representations to motor commands are presented. Neuromorphic vision sensors represent an important step toward brain-inspired paradigms. In contrast to conventional cameras, these sensors emit address events that correspond to local changes in light intensity. The event-based paradigm enables energy-efficient and fast visual processing, but requires the derivation of new asynchronous algorithms. Spiking neural networks represent a subset of asynchronous algorithms that are inspired by the brain and suitable for neuromorphic hardware technology. In close collaboration with computational neuroscientists, successful methods for learning spatio-temporal abstractions from the address-event representation are reported. It is shown that top-down synaptic plasticity rules, derived to optimize an objective function, outperform bottom-up rules based solely on observations of the brain. With this insight, a new synaptic plasticity rule called "Deep Continuous Local Learning" is introduced, which currently achieves the state of the art on event-based vision benchmarks. This rule was jointly derived, implemented, and evaluated during a research stay at the University of California, Irvine. In the second part of this thesis, the visuomotor loop is closed by mapping the learned visual representations to motor commands. Three approaches to obtaining a visuomotor mapping are discussed: manual coupling, reward coupling, and minimization of the prediction error. It is shown how these approaches, implemented as synaptic plasticity rules, can be used to learn simple policies and movements. This work paves the way toward integrating brain-inspired computational paradigms into the field of robotics. It is even predicted that advances in neuromorphic technologies and plasticity rules will enable the development of high-performance, low-power learning robots.
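
    To make the notion of "local" plasticity in the abstract above more concrete, the following toy sketch simulates a single leaky integrate-and-fire neuron whose input weights are updated using only quantities available at each synapse (a low-pass pre-synaptic trace and the post-synaptic spike). This is a generic trace-based Hebbian rule for intuition only; it is not the Deep Continuous Local Learning rule from the thesis, and the time constants, input statistics, and learning rate are assumptions.

        # Toy local learning rule (not the thesis' DECOLLE rule): a leaky
        # integrate-and-fire neuron with a trace-based Hebbian weight update.
        import numpy as np

        rng = np.random.default_rng(1)
        n_in, T, dt = 50, 1000, 1e-3           # inputs, time steps, step size [s]
        tau_mem, tau_trace = 20e-3, 20e-3      # membrane / trace time constants
        v_th, lr = 1.0, 1e-3                   # spike threshold, learning rate

        w = rng.normal(0.0, 0.1, n_in)         # synaptic weights
        v = 0.0                                # membrane potential
        pre_trace = np.zeros(n_in)             # low-pass filtered pre-synaptic spikes

        for t in range(T):
            pre_spikes = (rng.random(n_in) < 0.02).astype(float)  # random input spikes
            pre_trace += dt / tau_trace * (-pre_trace) + pre_spikes
            v += dt / tau_mem * (-v) + w @ pre_spikes
            post_spike = float(v >= v_th)
            if post_spike:
                v = 0.0                        # reset after an output spike
            # Local update: strengthen synapses whose pre-trace is high when
            # the neuron fires; a small decay keeps the weights bounded.
            w += lr * (post_spike * pre_trace - 0.01 * w)

        print("mean weight after learning:", w.mean())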

    Architectural artificial intelligence: exploring and developing strategies, tools, and pedagogies toward the integration of deep learning in the architectural profession

    Full text link
    The incessant drive for data collection is a trend born from the basic promise of data: “save everything you can, and someday you’ll be able to figure out some use for it all” (Schneier 2016, p. 40). However, this has manifested as a plague of information overload, where “it would simply be impossible for humans to deal with all of this data” (Davenport 2014, p. 151). This is especially true within the field of architecture, where designers are tasked with leveraging all available sources of information to compose an informed solution. Too often, “the average designer scans whatever information [they] happen on, […] and introduces this randomly selected information into forms otherwise dreamt up in the artist’s studio of mind” (Alexander 1964, p. 4). As data accumulates, less the “oil” and more the “exhaust of the information age” (Schneier 2016, p. 20), we are rapidly approaching a point where even the programmers enlisted to automate its handling are inadequate. Yet, as the size of data warehouses increases, so too does the available computational power, along with the invention of clever algorithms to negotiate it. Deep learning is an exemplar. A subset of artificial intelligence, deep learning is a collection of algorithms inspired by the brain, capable of automated self-improvement, or “learning”, through observations of large quantities of data. In recent years, the rise in computational power and access to these immense databases have fostered the proliferation of deep learning to almost all fields of endeavour. The application of deep learning in architecture not only has the potential to resolve the issue of rising complexity, but also to introduce a plethora of new tools at the architect’s disposal, such as computer vision, natural language processing, and recommendation systems. Already, we are starting to see its impact on the field of architecture. This raises the following questions: what is the current state of deep learning adoption in architecture, how can one better facilitate its integration, and what are the implications of doing so? This research aims to answer those questions through an exploration of strategies, tools, and pedagogies for the integration of deep learning in the architectural profession.

    The selection and evaluation of a sensory technology for interaction in a warehouse environment

    Get PDF
    In recent years, Human-Computer Interaction (HCI) has become a significant part of modern life, as it has improved human performance in completing daily tasks with computerised systems. The increase in the variety of bio-sensing and wearable technologies on the market has propelled designers towards more efficient, effective and fully natural User Interfaces (UIs), such as the Brain-Computer Interface (BCI) and the Muscle-Computer Interface (MCI). BCI and MCI have been used for various purposes, such as controlling wheelchairs, piloting drones, providing alphanumeric inputs into a system and improving sports performance. Workers in a warehouse environment experience various challenges. Because they often have to carry objects (referred to as hands-full), it is difficult for them to interact with traditional devices. Noise undeniably exists in some industrial environments and is known as a major factor causing communication problems; this has reduced the popularity of verbal interfaces to computer applications such as Warehouse Management Systems. Another factor that affects the performance of workers is action slips caused by a lack of concentration during, for example, routine picking activities. These can have a negative impact on job performance and lead a worker to execute a task incorrectly in the warehouse. This research project investigated the challenges workers currently experience in a warehouse environment and the technologies utilised in this environment. The latest automation and identification systems and technologies are identified and discussed, specifically those that have addressed known problems. Sensory technologies were identified that enable interaction between a human and a computerised warehouse environment. Biological and natural human behaviours applicable to interaction with a computerised environment were described and discussed. The interactive behaviours included vision, audition, speech production and physiological movement; other natural human behaviours, such as paying attention, action slips and counting items, were also investigated. A number of modern sensory technologies, devices and techniques for HCI were identified with the aim of selecting and evaluating an appropriate sensory technology for MCI. MCI technologies enable a computer system to recognise hand and other gestures of a user, creating a means of direct interaction between a user and a computer, as they are able to detect specific features extracted from a particular biological or physiological activity. Machine Learning (ML) is then applied to train a computer system to detect these features and convert them into a computer interface. An application of biomedical signals (bio-signals) in HCI using a MYO Armband for MCI is presented. An MCI prototype (MCIp) was developed and implemented to allow a user to provide input to an HCI in both hands-free and hands-full situations. The MCIp was designed and developed to recognise the hand-finger gestures of a person when both hands are free or when holding an object, such as a cardboard box. The MCIp applies an Artificial Neural Network (ANN) to classify features extracted from the surface electromyography signals acquired by the MYO Armband around the forearm muscles. Using the ANN, the MCIp achieved a gesture-recognition classification accuracy of 34.87% in the hands-free situation. The MCIp furthermore enabled users to provide numeric inputs to the system hands-full with an accuracy of 59.7% after a training session of only 10 seconds per gesture. The results were obtained with eight participants. Similar experimentation with the MYO Armband has not been found in the literature at the time of submission of this document. Based on this novel experimentation, the main contribution of this research study is the suggestion that the MYO Armband, as a commercially available muscle-sensing device, has potential as an MCI for recognising finger gestures both hands-free and hands-full. An accurate MCI can increase the efficiency and effectiveness of an HCI tool when applied to different applications in a warehouse where noise and hands-full activities pose a challenge. Future work to improve its accuracy is proposed.
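
    The exact feature set and network configuration of the MCIp are described in the thesis itself; the sketch below is only a rough, hypothetical approximation of such a pipeline: 8-channel sEMG windows (as a MYO Armband would provide), per-channel root-mean-square features, and a small feed-forward ANN trained with scikit-learn. The window length, feature choice, network size, and the synthetic data are assumptions.

        # Rough sketch (assumptions throughout) of an sEMG gesture classifier
        # in the spirit of the MCIp: 8-channel windows, per-channel RMS
        # features, and a small feed-forward neural network.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n_windows, win_len, n_channels, n_gestures = 300, 200, 8, 10

        # Synthetic stand-in for recorded sEMG windows and their gesture labels.
        emg = rng.standard_normal((n_windows, win_len, n_channels))
        labels = rng.integers(0, n_gestures, size=n_windows)

        # Per-channel root-mean-square amplitude as a simple time-domain feature.
        rms = np.sqrt(np.mean(emg ** 2, axis=1))          # shape (n_windows, 8)

        clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=1000, random_state=0)
        clf.fit(rms[:240], labels[:240])
        print("validation accuracy:", clf.score(rms[240:], labels[240:]))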

    Multiobjective global surrogate modeling, dealing with the 5-percent problem

    Get PDF
    When dealing with computationally expensive simulation codes or process measurement data, surrogate modeling methods are firmly established as facilitators for design space exploration, sensitivity analysis, visualization, prototyping and optimization. Typically, the model parameter (hyperparameter) optimization problem that is part of global surrogate modeling is formulated in a single-objective way: models are generated according to a single objective (accuracy). However, this requires an engineer to determine a single accuracy target and measure up front, which is hard to do if the behavior of the response is unknown. Likewise, the different outputs of a multi-output system are typically modeled separately by independent models. Here again, a multiobjective approach would benefit the domain expert by giving information about output correlation and by enabling automatic model type selection for each output dynamically. With this paper the authors attempt to increase awareness of the subtleties involved and discuss a number of solutions and applications. In particular, we present a multiobjective framework for global surrogate model generation that helps tackle both problems and that is applicable in both the static and the sequential design (adaptive sampling) case.
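
    To illustrate the multiobjective idea in the abstract, the sketch below scores a few candidate surrogate model types on two error objectives and keeps the Pareto-optimal set instead of committing to a single accuracy target. It is a minimal sketch under assumed choices (the toy response, the candidate models, and the two objectives, RMSE and maximum absolute error), not the framework presented in the paper.

        # Illustrative sketch: score candidate surrogate model types on two
        # error objectives and keep the Pareto-optimal (non-dominated) set.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.neural_network import MLPRegressor
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(60, 1))
        y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(60)   # toy "expensive" response
        X_test = np.linspace(-3, 3, 200)[:, None]
        y_test = np.sin(2 * X_test[:, 0])

        candidates = {
            "GP": GaussianProcessRegressor(),
            "MLP": MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000, random_state=0),
            "RF": RandomForestRegressor(random_state=0),
        }

        scores = {}
        for name, model in candidates.items():
            pred = model.fit(X, y).predict(X_test)
            rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
            max_err = float(np.max(np.abs(pred - y_test)))
            scores[name] = (rmse, max_err)

        def pareto(items):
            # Keep models not dominated on both objectives (lower is better).
            return [a for a in items
                    if not any(b[1][0] <= a[1][0] and b[1][1] <= a[1][1] and b != a
                               for b in items)]

        print("objective scores:", scores)
        print("Pareto-optimal surrogates:", [name for name, _ in pareto(list(scores.items()))])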

    Detection of organs in CT images using Neural Networks

    Get PDF
    This thesis contains research on the field of medical imaging, classical methods of image segmentation, computed tomography, and convolutional neural networks. The practical part involves the implementation of a 3D UNet architecture for segmentation of the spine and of individual vertebrae from CT scans. Furthermore, this architecture is compared to its 2D counterpart.
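
    As a generic illustration of how a 3D U-Net-style network processes CT volumes, the sketch below builds a shallow encoder-decoder with 3D convolutions and a skip connection in PyTorch and checks its output shape on a random sub-volume. Channel counts, depth, and input size are assumptions; this is not the thesis' implementation.

        # Generic sketch of a shallow 3D U-Net-style network for volumetric
        # (CT) segmentation, showing 3D convolutions plus a skip connection.
        import torch
        import torch.nn as nn

        def conv_block(in_ch, out_ch):
            return nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )

        class TinyUNet3D(nn.Module):
            def __init__(self, in_ch=1, n_classes=2):
                super().__init__()
                self.enc1 = conv_block(in_ch, 16)
                self.enc2 = conv_block(16, 32)
                self.pool = nn.MaxPool3d(2)
                self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
                self.dec1 = conv_block(32, 16)          # 16 upsampled + 16 skip channels
                self.head = nn.Conv3d(16, n_classes, kernel_size=1)

            def forward(self, x):
                e1 = self.enc1(x)                        # full-resolution features
                e2 = self.enc2(self.pool(e1))            # downsampled features
                d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
                return self.head(d1)                     # per-voxel class logits

        # Shape check on a small random CT sub-volume: (batch, channel, D, H, W).
        logits = TinyUNet3D()(torch.randn(1, 1, 32, 64, 64))
        print(logits.shape)   # torch.Size([1, 2, 32, 64, 64])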

    A Systematic Review of Deep Learning Approaches to Educational Data Mining

    Get PDF
    Educational Data Mining (EDM) is a research field that focuses on the application of data mining, machine learning, and statistical methods to detect patterns in large collections of educational data. Different machine learning techniques have been applied in this field over the years, but it is only recently that Deep Learning has gained increasing attention in the educational domain. Deep Learning is a machine learning method based on neural network architectures with multiple layers of processing units, which has been successfully applied to a broad set of problems in the areas of image recognition and natural language processing. This paper surveys the research carried out on Deep Learning techniques applied to EDM, from its origins to the present day. The main goals of this study are to identify the EDM tasks that have benefited from Deep Learning and those that remain to be explored, to describe the main datasets used, to provide an overview of the key concepts, main architectures, and configurations of Deep Learning and its applications to EDM, and to discuss the current state of the art and future directions in this area of research.

    A framework for automation of data recording, modelling, and optimal statistical control of production lines

    Get PDF
    Unarguably, the automation of data collection and subsequent statistical treatment enhance the quality of industrial management systems. The rise of accessible digital technologies has enabled the introduction of the Industry 4.0 pillars in local companies of the Cariri region. In particular, such practice contributes positively to the triple bottom line of sustainable development: People, Environment, and Economy. The present work aims to provide a general automated framework for data recording and statistical control of conveyor belts in production lines. The software has been developed in separate layers: a graphical user interface in PHP; database collection, search, and safeguarding in MySQL; computational statistics in R; and hardware control in C. The computational statistics are based on the combination of artificial neural networks and autoregressive integrated moving average (ARIMA) models via the minimum-variance method. The hardware components are composed of open-source hardware, such as Arduino-based boards, and modular or industrial sensors. Specifically, the embedded system is designed to constantly monitor and record a number of measurable characteristics of the conveyor belts (e.g. electric consumption and temperature) via a number of sensors, allowing both the computation of statistical control metrics and the evaluation of the quality of the production system. As a case study, the project makes use of a laminated limestone production line located at the Mineral Technology Center, Nova Olinda, Ceará state, Brazil.
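
    The combination of neural networks and ARIMA models via the minimum-variance method can be sketched as follows, here in Python for illustration even though the framework's statistics layer is written in R. The series, model orders, lag window, and the simplification of neglecting the covariance between the two models' errors are all assumptions; the weights are simply taken inversely proportional to each model's in-sample residual variance.

        # Sketch (assumptions throughout): blend ARIMA and neural-net
        # predictions of a sensor series with minimum-variance weights.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        t = np.arange(400)
        series = 10 + 0.5 * np.sin(t / 20) + 0.2 * rng.standard_normal(400)  # e.g. temperature
        train, test = series[:350], series[350:]

        # Model 1: ARIMA forecasts over the held-out horizon.
        arima_fit = ARIMA(train, order=(2, 0, 1)).fit()
        arima_pred = arima_fit.forecast(steps=len(test))

        # Model 2: neural-net one-step predictions from lagged observed values
        # (lag window of 5 is an assumption).
        lags = 5
        X = np.array([series[i - lags:i] for i in range(lags, 350)])
        y = series[lags:350]
        nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0).fit(X, y)
        X_test = np.array([series[i - lags:i] for i in range(350, 400)])
        nn_pred = nn.predict(X_test)

        # Minimum-variance weights from in-sample residual variances
        # (error covariance between the two models neglected).
        var_arima = np.var(train - arima_fit.fittedvalues)
        var_nn = np.var(y - nn.predict(X))
        w = var_nn / (var_arima + var_nn)          # weight on the ARIMA forecast
        combined = w * arima_pred + (1 - w) * nn_pred
        print("combined forecast RMSE:", np.sqrt(np.mean((combined - test) ** 2)))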