
    Design and Evaluation of a Hardware System for Online Signal Processing within Mobile Brain-Computer Interfaces

    Brain-Computer Interfaces (BCIs) are innovative systems that enable direct communication between the brain and external devices. These interfaces have emerged as a transformative solution not only for individuals with neurological injuries, but also for a broader range of people, encompassing both medical and non-medical applications. Historically, the challenge that neurological injuries remain static after an initial recovery phase has driven researchers to explore innovative avenues, and BCIs have been at the forefront of these efforts since the 1970s. As research has progressed, BCI applications have expanded, showing potential for a wide range of uses, including those for less severely impaired individuals (e.g. in the context of hearing aids) and completely healthy individuals (e.g. in the entertainment industry). However, the future of BCI research also depends on the availability of reliable BCI hardware that ensures real-world applicability. The CereBridge system designed and implemented in this work represents a significant leap forward in brain-computer interface technology by integrating all EEG signal acquisition and processing hardware into a single mobile system. The processing hardware architecture is centered around an FPGA with an ARM Cortex-M3 within a heterogeneous IC, ensuring flexibility and efficiency in EEG signal processing. The modular design of the system, consisting of three individual boards, ensures adaptability to different requirements. With a focus on full mobility, the complete system is mounted on the scalp, operates autonomously, requires no external interaction, and weighs approximately 56 g including the 16-channel EEG sensors. The proposed customizable dataflow concept facilitates the exploration and seamless integration of algorithms, increasing the flexibility of the system. This is further underscored by the ability to apply different algorithms to recorded EEG data to meet different application goals. High-Level Synthesis (HLS) was used to port algorithms to the FPGA, accelerating the algorithm development process and enabling rapid implementation of algorithm variants. Evaluations have shown that the CereBridge system is capable of integrating the complete signal processing chain required for various BCI applications. Furthermore, it can operate continuously for more than 31 hours on a 1800 mAh battery, making it a viable solution for long-term mobile EEG recording and real-world BCI studies. Compared to existing research platforms, the CereBridge system offers unprecedented performance and features for a mobile BCI. It not only meets the relevant requirements for a mobile BCI system, but also paves the way for the rapid transition of algorithms from the laboratory to real-world applications. In essence, this work provides a comprehensive blueprint for the development and implementation of a state-of-the-art mobile EEG-based BCI system, setting a new benchmark in BCI hardware for real-world applicability.
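
    As a rough illustration of the kind of EEG processing stage that the customizable dataflow and the HLS port are meant to host, the following Python sketch band-pass filters one block of multi-channel EEG. The sampling rate, band edges, and function names are illustrative assumptions and are not taken from the CereBridge firmware.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS_HZ = 250            # assumed EEG sampling rate (not stated in the abstract)
N_CHANNELS = 16        # matches the 16-channel sensor count mentioned above
BAND_HZ = (8.0, 30.0)  # assumed mu/beta band for a motor-imagery style BCI

def bandpass_block(eeg_block: np.ndarray, fs: float = FS_HZ,
                   band: tuple = BAND_HZ, order: int = 4) -> np.ndarray:
    """Zero-phase band-pass filter one EEG block of shape (channels, samples)."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg_block, axis=-1)

# Example: filter one second of synthetic 16-channel data
block = np.random.randn(N_CHANNELS, FS_HZ)
filtered = bandpass_block(block)
print(filtered.shape)  # (16, 250)
```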

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    The development of bioinformatics workflows to explore single-cell multi-omics data from T and B lymphocytes

    The adaptive immune response is responsible for recognising, containing and eliminating viral infection, and protecting from further reinfection. This antigen-specific response is driven by T and B cells, which recognise antigenic epitopes via highly specific heterodimeric surface receptors, termed T-cell receptors (TCRs) and B-cell receptors (BCRs). The theoretical diversity of the receptor repertoire that can be generated via recombination of V, D and J genes is large enough (>10^15 unique sequences) that virtually any antigen can be recognised. However, only a subset of these are generated within the human body, and how they succeed in specifically recognising any pathogen(s) and distinguishing these from self-proteins remains largely unresolved. The recent advances in applying single-cell genomics technologies to simultaneously measure the clonality, surface phenotype and transcriptomic signature of pathogen-specific immune cells have significantly improved understanding of these questions. Single-cell multi-omics permits the accurate identification of clonally expanded populations, their differentiation trajectories, the level of immune receptor repertoire diversity involved in the response, and the phenotypic and molecular heterogeneity. This thesis aims to develop a bioinformatic workflow utilising single-cell multi-omics data to explore, quantify and predict the clonal and transcriptomic signatures of the human T-cell response during and following viral infection. In the first aim, a web application, VDJView, was developed to facilitate the simultaneous analysis and visualisation of clonal, transcriptomic and clinical metadata of T and B cell multi-omics data. The application permits non-bioinformaticians to perform quality control and common analyses of single-cell genomics data integrated with other metadata, thus permitting the identification of biologically and clinically relevant parameters. The second aim pertains to analysing the functional, molecular and immune receptor profiles of CD8+ T cells in the acute phase of primary hepatitis C virus (HCV) infection. This analysis identified a novel population of progenitors of exhausted T cells, and lineage tracing revealed distinct trajectories with multiple fates and evolutionary plasticity. Furthermore, it was observed that a high-magnitude IFN-γ CD8+ T-cell response is associated with an increased probability of viral escape and chronic infection. Finally, in the third aim, a novel analysis is presented based on the topological characteristics of a network generated on pathogen-specific, paired-chain, CD8+ TCRs. This analysis revealed how some cross-reactivity between TCRs can be explained via the sequence similarity between TCRs, and that this property is not uniformly distributed across all pathogen-specific TCR repertoires. Strong correlations between the topological properties of the network and the biological properties of the TCR sequences were identified and highlighted. The suite of workflows and methods presented in this thesis are designed to be adaptable to various T and B cell multi-omic datasets. The associated analyses contribute to understanding the role of T and B cells in the adaptive immune response to viral infection and cancer.
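
    To give a flavour of the network-based TCR analysis described in the last aim, here is a minimal, hypothetical sketch that links CDR3 sequences whose edit distance is at most one and reports basic topological properties. The toy sequences and the distance-1 threshold are assumptions for illustration, not the thesis workflow.

```python
import itertools
import networkx as nx

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Toy CDR3 sequences (hypothetical); real input would be the pathogen-specific
# repertoire exported from the single-cell data.
cdr3 = ["CASSLGQAYEQYF", "CASSLGQSYEQYF", "CASSPGQAYEQYF", "CASRRGTDTQYF"]

g = nx.Graph()
g.add_nodes_from(cdr3)
for a, b in itertools.combinations(cdr3, 2):
    if edit_distance(a, b) <= 1:   # connect highly similar receptors
        g.add_edge(a, b)

print("degree:", dict(g.degree()))
print("clustering:", nx.clustering(g))
print("connected components:", nx.number_connected_components(g))
```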

    Backpropagation Beyond the Gradient

    Automatic differentiation is a key enabler of deep learning: previously, practitioners were limited to models for which they could manually compute derivatives. Now, they can create sophisticated models with almost no restrictions and train them using first-order, i.e. gradient, information. Popular libraries like PyTorch and TensorFlow compute this gradient efficiently, automatically, and conveniently with a single line of code. Under the hood, reverse-mode automatic differentiation, or gradient backpropagation, powers the gradient computation in these libraries. Their entire design centers around gradient backpropagation. These frameworks are specialized around one specific task: computing the average gradient in a mini-batch. This specialization often complicates the extraction of other information like higher-order statistical moments of the gradient, or higher-order derivatives like the Hessian. It limits practitioners and researchers to methods that rely on the gradient. Arguably, this hampers the field from exploring the potential of higher-order information, and there is evidence that focusing solely on the gradient has not led to significant recent advances in deep learning optimization. To advance algorithmic research and inspire novel ideas, information beyond the batch-averaged gradient must be made available at the same level of computational efficiency, automation, and convenience. This thesis presents approaches to simplify experimentation with rich information beyond the gradient by making it more readily accessible. We present an implementation of these ideas as an extension to the backpropagation procedure in PyTorch. Using this newly accessible information, we demonstrate possible use cases by (i) showing how it can inform our understanding of neural network training by building a diagnostic tool, and (ii) enabling novel methods to efficiently compute and approximate curvature information. First, we extend gradient backpropagation for sequential feedforward models to Hessian backpropagation, which enables computing approximate per-layer curvature. This perspective unifies recently proposed block-diagonal curvature approximations. Like gradient backpropagation, the computation of these second-order derivatives is modular, and therefore simple to automate and extend to new operations. Based on the insight that rich information beyond the gradient can be computed efficiently and at the same time, we extend the backpropagation in PyTorch with the BackPACK library. It provides efficient and convenient access to statistical moments of the gradient and approximate curvature information, often at a small overhead compared to computing just the gradient. Next, we showcase the utility of such information to better understand neural network training. We build the Cockpit library that visualizes what is happening inside the model during training through various instruments that rely on BackPACK's statistics. We show how Cockpit provides a meaningful statistical summary report to the deep learning engineer to identify bugs in their machine learning pipeline, guide hyperparameter tuning, and study deep learning phenomena. Finally, we use BackPACK's extended automatic differentiation functionality to develop ViViT, an approach to efficiently compute curvature information, in particular curvature noise. It uses the low-rank structure of the generalized Gauss-Newton approximation to the Hessian and addresses shortcomings in existing curvature approximations. Through monitoring curvature noise, we demonstrate how ViViT's information helps in understanding the challenges of making second-order optimization methods work in practice. This work develops new tools to experiment more easily with higher-order information in complex deep learning models. These tools have impacted works on Bayesian applications with Laplace approximations, out-of-distribution generalization, differential privacy, and the design of automatic differentiation systems. They constitute one important step towards developing and establishing more efficient deep learning algorithms.
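
    As a concrete illustration of the extended backpropagation described above, the sketch below uses BackPACK to obtain per-sample gradients and a diagonal generalized Gauss-Newton approximation alongside the usual gradient. The extension and attribute names are recalled from BackPACK's documentation and may differ slightly across versions; the tiny model and data are placeholders.

```python
import torch
from backpack import backpack, extend
from backpack.extensions import BatchGrad, DiagGGNExact

# Toy model and loss (placeholders); extend() tells BackPACK to track these modules.
model = extend(torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                                   torch.nn.Linear(32, 3)))
loss_fn = extend(torch.nn.CrossEntropyLoss())

X = torch.randn(64, 10)
y = torch.randint(0, 3, (64,))

loss = loss_fn(model(X), y)
with backpack(BatchGrad(), DiagGGNExact()):
    loss.backward()   # one backward pass fills the extra quantities

for name, p in model.named_parameters():
    print(name, p.grad.shape)            # standard mini-batch gradient
    print(name, p.grad_batch.shape)      # individual gradients, one per sample
    print(name, p.diag_ggn_exact.shape)  # diagonal of the generalized Gauss-Newton
```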

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Simultaneous Multiparametric and Multidimensional Cardiovascular Magnetic Resonance Imaging

    No abstract available

    Blueprint of a Molecular Spin Quantum Processor

    The implementation of a universal quantum processor still poses fundamental issues related to error mitigation and correction, which demand the investigation of platforms and computing schemes alternative to the mainstream. A possibility is offered by employing multi-level logical units (qudits), naturally provided by molecular spins. Here we present the blueprint of a Molecular Spin Quantum Processor consisting of single Molecular Nanomagnets, acting as qudits, placed within superconducting resonators adapted to the size and interactions of these molecules to achieve strong single-spin-to-photon coupling. We show how to implement a universal set of gates in such a platform and how to read out the final qudit state. Single-qudit unitaries (potentially embedding multiple qubits) are implemented by fast classical drives, while a novel scheme is introduced to obtain two-qubit gates via resonant photon exchange. The latter is compared to the dispersive approach and is found to give a significant improvement in general. The performance of the platform is assessed by realistic numerical simulations of gate sequences, such as Deutsch-Jozsa and quantum simulation algorithms. The very good results demonstrate the feasibility of the molecular route towards a universal quantum processor. Comment: 16 pages, 11 figures. Accepted in Physical Review Applied.
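
    For intuition on what a single-qudit unitary driven by a resonant classical pulse looks like, the following toy numpy sketch rotates one two-level transition inside a d = 4 qudit. The simplified rotating-frame Hamiltonian, the drive amplitude, and the pulse duration are illustrative assumptions, not the paper's molecular-spin model.

```python
import numpy as np
from scipy.linalg import expm

d = 4                      # qudit dimension (e.g. four spin levels)
m = 1                      # address the |1> <-> |2> transition
omega = 2 * np.pi * 1e6    # assumed Rabi frequency of the classical drive (rad/s)

# Rotating-frame Hamiltonian of a resonant drive acting on one transition only
H = np.zeros((d, d), dtype=complex)
H[m, m + 1] = omega / 2
H[m + 1, m] = omega / 2

# Pulse length for a pi rotation: omega * t = pi
t_pi = np.pi / omega
U = expm(-1j * H * t_pi)

psi0 = np.zeros(d, dtype=complex)
psi0[m] = 1.0              # start in level |1>
psi1 = U @ psi0
print(np.round(np.abs(psi1) ** 2, 3))  # population transferred to level |2>
```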

    In-line quality control for Zero Defect Manufacturing: design, development and uncertainty analysis of vision-based instruments for dimensional measurements at different scales

    The purpose of this industrial PhD project, financed through a scholarship from the Regione Marche, was to develop research with potential impact on an industrial sector, to promote the involvement of local factories and companies in research and innovation performed jointly with the university, and to produce research in line with the needs of the industrial environment, not only at the regional level. Hence, through collaboration with a local turning factory (Zannini SpA) and a small high-tech company focused on introducing mechatronic innovation in the turning sector (Z4Tec srl), and also thanks to an international collaboration with the University of Antwerp, we designed and developed new instruments for in-line quality control based on non-contact, specifically electro-optical, technologies. The work also draws attention to the importance of taking uncertainty into consideration, since it is pivotal in the data-based decision making that underpins a Zero Defect Manufacturing strategy: poor measurement quality can compromise the quality of the data. In particular, this work presents two measurement instruments that were designed and developed for in-line quality control, and the uncertainty of each instrument was evaluated and analyzed in comparison with instruments already on the market. In the last part of this work, the uncertainty of a hand-held laser-line triangulation profilometer is estimated. The research conducted in this thesis can therefore be organized into two main objectives: the development of new vision-based dimensional measurement systems to be implemented in the production line, and the uncertainty analysis of these measurement instruments. For the first objective we focused on two types of dimensional measurements imposed by the manufacturing industry: macroscopic (measuring dimensions in mm) and microscopic (measuring roughness in µm). For macroscopic measurements the target was the in-production dimensional quality control of turned parts through telecentric optical profilometry. The sample to be inspected was placed between the illuminator and the objective in order to obtain the projection of the shadow of the sample over a white background. Dimensional measurements were then performed by means of image processing on the acquired image. We discuss the mechanical arrangements aimed at optimizing the acquired images, as well as the main issues that eventual mechanical misalignments of components might introduce into the image quality. For microscopic measurements we designed a backlit vision-based surface roughness measurement system with a focus on smart behaviors, such as determining the optimal imaging conditions using the modulation transfer function and the use of an electrically tunable lens. A turned sample (a cylinder) is placed in front of a camera and is backlit by a collimated light source; this optical configuration provides an image of the edge of the sample. A set of turned steel samples with different surface roughness was used to test the sensitivity of the measurement system. For the second objective, the measurement uncertainty evaluation techniques used in this work were a Type A statistical uncertainty analysis and a Gage R&R analysis. In the case of the telecentric profilometer, the analysis was performed in comparison with other on-the-market devices using a Type A analysis and a Gage R&R analysis. The measurement uncertainty of the profilometer proved sufficient to obtain results within the required tolerance interval. For the backlit vision system, the results were compared with other state-of-the-art instruments using a Type A analysis. The comparison showed that the performance of the backlit instrument depends on the surface roughness values considered: while at larger roughness values the offset increases, at lower roughness values the results are compatible with those of the reference (stylus-based) instrument. Lastly, the repeatability and reproducibility of a laser-line triangulation profilometer were assessed through a Gage R&R study. Each measuring point was inspected by three different operators, and the data set was first processed with a Type A uncertainty analysis; a Gage R&R study then helped investigate the repeatability, reproducibility and variability of the system. This analysis showed that the presented laser-line triangulation system has an acceptable uncertainty.
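
    The following sketch shows, under simplifying assumptions, the two uncertainty evaluations mentioned above: a GUM Type A standard uncertainty of the mean for repeated readings, and a single-part variance split between repeatability (within operator) and reproducibility (between operators). A full Gage R&R study would also include part-to-part variation, which is omitted here, and all numbers are placeholders.

```python
import numpy as np

# Type A evaluation: repeated readings of the same diameter (placeholder values, mm)
readings = np.array([12.003, 12.001, 12.004, 12.002, 12.003, 12.002])
mean = readings.mean()
s = readings.std(ddof=1)                 # sample standard deviation
u_type_a = s / np.sqrt(readings.size)    # standard uncertainty of the mean
print(f"mean = {mean:.4f} mm, u_A = {u_type_a:.4f} mm")

# Simplified single-part Gage R&R: 3 operators x 5 repeats (placeholder values, mm)
data = np.array([
    [12.003, 12.002, 12.004, 12.003, 12.002],   # operator 1
    [12.005, 12.004, 12.006, 12.005, 12.004],   # operator 2
    [12.002, 12.003, 12.002, 12.001, 12.003],   # operator 3
])
n_ops, n_rep = data.shape
ms_within = data.var(axis=1, ddof=1).mean()              # repeatability mean square
ms_between = n_rep * data.mean(axis=1).var(ddof=1)       # operator mean square
var_repeat = ms_within
var_reprod = max((ms_between - ms_within) / n_rep, 0.0)  # ANOVA estimate, floored at 0
grr = np.sqrt(var_repeat + var_reprod)
print(f"repeatability = {np.sqrt(var_repeat):.4f} mm, "
      f"reproducibility = {np.sqrt(var_reprod):.4f} mm, GRR = {grr:.4f} mm")
```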

    Real-time Adaptive Detection and Recovery against Sensor Attacks in Cyber-physical Systems

    Cyber-physical systems (CPSs) utilize computation to control physical objects in real-world environments, and an increasing number of CPS-based applications have been designed for life-critical purposes. Sensor attacks, which manipulate sensor readings to deceive CPSs into performing dangerous actions, can result in severe consequences. This urgent threat has motivated significant research into reactive defense. In this dissertation, we present an adaptive detection method capable of identifying sensor attacks before the system reaches unsafe states. Once the attacks are detected, a recovery approach that we propose can guide the physical plant to a desired safe state before a safety deadline. Existing detection approaches tend to minimize detection delay and false alarms simultaneously, despite a clear trade-off between these two metrics. We argue that attack detection should dynamically balance these metrics according to the physical system's current state. In line with this argument, we propose an adaptive sensor attack detection system comprising three components: an adaptive detector, a detection deadline estimator, and a data logger. This system can adapt the detection delay, and thus the false alarms, in real time to meet a varying detection deadline, thereby improving usability. We implement our detection system and validate it using multiple CPS simulators and a reduced-scale autonomous vehicle testbed. After identifying sensor attacks, it is essential to extend the benefits of attack detection. In this dissertation, we investigate how to eliminate the impact of these attacks and propose novel real-time recovery methods for securing CPSs. Initially, we target sensor attack recovery in linear CPSs. By employing formal methods, we are able to reconstruct state estimates and calculate a conservative safety deadline. With these constraints, we formulate the recovery problem as either a linear programming or a quadratic programming problem. By solving this problem, we obtain a recovery control sequence that can smoothly steer a physical system back to a target state set before the safety deadline and maintain the system state within the set once reached. Subsequently, to make recovery practical for complex CPSs, we adapt our recovery method to nonlinear systems and explore the use of uncorrupted sensors to alleviate uncertainty accumulation. Ultimately, we implement our approach and showcase its effectiveness and efficiency through an extensive set of experiments. For linear CPSs, we evaluate the approach using 5 CPS simulators and 3 types of sensor attacks. For nonlinear CPSs, we assess our method on 3 nonlinear benchmarks.
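
    To make the linear-programming formulation of recovery more concrete, here is a minimal sketch that steers a linear plant into a target safe box before a deadline while minimizing L1 control effort, solved with scipy's generic LP solver. The double-integrator plant, horizon, state estimate, and box bounds are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete-time double integrator (hypothetical plant), dt = 0.1 s
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

x0 = np.array([5.0, 2.0])             # state estimate reconstructed after the attack (assumed)
N = 60                                 # recovery horizon (steps before the safety deadline)
u_max = 2.0                            # actuator limit
target_lo = np.array([-0.5, -0.5])     # target safe set: a box around the origin
target_hi = np.array([0.5, 0.5])

# Reachability matrices: x_N = A^N x0 + sum_k A^(N-1-k) B u_k
Apow = [np.linalg.matrix_power(A, k) for k in range(N + 1)]
G = np.hstack([Apow[N - 1 - k] @ B for k in range(N)])   # shape (2, N)
x_free = Apow[N] @ x0                                     # zero-input response

# Decision vector z = [u_0..u_{N-1}, t_0..t_{N-1}]; minimize sum(t) (L1 control effort)
c = np.concatenate([np.zeros(N), np.ones(N)])

rows, rhs = [], []
# |u_k| <= t_k  ->  u_k - t_k <= 0  and  -u_k - t_k <= 0
for k in range(N):
    r = np.zeros(2 * N); r[k] = 1.0;  r[N + k] = -1.0; rows.append(r); rhs.append(0.0)
    r = np.zeros(2 * N); r[k] = -1.0; r[N + k] = -1.0; rows.append(r); rhs.append(0.0)
# Terminal state inside the target box:  lo <= x_free + G u <= hi
for i in range(2):
    r = np.zeros(2 * N); r[:N] = G[i];  rows.append(r); rhs.append(target_hi[i] - x_free[i])
    r = np.zeros(2 * N); r[:N] = -G[i]; rows.append(r); rhs.append(x_free[i] - target_lo[i])

bounds = [(-u_max, u_max)] * N + [(0, None)] * N
res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=bounds)
u_recovery = res.x[:N] if res.success else None
print("recovery sequence found:", res.success)
```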