
    Design and Evaluation of a Hardware System for Online Signal Processing within Mobile Brain-Computer Interfaces

    Brain-Computer Interfaces (BCIs) are innovative systems that enable direct communication between the brain and external devices. These interfaces have emerged as a transformative solution not only for individuals with neurological injuries but also for a broader range of users, encompassing both medical and non-medical applications. Historically, the challenge that neurological injuries remain static after an initial recovery phase has driven researchers to explore innovative avenues, and BCIs have been at the forefront of these efforts since the 1970s. As research has progressed, BCI applications have expanded, showing potential across a wide range of uses, including those for less severely impaired individuals (e.g., in the context of hearing aids) and completely healthy individuals (e.g., in the entertainment industry). However, the future of BCI research also depends on the availability of reliable BCI hardware that ensures real-world applicability. The CereBridge system designed and implemented in this work represents a significant leap forward in brain-computer interface technology by integrating all EEG signal acquisition and processing hardware into a single mobile system. The processing hardware architecture is centered around an FPGA with an ARM Cortex-M3 within a heterogeneous IC, ensuring flexibility and efficiency in EEG signal processing. The modular design of the system, consisting of three individual boards, ensures adaptability to different requirements. With a focus on full mobility, the complete system is mounted on the scalp, operates autonomously, requires no external interaction, and weighs approximately 56 g including the 16-channel EEG sensors. The proposed customizable dataflow concept facilitates the exploration and seamless integration of algorithms, increasing the flexibility of the system. This is further underscored by the ability to apply different algorithms to recorded EEG data to meet different application goals. High-Level Synthesis (HLS) was used to port algorithms to the FPGA, accelerating the algorithm development process and enabling rapid implementation of algorithm variants. Evaluations have shown that the CereBridge system is capable of integrating the complete signal processing chain required for various BCI applications. Furthermore, it can operate continuously for more than 31 hours on a 1800 mAh battery, making it a viable solution for long-term mobile EEG recording and real-world BCI studies. Compared to existing research platforms, the CereBridge system offers unprecedented performance and features for a mobile BCI. It not only meets the relevant requirements for a mobile BCI system but also paves the way for the rapid transfer of algorithms from the laboratory to real-world applications. In essence, this work provides a comprehensive blueprint for the development and implementation of a state-of-the-art mobile EEG-based BCI system, setting a new benchmark in BCI hardware for real-world applicability.
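
    To make the kind of on-device processing chain described above concrete, the sketch below strings together a per-channel bandpass filter and a band-power feature for a 16-channel stream, the way a configurable dataflow of stages might. It is an illustrative Python model only; the stage names, filter band, and 250 Hz sample rate are assumptions, not details of the CereBridge firmware, which runs such stages on an FPGA via HLS.

```python
# Illustrative sketch of a streaming per-channel EEG processing chain
# (acquisition -> filtering -> feature extraction -> decision). The sample
# rate, filter band, and stage names are assumptions for illustration.
import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

FS = 250          # assumed EEG sample rate (Hz)
N_CHANNELS = 16   # matches the 16-channel sensor count in the abstract

class BandpassStage:
    """Stateful 8-30 Hz bandpass applied block by block."""
    def __init__(self, low=8.0, high=30.0, order=4):
        self.b, self.a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
        self.zi = np.tile(lfilter_zi(self.b, self.a), (N_CHANNELS, 1))

    def __call__(self, block):                    # block: (N_CHANNELS, n_samples)
        out, self.zi = lfilter(self.b, self.a, block, axis=1, zi=self.zi)
        return out

def band_power(block):
    """Log band power per channel: a common, cheap BCI feature."""
    return np.log(np.mean(block ** 2, axis=1) + 1e-12)

# Chain the stages the way a configurable dataflow would: each stage consumes
# the previous stage's output for every incoming block of samples.
stages = [BandpassStage(), band_power]
block = np.random.randn(N_CHANNELS, FS // 10)     # 100 ms of synthetic data
for stage in stages:
    block = stage(block)
print(block.shape)                                # (16,) feature vector for a classifier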

    Reliable indoor optical wireless communication in the presence of fixed and random blockers

    The rapid advancement of smartphones has led to exponential growth in the number of internet users, which is expected to reach 71% of the global population by the end of 2027. This, in turn, has given rise to demand for wireless data and internet devices capable of providing energy-efficient, reliable, and high-speed wireless data services. Light fidelity (LiFi), an optical wireless communication (OWC) technology, is envisioned as a promising solution to accommodate these demands. However, the indoor LiFi channel is highly environment-dependent and can be influenced by several crucial factors (e.g., the presence of people and furniture, random orientation of users' devices, and the limited field of view (FOV) of optical receivers) that may contribute to blockage of the line-of-sight (LOS) link. In this thesis, it is investigated whether deep learning (DL) techniques can effectively learn the distinct features of the indoor LiFi environment in order to provide superior performance compared to conventional channel estimation techniques (e.g., minimum mean square error (MMSE) and least squares (LS)). This advantage is seen particularly when access to real-time channel state information (CSI) is restricted, and it comes at the cost of collecting large and meaningful datasets to train the DL neural networks and of the training time, which was conducted offline. Two DL-based schemes are designed for signal detection and resource allocation, and it is shown that the proposed methods offer performance close to the optimal conventional schemes and demonstrate substantial gains in bit-error ratio (BER) and throughput, especially in more realistic or complex indoor environments. Performance analysis of LiFi networks under the influence of fixed and random blockers is essential, and efficient solutions capable of diminishing the blockage effect are required. In this thesis, a CSI acquisition technique for a reconfigurable intelligent surface (RIS)-aided LiFi network is proposed to significantly reduce the dimension of the decision variables required for RIS beamforming. Furthermore, it is shown that several RIS attributes, such as shape, size, height, and distribution, play important roles in increasing network performance. Finally, a performance analysis for an RIS-aided realistic indoor LiFi network is presented. The proposed RIS configuration shows outstanding performance in reducing the network outage probability under the effects of blockages, random device orientation, limited receiver FOV, furniture, and user behavior. Establishing a LOS link that achieves uninterrupted wireless connectivity in a realistic indoor environment can be challenging. In this thesis, an analysis of link blockage is presented for an indoor LiFi system considering fixed and random blockers. In particular, novel analytical frameworks for the coverage probability of single-source and multi-source configurations are derived. Using the proposed analytical frameworks, link blockages in the indoor LiFi network are carefully investigated, and it is shown that the incorporation of multiple sources and RIS can significantly reduce the LOS coverage blockage probability in indoor LiFi systems.
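
    For readers unfamiliar with the conventional baselines mentioned above, the sketch below contrasts least-squares and (L)MMSE estimation of a single channel gain from known pilots. The scalar intensity-modulated link, pilot design, and noise statistics are illustrative assumptions rather than the thesis's actual LiFi channel model; the point is only that MMSE exploits prior statistics that LS ignores, which is the gap the DL-based estimators aim to close when real-time CSI is restricted.

```python
# Minimal sketch of the LS and (L)MMSE channel-estimation baselines for a
# single real-valued intensity-modulated link. All parameter values are
# illustrative assumptions, not the thesis's LiFi channel model.
import numpy as np

rng = np.random.default_rng(0)
h_true = 0.7                      # unknown DC channel gain to estimate
sigma_h2, sigma_n2 = 1.0, 0.05    # assumed prior channel power and noise power

x = rng.uniform(0.5, 1.5, size=32)                               # known pilot intensities
y = h_true * x + rng.normal(0.0, np.sqrt(sigma_n2), size=x.size)  # received pilots

# Least squares: minimises ||y - h x||^2 and ignores noise/prior statistics.
h_ls = (x @ y) / (x @ x)

# (L)MMSE: shrinks the LS solution using the assumed channel and noise
# statistics, which helps at low SNR.
h_mmse = (sigma_h2 * (x @ y)) / (sigma_h2 * (x @ x) + sigma_n2)

print(f"LS: {h_ls:.3f}  MMSE: {h_mmse:.3f}  true: {h_true}")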

    Law in Books Versus Law in Action in the Landmark Shenzhen, China, Personal Bankruptcy Regime

    The first personal bankruptcy regime in Mainland China celebrated its second anniversary on March 1, 2023. An empirical assessment of the law in action during these first two years reveals some troubling deviations from the early promises of the new law on the books. In the first year, a handful of judges were charged with an arduous in-person review process for over 1,000 applicants, and they accepted only twenty-five for case initiation. In the second year, initial case review was delegated to an administrative body, an important efficiency enhancement that tripled the number of opened cases. Nonetheless, most debtors continue to be dissuaded from applying at all, and hundreds of applications have been rejected, in part on the non-statutory grounds that the court is admitting only business-related cases. Moreover, the default gateway of liquidation-and-discharge has been successfully used only once, secretly relegated to “last resort” status. Debtors are effectively limited to proposing that creditors accept a payment plan offering full repayment of principal within five years, and creditors have rejected many such plans. While most admitted restructuring cases have led to confirmed plans, many of these “successes” are built on very shaky foundations that portend likely struggle and repeat default ahead. The new system, thus, seems to be operating in quite a skewed fashion: admitting only a very small fraction of applicants and offering relief on narrow and demanding grounds. This is a disappointing abandonment of the textual promise of broad, standardized relief consistent with international best practices. As national authorities in China (and elsewhere) consider adopting a country-wide personal bankruptcy law, Shenzhen’s experience illustrates the challenges of striking the right balance between relief and responsibility in personal insolvency regulation.

    Online semi-supervised learning in non-stationary environments

    Existing Data Stream Mining (DSM) algorithms assume the availability of labelled and balanced data, immediately or after some delay, to extract worthwhile knowledge from continuous and rapid data streams. However, in many real-world applications such as robotics, weather monitoring, fraud detection systems, cyber security, and computer network traffic flow, an enormous amount of high-speed data is generated by Internet of Things sensors and real-time data on the Internet. Manual labelling of these data streams is not practical due to the time required and the need for domain expertise. Another challenge is learning under Non-Stationary Environments (NSEs), which occurs due to changes in the data distributions of a set of input variables and/or class labels. The problem of Extreme Verification Latency (EVL) under NSEs is referred to as an Initially Labelled Non-Stationary Environment (ILNSE). This is a challenging task because the learning algorithms have no direct access to the true class labels when the concept evolves. Several approaches exist that deal with NSE and EVL in isolation, but few algorithms address both issues simultaneously. This research directly responds to the ILNSE challenge by proposing two novel algorithms: the “Predictor for Streaming Data with Scarce Labels” (PSDSL) and the Heterogeneous Dynamic Weighted Majority (HDWM) classifier. PSDSL is an Online Semi-Supervised Learning (OSSL) method for real-time DSM and is closely related to label scarcity issues in online machine learning. The key capabilities of PSDSL include learning from a small amount of labelled data in an incremental or online manner and being available to predict at any time. To achieve this, PSDSL utilises both labelled and unlabelled data to train the prediction models, meaning it continuously learns from incoming data and updates the model as new labelled or unlabelled data becomes available over time. Furthermore, it can predict under NSE conditions even when class labels are scarce. PSDSL is built on top of the HDWM classifier, which preserves the diversity of the classifiers, and both can intelligently switch and adapt to the prevailing conditions. PSDSL adapts its learning state between self-learning, micro-clustering and CGC, whichever approach is beneficial, based on the characteristics of the data stream. HDWM makes use of “seed” learners of different types in an ensemble to maintain its diversity; ensembles are simply combinations of predictive models grouped to improve on the predictive performance of a single classifier. PSDSL is empirically evaluated against COMPOSE, LEVELIW, SCARGC and MClassification on benchmark NSE datasets as well as Massive Online Analysis (MOA) data streams and real-world datasets. The results showed that PSDSL performed significantly better than existing approaches on most real-time data streams, including randomised data instances. PSDSL also performed significantly better than ‘Static’, i.e. a classifier that is not updated after being trained on the first examples in the data stream. When applied to MOA-generated data streams, PSDSL ranked highest (1.5) and thus performed significantly better than SCARGC, while SCARGC performed the same as Static. PSDSL achieved better average prediction accuracies in a shorter time than SCARGC. The HDWM algorithm is evaluated on artificial and real-world data streams against existing well-known approaches such as the heterogeneous Weighted Majority Algorithm (WMA) and the homogeneous Dynamic Weighted Majority (DWM) algorithm.
The results showed that HDWM performed significantly better than WMA and DWM. Also, when recurring concept drifts were present, the predictive performance of HDWM showed an improvement over DWM. In both drift and real-world streams, significance tests and post hoc comparisons found significant differences between algorithms: HDWM performed significantly better than DWM and WMA when applied to MOA data streams and four real-world datasets (Electric, Spam, Sensor and Forest Cover). The seeding mechanism and dynamic inclusion of new base learners in the HDWM algorithm benefit from both forgetting and retaining models, and the algorithm also provides the independence to select the optimal base classifier in its ensemble depending on the problem. A new approach, Envelope-Clustering, is introduced to resolve cluster overlap conflicts during the cluster labelling process. In this process, PSDSL transforms the centroid information of micro-clusters into micro-instances and generates new clusters called Envelopes. The nearest envelope clusters assist the conflicted micro-clusters and successfully guide the cluster labelling process after concept drifts in the absence of true class labels. PSDSL has also been evaluated on the real-world problem of keystroke dynamics, where it achieved higher prediction accuracy (85.3%) than SCARGC (81.6%), while Static (49.0%) significantly degrades in performance due to changes in the users' typing patterns. Furthermore, the predictive accuracies of SCARGC were found to fluctuate widely (between 41.1% and 81.6%) depending on the value of the parameter ‘k’ (number of clusters), while PSDSL automatically determines the best value for this parameter.
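
    The weighted-majority mechanics underlying DWM and HDWM can be illustrated with a short sketch: heterogeneous "seed" learners vote, experts that err are down-weighted, weak experts are pruned, and fresh seeds are added to preserve diversity. The scikit-learn seed types, the penalty factor, and the pruning threshold below are assumed values for illustration; this is not the thesis's implementation of PSDSL or HDWM.

```python
# Sketch of a heterogeneous weighted-majority ensemble in the spirit of DWM/HDWM.
# Seed learner types, BETA, and THETA are assumptions for illustration only.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

SEEDS = [GaussianNB, lambda: DecisionTreeClassifier(max_depth=3)]
BETA, THETA = 0.5, 0.01          # penalty on mistakes, pruning threshold

class WeightedMajorityEnsemble:
    def __init__(self):
        self.experts, self.weights = [], []

    def _spawn(self, X, y):
        # Add one freshly trained expert of every seed type to keep diversity.
        for make in SEEDS:
            self.experts.append(make().fit(X, y))
            self.weights.append(1.0)

    def predict(self, X):
        votes = np.zeros(len(X))
        for clf, w in zip(self.experts, self.weights):
            votes += w * (2 * clf.predict(X) - 1)   # assumes labels in {0, 1}
        return (votes > 0).astype(int)

    def update(self, X, y):
        if not self.experts:
            self._spawn(X, y)
            return
        for i, clf in enumerate(self.experts):
            if clf.predict(X[-1:])[0] != y[-1]:     # penalise the latest mistake
                self.weights[i] *= BETA
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]
        # Prune weak experts and re-seed so the ensemble adapts to drifts.
        keep = [i for i, w in enumerate(self.weights) if w >= THETA]
        self.experts = [self.experts[i] for i in keep]
        self.weights = [self.weights[i] for i in keep]
        if len(self.experts) < len(SEEDS):
            self._spawn(X, y)

# Toy usage on a drifting 2-D stream (mean shifts at t = 10):
rng = np.random.default_rng(1)
ens = WeightedMajorityEnsemble()
for t in range(20):
    X = rng.normal(size=(10, 2)) + (t // 10)
    y = (X[:, 0] + X[:, 1] > 2 * (t // 10)).astype(int)
    if t:
        print(t, (ens.predict(X) == y).mean())      # prequential accuracy
    ens.update(X, y)
```

    The seeding step draws from heterogeneous learner types, which is the design choice that lets such an ensemble both forget outdated models and retain useful ones across recurring drifts.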

    Teleoperation Methods for High-Risk, High-Latency Environments

    In-Space Servicing, Assembly, and Manufacturing (ISAM) can enable larger-scale and longer-lived infrastructure projects in space, with interest ranging from commercial entities to the US government. Servicing, in particular, has the potential to vastly increase the usable lifetimes of satellites. However, the vast majority of spacecraft in low Earth orbit today were not designed to be serviced on-orbit. As such, several of the manipulations during servicing cannot easily be automated and instead require ground-based teleoperation. Ground-based teleoperation of on-orbit robots brings its own challenges of high-latency communication, with telemetry delays of several seconds, and difficulties in visualizing the remote environment due to limited camera views. We explore teleoperation methods to alleviate these difficulties, increase task success, and reduce operator load. First, we investigate a model-based teleoperation interface intended to provide the benefits of direct teleoperation even in the presence of time delay. We evaluate the model-based teleoperation method with professional robot operators, then use feedback from that study to inform the design of a visual planning tool for this task, Interactive Planning and Supervised Execution (IPSE). We describe and evaluate the IPSE system and two interfaces, one 2D using a traditional mouse and keyboard and one 3D using an Intuitive Surgical da Vinci master console. We then describe and evaluate an alternative 3D interface using a Meta Quest head-mounted display. Finally, we describe an extension of IPSE to allow human-in-the-loop planning for a redundant robot. Overall, we find that IPSE improves task success rate and decreases operator workload compared to a conventional teleoperation interface.
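
    A minimal sketch of why a model-based interface helps under delay is given below: operator commands update a local predictive model immediately, while the remote robot's telemetry only reflects them after the round-trip delay. The 1-D joint and the three-tick delay are assumptions for illustration, not the systems or latencies evaluated in the thesis.

```python
# Toy model of delayed teleoperation: the operator's local model responds at
# once, but remote telemetry lags by the round-trip delay. Values are assumed.
from collections import deque

DELAY_STEPS = 3                      # assumed one-way delay, in control ticks

class RemoteRobot:
    """Remote 1-D joint that receives commands late and reports state late."""
    def __init__(self):
        self.position = 0.0
        self.inbox = deque([0.0] * DELAY_STEPS)    # commands in flight
        self.outbox = deque([0.0] * DELAY_STEPS)   # telemetry in flight

    def tick(self, command):
        self.inbox.append(command)
        self.position += self.inbox.popleft()      # delayed command arrives
        self.outbox.append(self.position)
        return self.outbox.popleft()               # delayed telemetry returns

robot = RemoteRobot()
local_model = 0.0                                  # operator-side predicted state
for step, cmd in enumerate([0.1] * 6 + [0.0] * 4):
    local_model += cmd                             # predictive display updates now
    telemetry = robot.tick(cmd)                    # ground truth arrives late
    print(f"t={step}: predicted={local_model:.1f} telemetry={telemetry:.1f}")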

    Towards Neuromorphic Gradient Descent: Exact Gradients and Low-Variance Online Estimates for Spiking Neural Networks

    Spiking Neural Networks (SNNs) are biologically plausible models that can run on low-powered non-von Neumann neuromorphic hardware, positioning them as promising alternatives to conventional Deep Neural Networks (DNNs) for energy-efficient edge computing and robotics. Over the past few years, the Gradient Descent (GD) and Error Backpropagation (BP) algorithms used in DNNs have inspired various training methods for SNNs. However, the non-local and reverse nature of BP, combined with the inherent non-differentiability of spikes, represents a fundamental obstacle to computing gradients with SNNs directly on neuromorphic hardware. Therefore, novel approaches are required to overcome the limitations of GD and BP and enable online gradient computation on neuromorphic hardware. In this thesis, I address the limitations of GD and BP with SNNs by proposing three algorithms. First, I extend a recent method that computes exact gradients with temporally-coded SNNs by relaxing the firing constraint of temporal coding and allowing multiple spikes per neuron. My proposed method generalizes the computation of exact gradients with SNNs and enhances the tradeoffs between performance and various other aspects of spiking neurons. Next, I introduce a novel alternative to BP that computes low-variance gradient estimates in a local and online manner. Compared to other alternatives to BP, the proposed method demonstrates an improved convergence rate and increased performance with DNNs. Finally, I combine these two methods and propose an algorithm that estimates gradients with SNNs in a manner that is compatible with the constraints of neuromorphic hardware. My empirical results demonstrate the effectiveness of the resulting algorithm in training SNNs without performing BP.
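
    The non-differentiability problem named above can be made concrete with a small sketch: the forward pass of a leaky integrate-and-fire neuron uses a hard threshold, while the backward pass substitutes an assumed fast-sigmoid surrogate and keeps only a local, online estimate of the gradient. This generic illustration is not the thesis's exact-gradient or low-variance estimator.

```python
# A spike is a step function, so its true derivative is zero almost everywhere.
# Forward: hard threshold. Backward: assumed fast-sigmoid surrogate, keeping
# only the immediate-input path (a crude local, online estimate).
import numpy as np

THRESHOLD, BETA = 1.0, 0.9        # spike threshold and membrane leak (assumed)

def surrogate_grad(v, slope=10.0):
    """d(spike)/d(v) replaced by the derivative of a fast sigmoid near threshold."""
    return 1.0 / (1.0 + slope * np.abs(v - THRESHOLD)) ** 2

def lif_forward(w, inputs):
    """Leaky integrate-and-fire neuron driven by a weighted input spike train."""
    v, spikes, grads = 0.0, [], []
    for x in inputs:
        v = BETA * v + w * x
        s = float(v >= THRESHOLD)              # non-differentiable hard threshold
        grads.append(surrogate_grad(v) * x)    # surrogate, immediate-input path only
        spikes.append(s)
        v -= s * THRESHOLD                     # soft reset after a spike
    return np.array(spikes), np.array(grads)

inputs = np.array([1, 0, 1, 1, 0, 1, 1], dtype=float)
spikes, grads = lif_forward(w=0.6, inputs=inputs)
# Surrogate estimate of d(total spike count)/dw:
print(spikes.sum(), grads.sum())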

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    A Critical Review Of Post-Secondary Education Writing During A 21st Century Education Revolution

    Educational materials are effective instruments that provide information and report new discoveries uncovered by researchers in specific areas of academia. Higher education, like other educational institutions, relies on instructional materials to inform its practice of educating adult learners. In post-secondary education, developmental English programs are tasked with meeting the needs of dynamic populations; thus, there is a continuous need for research in this area to support its changing landscape. However, the majority of scholarly thought in this area centers on K-12 reading and writing. This paucity presents a problem for the post-secondary community. This research study uses a qualitative content analysis to examine peer-reviewed journals from 2003 to 2017, developmental education websites, and a government-issued document directed toward reforming post-secondary developmental education programs. These highly relevant sources aid educators in discovering informational support for applying best practices for student success. Developmental education serves the purpose of addressing literacy gaps for students transitioning to college-level work. The findings here illuminate the dearth of material offered to developmental educators. This study suggests the field of literacy research is fragmented and highlights an apparent blind spot in scholarly literature with regard to English writing instruction. This poses a quandary for post-secondary literacy researchers in the 21st century and establishes the necessity for the literacy research community to commit future scholarship toward equipping college educators who teach writing to underprepared adult learners.

    Multi-Factor Key Derivation Function (MFKDF) for Fast, Flexible, Secure, & Practical Key Management

    We present the first general construction of a Multi-Factor Key Derivation Function (MFKDF). Our function expands upon password-based key derivation functions (PBKDFs) with support for using other popular authentication factors like TOTP, HOTP, and hardware tokens in the key derivation process. In doing so, it provides an exponential security improvement over PBKDFs with less than 12 ms of additional computational overhead in a typical web browser. We further present a threshold MFKDF construction, allowing for client-side key recovery and reconstitution if a factor is lost. Finally, by "stacking" derived keys, we provide a means of cryptographically enforcing arbitrarily specific key derivation policies. The result is a paradigm shift toward direct cryptographic protection of user data using all available authentication factors, with no noticeable change to the user experience. We demonstrate the ability of our solution to not only significantly improve the security of existing systems implementing PBKDFs, but also to enable new applications where PBKDFs would not be considered a feasible approach. Comment: To appear in USENIX Security '23.
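
    As a rough illustration of deriving key material from more than a password, the sketch below stretches a password with PBKDF2 and then binds an RFC 4226 HOTP value into the derived key with HMAC. This is not the paper's MFKDF construction, which handles dynamic factors and threshold recovery cryptographically; the iteration count, lengths, and factor handling here are illustrative assumptions only.

```python
# Hedged sketch: mix a password-derived key with an HOTP factor using only
# standard-library primitives. Not the paper's MFKDF; parameters are assumed.
import hashlib, hmac, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP value: the kind of factor MFKDF can fold into derivation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def derive_key(password: str, otp: str, salt: bytes) -> bytes:
    # Stretch the password (PBKDF2), then bind the second factor via HMAC
    # extraction. Iteration count and key lengths are illustrative choices.
    pw_key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.new(pw_key, otp.encode(), hashlib.sha256).digest()

secret, salt = b"shared-hotp-seed", b"per-user-salt"
otp = hotp(secret, counter=42)
key = derive_key("correct horse battery staple", otp, salt)
print(otp, key.hex()[:16])
```

    An attacker who compromises the password alone cannot reproduce the key without the OTP material, which is the intuition behind folding additional factors into derivation rather than using them only for authentication.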

    Chinese Knitwear Brands: The need for creative design to result in global business success

    Chinese cashmere knitwear companies have become suppliers to international fashion brands because of their technological excellence, raw-material advantages and competitive prices. However, their in-house brands are steadily declining. In the past 15 years, Chinese cashmere brands have progressively lost their market share to Chinese and Western fashion brands, with a few notable exceptions. Their brands lack differentiation from other Chinese competitors, causing low-price competition, which contributes to sustainability issues such as unsold stock and wasted material and manpower. The decline is likely to continue as the brands serve only an ageing market rather than attracting younger generations to their products. Chinese cashmere companies invest little in design, which is a significant limitation on improving the brands’ opportunity to become successful and sustainable businesses. This study looks for solutions from the design perspective. The research aimed to investigate what design can do to address the current problems of Chinese knitwear brands and improve their prospects for future business success. The objectives of the study were to enquire into the challenges and opportunities facing the Chinese knitwear sector, to evaluate current design practice in knitwear brands, and to understand how design and brand management can be integrated to generate a sustainable brand. Research questions were developed to explore the brand and design problems, the role of design and organisational structure, and the barriers to and enablers of a thriving design culture, alongside possible solutions for design improvement. A pragmatic philosophy underpinned the research design, guiding the adoption of methods in response to the research questions. Interviews with stakeholders from both the knitwear industry and design education were undertaken. In addition, a case study using design action research with immersive field research was developed to investigate the knitwear brand issues; furthermore, a knitwear collection was created using Western design approaches to demonstrate an exemplar design process for the sector and to illustrate the differences from current Chinese design methods. The study argues that the obstacles to enriching design culture in Chinese knitwear brands are caused by their design context, lack of brand positioning, limited understanding of their consumers, and business models that are not fit for purpose. An absence of experienced leadership creates unclear design direction; instead of collections centred around a theme, Chinese brands sell unconnected designs and lack the distinct brand characteristics that would distinguish them from their competitors. The contribution to knowledge made by this study includes the identification of the reasons for the decline of Chinese cashmere brands and an understanding of the barriers in their design culture to developing good designs; it also highlights the lack of awareness of sustainability issues in the sector. The study sheds new light on the rarely acknowledged issue of how to upgrade these brands as modern businesses for younger consumers, and how to enrich design culture for brand business growth within sustainable contexts. The thesis analyses in depth the causes of the decline of these brands and makes recommendations for how design can contribute to reversing that decline and increasing their sustainability.
    • 

    corecore