61 research outputs found

    Dynamic Target Classification in Wireless Sensor Networks

    Get PDF
    Information exploitation schemes with high accuracy and low computational cost play an important role in Wireless Sensor Networks (WSNs). This thesis studies the problem of target classification in WSNs. Specifically, given the resource constraints and dynamic nature of WSNs, we focus on the design of an energy-efficient, high-accuracy solution for target classification. Feature extraction and classification are two intertwined components of pattern recognition. Our hypothesis is that for each type of target there exists an optimal set of features, in conjunction with a specific classifier, that yields the best classification accuracy with the least amount of computation, measured by the number of features used. Our objective is to find such an optimal combination of features and classifiers. Our study is set in the context of applications deployed in a WSN environment composed of a large number of small sensors, each with its own processing, sensing, and networking capabilities and powered by an onboard battery. Because resources on each sensor platform are extremely limited, local decision making is prone to error, making sensor fusion a necessity. We present a concept referred to as dynamic target classification in WSNs. The main idea is to dynamically select the optimal combination of features and classifiers based on the probability that the target to be classified belongs to a certain category. We use two data sets to validate our hypothesis and derive the optimal combination sets by minimizing a cost function. We apply the proposed algorithm to a scenario of collaborative target classification among a group of sensors selected using an information-based sensor selection rule. Experimental results show that, compared with traditional classification approaches, our approach significantly reduces computational time while achieving better classification accuracy without using any fusion algorithm, making it a viable solution in practice.
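
    As an illustration of the selection idea described above, the following is a minimal sketch, assuming scikit-learn-style classifiers and a cost of the form (1 - accuracy) + alpha * number_of_features; the classifier pool, the weight alpha, and all function names are illustrative assumptions rather than the thesis's actual formulation.

```python
# Minimal sketch (not the thesis's exact method): pick the (feature subset,
# classifier) pair that minimizes cost = (1 - accuracy) + alpha * |features|.
from itertools import combinations

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier


def best_combination(X, y, alpha=0.01):
    """Exhaustively search feature subsets and classifiers; return the
    lowest-cost (cost, feature_indices, classifier_name) triple."""
    classifiers = {
        "knn": KNeighborsClassifier(n_neighbors=3),
        "tree": DecisionTreeClassifier(max_depth=5),
    }
    n_features = X.shape[1]
    best = None
    for k in range(1, n_features + 1):
        for subset in combinations(range(n_features), k):
            cols = list(subset)
            for name, clf in classifiers.items():
                acc = cross_val_score(clf, X[:, cols], y, cv=5).mean()
                # Trade classification error against the number of features used.
                cost = (1.0 - acc) + alpha * len(cols)
                if best is None or cost < best[0]:
                    best = (cost, cols, name)
    return best
```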

    Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses

    Full text link
    The ongoing deployment of fifth generation (5G) wireless networks continues to reveal limitations of the original concept as a key driver of Internet of Everything (IoE) applications. These 5G challenges are behind worldwide efforts to enable future networks, such as sixth generation (6G) networks, to efficiently support sophisticated applications ranging from autonomous driving to the Metaverse. Edge learning is a new and powerful approach to training models across distributed clients while protecting the privacy of their data. This approach is expected to be embedded within future network infrastructures, including 6G, to solve challenging problems such as resource management and behavior prediction. This survey article provides a holistic review of the most recent research on edge learning vulnerabilities and defenses for 6G-enabled IoT. We summarize the existing surveys on machine learning for 6G IoT security and machine-learning-associated threats in three learning modes: centralized, federated, and distributed. We then give an overview of the emerging technologies enabling 6G IoT intelligence. Moreover, we survey existing research on attacks against machine learning and classify threat models into eight categories: backdoor attacks, adversarial examples, combined attacks, poisoning attacks, Sybil attacks, Byzantine attacks, inference attacks, and dropping attacks. In addition, we provide a comprehensive and detailed taxonomy and a side-by-side comparison of state-of-the-art defense methods against edge learning vulnerabilities. Finally, as new attacks and defense technologies emerge, open research directions and future prospects for 6G-enabled IoT are discussed.
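
    To make the edge learning setting described above concrete, here is a minimal FedAvg-style sketch in which clients train locally and only model parameters, never raw data, leave the device; the linear model, learning rate, and function names are illustrative assumptions, not taken from the survey.

```python
# Minimal FedAvg-style sketch of edge learning: each client runs a few local
# gradient steps on its private data, and the aggregator averages the returned
# parameters weighted by local dataset size. All names are illustrative.
import numpy as np


def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient steps on a linear least-squares model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w


def federated_round(global_w, clients):
    """Aggregate client updates weighted by local dataset size (FedAvg)."""
    updates, sizes = [], []
    for X, y in clients:  # each client keeps (X, y) private
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))
```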

    A Closed-Loop Bidirectional Brain-Machine Interface System For Freely Behaving Animals

    Get PDF
    A brain-machine interface (BMI) creates an artificial pathway between the brain and the external world. Research on and applications of BMIs have received enormous attention from the scientific community as well as the public over the past decade. However, most BMI research relies on experiments with tethered or sedated animals using rack-mounted equipment, which significantly restricts the experimental methods and paradigms. Moreover, most research to date has focused on neural signal recording or decoding in an open-loop manner. Although a closed-loop, wireless BMI is critical to the success of an extensive range of neuroscience research, it is an approach yet to be widely used, with electronics design being one of the major bottlenecks. The key goal of this research is to address the design challenges of a closed-loop, bidirectional BMI by providing innovative solutions from the neuron-electronics interface up to the system level. Circuit design innovations have been proposed in the neural recording front-end, the neural feature extraction module, and the neural stimulator. Practical design issues of the bidirectional neural interface, the closed-loop controller, and the overall system integration have been carefully studied and discussed. To the best of our knowledge, this work presents the first reported portable system to provide all the hardware required for a closed-loop sensorimotor neural interface, the first wireless sensory encoding experiment conducted in freely swimming animals, and the first bidirectional study of hippocampal field potentials in freely behaving animals from sedation to sleep. This thesis gives a comprehensive survey of bidirectional BMI designs, reviews the key design trade-offs in neural recorders and stimulators, and summarizes the neural features and mechanisms needed for successful closed-loop operation. The circuit and system design details are presented together with bench testing and animal experimental results. The methods, circuit techniques, system topology, and experimental paradigms proposed in this work can be used in a wide range of neurophysiology research and neuroprosthetic development, especially in experiments with freely behaving animals.
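
    At a software level, the closed-loop operation described above can be pictured as a record-extract-decide-stimulate cycle; the band-power feature, the threshold rule, and the recorder/stimulator interfaces in the sketch below are hypothetical and serve only to illustrate the loop, not the thesis's actual circuits or firmware.

```python
# Illustrative model of one closed-loop iteration: acquire a window of neural
# data, extract a feature, and trigger stimulation when it crosses a threshold.
# The feature choice, threshold, and device APIs are assumptions for illustration.
import numpy as np


def band_power(samples, fs, lo=4.0, hi=8.0):
    """Power of a neural signal in a frequency band (e.g. theta, 4-8 Hz)."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()


def closed_loop_step(recorder, stimulator, fs=1000, threshold=1e3):
    """One iteration: read a 1-second window, compute the feature, act on it."""
    window = recorder.read(n_samples=fs)                 # hypothetical recorder API
    feature = band_power(window, fs)
    if feature > threshold:
        stimulator.pulse(amplitude_uA=50, width_us=200)  # hypothetical stimulator API
    return feature
```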

    Intelligent Control and Protection Methods for Modern Power Systems Based on WAMS

    Get PDF

    Programming techniques for efficient and interoperable software defined radios

    Get PDF
    Recently, Software-Defined Radios (SDRs) have become a hot research topic in the wireless communications field. This is jointly due to the increasing demand for reconfigurable and interoperable multi-standard radio systems able to learn from their surrounding environment and efficiently exploit the available frequency spectrum resources, thus realizing the cognitive radio paradigm, and to the availability of reprogrammable hardware architectures providing the computing power necessary to meet the tight real-time constraints typical of state-of-the-art wideband communications standards. Most SDR implementations are based on mixed architectures in which Field Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), and General Purpose Processors (GPPs) coexist. GPP-based solutions, even though they provide the highest level of flexibility, are typically avoided because of their computational inefficiency and power consumption. Starting from these assumptions, this thesis jointly addresses two of the most important issues in GPP-based SDR systems: computational efficiency and interoperability. In the first part, this thesis presents the potential of a novel programming technique, named Memory Acceleration (MA), in which the memory resources typical of GPP-based systems are used to assist the central processor in executing real-time signal processing operations. This technique, belonging to the classical computer-science optimization techniques known as space-time trade-offs, defines novel algorithmic methods to assist developers in designing their software-defined signal processing algorithms. To show its applicability, several "real-world" case studies are presented together with the acceleration factors obtained. In the second part of the thesis, the interoperability issue in SDR systems is considered. Existing software architectures, such as the Software Communications Architecture (SCA), abstract the hardware/software components of a radio communications chain using a middleware such as CORBA to provide full portability and interoperability to the implemented chain, called a waveform in SCA parlance. This feature comes at the cost of the computational overhead introduced by the software communications middleware, which is one of the reasons why GPP-based architectures are generally discarded even for the implementation of narrowband SCA-compliant communications standards. In this thesis we briefly analyse the SCA architecture and an open-source SCA-compliant framework, i.e., OSSIE, and provide guidelines to enable component-based multithreading programming and CPU affinity in that framework. We also detail the implementation of a real-time SCA-compliant waveform developed within this modified framework, i.e., the VHF analogue aeronautical communications transceiver. Finally, we show how an efficient and interoperable real-time wideband SCA-compliant waveform, i.e., the AeroMACS waveform, can be implemented on a GPP-based architecture by merging the acceleration factor provided by the MA technique with the interoperability ensured by the SCA architecture.
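
    The space-time trade-off behind the Memory Acceleration technique can be sketched with a precomputed lookup table that replaces per-sample computation with memory reads; the byte-to-QPSK-symbol mapping and the names below are illustrative assumptions, not the thesis's actual waveform code.

```python
# Space-time trade-off sketch in the spirit of Memory Acceleration: spend memory
# on a table precomputed once so that run-time modulation needs only lookups.
import numpy as np

# Map each 2-bit pair to a unit-energy QPSK constellation point.
_QPSK = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)


def build_byte_table():
    """Precompute (space) the 4 QPSK symbols produced by every possible byte,
    so modulation at run time (time) is a single table lookup per byte."""
    table = np.empty((256, 4), dtype=np.complex64)
    for byte in range(256):
        pairs = [(byte >> shift) & 0b11 for shift in (6, 4, 2, 0)]
        table[byte] = _QPSK[pairs]
    return table


BYTE_TABLE = build_byte_table()


def modulate(data: bytes):
    """Modulate a byte string into QPSK symbols using only table lookups."""
    return BYTE_TABLE[np.frombuffer(data, dtype=np.uint8)].ravel()
```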
    • …