    Multitasking versus multiplexing: Toward a normative account of limitations in the simultaneous execution of control-demanding behaviors

    Why is it that behaviors that rely on control, so striking in their diversity and flexibility, are also subject to such striking limitations? Typically, people cannot engage in more than a few—and usually only a single—control-demanding task at a time. This limitation was a defining element in the earliest conceptualizations of controlled processing; it remains one of the most widely accepted axioms of cognitive psychology, and is even the basis for some laws (e.g., against the use of mobile devices while driving). Remarkably, however, the source of this limitation is still not understood. Here, we examine one potential source of this limitation, in terms of a trade-off between the flexibility and efficiency of representation (“multiplexing”) and the simultaneous engagement of different processing pathways (“multitasking”). We show that even a modest amount of multiplexing rapidly introduces cross-talk among processing pathways, thereby constraining the number that can be productively engaged at once. We propose that, given the many advantages of efficient coding, the human brain has favored this over the capacity for multitasking of control-demanding processes.
    National Science Foundation (U.S.). Graduate Research Fellowship Program
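    The multiplexing trade-off sketched in this abstract can be illustrated with a toy linear model (my own construction, not the paper's actual model): when two tasks read out from disjoint hidden units there is no interference, but as soon as hidden units are shared, each task's readout picks up part of the other task's signal.

    ```python
    import numpy as np

    # Toy sketch (illustrative only): two tasks read out from a bank of
    # hidden units. With dedicated units, each readout recovers its own
    # signal; when units are shared ("multiplexed"), each readout also
    # picks up the other task's signal -- cross-talk.

    n = 8
    signal_a = np.zeros(n); signal_a[:4] = 1.0    # task A drives units 0-3
    signal_b = np.zeros(n); signal_b[4:] = -1.0   # task B drives units 4-7

    read_a = (signal_a != 0).astype(float)        # task A's readout weights
    read_b = (signal_b != 0).astype(float)        # task B's readout weights

    hidden = signal_a + signal_b                  # both tasks engaged at once
    print(read_a @ hidden, read_b @ hidden)       # 4.0 -4.0: no interference

    # Now multiplex: shift B onto units 2-5, so units 2-3 are shared with A
    signal_b2 = np.zeros(n); signal_b2[2:6] = -1.0
    hidden2 = signal_a + signal_b2
    print(read_a @ hidden2)                       # 2.0: A's output corrupted by B
    ```

    The cross-talk grows with the number of shared units, which is the intuition behind the paper's claim that even modest multiplexing limits how many pathways can run in parallel.
    
    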

    Decentralized Federated Learning: Fundamentals, State-of-the-art, Frameworks, Trends, and Challenges

    In the last decade, Federated Learning (FL) has gained relevance in training collaborative models without sharing sensitive data. Since its birth, Centralized FL (CFL) has been the most common approach in the literature, where a central entity creates a global model. However, a centralized approach leads to increased latency due to bottlenecks, heightened vulnerability to system failures, and trustworthiness concerns affecting the entity responsible for creating the global model. Decentralized Federated Learning (DFL) emerged to address these concerns by promoting decentralized model aggregation and minimizing reliance on centralized architectures. However, despite the work done in DFL, the literature has not (i) studied the main aspects differentiating DFL and CFL; (ii) analyzed DFL frameworks to create and evaluate new solutions; and (iii) reviewed application scenarios using DFL. Thus, this article identifies and analyzes the main fundamentals of DFL in terms of federation architectures, topologies, communication mechanisms, security approaches, and key performance indicators. Additionally, it explores existing mechanisms to optimize critical DFL fundamentals. Then, the most relevant features of current DFL frameworks are reviewed and compared. After that, the most used DFL application scenarios are analyzed, identifying solutions based on the fundamentals and frameworks previously defined. Finally, the evolution of existing DFL solutions is studied to provide a list of trends, lessons learned, and open challenges.
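    Decentralized model aggregation, the core mechanism this survey contrasts with CFL, is often realized by gossip averaging. A minimal sketch (the ring topology and uniform weights here are my own illustration, not taken from the article): each node repeatedly averages its parameters with its neighbors instead of uploading them to a central server.

    ```python
    import numpy as np

    # Minimal gossip-averaging sketch of decentralized aggregation (one
    # common DFL mechanism; topology and weights are illustrative).

    rng = np.random.default_rng(42)
    n_nodes, dim = 4, 3
    models = [rng.normal(size=dim) for _ in range(n_nodes)]  # local parameters
    init_mean = np.mean(models, axis=0)                      # the "global model"

    def gossip_round(models):
        """One synchronous round: each node averages with its two ring neighbors."""
        n = len(models)
        return [(models[(i - 1) % n] + models[i] + models[(i + 1) % n]) / 3.0
                for i in range(n)]

    for _ in range(50):
        models = gossip_round(models)

    # Uniform ring averaging preserves the mean, so every node converges to
    # the global average without any central aggregator.
    print(np.allclose(models[0], init_mean))                 # True
    ```

    Because the averaging weights form a doubly stochastic matrix, the network-wide mean is invariant, which is why no central entity is needed to obtain a consensus model.
    
    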

    Secure Mobile Computing by Using Convolutional and Capsule Deep Neural Networks

    Mobile devices are becoming smarter to better satisfy modern users' increasing needs, which is achieved by equipping diverse sensors and integrating the most cutting-edge Deep Learning (DL) techniques. As a sophisticated system, it is often vulnerable to multiple attacks (side-channel attacks, neural backdoors, etc.). This dissertation proposes solutions to maintain the cyber-hygiene of the DL-based smartphone system by exploring possible vulnerabilities and developing countermeasures. First, I actively explore possible vulnerabilities of the DL-based smartphone system to develop proactive defense mechanisms. I discover a new side-channel attack on smartphones using unrestricted magnetic sensor data. I demonstrate that attackers can effectively infer the Apps being used on a smartphone with an accuracy of over 80%, by training a deep Convolutional Neural Network (CNN). Various signal processing strategies have been studied for feature extraction, including a tempogram-based scheme. Moreover, by further exploiting the unrestricted motion sensor to cluster magnetometer data, the sniffing accuracy can increase to as high as 98%. To mitigate such attacks, I propose a noise injection scheme that can effectively reduce the App-sniffing accuracy to only 15% and, at the same time, has a negligible effect on benign Apps. On the other hand, I leverage DL techniques to build reactive malware detection schemes. I propose an innovative approach, named CapJack, to detect in-browser malicious cryptocurrency mining activities by using the latest CapsNet technology. To the best of our knowledge, this is the first work to introduce CapsNet to the field of malware detection through system-behavioural analysis. It is particularly useful for detecting malicious miners in multitasking environments where multiple applications run simultaneously. Finally, as DL itself is vulnerable to model-based attacks, I proactively explore possible attacks against the DL model.
To this end, I discover a new clean-label attack, named Invisible Poison, which stealthily and aggressively plants a backdoor in neural networks (NN). It converts a trigger to noise concealed inside regular images for training the NN, planting a backdoor that can later be activated by the trigger. The attack has the following distinct properties. First, it is a black-box attack, requiring zero knowledge about the target NN model. Second, it employs "invisible poison" to achieve stealthiness: the trigger is disguised as "noise" that is invisible to humans but, at the same time, remains significant in the feature space and is thus highly effective for poisoning training data.
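    The noise-injection defense mentioned above can be sketched in a few lines (all parameters here are illustrative, not the dissertation's): perturb the magnetometer stream enough to mask the fine-grained waveform a sniffing CNN exploits, while preserving the coarse statistics that benign apps (e.g., a compass) actually need.

    ```python
    import numpy as np

    # Illustrative noise-injection sketch: mask an App-sniffing side channel
    # in magnetometer readings while keeping the signal usable for benign apps.

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 500)
    mag = 45.0 + 2.0 * np.sin(2 * np.pi * 5 * t)   # toy field trace (uT)

    noise = rng.normal(scale=2.0, size=mag.shape)  # injected Gaussian noise
    protected = mag + noise

    # Coarse statistics that benign apps rely on survive the perturbation,
    # while the fine-grained waveform an attacker's CNN exploits is masked.
    print(abs(protected.mean() - 45.0) < 1.0)      # True: average field preserved
    ```

    The defense's real tuning problem, balancing noise power against benign-app utility, is what the reported 15% sniffing accuracy with negligible side effects reflects.
    
    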

    Multi-task Learning-based CSI Feedback Design in Multiple Scenarios

    For frequency division duplex systems, the essential downlink channel state information (CSI) feedback involves the steps of compression, feedback, decompression, and reconstruction to reduce the feedback overhead. One efficient CSI feedback method is the Auto-Encoder (AE) structure based on deep learning, yet it faces problems in actual deployment, such as selecting the deployment mode when deploying in a cell with multiple complex scenarios. Rather than designing an AE network of huge complexity to handle the CSI of all scenarios, a more realistic mode is to divide the CSI dataset by region/scenario and use multiple relatively simple AE networks to handle each subregion's CSI. However, both require high memory capacity at the user equipment (UE) and are not suitable for low-end devices. In this paper, we propose a new user-friendly framework based on the latter multi-tasking mode. Via Multi-Task Learning, our framework, Single-encoder-to-Multiple-decoders (S-to-M), combines the multiple independent AEs into a joint architecture: a shared encoder corresponds to multiple task-specific decoders. We also complete our framework with GateNet, a classifier that enables the base station to autonomously select the task-specific decoder corresponding to the subregion. Experiments on a simulated multi-scenario CSI dataset demonstrate our proposed S-to-M's advantages over the other benchmark modes, i.e., significantly reducing the model complexity and the UE's memory consumption.
    Comment: 31 pages, 13 figures, 10 tables
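    The S-to-M architecture described here has a simple structural shape that a sketch can make concrete (the dimensions and the stand-in gate below are my own illustration, not the paper's network): one shared encoder runs on the UE, and a classifier on the base station routes the compressed code to the matching per-region decoder.

    ```python
    import numpy as np

    # Structural sketch of Single-encoder-to-Multiple-decoders (S-to-M).
    # Shapes are illustrative; the real system trains these jointly via
    # multi-task learning and uses a learned classifier ("GateNet").

    rng = np.random.default_rng(1)
    csi_dim, code_dim, n_regions = 32, 8, 3

    # One shared encoder (UE side) and one decoder per subregion (BS side)
    W_enc = rng.normal(size=(code_dim, csi_dim)) / np.sqrt(csi_dim)
    W_dec = [rng.normal(size=(csi_dim, code_dim)) for _ in range(n_regions)]

    def encode(csi):
        return W_enc @ csi            # UE keeps only the small shared encoder

    def gate(code):
        # stand-in for GateNet: any map from code to a region index
        return int(np.argmax(np.abs(code)) % n_regions)

    def decode(code):
        return W_dec[gate(code)] @ code   # BS selects the task-specific decoder

    csi = rng.normal(size=csi_dim)
    rec = decode(encode(csi))
    print(rec.shape)                      # (32,) -- reconstructed CSI
    ```

    The memory saving is visible in the structure itself: the UE stores one `code_dim x csi_dim` encoder regardless of how many subregions the base station serves.
    
    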

    Searching for the physical nature of intelligence in Neuromorphic Nanowire Networks

    The brain’s unique information processing efficiency has inspired the development of neuromorphic, or brain-inspired, hardware in an effort to reduce the power consumption of conventional Artificial Intelligence (AI). One example of a neuromorphic system is nanowire networks (NWNs). NWNs have been shown to produce conductance pathways similar to neuro-synaptic pathways in the brain, demonstrating nonlinear dynamics as well as emergent behaviours such as memory and learning. Their synapse-like electro-chemical junctions are connected by a heterogeneous, neural network-like structure. This makes NWNs a unique system for realising hardware-based machine intelligence that is potentially more brain-like than existing implementations of AI. Many of the brain’s emergent properties are thought to arise from a unique structure-function relationship. The first part of the thesis establishes structural network characterisation methods in NWNs. Borrowing techniques from neuroscience, a toolkit is introduced for characterising network topology in NWNs. NWNs are found to display a ‘small-world’ structure with highly modular connections, like simple biological systems. Next, the structure-function link in NWNs is investigated by implementing machine learning benchmark tasks on varying network structures. Highly modular networks exhibit an ability to multitask, while integrated networks suffer from crosstalk interference. Finally, the above findings are combined to develop and implement neuroscience-inspired learning methods and tasks in NWNs. Specifically, an adaptation of a cognitive task that tests working memory in humans is implemented. Working memory and memory consolidation are demonstrated and found to be attributable to a process similar to synaptic metaplasticity in the brain.
The results of this thesis have created new research directions that warrant further exploration to test the universality of the physical nature of intelligence in inorganic systems beyond NWNs.
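    One of the graph-theoretic measures such a topology toolkit borrows from network neuroscience is Newman modularity, which quantifies how strongly a network divides into modules. A worked sketch on a toy graph (the graph and partition are my own illustration, not a nanowire network):

    ```python
    import numpy as np

    # Newman modularity Q of a partition:
    #   Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * delta(c_i, c_j)
    # Toy graph: two 3-node cliques (nodes 0-2 and 3-5) joined by one bridge.

    A = np.zeros((6, 6))
    for i, j in [(0,1), (0,2), (1,2), (3,4), (3,5), (4,5), (2,3)]:
        A[i, j] = A[j, i] = 1.0

    k = A.sum(axis=1)                  # node degrees
    m = A.sum() / 2.0                  # number of edges (7)
    labels = np.array([0, 0, 0, 1, 1, 1])   # the obvious two-module partition

    same = labels[:, None] == labels[None, :]
    Q = ((A - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m)
    print(round(Q, 3))                 # 0.357: clearly modular structure
    ```

    High modularity of this kind is what the thesis links to the ability to multitask, since modules keep signals segregated, whereas highly integrated topologies suffer the crosstalk interference noted above.
    
    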