
    The use of adaptive predictor filter as a trigger mechanism in simulated cosmic rays radio signals corrupted with Gaussian noise

    Adaptive filtering belongs to the realm of learning algorithms, widely used in daily life in the context of machine learning, artificial intelligence, pattern recognition, etc. It is formally defined as a self-designing device with time-varying parameters that are adjusted recursively in accordance with the input data. The trigger mechanism is a central task in experiments that use antennas to detect cosmic rays, as it selects a cosmic-ray-induced signal among all the voltage traces that reach the antennas. This work presents the efficiency of a trigger mechanism developed using the adaptive predictor filter technique, whose capability for time-series prediction is well known. The technique is independent of any external detector, using only the temporal field recorded online by the antennas, evaluated here on a simulated data set corrupted with Gaussian noise.
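    The idea of an adaptive predictor as a trigger can be sketched as follows: an LMS filter predicts each sample from its recent past, and a sample whose prediction error is anomalously large fires the trigger. This is a minimal illustration of the general technique, not the paper's implementation; the filter order, step size, and threshold rule below are assumptions.

    ```python
    import numpy as np

    def lms_predictor_trigger(signal, order=4, mu=0.01, threshold=3.0):
        """Run an LMS one-step-ahead predictor over the signal and return
        the indices where |prediction error| exceeds threshold times the
        overall error standard deviation (illustrative trigger criterion)."""
        w = np.zeros(order)                      # adaptive filter weights
        errors = np.zeros(len(signal))
        for n in range(order, len(signal)):
            x = signal[n - order:n][::-1]        # past samples, most recent first
            y_hat = w @ x                        # one-step prediction
            e = signal[n] - y_hat                # prediction error
            w += 2 * mu * e * x                  # LMS weight update
            errors[n] = e
        sigma = errors[order:].std()
        return np.where(np.abs(errors) > threshold * sigma)[0]
    ```

    On pure Gaussian noise the predictor's error stays near the noise floor, so only a transient that breaks the recent statistics (such as a pulse-like radio signal) exceeds the threshold.
    
    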

    Privacy-Preserving Load Forecasting via Personalized Model Obfuscation

    The widespread adoption of smart meters provides access to detailed and localized load consumption data, suitable for training building-level load forecasting models. To mitigate privacy concerns stemming from model-induced data leakage, federated learning (FL) has been proposed. This paper addresses the performance challenges of short-term load forecasting models trained with FL on heterogeneous data, emphasizing privacy preservation through model obfuscation. Our proposed algorithm, Privacy Preserving Federated Learning (PPFL), incorporates personalization layers for localized training at each smart meter. Additionally, we employ a differentially private mechanism to safeguard against data leakage from the shared layers. Simulations on the NREL ComStock dataset corroborate the effectiveness of our approach.
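    The privacy step on the shared layers is typically the Gaussian mechanism: clip each client's shared-layer update to a fixed L2 norm, then add calibrated Gaussian noise; personalization layers never leave the meter. The sketch below assumes this standard recipe; the function name, clipping bound, and noise multiplier are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def privatize_shared_update(update, clip_norm=1.0, noise_mult=1.0, rng=None):
        """Clip the concatenated shared-layer update to L2 norm clip_norm,
        then add Gaussian noise with std noise_mult * clip_norm
        (the standard Gaussian mechanism for differential privacy)."""
        rng = rng or np.random.default_rng()
        flat = np.concatenate([np.ravel(u) for u in update])
        norm = np.linalg.norm(flat)
        scale = min(1.0, clip_norm / max(norm, 1e-12))   # clip factor
        return [u * scale + rng.normal(0.0, noise_mult * clip_norm, size=np.shape(u))
                for u in update]
    ```

    Only the returned (clipped, noised) tensors would be sent to the aggregation server; local personalization layers are simply excluded from `update`.
    
    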

    Quality-optimized predictive analytics

    On-line statistical and machine learning analytic tasks over large-scale contextual data streams coming from, e.g., wireless sensor networks and Internet of Things environments have gained high popularity nowadays due to their significance in knowledge extraction, regression and classification tasks, and, more generally, in making sense of large-scale streaming data. The quality of the received contextual information, however, impacts predictive analytics tasks, especially when dealing with uncertain data, outliers, and data containing missing values. Low quality of received contextual data significantly spoils progressive inference and on-line statistical reasoning tasks, thus introducing bias into the induced knowledge, e.g., in classification and decision making. To alleviate this situation, which is not so rare in real-time contextual information processing systems, we propose a progressive time-optimized data quality-aware mechanism, which attempts to deliver contextual information of high quality to predictive analytics engines by progressively introducing a certain controlled delay. Such a mechanism progressively delivers data of as high quality as possible, thus eliminating possible biases in knowledge extraction and predictive analysis tasks. We propose an analytical model for this mechanism and show the benefits stemming from this approach through comprehensive experimental evaluation and comparative assessment with quality-unaware methods over real multivariate sensory contextual data.
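    The core trade-off, trading a bounded delay for higher data quality, can be illustrated with a toy gate: records whose quality score meets a threshold pass through immediately, while low-quality records are held for a few arrivals and repaired (here, by running-mean imputation) before release. This is a simplified sketch under our own assumptions; the paper's quality model and delay optimization are richer.

    ```python
    import math

    def is_missing(v):
        return v is None or (isinstance(v, float) and math.isnan(v))

    def quality(rec):
        """Fraction of non-missing fields (an illustrative quality score)."""
        return sum(not is_missing(v) for v in rec) / len(rec)

    def quality_aware_stream(records, q_min=1.0, max_delay=2):
        """Forward high-quality records immediately; hold low-quality ones
        for up to max_delay subsequent arrivals, imputing missing fields
        from the running per-field mean before release."""
        sums, counts = None, None
        buffered, delivered = [], []

        def impute(rec):
            return [sums[i] / counts[i] if is_missing(v) and counts[i] else v
                    for i, v in enumerate(rec)]

        for rec in records:
            if sums is None:
                sums, counts = [0.0] * len(rec), [0] * len(rec)
            for i, v in enumerate(rec):        # update running per-field means
                if not is_missing(v):
                    sums[i] += v
                    counts[i] += 1
            buffered = [(age + 1, r) for age, r in buffered]
            delivered += [impute(r) for age, r in buffered if age >= max_delay]
            buffered = [(age, r) for age, r in buffered if age < max_delay]
            if quality(rec) >= q_min:
                delivered.append(list(rec))    # high quality: no delay
            else:
                buffered.append((0, rec))      # low quality: delay and repair
        delivered += [impute(r) for _, r in buffered]  # flush at stream end
        return delivered
    ```

    A record such as `[3.0, None]` is thus delivered a couple of ticks late but with the gap filled from better data that arrived in the meantime, which is exactly the bias-vs-latency trade the abstract describes.
    
    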

    Compressive Learning with Privacy Guarantees

    This work addresses the problem of learning from large collections of data with privacy guarantees. The compressive learning framework proposes to deal with the large scale of datasets by compressing them into a single vector of generalized random moments, from which the learning task is then performed. We show that a simple perturbation of this mechanism with additive noise is sufficient to satisfy differential privacy, a well-established formalism for defining and quantifying the privacy of a random mechanism. We combine this with a feature subsampling mechanism, which reduces the computational cost without damaging privacy. The framework is applied to the tasks of Gaussian modeling, k-means clustering and principal component analysis (PCA), for which sharp privacy bounds are derived. Empirically, the quality (for subsequent learning) of the compressed representation produced by our mechanism is strongly related to the induced noise level, for which we give analytical expressions.
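    A common instance of "generalized random moments" is a random Fourier sketch: average complex exponentials of random projections of the data, then perturb the sketch with additive Gaussian noise. The sketch below illustrates that shape of computation; the parameter names and the specific noise calibration are our assumptions, not the paper's derived bounds.

    ```python
    import numpy as np

    def private_sketch(X, m=64, sigma=1.0, noise_std=0.1, rng=None):
        """Compute an m-dimensional random Fourier moment sketch of the
        dataset X (n x d), then add complex Gaussian noise to the sketch
        (additive-noise perturbation for privacy; calibration illustrative)."""
        rng = rng or np.random.default_rng(0)
        n, d = X.shape
        W = rng.normal(0.0, 1.0 / sigma, size=(m, d))  # random frequencies
        Z = np.exp(1j * X @ W.T)                       # per-sample features
        sketch = Z.mean(axis=0)                        # generalized moments
        noise = rng.normal(0, noise_std, m) + 1j * rng.normal(0, noise_std, m)
        return sketch + noise
    ```

    The whole dataset is thus reduced to one length-m complex vector, and only that (noised) vector needs to be released for downstream clustering or modeling.
    
    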

    Comparison of the Efficacy of two Anticonvulsants, Phenytoin and Valproate to Improve PCP and d-amphetamine Induced Deficits in a Reversal Learning Task in the Rat

    Recent studies in our laboratory have shown that PCP (phencyclidine) and d-amphetamine induce a cognitive deficit in rats, in a paradigm of potential relevance for the pathology of schizophrenia. Atypical, but not classical, antipsychotics and the anticonvulsant lamotrigine have been shown to prevent a selective reversal learning deficit induced by PCP. In contrast, only haloperidol reversed the d-amphetamine-induced deficit. The present study aimed to explore the ability of two anticonvulsants with differing mechanisms of action, valproate and phenytoin, to attenuate the cognitive deficits induced by PCP and d-amphetamine in the reversal learning paradigm. PCP at 1.5 mg/kg and d-amphetamine at 0.5 mg/kg both produced a selective and significant reduction in performance of the reversal phase with no effect on the initial phase of the task in female hooded Lister rats. Valproate (25–200 mg/kg) and phenytoin (25–50 mg/kg) had no effect on performance when administered alone. Valproate (100–200 mg/kg), whose principal action is thought to be the enhancement of GABA transmission, was unable to prevent the cognitive deficit induced by either PCP or d-amphetamine. Conversely, phenytoin (50 mg/kg), a use-dependent sodium channel inhibitor, significantly prevented the deficit induced by PCP, but not d-amphetamine. These results add to our earlier work with lamotrigine, and suggest that sodium channel blockade may be a mechanism by which some anticonvulsant drugs can prevent the PCP-induced deficit. These data have implications for the use of anticonvulsant drugs in the treatment of cognitive or psychotic disorders.

    The Closeness of In-Context Learning and Weight Shifting for Softmax Regression

    Large language models (LLMs) are known for their exceptional performance in natural language processing, making them highly effective in many human life-related or even job-related tasks. The attention mechanism in the Transformer architecture is a critical component of LLMs, as it allows the model to selectively focus on specific parts of the input. The softmax unit, a key part of the attention mechanism, normalizes the attention scores. Hence, the performance of LLMs in various NLP tasks depends significantly on the crucial role played by the attention mechanism with the softmax unit. In-context learning, one of the celebrated abilities of recent LLMs, is an important concept in querying LLMs such as ChatGPT. Without further parameter updates, Transformers can learn to predict based on a few in-context examples. However, the reason why Transformers become in-context learners is not well understood. Recently, several works [ASA+22, GTLV22, ONR+22] have studied in-context learning from a mathematical perspective based on a linear regression formulation $\min_x \| Ax - b \|_2$, which shows Transformers' capability of learning linear functions in context. In this work, we study in-context learning based on a softmax regression formulation $\min_{x} \| \langle \exp(Ax), \mathbf{1}_n \rangle^{-1} \exp(Ax) - b \|_2$ of the Transformer's attention mechanism. We show upper bounds on the data transformations induced by a single self-attention layer and by gradient descent on an $\ell_2$ regression loss for the softmax prediction function, which imply that when training self-attention-only Transformers for fundamental regression tasks, the models learned by gradient descent and Transformers show great similarity.
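    The softmax regression objective above can be minimized directly by gradient descent: for $s = \mathrm{softmax}(Ax)$ and $f(x) = \|s - b\|_2^2$, the Jacobian identity $\partial s/\partial z = \mathrm{diag}(s) - ss^\top$ gives $\nabla f = 2A^\top(\mathrm{diag}(s) - ss^\top)(s - b)$. The sketch below is a plain numerical illustration of that objective, not the paper's analysis; step size and iteration count are arbitrary choices.

    ```python
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())        # shift for numerical stability
        return e / e.sum()

    def fit_softmax_regression(A, b, lr=0.2, steps=5000):
        """Gradient descent on f(x) = ||softmax(Ax) - b||^2, using
        grad f = 2 A^T (diag(s) - s s^T)(s - b) with s = softmax(Ax)."""
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            s = softmax(A @ x)
            grad = 2 * A.T @ ((np.diag(s) - np.outer(s, s)) @ (s - b))
            x -= lr * grad
        return x
    ```

    When b is itself a softmax of some $Ax^\*$ (a realizable target), this descent drives the objective toward zero, mirroring the regression setting the paper analyzes.
    
    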

    Intelligent learning diversity mechanism for unmanned aerial vehicles applications

    The increased use of drones and aerial vehicles poses airspace-safety challenges for aviation organizations. It is important to ensure the safety of the airspace when a significant number of unmanned aerial vehicles are deployed by civilian users. A solution that meets this requirement is important to promote innovation in the commercialization of airspace for civilian users deploying unmanned aerial vehicles. This paper proposes a mechanism that uses artificial intelligence to address this challenge. The proposed mechanism utilizes a low altitude platform (LAP) and entities in terrestrial wireless networks. The LAP observes and develops insights and training data (with human aid). The training data is used to develop learning mechanisms that determine suitable unmanned aerial vehicle flight parameters in different scenarios. The use of the LAP reduces the burden of communicating with terrestrial base stations: the separation between the unmanned aerial vehicles and the LAPs is smaller than that to terrestrial base stations, which reduces the free-space path loss and rain-induced attenuation. The performance benefit of the proposed mechanism in comparison to existing solutions is examined via MATLAB simulations. Evaluation shows that the proposed mechanism reduces network access costs by up to 90% on average. The proposed mechanism also increases available flight power by 37.3% and improves airspace safety by up to 53.2% on average. Keywords: autonomous unmanned aerial vehicles; intelligence paradigm; aviation safety; capital-constrained aviation organizations
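    The claim that a shorter UAV-to-LAP link reduces free-space path loss follows from the standard FSPL formula, $\mathrm{FSPL} = 20\log_{10}(4\pi d f / c)$; halving the distance saves about 6 dB. This quick check is our illustration, not a computation from the paper.

    ```python
    import math

    def fspl_db(distance_m, freq_hz):
        """Free-space path loss in dB: FSPL = 20*log10(4*pi*d*f / c)."""
        c = 299_792_458.0  # speed of light, m/s
        return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)
    ```

    For example, at 2.4 GHz a 10 km link to a terrestrial base station incurs exactly 20 dB more path loss than a 1 km link to a nearby LAP, before even accounting for rain attenuation.
    
    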