
    A solid state spin-wave quantum memory for time-bin qubits

    We demonstrate the first solid-state spin-wave optical quantum memory with on-demand read-out. Using the full atomic frequency comb scheme in a Pr³⁺:Y₂SiO₅ crystal, we store weak coherent pulses at the single-photon level with a signal-to-noise ratio > 10. Narrow-band spectral filtering based on spectral hole burning in a second Pr³⁺:Y₂SiO₅ crystal is used to filter out the excess noise created by the control pulses, reaching an unconditional noise level of (2.0 ± 0.3) × 10⁻³ photons per pulse. We also report spin-wave storage of photonic time-bin qubits with conditional fidelities higher than achievable with a measure-and-prepare strategy, demonstrating that the spin-wave memory operates in the quantum regime. This makes our device the first demonstration of a quantum memory for time-bin qubits with on-demand read-out of the stored quantum information. These results represent an important step towards the use of solid-state quantum memories in scalable quantum networks. Comment: 10 pages, 10 figures
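
    The quantum-regime claim here rests on beating the best classical, measure-and-prepare fidelity. As a point of reference, a minimal sketch of that bound for a single-qubit input (the bound actually applied to weak coherent pulses must additionally account for Poissonian photon statistics and finite memory efficiency):

```latex
% Optimal measure-and-prepare fidelity for N identical copies of an
% unknown qubit (Massar & Popescu):
\[
  F_{\mathrm{MP}}(N) = \frac{N+1}{N+2}, \qquad F_{\mathrm{MP}}(1) = \frac{2}{3}.
\]
% A memory whose conditional qubit fidelity exceeds this bound cannot be
% reproduced by any measure-and-prepare (classical) strategy.
```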

    Deep Learning with Long Short-Term Memory for Time Series Prediction

    Time series prediction can be generalized as a process that extracts useful information from historical records and then determines future values. Learning the long-range dependencies embedded in time series is often an obstacle for most algorithms, whereas Long Short-Term Memory (LSTM) solutions, as a specific kind of scheme in deep learning, promise to overcome the problem effectively. In this article, we first give a brief introduction to the structure and forward-propagation mechanism of the LSTM model. Then, aiming to reduce the considerable computing cost of LSTM, we put forward the Random Connectivity LSTM (RCLSTM) model and test it by predicting traffic and user mobility in telecommunication networks. In contrast to the conventional LSTM, the RCLSTM forms its connections between neurons stochastically. The resulting model exhibits a certain level of sparsity, which leads to an appealing decrease in computational complexity and makes the RCLSTM more applicable in latency-stringent application scenarios. In the field of telecommunication networks, the prediction of traffic series and mobility traces directly benefits from this improvement: we demonstrate that the prediction accuracy of the RCLSTM is comparable to that of the conventional LSTM regardless of the number of training samples or the length of the input sequences. Comment: 9 pages, 5 figures, 14 references
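
    A minimal sketch of the random-connectivity idea, assuming it is realized as fixed binary masks on the gate weight matrices of a standard LSTM cell (the sparsity level p and the mask placement are illustrative, not the paper's exact construction):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RCLSTMCell:
    """LSTM cell whose weights are sparsified by fixed random binary masks."""

    def __init__(self, input_size, hidden_size, p=0.5, seed=0):
        rng = np.random.default_rng(seed)
        n = input_size + hidden_size
        # One weight matrix per gate: input (i), forget (f), cell (g), output (o).
        self.W = {g: rng.standard_normal((n, hidden_size)) * 0.1 for g in "ifgo"}
        self.b = {g: np.zeros(hidden_size) for g in "ifgo"}
        # Fixed binary masks: each connection is kept with probability p.
        self.M = {g: (rng.random((n, hidden_size)) < p).astype(float) for g in "ifgo"}

    def step(self, x, h, c):
        z = np.concatenate([x, h])
        # Masked matrix products realize the random (sparse) connectivity.
        i = sigmoid(z @ (self.W["i"] * self.M["i"]) + self.b["i"])
        f = sigmoid(z @ (self.W["f"] * self.M["f"]) + self.b["f"])
        g = np.tanh(z @ (self.W["g"] * self.M["g"]) + self.b["g"])
        o = sigmoid(z @ (self.W["o"] * self.M["o"]) + self.b["o"])
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

# Usage: run a toy sequence through the sparse cell.
cell = RCLSTMCell(input_size=4, hidden_size=8, p=0.35)
h = np.zeros(8); c = np.zeros(8)
for x in np.random.default_rng(1).standard_normal((20, 4)):
    h, c = cell.step(x, h, c)
print(h.round(3))
```

    With masks fixed at initialization, the kept-connection count (and hence the multiply count in a sparse implementation) scales linearly with p, which is where the claimed computational saving comes from.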

    Memory for time

    The brain maintains a record of recent events, including information about the time at which events were experienced. We review behavioral and neurophysiological evidence, as well as computational models, to better understand memory for time. Neurophysiologically, populations of neurons that record the time of recent events have been observed in many brain regions. Time cells fire in long sequences after a triggering event, demonstrating memory for the past. Populations of exponentially decaying neurons record past events at many delays by decaying at different rates. Both kinds of representations record distant times with less temporal resolution. The work reviewed here converges on the idea that the brain maintains a representation of past events along a scale-invariant, compressed timeline. Accepted manuscript
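
    The exponentially-decaying-population account can be illustrated with a small numerical sketch, assuming a bank of leaky integrators whose decay rates are spaced geometrically (a common choice for producing a log-compressed, scale-invariant timeline; the specific rates below are illustrative):

```python
import numpy as np

# A population of leaky integrators, one per decay rate s.
# Geometric spacing of rates yields logarithmic compression of past time.
rates = np.geomspace(0.05, 5.0, num=16)   # decay rates in 1/s (illustrative)
state = np.zeros_like(rates)

dt = 0.01
T = 10.0
stimulus_time = 1.0                       # a single input pulse at t = 1 s

for t in np.arange(0.0, T, dt):
    inp = 1.0 if abs(t - stimulus_time) < dt / 2 else 0.0
    # dF/dt = -s * F + input  (each unit decays at its own rate)
    state += dt * (-rates * state + inp)

# Fast-decaying units have forgotten the event, slow ones still carry it,
# so the population jointly encodes "how long ago" -- with resolution that
# degrades for more distant times, as the review describes.
print(np.round(state / state.max(), 3))
```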

    An Attention Free Long Short-Term Memory for Time Series Forecasting

    Deep learning is playing an increasingly important role in time series analysis. We focus on time series forecasting using the attention-free mechanism, a more efficient framework, and propose a new architecture for time series prediction in settings where linear models seem unable to capture the time dependence. The proposed architecture is built from attention-free LSTM layers and outperforms linear models for conditional variance prediction. Our findings confirm the validity of the model, which also improves the prediction capacity of an LSTM while improving the efficiency of the learning task.
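
    The "attention free" component presumably refers to the Attention Free Transformer (AFT) family, where pairwise query-key dot products are replaced by element-wise operations. A minimal sketch of a causal AFT-simple layer, which suits forecasting since each step sees only its past (the paper's exact layer may differ, e.g. by adding learned position biases):

```python
import numpy as np

def aft_simple_causal(Q, K, V):
    """Causal AFT-simple: each position t attends only to positions <= t,
    using element-wise exp(K) weights instead of query-key dot products.
    Runs in O(T*d) via cumulative sums, versus O(T^2*d) for attention.

    Q, K, V: arrays of shape (T, d).
    """
    Kexp = np.exp(K - K.max(axis=0, keepdims=True))  # stabilized exponent
    num = np.cumsum(Kexp * V, axis=0)                # running weighted value sum
    den = np.cumsum(Kexp, axis=0)                    # running normalizer
    gate = 1.0 / (1.0 + np.exp(-Q))                  # sigmoid(Q) gating
    return gate * (num / den)

# Toy usage on a short sequence.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((6, 4)) for _ in range(3))
print(aft_simple_causal(Q, K, V).shape)  # (6, 4)
```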

    Sigmoid Activation-Based Long Short-Term Memory for Time Series Data Classification

    With the increased use of Artificial Intelligence (AI) driven applications, researchers often face challenges in improving the accuracy of data classification models while trading off complexity. In this paper, we address the classification of time series data using the Long Short-Term Memory (LSTM) network while focusing on the activation functions. While existing activation functions such as sigmoid and tanh are used as LSTM internal activations, their customizability remains limited. This motivates us to propose a new family of activation functions, called log-sigmoid, inside the LSTM cell for time series data classification, and to analyze its properties. We also present the use of a linear transformation (e.g., log tanh) of the proposed log-sigmoid activation as a replacement for the traditional tanh function in the LSTM cell. Both the cell activation and the recurrent activation functions inside the LSTM cell are modified with the log-sigmoid activation family while tuning the log bases. Further, we report a comparative performance analysis of the LSTM model using the proposed and the state-of-the-art activation functions on multiple public time series databases.
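
    A minimal sketch of what "tuning the log base" could look like, assuming the family is parameterized as σ_b(x) = 1/(1 + b^(−x)) with a tanh-style linear transformation, mirroring tanh(x) = 2·sigmoid(2x) − 1; this parameterization and the gate layout are assumptions for illustration, not the paper's exact definitions:

```python
import numpy as np

def log_sigmoid(x, base=np.e):
    """Base-b sigmoid: 1 / (1 + b**(-x)) == standard sigmoid(x * ln(b)).
    base > e sharpens the transition; 1 < base < e softens it."""
    return 1.0 / (1.0 + np.power(base, -x))

def log_tanh(x, base=np.e):
    """Linear transformation of the base-b sigmoid onto (-1, 1)."""
    return 2.0 * log_sigmoid(2.0 * x, base) - 1.0

def lstm_step(x, h, c, W, b, base=2.0):
    """One LSTM step with the recurrent (gate) activations replaced by
    log_sigmoid and the cell activations by log_tanh."""
    z = np.concatenate([x, h]) @ W + b          # all four gates at once
    i, f, o, g = np.split(z, 4)                 # gates stacked as [i, f, o, g]
    i, f, o = (log_sigmoid(v, base) for v in (i, f, o))
    c = f * c + i * log_tanh(g, base)
    h = o * log_tanh(c, base)
    return h, c

# Toy usage: input size 4, hidden size 8.
rng = np.random.default_rng(0)
W = rng.standard_normal((12, 32)) * 0.1
b = np.zeros(32)
h = np.zeros(8); c = np.zeros(8)
h, c = lstm_step(rng.standard_normal(4), h, c, W, b, base=2.0)
print(h.round(3))
```

    Since σ_b(x) = sigmoid(x·ln b), the base effectively rescales the pre-activation, so sweeping bases amounts to sweeping gate sharpness, which is presumably what the comparative analysis tunes.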

    Utilization of Historic Information in an Optimisation Task

    One of the basic components of a discrete model of motor behavior and decision making, which describes tracking and supervisory control in unitary terms, is assumed to be a filtering mechanism tied to the representational principles of human memory for time-series information. In a series of experiments, subjects used the time-series information with certain significant limitations: there is a range effect; asymmetric distributions seem to be recognized, but it does not seem possible to optimize performance based on skewed distributions. Thus there is a transformation of the displayed data between the perceptual system and the representation in memory that involves a loss of information. This rules out a number of representational principles for time-series information in memory and fits well into the framework of a comprehensive discrete model for control of complex systems, modelling continuous control (tracking), discrete responses, supervisory behavior, and learning.

    Disambiguating past events: accurate source memory for time and context depends on different retrieval processes

    Participant payment was provided by the School of Psychology and Neuroscience ResPay scheme. Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory, which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently, though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans’ memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual-process retrieval processes, using an immersive virtual environment paired with a ‘Remember-Know’ memory task. Experiment 1 (n = 28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n = 24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual-process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of the memory trace at retrieval: a measure ideally suited to trace-strength interrogation using familiarity, as is typically conceptualised within the dual-process framework. Postprint. Peer reviewed.

    The effects of aging on location-based and distance-based processes in memory for time
