
    Live demonstration: Neuro-inspired system for realtime vision tilt correction

    Correcting the tilt of digital images requires large amounts of memory and high computational resources, and typically takes a considerable amount of time. This demonstration shows how the tilt of a spike-based silicon retina, the dynamic vision sensor (DVS), can be corrected in real time using a commercial accelerometer. The DVS output is a stream of spikes encoded using the address-event representation (AER). Event-based processing operates by transforming the DVS output addresses in real time. Taking advantage of this DVS feature, we present an AER-based layer able to correct the DVS tilt in real time, using a high-speed algorithmic mapping layer that introduces minimal latency into the system. A co-design platform (the AER-Robot platform), based on a Xilinx Spartan 3 FPGA and an 8051 USB microcontroller, has been used to implement the system.
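    A minimal sketch of the address-remapping idea, in Python rather than on the FPGA: each incoming AER event's (x, y) address is rotated by the tilt angle reported by the accelerometer before being forwarded. The event tuple format, the 128x128 address space, and the per-event floating-point rotation are illustrative assumptions; the demonstration itself realizes the mapping as a high-speed hardware layer.

```python
import math

SENSOR_SIZE = 128  # assumed DVS128-class address space

def correct_tilt(events, tilt_deg):
    """Remap AER event addresses by rotating them around the sensor centre.

    `events` is an iterable of (x, y, polarity, timestamp) tuples and
    `tilt_deg` the tilt angle read from the accelerometer. Both the event
    format and the software rotation are illustrative assumptions.
    """
    c, s = math.cos(math.radians(tilt_deg)), math.sin(math.radians(tilt_deg))
    half = (SENSOR_SIZE - 1) / 2.0
    for x, y, pol, ts in events:
        xr = c * (x - half) - s * (y - half) + half
        yr = s * (x - half) + c * (y - half) + half
        # Drop events whose corrected address falls outside the sensor.
        if 0 <= round(xr) < SENSOR_SIZE and 0 <= round(yr) < SENSOR_SIZE:
            yield int(round(xr)), int(round(yr)), pol, ts
```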

    Interference in Language Processing Reflects Direct-Access Memory Retrieval: Evidence from Drift-Diffusion Modeling

    Many studies on memory retrieval in language processing have identified similarity-based interference as a key determinant of comprehension. The broad consensus is that similarity-based interference reflects erroneous retrieval of a non-target item that matches some of the retrieval cues. However, the mechanisms responsible for such effects remain debated. Activation-based models of retrieval (e.g., Lewis & Vasishth, 2005) claim that any differences in processing difficulty due to interference in standard RT measures and judgments reflect differences in the speed of retrieval (i.e., the amount of time it takes to retrieve a memory item). But this claim is inconsistent with empirical data showing that retrieval time is constant due to the use of a direct-access procedure (e.g., McElree, 2000, 2006). According to direct-access accounts, differences in judgments or RTs due to interference arise from differences in the quality or availability of the candidate memory representations, rather than differences in retrieval speed. To adjudicate between these accounts, we employed a novel methodology that combined a high-powered (N = 200) two-alternative forced-choice study on interference effects with drift-diffusion modeling to dissociate the effects of retrieval speed and representation quality. Results showed that the presence of a distractor that matched some of the retrieval cues lowered asymptotic accuracy, reflecting an effect of representation quality, but did not affect retrieval speed, consistent with a direct-access procedure. These results suggest that the differences observed in RT and judgment studies reflect differences in the ease of integrating the retrieved item back into the current processing stream, rather than differences in retrieval speed.
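    The model family being fit is a standard drift-diffusion model of two-alternative choice. The sketch below, with illustrative parameter values rather than the estimates from the study, simulates trials and summarizes accuracy and mean RT for two drift rates; the study's analysis uses fits of such parameters to tease apart retrieval speed from representation quality.

```python
import random

def simulate_ddm(v, a=1.0, t0=0.3, z=0.5, sigma=1.0, dt=0.001):
    """Simulate one drift-diffusion trial.

    v: drift rate, a: boundary separation, t0: non-decision time,
    z: relative start point. Returns (choice, RT). Parameter names follow
    common DDM conventions; the values are illustrative only.
    """
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + random.gauss(0.0, sigma * dt ** 0.5)
        t += dt
    return (1 if x >= a else 0), t0 + t

def summarise(v, n=2000):
    """Accuracy and mean RT over n simulated trials at drift rate v."""
    trials = [simulate_ddm(v) for _ in range(n)]
    acc = sum(choice for choice, _ in trials) / n
    mean_rt = sum(rt for _, rt in trials) / n
    return acc, mean_rt

# Hypothetical comparison of a no-distractor and a distractor condition.
print(summarise(v=1.5))
print(summarise(v=0.8))
```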

    GreedyDual-Join: Locality-Aware Buffer Management for Approximate Join Processing Over Data Streams

    We investigate adaptive buffer management techniques for approximate evaluation of sliding window joins over multiple data streams. In many applications, data stream processing systems have limited memory or have to deal with very high-speed data streams. In both cases, computing the exact results of joins between these streams may not be feasible, mainly because the buffers used to compute the joins contain a much smaller number of tuples than the sliding windows themselves. Therefore, a stream buffer management policy is needed in such cases. We show that the buffer replacement policy is an important determinant of the quality of the produced results. To that end, we propose GreedyDual-Join (GDJ), an adaptive and locality-aware buffering technique for managing these buffers. GDJ exploits the temporal correlations (at both long and short time scales) that we found to be prevalent in many real data streams. We note that our algorithm is readily applicable to multiple data streams and multiple joins and requires almost no additional system resources. We report the results of an experimental study using both synthetic and real-world data sets. Our results demonstrate the superiority and flexibility of our approach when contrasted with other recently proposed techniques.
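    The abstract does not spell out GDJ's priority function, so the following is only a generic GreedyDual-style replacement sketch for a bounded join buffer: each buffered tuple carries a priority built from a caller-supplied benefit plus a global aging offset, and the lowest-priority tuple is evicted when the buffer is full. The class name, the benefit signal, and the lazy heap cleanup are assumptions for illustration.

```python
import heapq

class GreedyDualBuffer:
    """Bounded join buffer with GreedyDual-style eviction (illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.L = 0.0          # global aging offset, raised on each eviction
        self.heap = []        # (H, insertion counter, key), lazily cleaned
        self.prio = {}        # key -> current H, to detect stale heap entries
        self.tuples = {}      # key -> buffered payload
        self.counter = 0      # tie-breaker so the heap never compares payloads

    def _push(self, key, benefit):
        h = self.L + benefit
        self.prio[key] = h
        self.counter += 1
        heapq.heappush(self.heap, (h, self.counter, key))

    def insert(self, key, payload, benefit=1.0):
        while len(self.tuples) >= self.capacity:
            h, _, victim = heapq.heappop(self.heap)
            # Evict only if this heap entry is still the victim's live entry.
            if victim in self.tuples and self.prio.get(victim) == h:
                del self.tuples[victim]
                del self.prio[victim]
                self.L = h    # inflate the offset so surviving tuples age
        self.tuples[key] = payload
        self._push(key, benefit)

    def probe(self, key, benefit=1.0):
        """Look up a join partner; a hit refreshes the tuple's priority."""
        if key in self.tuples:
            self._push(key, benefit)
            return self.tuples[key]
        return None
```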

    Scalable and fault-tolerant data stream processing on multi-core architectures

    With increasing data volumes and velocity, many applications are shifting from the classical “process-after-store” paradigm to a stream processing model: data is produced and consumed as continuous streams. Stream processing captures latency-sensitive applications as diverse as credit card fraud detection and high-frequency trading. These applications are expressed as queries of algebraic operations (e.g., aggregation) over the most recent data using windows, i.e., finite evolving views over the input streams. To guarantee correct results, streaming applications require precise window semantics (e.g., temporal ordering) for operations that maintain state. While high processing throughput and low latency are performance desiderata for stateful streaming applications, achieving both poses challenges. Computing the state of overlapping windows causes redundant aggregation operations: incremental execution (i.e., reusing previous results) reduces latency but prevents parallelization; at the same time, parallelizing window execution for stateful operations with precise semantics demands ordering guarantees and state access coordination. Finally, streams and state must be recovered to produce consistent and repeatable results in the event of failures. Given the rise of shared-memory multi-core CPU architectures and high-speed networking, we argue that it is possible to address these challenges in a single node without compromising window semantics, performance, or fault-tolerance. In this thesis, we analyze, design, and implement stream processing engines (SPEs) that achieve high performance on multi-core architectures. To this end, we introduce new approaches for in-memory processing that address the previous challenges: (i) for overlapping windows, we provide a family of window aggregation techniques that enable computation sharing based on the algebraic properties of aggregation functions; (ii) for parallel window execution, we balance parallelism and incremental execution by developing abstractions for both and combining them into a novel design; and (iii) for reliable single-node execution, we enable strong fault-tolerance guarantees without sacrificing performance by reducing the required disk I/O bandwidth using a novel persistence model. We combine the above to implement an SPE that processes hundreds of millions of tuples per second with sub-second latencies. These results reveal the opportunity to reduce resource and maintenance footprint by replacing cluster-based SPEs with single-node deployments.
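    A textbook illustration of the computation sharing that invertible aggregation functions allow, assuming a count-based window and a simple sum (the thesis's actual techniques generalize well beyond this): evicted tuples are subtracted from the running total, so each window slide costs O(1) instead of a full rescan.

```python
from collections import deque

class SlidingSum:
    """Incremental sum over a count-based sliding window (illustrative)."""

    def __init__(self, size):
        self.size = size
        self.window = deque()
        self.total = 0.0

    def insert(self, value):
        """Add a new tuple, evict the oldest if needed, return the aggregate."""
        self.window.append(value)
        self.total += value
        if len(self.window) > self.size:
            # Sum is invertible, so the evicted value is subtracted
            # instead of recomputing the whole window.
            self.total -= self.window.popleft()
        return self.total

stream = [3, 1, 4, 1, 5, 9, 2, 6]
agg = SlidingSum(size=4)
print([agg.insert(v) for v in stream])
```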

    Case report: Neural timing deficits prevalent in developmental disorders, aging, and concussions remediated rapidly by movement discrimination exercises

    Background: Substantial evidence that neural timing deficits are prevalent in developmental disorders, aging, and concussions resulting from a traumatic brain injury (TBI) is presented. Objective: When these timing deficits are remediated using low-level movement-discrimination training, high-level cognitive skills, including reading, attention, processing speed, problem solving, and working memory, improve rapidly and effectively. Methods: In addition to the substantial evidence published previously, new evidence based on a neural correlate, magnetoencephalography (MEG) physiological recordings, on an adult dyslexic, and neuropsychological tests on this dyslexic subject and an older adult, was collected before and after 8 weeks of contrast sensitivity-based left–right movement-discrimination exercises were completed. Results: The neuropsychological tests found large improvements in reading, selective and sustained attention, processing speed, working memory, and problem-solving skills, never before found after such a short period of training. Moreover, these improvements were still present 4 years later for the older adult. Substantial MEG signal increases in visual Motion, Attention, and Memory/Executive Control Networks were observed following training on contrast sensitivity-based left–right movement-discrimination. Improving the function of magnocells using figure/ground movement-discrimination at both low and high levels in the dorsal stream: (1) improved both feedforward and feedback pathways to modulate attention by enhancing coupled theta/gamma and alpha/gamma oscillations, (2) is adaptive, and (3) incorporated cycles of feedback and reward at multiple levels. Conclusion: What emerges from multiple studies is the essential role of timing deficits in the dorsal stream that are prevalent in developmental disorders like dyslexia, in aging, and following a TBI. Training visual dorsal stream function at low levels significantly improved high-level cognitive functions, including processing speed, selective and sustained attention, both auditory and visual working memory, problem solving, and reading fluency. A paradigm shift for treating cognitive impairments in developmental disorders, aging, and concussions is crucial: remediating the neural timing deficits of low-level dorsal pathways, thereby improving both feedforward and feedback pathways, before cognitive exercises to improve specific cognitive skills provides the most rapid and effective method to improve cognitive skills. Moreover, this adaptive training with substantial feedback shows cognitive transfer to tasks not trained on, significantly improving a person’s quality of life rapidly and effectively.

    High-Efficient Parallel CAVLC Encoders on Heterogeneous Multicore Architectures

    This article presents two highly efficient parallel realizations of context-based adaptive variable length coding (CAVLC) on heterogeneous multicore processors. By optimizing the architecture of the CAVLC encoder, three kinds of dependences are eliminated or weakened: the context-based data dependence, the memory-access dependence, and the control dependence. The CAVLC pipeline is divided into three stages: two scans, coding, and lag packing, and is implemented on two typical heterogeneous multicore architectures. One is a block-based SIMD parallel CAVLC encoder on the multicore stream processor STORM. The other is a component-oriented SIMT parallel encoder on a massively parallel GPU architecture. Both exploit rich data-level parallelism. Experimental results show that, compared with the CPU version, a speedup of more than 70 times is obtained on STORM and over 50 times on the GPU. The STORM implementation achieves real-time processing for 1080p @ 30 fps, and the GPU-based version satisfies the requirements for 720p real-time encoding. The throughput of the presented CAVLC encoders is more than 10 times higher than that of published software encoders on DSP and multicore platforms.
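    A toy sketch of the stage decomposition described above, not of actual CAVLC entropy coding: per-block statistics gathering (the scan) and per-block variable-length coding carry no cross-block data dependence and can run data-parallel, while the final lag-packing pass, whose bit offsets depend on all earlier block lengths, stays sequential. The block contents, the coding rule, and the use of Python processes are illustrative assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

def scan_block(block):
    """Scan stage: gather per-block statistics independently.

    Real CAVLC scans collect coefficient and zero counts; here we simply
    count non-zero samples so each block needs no cross-block information.
    """
    return sum(1 for v in block if v != 0)

def code_block(args):
    """Coding stage: emit a variable-length bit string per block.

    Stands in for CAVLC table lookups; output length varies per block,
    which is why a separate packing stage is needed.
    """
    block, nonzeros = args
    return format(nonzeros, "b") + "".join("1" if v else "0" for v in block)

def lag_pack(bitstrings):
    """Lag-packing stage: concatenate variable-length outputs sequentially.

    Each block's bit offset depends on all earlier lengths, so this stage
    is kept serial while the scan and coding stages run data-parallel.
    """
    offsets, pos = [], 0
    for bits in bitstrings:
        offsets.append(pos)
        pos += len(bits)
    return "".join(bitstrings), offsets

if __name__ == "__main__":
    blocks = [[0, 3, 0, 1], [2, 0, 0, 0], [1, 1, 1, 0]]
    with ProcessPoolExecutor() as pool:
        counts = list(pool.map(scan_block, blocks))
        coded = list(pool.map(code_block, zip(blocks, counts)))
    bitstream, offsets = lag_pack(coded)
    print(bitstream, offsets)
```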