
    Phase ambiguity resolution for offset QPSK modulation systems

    A demodulator for Offset Quaternary Phase Shift Keyed (OQPSK) signals modulated with two words resolves the eight possible combinations of phase ambiguity that may produce data errors. The received I_R and Q_R data are first processed in an integrated carrier loop/symbol synchronizer using a digital Costas loop with matched filters, which corrects four of the eight possible phase-lock errors. The remaining four are corrected by a phase ambiguity resolver that detects the words and, as required, reverses the received I_R and Q_R data channels, inverts (complements) the I_R and/or Q_R data, or simply complements the I_R and Q_R data for systems using non-transparent codes that do not have rotation-direction ambiguity.
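
    As a generic illustration only (not the patented scheme), one way to resolve such an ambiguity is to test every candidate correction against a known word. The sketch below assumes the received I_R and Q_R channels are ±1 NumPy arrays that begin with a known word (SYNC_WORD is a placeholder), and it interprets "reversing" the channels as an I/Q swap.

        import numpy as np

        SYNC_WORD = np.array([1, 1, -1, 1, -1, -1, 1, -1])   # placeholder known word

        def candidate_corrections(i_r, q_r):
            """Yield the eight candidate interpretations: optional I/Q swap times sign flips."""
            for swap in (False, True):
                a, b = (q_r, i_r) if swap else (i_r, q_r)
                for si in (1, -1):
                    for sq in (1, -1):
                        yield si * a, sq * b

        def resolve_ambiguity(i_r, q_r):
            """Keep the candidate whose I channel best matches the known word."""
            best, best_score = None, -np.inf
            for i_c, q_c in candidate_corrections(i_r, q_r):
                score = float(np.dot(i_c[: len(SYNC_WORD)], SYNC_WORD))
                if score > best_score:
                    best, best_score = (i_c, q_c), score
            return best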

    An Error-Correcting Line Code for a HEP Rad-Hard Multi-GigaBit Optical Link

    This paper presents a line encoding scheme designed for the GBT ASIC, a transceiver under development for a multigigabit optical link upgrade of the TTC system. A general overview of issues related to optical links placed in radiation environments is given, and the required properties of the line code are discussed. A scheme that preserves the DC-balance of the line and allows forward error correction is proposed. It is implemented through the concatenation of scrambling, a Reed-Solomon error-correction scheme, and the addition of an error-tolerant DC-balanced header. The properties of the code are verified for two different interleaving options, which achieve different error-correction capability at different implementation costs. One of the two options was implemented in a fully digital ASIC fabricated in a 0.13 μm CMOS technology, and ASIC implementation details and test results are reported.
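
    As a rough illustration of the concatenation described above (scrambling, Reed-Solomon coding, then a DC-balanced header), the sketch below strings the three steps together. The scrambler taps, the header byte, the RS parameters, and the use of the third-party reedsolo package are assumptions made for illustration, not the actual GBT design.

        from reedsolo import RSCodec   # third-party RS codec, used here as a stand-in

        HEADER = bytes([0b01011010])   # placeholder header byte with equal numbers of 0s and 1s
        rs = RSCodec(4)                # example: 4 parity symbols per block

        def scramble(payload: bytes, seed: int = 0x5A) -> bytes:
            """Toy additive scrambler: XOR the payload with an 8-bit LFSR keystream."""
            out, state = bytearray(), seed
            for byte in payload:
                for _ in range(8):                      # advance the LFSR by one byte
                    fb = ((state >> 7) ^ (state >> 5)) & 1
                    state = ((state << 1) | fb) & 0xFF
                out.append(byte ^ state)
            return bytes(out)

        def encode_frame(payload: bytes) -> bytes:
            """Scramble, add Reed-Solomon parity, then prepend the balanced header."""
            scrambled = scramble(payload)
            protected = rs.encode(scrambled)            # appends RS parity symbols
            return HEADER + bytes(protected)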

    Temporal sampling in vision and the implications for dyslexia

    It has recently been suggested that dyslexia may manifest as a deficit in the neural synchrony underlying language-based codes (Goswami, 2011), such that the phonological deficits apparent in dyslexia occur as a consequence of poor synchronisation of oscillatory brain signals to the sounds of language. There is compelling evidence to support this suggestion, and it provides an intriguing new development in understanding the aetiology of dyslexia. It is undeniable that dyslexia is associated with poor phonological coding; however, reading is also a visual task, and dyslexia has also been associated with poor visual coding, particularly in visuo-spatial sensitivity. It has been hypothesised for some time that specific frequency oscillations underlie visual perception. Although little research has looked specifically at dyslexia and cortical frequency oscillations, it is possible to draw on converging evidence from visual tasks to speculate that similar deficits could occur in temporal frequency oscillations in the visual domain in dyslexia. Thus, the plausibility of a visual correlate of the Temporal Sampling Framework is considered here, leading to specific hypotheses and predictions for future research. A common underlying neural mechanism in dyslexia may subsume qualitatively different manifestations of reading difficulty, which is consistent with the heterogeneity of the disorder and may open the door to a new generation of exciting research.

    Brain at work : time, sparseness and superposition principles

    Many studies have explored the mechanisms through which the brain encodes sensory inputs to allow coherent behavior. The brain could identify stimuli via a hierarchical stream of activity leading to a cardinal neuron responsive to one particular object. The opportunity to record from numerous neurons has offered investigators the capability of examining the functioning of many cells simultaneously. These approaches suggest encoding processes that are parallel rather than serial. Binding the many features of a stimulus may be accomplished through an induced synchronization of cells' action potentials. These interpretations are supported by experimental data and offer many advantages, but also several shortcomings. We argue for a coding mechanism based on a sparse synchronization paradigm. We show that synchronization of spikes is a fast and efficient mode of encoding the representation of objects based on feature binding. We introduce the view that sparse synchronization coding presents an interesting avenue for probing brain encoding mechanisms, as it allows the functional establishment of multilayered and time-conditioned neuronal networks, or multislice networks. We propose a model based on integrate-and-fire spiking neurons.
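
    Since the abstract mentions a model built from integrate-and-fire spiking neurons, a minimal leaky integrate-and-fire unit is sketched below purely for orientation; the parameter values are generic textbook choices, not those of the paper's model.

        import numpy as np

        def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_rest=-70e-3,
                         v_thresh=-50e-3, v_reset=-70e-3, r_m=1e8):
            """Return the membrane-potential trace and spike times for an input current (in amps)."""
            v = v_rest
            trace, spikes = [], []
            for step, i_ext in enumerate(input_current):
                v += (-(v - v_rest) + r_m * i_ext) * dt / tau   # leaky integration of the input
                if v >= v_thresh:                               # threshold crossing: emit a spike
                    spikes.append(step * dt)
                    v = v_reset
                trace.append(v)
            return np.array(trace), spikes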

    Context-Sensitive Binding by the Laminar Circuits of V1 and V2: A Unified Model of Perceptual Grouping, Attention, and Orientation Contrast

    A detailed neural model is presented of how the laminar circuits of visual cortical areas V1 and V2 implement context-sensitive binding processes such as perceptual grouping and attention. The model proposes how specific laminar circuits allow the responses of visual cortical neurons to be determined not only by the stimuli within their classical receptive fields, but also to be strongly influenced by stimuli in the extra-classical surround. This context-sensitive visual processing can greatly enhance the analysis of visual scenes, especially those containing targets that are low contrast, partially occluded, or crowded by distractors. We show how interactions of feedforward, feedback and horizontal circuitry can implement several types of contextual processing simultaneously, using shared laminar circuits. In particular, we present computer simulations which suggest how top-down attention and preattentive perceptual grouping, two processes that are fundamental for visual binding, can interact, with attentional enhancement selectively propagating along groupings of both real and illusory contours, thereby showing how attention can selectively enhance object representations. These simulations also illustrate how attention may have a stronger facilitatory effect on low contrast than on high contrast stimuli, and how pop-out from orientation contrast may occur. The specific functional roles which the model proposes for the cortical layers allow several testable neurophysiological predictions to be made. The results presented here simulate only the boundary grouping system of adult cortical architecture. However, we also discuss how this model contributes to a larger neural theory of vision which suggests how intracortical and intercortical feedback help to stabilize development and learning within these cortical circuits. Although feedback plays a key role, fast feedforward processing is possible in response to unambiguous information. Model circuits are capable of synchronizing quickly, but context-sensitive persistence of previous events can influence how synchrony develops. Although these results focus on how the interblob cortical processing stream controls boundary grouping and attention, related modeling of the blob cortical processing stream suggests how visible surfaces are formed, and modeling of the motion stream suggests how transient responses to scenic changes can control long-range apparent motion and also attract spatial attention. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI 94-01659, IRI 97-20333); ONR (N00014-92-J-1309, N00014-95-1-0657).

    FleXR: A System Enabling Flexibly Distributed Extended Reality

    Extended reality (XR) applications require computationally demanding functionalities with low end-to-end latency and high throughput. To enable XR on commodity devices, a number of distributed-systems solutions enable offloading of XR workloads to remote servers. However, they make a priori decisions regarding the offloaded functionalities based on assumptions about operating factors, and their benefits are restricted to specific deployment contexts. To realize the benefits of offloading in various distributed environments, we present a distributed stream processing system, FleXR, which is specialized for real-time, interactive workloads and enables flexible distribution of XR functionalities. In building FleXR, we identified and resolved several issues in presenting XR functionalities as distributed pipelines. FleXR provides a framework for flexible distribution of XR pipelines while streamlining the development and deployment phases. We evaluate FleXR with three XR use cases in four different distribution scenarios. In the results, the best-case distribution scenario shows up to 50% lower end-to-end latency and 3.9x higher pipeline throughput compared to alternatives. Comment: 11 pages, 11 figures, conference paper.
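
    As a rough sketch of the idea of distributing pipeline stages flexibly, the snippet below tags each stage of a hypothetical XR pipeline with a placement that a deployment could change without touching the stage code; it is an assumption-laden illustration, not the FleXR API.

        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Stage:
            """One processing kernel of a hypothetical XR pipeline."""
            name: str
            fn: Callable
            placement: str   # "local" or "remote"; chosen per deployment, not hard-coded

        def run_pipeline(stages: List[Stage], frame):
            """Run the stages in order; a real distributed runtime would ship the
            'remote' stages to a server and stream intermediate results over the network."""
            data = frame
            for stage in stages:
                data = stage.fn(data)   # in this sketch every stage simply runs in-process
            return data

        # The same pipeline can be re-deployed by flipping placements.
        pipeline = [
            Stage("capture", lambda f: f, "local"),
            Stage("object_detection", lambda f: f, "remote"),
            Stage("render_overlay", lambda f: f, "local"),
        ]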

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, such as hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary; this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute these experiences.

    Synchronizing sound from different devices over a TCP network

    Nowadays, audio can be sent over the Internet for multiple purposes, such as telephony, audio broadcasting, or teleconferencing. The difficulty arises when sound from different sources must be synchronized, because the network may lose packets and introduce delay in delivery, and because the sound cards involved may run at slightly different speeds. In this project, two computers emit sound (one simulating the left channel (mono) of a stereo signal, the other the right channel) and are connected to a third computer over a TCP network. The third computer must receive the sound from both machines and reproduce it properly through a speaker, without delay. The main goal of the project is therefore to synchronize multi-track sound over a network. TCP networks introduce latency into data transfers, and streamed audio suffers from two problems: a delay and an offset between the channels. This project explores the causes of latency, investigates the effect of the inter-channel offset, and proposes a solution for synchronizing the received channels. Good synchronization of sound is essential at a time when many audio applications are being developed: when two devices send audio over a network, the multi-track sound arrives at the third computer with an offset that degrades the listening experience. This project dealt with that offset and achieved good synchronization of the multi-track sound, producing a good result for the listener. This was possible thanks to dividing the project into several steps, which gave a clear view of the problem, good scalability, and control of the latency at all times. As shown in chapter 4 of the project, a lack of synchronization between the two channels of more than c. 100 μs is audible to the listener.
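
    One common way to remove such an inter-channel offset, sketched below purely for illustration (it is not necessarily the method used in the project), is to estimate the lag between the two received channels by cross-correlation and trim the leading samples of the lagging channel.

        import numpy as np

        def estimate_offset(left, right):
            """Return how many samples `left` lags behind `right` (negative if it leads)."""
            corr = np.correlate(left, right, mode="full")
            return int(np.argmax(corr)) - (len(right) - 1)

        def align(left, right):
            """Trim the lagging channel so both channels start together."""
            lag = estimate_offset(left, right)
            if lag > 0:              # left lags behind right: drop left's leading samples
                left = left[lag:]
            elif lag < 0:            # right lags behind left: drop right's leading samples
                right = right[-lag:]
            n = min(len(left), len(right))
            return left[:n], right[:n]

        # At 48 kHz one sample lasts about 20.8 us, so an offset of only a few
        # samples already approaches the c. 100 us audibility threshold cited above.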