
    Anatomical pathways for auditory memory II: information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10–20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30–40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may differ from those in vision. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory reflects the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex (EC), and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.

    The biological cost of consciousness

    Some philosophers maintain that consciousness as subjective experience has no biological function. However, conscious brain events seem very different from unconscious ones. The cortex and thalamus support the reportable qualitative contents of consciousness. Subcortical structures like the cerebellum do not. Likewise, attended sensory stimuli are typically reportable as conscious, while memories of those stimuli are not so reportable until they are specifically recalled. 

Reports of conscious experiences in normal humans always involve subjectivity and an implicit observing ego. Unconscious brain events are not reportable, even under optimal conditions of report. While there are claimed exceptions to these points, they are rare or poorly validated. 

Normal consciousness also implies high availability (rapid conscious access) of the questions routinely asked of neurological patients in the Mental Status Examination, such as common sense features of personal identity, time, place, and social context. Along with “current concerns,” recent conscious contents, and the like, these contents correspond to high frequency items in working memory. While working memory contents are not immediately conscious, they can be rapidly recalled to consciousness. 

The anatomy and physiology of reportable conscious sensorimotor contents are ultraconserved over perhaps 200 million years of mammalian evolution. By comparison, full-fledged language is thought to have arisen some 100,000 years ago in Homo sapiens, while writing, which enables accelerated cultural development, dates back between 2.5 and 6 millennia. Contrary to some claims, therefore, conscious waking precedes language by hundreds of millions of years. 

Like other major adaptations, conscious and unconscious brain events have distinctive biological pros and cons, involving information-processing efficiency, metabolic costs and benefits, and behavioral trade-offs. The well-known momentary limited capacity of conscious contents is an example of an information-processing cost, while the very large and energy-hungry corticothalamic system makes costly metabolic demands. 

After a century of scientific neglect, fundamental concepts like “conscious,” “unconscious,” “voluntary” and “non-voluntary” are still vitally important, because they refer to major biopsychological phenomena that otherwise are difficult to discuss. 

    Evaluation of GPU/CPU Co-Processing Models for JPEG 2000 Packetization

    With the bottom-line goal of increasing the throughput of a GPU-accelerated JPEG 2000 encoder, this paper evaluates whether the post-compression rate control and packetization routines should be carried out on the CPU or on the GPU. Three co-processing models that differ in how the workload is split among the CPU and GPU are introduced. Both routines are discussed and algorithms for executing them in parallel are presented. Experimental results for compressing a detail-rich UHD sequence to 4 bits/sample indicate speed-ups of 200x for the rate control and 100x for the packetization compared to the single-threaded implementation in the commercial Kakadu library. These two routines executed on the CPU take 4x as long as all remaining coding steps on the GPU and therefore present a bottleneck. Even if the CPU bottleneck could be avoided with multi-threading, it is still beneficial to execute all coding steps on the GPU as this minimizes the required device-to-host transfer and thereby speeds up the critical path from 17.2 fps to 19.5 fps for 4 bits/sample and to 22.4 fps for 0.16 bits/sample.
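The bottleneck arithmetic behind this abstract can be illustrated with a toy throughput model. The 4x CPU-to-GPU ratio is taken from the text above; the absolute timings and the function name are assumptions for illustration only, not the paper's measurements.

```python
# Toy throughput model for a two-stage CPU/GPU encoder pipeline.
# The 4x CPU:GPU time ratio is from the abstract; 10 ms is an assumed value.

def throughput_fps(gpu_ms: float, cpu_ms: float, pipelined: bool) -> float:
    """Frames per second for one frame's worth of work.

    Serial: every frame pays gpu_ms + cpu_ms in sequence.
    Pipelined: the stages overlap across frames, so throughput is
    bounded by the slowest stage alone.
    """
    frame_ms = max(gpu_ms, cpu_ms) if pipelined else gpu_ms + cpu_ms
    return 1000.0 / frame_ms

gpu_ms = 10.0        # all coding steps on the GPU (assumed)
cpu_ms = 4 * gpu_ms  # rate control + packetization on the CPU (4x, per abstract)

serial = throughput_fps(gpu_ms, cpu_ms, pipelined=False)    # 20.0 fps
overlap = throughput_fps(gpu_ms, cpu_ms, pipelined=True)    # 25.0 fps
```

Even with perfect pipelining, throughput stays bounded by the slower CPU stage, which is why moving both routines to the GPU pays off in the paper's setting.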

    Real-Time Data Processing With Lambda Architecture

    Data has evolved immensely in recent years in type, volume, and velocity, and several frameworks exist to handle big data applications. The project focuses on the Lambda Architecture (LA) proposed by Marz and its application to real-time data processing. The architecture is a solution that unites the benefits of the batch and stream processing techniques: data can be historically processed with high precision and involved algorithms, without loss of short-term information, alerts, and insights. The Lambda Architecture can serve a wide range of use cases and workloads while withstanding hardware and human mistakes. The layered architecture enhances loose coupling and flexibility in the system. This is a huge benefit that allows understanding the trade-offs and applying various tools and technologies across the layers. There has been an advancement in the approach to building the LA due to improvements in the underlying tools. The project demonstrates a simplified architecture for the LA that is maintainable.
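The batch/speed/serving split that the abstract describes can be sketched in a few lines. This is a minimal in-memory illustration of the pattern; the event shape and names below are assumptions, not taken from the project.

```python
# Minimal in-memory sketch of the three Lambda Architecture layers.
from collections import Counter

master_dataset = []        # batch layer input: immutable, append-only event log
realtime_view = Counter()  # speed layer: incremental view over recent events

def ingest(event):
    """New events feed both the batch store and the speed layer."""
    master_dataset.append(event)
    realtime_view[event["key"]] += event["value"]

def batch_recompute():
    """Batch layer: periodically recompute the view from all of history."""
    view = Counter()
    for event in master_dataset:
        view[event["key"]] += event["value"]
    return view

def query(key, batch_view, since_batch):
    """Serving layer: merge the precomputed batch view with recent updates."""
    return batch_view[key] + since_batch[key]

for e in [{"key": "clicks", "value": 3}, {"key": "clicks", "value": 2}]:
    ingest(e)
batch_view = batch_recompute()
realtime_view.clear()  # speed-layer results superseded by the batch run
ingest({"key": "clicks", "value": 1})
print(query("clicks", batch_view, realtime_view))  # 6
```

The append-only master dataset is what provides tolerance to human mistakes: a buggy batch view can always be recomputed from the unmodified log.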

    Conceptual Model for Communication

    A variety of idealized models of communication systems exist, and all may have something in common. Starting with Shannon's communication model and ending with the OSI model, this paper presents progressively more advanced forms of modeling of communication systems by tying communication models together based on the notion of flow. The basic communication process is divided into different spheres (sources, channels, and destinations), each with its own five interior stages: receiving, processing, creating, releasing, and transferring of information. The flow of information is ontologically distinguished from the flow of physical signals; accordingly, Shannon's model, network-based OSI models, and TCP/IP are redesigned.
    Comment: 13 pages, IEEE format. International Journal of Computer Science and Information Security (IJCSIS), November 2009, ISSN 1947-5500, http://sites.google.com/site/ijcsis
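The sphere-and-stage flow described above can be rendered as a toy pipeline. The stage behaviours here (identity and simple string transforms) are purely illustrative assumptions, not the paper's formalism.

```python
# Toy flow model: each sphere passes information through five interior
# stages (receive -> process -> create -> release -> transfer), and
# spheres chain source -> channel -> destination.

def sphere(name, process=lambda x: x):
    def run(info):
        received = info                  # receiving
        processed = process(received)    # processing
        created = (name, processed)      # creating: the sphere's own product
        released = created               # releasing
        return released[1]               # transferring to the next sphere
    return run

source = sphere("source", process=str.upper)
channel = sphere("channel")  # identity: carries the flow unchanged
destination = sphere("destination", process=lambda s: s + "!")

message = destination(channel(source("hello")))  # "HELLO!"
```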

    Avoiding unseen obstacles : Subcortical vision is not sufficient to maintain normal obstacle avoidance behaviour during reaching

    Acknowledgement This work was funded by the RS MacDonald Charitable Trust (awarded to C. Hesse in June 2013). T. Schenk was supported by a grant from the German Research Council (DFG – SCHE 735/3-1). The authors would like to thank Dr Stefanie Biehl for her valuable advice on lesion localisation based on the CT and MRI scans of the patients. We would also like to thank all the patients for taking part in our experiments and for giving up so much of their free time. Peer reviewed. Postprint.

    Visual processing of words in a patient with visual form agnosia: A behavioural and fMRI study

    Patient D.F. has a profound and enduring visual form agnosia due to a carbon monoxide poisoning episode suffered in 1988. Her inability to distinguish simple geometric shapes or single alphanumeric characters can be attributed to a bilateral loss of cortical area LO, a loss that has been well established through structural and functional fMRI. Yet despite this severe perceptual deficit, D.F. is able to “guess” remarkably well the identity of whole words. This paradoxical finding, which we were able to replicate more than 20 years following her initial testing, raises the question as to whether D.F. has retained specialized brain circuitry for word recognition that is able to function to some degree without the benefit of inputs from area LO. We used fMRI to investigate this, and found regions in the left fusiform gyrus, left inferior frontal gyrus, and left middle temporal cortex that responded selectively to words. A group of healthy control subjects showed similar activations. The left fusiform activations appear to coincide with the area commonly named the visual word form area (VWFA) in studies of healthy individuals, and appear to be quite separate from the fusiform face area. We hypothesize that there is a route to this area that lies outside area LO, and which remains relatively unscathed in D.F.

    Streams of events and performance of queuing systems: The basic anatomy of arrival/departure processes, when focus is set on autocorrelation

    Judging from the vast number of articles in the field of queuing simulation that assume i.i.d. behaviour in one or more of the stochastic processes used to model the situation at hand, often without much validation, it seems that sequence independence must be a very basic property of many real-life situations, or at least a very sound approximation. On the other hand, however, most actual decision making is based upon information taken from the past - where else! In fact, the only real alternative that comes to my mind is to let a pair of dice fully and completely rule behaviour, but I wonder whether such a decision setup is in consistent use anywhere. So how come sequence independence is so relatively popular in describing real system processes? I can only think of three possible explanations for this dilemma: (1) sequence dependence is present, but is mostly not of a very significant nature; (2) aggregate system behaviour is in general very different from a simple summing-up (even for finite sets of micro-behavioural patterns); and/or (3) it is simply a wrong assumption that in many cases is chosen by mere convention or plain convenience. It is evident that before choosing arrival processes for a simulation study, a thorough preliminary analysis has to be undertaken in order to uncover the basic time-series nature of the interacting processes. Flexible methods for generating streams of autocorrelated variates of any desired distributional type, such as the ARTA method or some autocorrelation-extended descriptive sampling method, can then easily be applied. The results from the Livny, Melamed and Tsiolis (1993) study, as well as the results from this work, both indicate that system performance measures such as average waiting time or average time in system are significantly influenced by the choice between i.i.d. and autocorrelation assumptions. 
A change of plus/minus 35% in performance, but most likely a worsening, is easily observed when comparing even moderate (probably more realistic) autocorrelation assumptions with the traditionally and commonly used i.i.d. assumptions.
    Keywords: Autocorrelation; queuing systems; TES method; ARTA method; descriptive/selective sampling; simulation; job/flow-shop; performance; control
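The ARTA idea referenced above (driving an autocorrelated Gaussian base process through the target distribution's inverse CDF) can be sketched as follows. The parameter values are illustrative assumptions, not figures from the study.

```python
# ARTA-style generation of autocorrelated exponential interarrival times:
# an AR(1) Gaussian base process is mapped through the normal CDF to
# uniforms, then through the inverse exponential CDF. The lag-1
# autocorrelation of the output is somewhat below the base rho of 0.8.

import math
import random

def arta_exponential(n, rho, mean, seed=42):
    rng = random.Random(seed)
    z = rng.gauss(0, 1)
    out = []
    for _ in range(n):
        # AR(1) base process, stationary with N(0, 1) marginals
        z = rho * z + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        u = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # Phi(z): uniform on (0, 1)
        out.append(-mean * math.log(1 - u))         # inverse exponential CDF
    return out

def lag1_autocorr(xs):
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

stream = arta_exponential(20_000, rho=0.8, mean=1.0)
```

Feeding such a stream into a queuing simulation, instead of i.i.d. exponential draws with the same mean, is exactly the kind of comparison the abstract argues should precede any i.i.d. assumption.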