Memory in autism spectrum disorder: a meta-analysis of experimental studies
To address inconsistencies in the literature on memory in Autism Spectrum Disorder (ASD), we report the first meta-analysis of short-term memory (STM) and episodic long-term memory (LTM) in ASD, evaluating the effects of type of material, type of retrieval, and the role of inter-item relations. Analysis of 64 studies comparing individuals with ASD and typical development (TD) showed a larger difficulty in ASD relative to TD individuals for STM (Hedges’ g=-0.53 [95%CI -0.90; -0.16], p=.005, I²=96%) than for LTM (g=-0.30 [95%CI -0.42; -0.17], p<.00001, I²=24%), and a small difficulty in verbal LTM (g=-0.21, p=.01) contrasting with a medium difficulty in visual LTM (g=-0.41, p=.0002). We also found a general diminution in free recall compared to cued recall and recognition (LTM, free recall: g=-0.38, p<.00001, cued recall: g=-0.08, p=.58, recognition: g=-0.15, p=.16; STM, free recall: g=-0.59, p=.004, recognition: g=-0.33, p=.07). We discuss these results in terms of their relation to semantic memory. The limited diminution in verbal LTM and the preserved overall recognition and cued recall (supported retrieval) may result from a greater overlap of these tasks with semantic long-term representations, which are overall preserved in ASD. By contrast, difficulties in STM or free recall may result from less overlap with the semantic system or may involve additional cognitive operations and executive demands. These findings highlight the need to support STM functioning in ASD and acknowledge the potential benefit of using verbal materials at encoding and broader forms of memory support at retrieval to enhance performance.
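Hedges’ g, the effect-size measure reported above, is a standardized mean difference with a small-sample bias correction. A minimal sketch, using made-up group summaries rather than any data from the meta-analysis:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample
    correction J = 1 - 3/(4*df - 1)."""
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)  # bias-correction factor
    return j * d

# Hypothetical ASD vs. TD group summaries (illustrative numbers only):
g = hedges_g(m1=9.0, sd1=2.0, n1=25, m2=10.0, sd2=2.0, n2=25)  # ≈ -0.49
```

A negative g here, as in the abstract, indicates lower performance in the first (ASD) group.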
Putting some order in person memory: memory for (serial) order in impression formation
American Psychological Association (PsycINFO Classification Categories and Codes): 2340 Cognitive Processes; 2343 Learning & Memory; 3000 Social Psychology; 3040 Social Perception and Cognition

The present work examines the representation and retrieval of order information in
person memory. The study of memory for serial order has been absent from the research on
the underlying memory processes of impression formation, which has focused
exclusively on item information. In this work we argue that our understanding of person
memory is incomplete without an account of order and item information representation and
retrieval. According to a chaining hypothesis, we predicted that the organizational processes
involved in impression formation would hinder the ability to represent order by means of
associations between items in successive positions. The first three experiments indicated,
contradicting our hypothesis, that when people form impressions they are able to represent,
retrieve and use order information for order judgements and (serial) recall. The two following
studies, Experiments 4 and 5, directly manipulated the associations that were built in memory
when people formed impressions, to understand whether order information representation
was based on associations between items that appeared in successive serial positions. Results
showed that the ability to use order information was unaffected by changes in the structure of
non-serial inter-item associations, which suggests that order representation is not derived
from mere serial associations. Experiment 6, the last from the set of experiments reported
here, suggested that the representation of order information is less dependent on episodic
memory, in contrast to item information. The findings from this set of 6 experiments
suggested, firstly, that when people form impressions they are able to reconstruct serial
order (even when such order has no meaning), and secondly, that order representation in
person memory seems not to be derived from the inter-item associations formed at encoding.
Finally, an ordinal proposal for the representation and use of order in person memory is
discussed.

The present work was sponsored by a Doctoral Grant (Ref. SFRH/BD/23748/2005) of the Science and Technology Foundation (FCT), Portugal, and the Program POCI2010, which is funded by the Portuguese Ministry of Science, Technology and Higher Education, and the European Social Fund (Community Support Framework III).
Short term memory
The eight experiments reported in this thesis are designed to investigate the idea that in verbal short-term memory (STM) material decays over time and that this decay is prevented by rehearsal. It follows that the capacity of STM, when measured in words, should be inversely proportional to the time taken to rehearse the words. Consequently, subjects should be able to recall more short-duration words than long-duration words. In contrast to this hypothesis is the idea that the capacity of STM is a fixed number of chunks, where chunks are a structural characteristic of the material.
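The decay hypothesis can be caricatured as a toy model: if a list survives only when every item can be re-articulated within a fixed rehearsal window before it decays, span is inversely proportional to spoken word duration. The ~2 s budget below is an assumed illustrative value, not a figure from the thesis:

```python
def predicted_span(word_duration_s, rehearsal_budget_s=2.0):
    """Toy decay model: the whole list must fit inside one rehearsal
    window before its items decay, so predicted span falls as spoken
    word duration rises."""
    return int(rehearsal_budget_s / word_duration_s)

short_span = predicted_span(0.4)  # short words (~0.4 s each) -> 5 items
long_span = predicted_span(0.9)   # long words (~0.9 s each) -> 2 items
```

Under this model, halving articulation time doubles the predicted span, which is the kind of relationship between word duration and recall the first four experiments test.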
The first four experiments are designed so that these alternative hypotheses produce conflicting predictions and, in all cases, the hypotheses derived from decay theory are supported. It is shown that serial recall performance is very well predicted by the time taken to say the words and that the relationship between word duration and recall is of the type predicted by decay theory.
The second set of experiments is based on the assumption that both STM and long-term memory (LTM) contribute to performance in serial recall tasks. The purpose of the experiments is to determine whether it is the STM or the LTM component that is sensitive to word duration. It is predicted, in line with a decay theory of forgetting in STM, that the STM component is sensitive to word duration. The experiments are designed to produce sizable contributions from both stores in order to test this hypothesis. The results support the hypothesis in showing that variables known to affect STM, such as acoustic similarity, interact with word duration, while variables known to affect LTM, such as repeated presentations of the same list, show no such interaction.
The results are interpreted in terms of decay theory, and the different versions of this theory that have been proposed are considered. It is concluded that while no version of the theory is completely adequate, there is no evidence that invalidates the central assumptions, viz. that in STM items are forgotten by decay and that one of the functions of rehearsal is to prevent this decay.
The Sequence of Standard and Target in Pairwise Magnitude Comparisons
The present research introduces the effect of the presentation order of target and standard in paired magnitude comparisons on comparison performance. So far, this effect has been overlooked by most domains of psychological research on comparative thinking. The standard-target-sequence effect (STSE) was demonstrated in eight out of eleven experiments (N = 1,018) presented in the work at hand. Participants repeatedly performed simple magnitude comparisons of two objects (e.g. one-digit numbers or geometric shapes) in various economic and social contexts. Results revealed a stable performance advantage (in terms of speed and accuracy) for trials in which the standard stimulus was encountered before the to-be-judged target stimulus. In three experiments the STSE could not be observed, most likely because of the relative spatial and temporal positions of the stimuli. The diverse findings and experimental setups are discussed, as well as the underlying mechanism, the interaction of the STSE with the SNARC effect for numerical comparisons (Dehaene, Dupoux & Mehler, 1990; Dehaene, Bossini & Giraux, 1993; Fischer, Castel, Dodd & Pratt, 2003), and the ascending order advantage in magnitude judgement tasks (Turconi, Campbell & Seron, 2006; Müller & Schwarz, 2008; Schroeder, Nuerk & Plewnia, 2017). The effect of the order of target and standard on comparison processes had been mentioned in signal detection and stimulus discrimination tasks in psychophysics (the so-called Type B effect, e.g. Dyjas & Ulrich, 2014), while social and cognitive psychologists’ research on judgements of similarity and contrast has provided inconsistent results for the influence of the sequence of standard and target on the comparison process (e.g. Tversky, 1978; Agostinelli, Sherman, Fazio & Hearst, 1986). Researchers on symbolic pairwise comparisons did not report such an effect at all.
The research on the STSE outlined in the work at hand contributes to an interdisciplinary understanding of order effects of target and standard, as well as to the debate on the origins of order effects in general and on the basic principles of comparative thinking.
Droplet digital PCR quantifies host inflammatory transcripts in feces reliably and reproducibly
The gut is the most extensive, interactive, and complex interface between the human host and the environment, and therefore a critical site of immunological activity. Non-invasive methods to assess the host response in this organ are currently lacking; feces are the readily available analyte that has been in proximity to the gut tissue. We applied a method of concentrating host transcripts from fecal specimens using an existing bead-based affinity separation method for nucleic acids, and quantified transcripts using droplet digital PCR (ddPCR) to determine the copy numbers of a variety of key transcripts in the gut immune system. ddPCR compartmentalizes the reaction into small aqueous droplets suspended in oil and counts droplets as either fluorescent or non-fluorescent. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used to normalize transcript concentration. This method was applied to 799 fecal samples from rural Malawian children, and over 20,000 transcript concentrations were quantified. Host mRNA was detected in >99% of samples, and a threshold for target detection was established at an average expression of 0.02 copies of target per copy of GAPDH, above which the correlation coefficient between duplicate measurements is >0.95. Quantities of transcript detected using ddPCR were greater than with standard qPCR. Fecal sample preservation at the time of collection did not require immediate freezing or the addition of buffers or enzymes. Measurements of transcripts encoding immunoactive proteins correlated with a measure of gut inflammation in the study children, thereby substantiating their relevance. This method allows investigators to interrogate gene expression in the gut.
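As an illustration of the droplet-counting step, ddPCR concentration is conventionally recovered with a Poisson correction, since a positive droplet may contain more than one template. The sketch below uses an assumed droplet volume (~0.85 nL) and hypothetical droplet counts, not values from the study:

```python
import math

def ddpcr_copies_per_ul(positive, total, droplet_volume_ul=0.85e-3):
    """Poisson-corrected ddPCR concentration: the mean number of
    templates per droplet is estimated as -ln(fraction of negative
    droplets), then divided by the droplet volume (assumed ~0.85 nL)."""
    frac_negative = 1 - positive / total
    return -math.log(frac_negative) / droplet_volume_ul

# Hypothetical droplet counts for one target transcript and GAPDH:
target = ddpcr_copies_per_ul(positive=300, total=18000)
gapdh = ddpcr_copies_per_ul(positive=9000, total=18000)
normalized = target / gapdh  # ≈ 0.024 copies target/GAPDH, above the 0.02 threshold
```

Normalizing target copies to GAPDH copies, as in the study, cancels the droplet-volume assumption entirely, which is one reason a housekeeping-gene ratio is a robust readout.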
Side information exploitation, quality control and low complexity implementation for distributed video coding
Distributed video coding (DVC) is a video coding methodology that shifts the highly complex motion search components from the encoder to the decoder. Such a coder has a great advantage in encoding speed while still achieving rate-distortion (RD) performance similar to conventional coding solutions. Applications include wireless video sensor networks, mobile video cameras, and wireless video surveillance. Although much progress has been made in DVC over the past ten years, there is still a gap in RD performance between conventional video coding solutions and DVC, and the latest developments are still far from standardization and practical use. The key problems remain in areas such as accurate and efficient side information generation and refinement, quality control between Wyner-Ziv frames and key frames, correlation noise modelling, and decoder complexity.
In this context, this thesis proposes solutions to improve state-of-the-art side information refinement schemes, to enable consistent quality control over decoded frames during the coding process, and to implement a highly efficient DVC codec.
This thesis investigates the impact of reference frames on side information (SI) generation and reveals that reference frames have the potential to be better side information than the extensively used interpolated frames. Based on this investigation, we also propose a motion range prediction (MRP) method to exploit reference frames and precisely guide the statistical motion learning process. Extensive simulation results show that choosing reference frames as SI performs competitively, and sometimes even better than interpolated frames. Furthermore, the proposed MRP method is shown to significantly reduce the decoding complexity without degrading RD performance.
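For readers unfamiliar with interpolated side information, the idea can be sketched as bidirectional block matching between the two key frames surrounding a Wyner-Ziv frame. The toy 1-D version below (assumed block size and search range; a simplification, not the thesis's actual algorithm) finds a symmetric displacement for each block and averages the matched blocks:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def side_information(prev, nxt, block=2, search=2):
    """Bidirectional motion-compensated interpolation on a 1-D 'frame'
    (length assumed divisible by block). For each block of the missing
    Wyner-Ziv frame, find the symmetric displacement mv minimizing the
    mismatch between prev shifted by -mv and nxt shifted by +mv, then
    average the two matched blocks."""
    n = len(prev)
    si = [0.0] * n
    for p in range(0, n, block):
        best_cost, best_mv = None, 0
        for mv in range(-search, search + 1):
            lp, ln = p - mv, p + mv  # matched block positions in prev / nxt
            if lp < 0 or ln < 0 or lp + block > n or ln + block > n:
                continue  # displacement falls off the frame edge
            cost = sad(prev[lp:lp + block], nxt[ln:ln + block])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, mv
        a = prev[p - best_mv:p - best_mv + block]
        b = nxt[p + best_mv:p + best_mv + block]
        si[p:p + block] = [(x + y) / 2 for x, y in zip(a, b)]
    return si

# Static scene: the interpolated SI reproduces the key frame.
frame = [1, 2, 3, 4, 5, 6, 7, 8]
assert side_information(frame, frame) == [float(x) for x in frame]
```

When the scene translates between the key frames, the symmetric search places the content at the temporal midpoint, which is exactly what makes interpolated SI a plausible estimate of the missing frame.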
To minimize block artifacts and achieve consistent improvement in both subjective and objective quality of side information, we propose a novel side information synthesis framework operating at pixel granularity. We synthesize the SI at pixel level to minimize block artifacts and adaptively change the correlation noise model according to the new SI. Furthermore, we have fully implemented a state-of-the-art DVC decoder with the proposed framework using serial and parallel processing technologies to identify bottlenecks and areas in which to further reduce the decoding complexity, which is another major challenge for future practical DVC system deployments. The performance is evaluated based on the latest transform-domain DVC codec and compared with different standard codecs. Extensive experimental results show substantial and consistent rate-distortion gains over standard video codecs and significant speedup over the serial implementation.
To bring state-of-the-art DVC one step closer to practical use, we address the problem of distortion variation introduced by typical rate control algorithms, especially in a variable bit rate environment. Simulation results show that the proposed quality control algorithm is capable of meeting a user-defined target distortion and maintaining a rather small variation for slow-motion sequences, and performs similarly to fixed quantization for fast-motion sequences at the cost of some RD performance.
Finally, we propose the first implementation of a distributed video encoder on a Texas Instruments TMS320DM6437 digital signal processor. The WZ encoder is efficiently implemented using rate-adaptive low-density parity-check accumulate (LDPCA) codes, exploiting the hardware features and optimization techniques to improve overall performance. Implementation results show that the WZ encoder encodes a QCIF frame in 134M instruction cycles on a TMS320DM6437 DSP running at 700MHz, making it 29 times faster than the non-optimized encoder implementation. We also implemented a highly efficient DVC decoder using both serial and parallel technologies based on a PC-HPC (high performance cluster) architecture, where the encoder runs on a general-purpose PC and the decoder runs on a multicore HPC. The experimental results show that the parallelized decoder achieves about 10 times speedup over the serial implementation under various bit rates and GOP sizes, and significant RD gains with regard to the state-of-the-art DISCOVER codec.
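As a back-of-the-envelope check, the reported cycle count and clock rate imply roughly five QCIF frames per second on the DSP:

```python
CYCLES_PER_FRAME = 134e6  # reported WZ encoding cost per QCIF frame
CLOCK_HZ = 700e6          # TMS320DM6437 clock rate

frames_per_second = CLOCK_HZ / CYCLES_PER_FRAME  # ~5.2 QCIF frames/s
seconds_per_frame = CYCLES_PER_FRAME / CLOCK_HZ  # ~0.19 s per frame
```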
Faster than thought: Detecting sub-second activation sequences with sequential fMRI pattern analysis