828 research outputs found
Image database system for glaucoma diagnosis support
This master's thesis gives an overview of standard and advanced eye examination methods used for glaucoma diagnosis at an early stage. Building on this theoretical background, a web-based information system for ophthalmologists with three main aims is implemented. The first aim is to enable sharing of a specific patient's medical data without sending personal data over the Internet. The second aim is to create a patient account based on a complete eye examination procedure. The last aim is to improve the HRT diagnostic method with an image registration algorithm for intensity and colour fundus images and, on that basis, to create a web-based 3D visualization of the optic nerve head. This master's thesis is part of a project based on DAAD co-operation between the Department of Biomedical Engineering, Brno University of Technology, the Eye Clinic in Erlangen, and the Department of Computer Science, Friedrich-Alexander University, Erlangen-Nurnberg.
Flexible Hardware Architectures for Retinal Image Analysis
ABSTRACT
Millions of people around the world are affected by diabetes. Diabetes causes several ocular complications, such as diabetic retinopathy, which can lead to irreversible vision loss or even blindness if not treated. Regular comprehensive eye exams by eye doctors are required to detect these diseases at an early stage and permit their treatment. As a preventive measure, a screening protocol involving digital fundus images was adopted. It allows eye doctors to monitor changes in the retina and detect any sign of eye disease, and it made regular examinations widely available, even to populations in remote and underserved areas. With the resulting large volume of retinal images, automated processing techniques became indispensable. Automated eye disease detection has been widely addressed by the research community and has reached a high level of maturity, which has enabled, among other things, the deployment of telemedicine solutions.
In this thesis, we address the problem of processing a high volume of retinal images in a reasonable time in the context of telemedicine screening. This is mandatory for the practical use of the developed techniques in a clinical setting. We focus on two steps of the retinal image processing pipeline: retinal image quality assessment and retinal blood vessel segmentation.
Evaluating the quality of retinal images after acquisition is a prerequisite for the proper functioning of any automated retinal image processing system. The role of this step is to classify the acquired images according to their quality, allowing an automated system to request a new acquisition when an image is of poor quality. Several algorithms for evaluating retinal image quality have been proposed in the literature. However, even though accelerating this task is required, especially to enable mobile retinal image capture systems, its acceleration has not yet been addressed in the literature. In this thesis, we target an algorithm that computes image features to classify images as bad, medium, or good quality. We identified the computation of image features as a repetitive task that calls for acceleration, and we focused in particular on accelerating the Run-Length Matrix (RLM) algorithm. We first proposed a fully software implementation in the form of an embedded system based on Xilinx's Zynq technology. To accelerate the feature computation, we then designed a co-processor able to compute the features in parallel, implemented on the programmable logic of the Zynq FPGA. We achieved a 30.1× acceleration of the feature computation part of the RLM algorithm over its software implementation.
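The repetitive structure of the RLM feature computation can be illustrated with a short sketch. The Python code below is a hypothetical, simplified illustration (horizontal runs only, one classic texture feature), not the thesis's Zynq implementation:

```python
import numpy as np

def glrlm_horizontal(img, levels):
    """Gray-level run-length matrix for horizontal runs.

    rlm[g, r-1] counts the runs of gray level g that have length r.
    """
    max_run = img.shape[1]
    rlm = np.zeros((levels, max_run), dtype=int)
    for row in img:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                rlm[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        rlm[run_val, run_len - 1] += 1   # close the last run of the row
    return rlm

def short_run_emphasis(rlm):
    """SRE: weights runs by 1/length^2, emphasizing short runs."""
    runs = np.arange(1, rlm.shape[1] + 1)
    return float((rlm / runs[np.newaxis, :] ** 2).sum() / rlm.sum())

# toy 2x5 image with three gray levels
img = np.array([[0, 0, 1, 1, 1],
                [2, 2, 2, 2, 0]])
rlm = glrlm_horizontal(img, levels=3)
```

In the co-processor described above, it is exactly this per-pixel run accounting, repeated over the whole image, that is parallelized.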
Retinal blood vessel segmentation is a key task in the retinal image processing pipeline. Blood vessels and their characteristics are good indicators of retinal health, and their segmentation can also help to segment the red lesions that indicate diabetic retinopathy. Several techniques have been proposed in the literature to segment retinal blood vessels, and hardware architectures have been proposed to accelerate some of them. The existing architectures, however, lack performance and programming flexibility, especially for high-resolution images. In this thesis, we targeted two techniques: matched filtering and line operators. The matched filtering technique was targeted mainly because of its popularity. For this technique, we proposed two different architectures: a custom hardware architecture implemented on FPGA and an Application-Specific Instruction-set Processor (ASIP) based architecture. The custom hardware architecture was optimized in area and timing to achieve higher performance than existing implementations, and it outperforms all of them in terms of throughput. For the ASIP-based architecture, we identified two bottlenecks related to data access and to the computational intensity of the algorithm, and we designed two specific instructions added to the processor datapath. The resulting ASIP is 7.7× faster in execution time than its base architecture.
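The matched filtering technique can be sketched in software. This is a minimal illustration under simplifying assumptions (a square, mean-subtracted Gaussian kernel, 12 orientations, illustrative sigma and size), not the custom FPGA or ASIP architectures described above:

```python
import numpy as np
from scipy.ndimage import convolve

def matched_filter_kernel(sigma=1.5, size=9, angle_deg=0.0):
    """Mean-subtracted Gaussian matched-filter kernel at one orientation.

    The profile across a (dark) vessel is an inverted Gaussian; subtracting
    the mean gives zero response on uniform background.
    """
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.deg2rad(angle_deg)
    xr = xs * np.cos(t) + ys * np.sin(t)   # coordinate across the vessel
    k = -np.exp(-xr ** 2 / (2.0 * sigma ** 2))
    return k - k.mean()

def vessel_response(img, n_angles=12):
    """Per-pixel maximum response over a bank of oriented kernels."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    stack = [convolve(img.astype(float), matched_filter_kernel(angle_deg=a))
             for a in angles]
    return np.max(stack, axis=0)

# synthetic patch: a dark vertical vessel on a bright background
img = np.ones((21, 21))
img[:, 10] = 0.0
resp = vessel_response(img)
```

Hardware implementations parallelize exactly this bank of oriented convolutions, which is what makes the technique expensive for high-resolution images.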
The second technique for blood vessel segmentation is the Multi-Scale Line Detector (MSLD) algorithm, selected for its performance and its potential to detect small blood vessels. However, the algorithm works at multiple scales, which makes it memory-intensive. To solve this problem and accelerate its execution, we proposed a memory-efficient version of the algorithm, designed and implemented on FPGA. The proposed architecture drastically reduces the memory requirements of the algorithm through computation reuse and SW/HW co-design.
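A software sketch helps show why the basic MSLD is memory-intensive: every scale requires its own set of oriented line averages. The per-pixel form below is a simplified, hypothetical rendering (nearest-pixel line sampling, illustrative scales and window size), not the memory-efficient FPGA design:

```python
import numpy as np

def line_average(img, y, x, length, angle):
    """Mean intensity along a line segment centered at (y, x)."""
    half = (length - 1) / 2
    ts = np.linspace(-half, half, length)
    ys = np.clip(np.round(y + ts * np.sin(angle)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(x + ts * np.cos(angle)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].mean()

def msld_pixel(img, y, x, window=15, scales=(3, 7, 11), n_angles=12):
    """Multi-scale line strength at one pixel: for each scale, the best
    oriented line average minus the local window average, then averaged
    over scales. Vessels are assumed bright (inverted green channel)."""
    half = window // 2
    y0, y1 = max(0, y - half), min(img.shape[0], y + half + 1)
    x0, x1 = max(0, x - half), min(img.shape[1], x + half + 1)
    win_avg = img[y0:y1, x0:x1].mean()
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    strength = 0.0
    for L in scales:
        best = max(line_average(img, y, x, L, a) for a in angles)
        strength += best - win_avg
    return strength / len(scales)

# synthetic patch: one bright vertical vessel
img = np.zeros((31, 31))
img[:, 15] = 1.0
```

Every scale's intermediate responses must be buffered before combination; the architecture above avoids this by reusing computations across scales.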
The two hardware architectures proposed for retinal blood vessel segmentation were made flexible enough to process both low- and high-resolution images. This was achieved by developing a specific compiler able to generate a low-level HDL description of the algorithm from a set of algorithm parameters. The compiler enabled us to optimize both performance and development time. To the best of our knowledge, the two architectures introduced in this thesis are the only ones able to process both low- and high-resolution images.
Psychophysical investigations of visual density discrimination
Work in spatial vision is reviewed and a new spatial-averaging effect is reported: discrimination of dot separation improves when the cue is carried by the intervals within a collection of dots arranged in a lattice, compared with a simple two-dot separation discrimination. This phenomenon may be related to the integrative processes that mediate texture density estimation.
Four models of density discrimination are described. The first involves measurements of spatial-filter outputs: computer simulations show that, in principle, density cues can be encoded by a system of four DoG (difference-of-Gaussians) filters with peak sensitivities spanning a range of three octaves.
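A minimal sketch of such a filter bank, under illustrative assumptions (1-D filters, a 1.6 centre-surround ratio, and centre sizes doubling across the bank so that peak sensitivities span three octaves):

```python
import numpy as np

def dog_filter(x, sigma, ratio=1.6):
    """1-D difference-of-Gaussians: narrow centre minus broader surround."""
    g = lambda s: np.exp(-x ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
    return g(sigma) - g(ratio * sigma)

def bank_response(signal, bank):
    """Output energy of each filter: a four-number code for the pattern."""
    return [float(np.sum(np.convolve(signal, f, mode='same') ** 2))
            for f in bank]

# four filters whose centre sizes double at each step
x = np.linspace(-40, 40, 801)
bank = [dog_filter(x, s) for s in (1, 2, 4, 8)]

# two dot rows differing only in spacing, i.e. in density;
# the pattern of energies across the bank shifts with dot density
dots_sparse = np.zeros(801); dots_sparse[::100] = 1.0
dots_dense = np.zeros(801); dots_dense[::25] = 1.0
```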
Alternative models involve operations performed over representations in which spatial features are made explicit. One of these involves estimations of numerosity or coverage of the texture elements. Another involves averaging of the interval values between adjacent elements. A neural model for measuring the relevant intervals is described.
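The interval-averaging model is simple to state in code. A one-dimensional sketch, assuming elements are represented only by their positions:

```python
import numpy as np

def mean_adjacent_interval(xs):
    """Density code: average separation between adjacent elements."""
    xs = np.sort(np.asarray(xs, dtype=float))
    return float(np.diff(xs).mean())

# a denser row of elements yields a smaller mean interval
sparse = [0, 10, 21, 29, 40]
dense = [0, 4, 9, 15, 20, 24, 30]
```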
It is argued that, in principle, the input to a density processor does not require the full sequence of operations in the MIRAGE transformation (e.g. Watt and Morgan 1985). In particular, the regions of activity in the second derivative do not need to be interpreted in terms of edges, bars, and blobs for density estimation to commence. This also implies that explicit coding of texture elements may be unnecessary.
Data for density discrimination in regular and random dot patterns are reported. These do not support the coverage and counting models, and observed performance departs significantly from predictions based on the statistics of the interval distribution in the stimuli. However, this result can be understood in terms of other factors in the interval-averaging process, and there is empirical support for the hypothesized method of measuring the intervals.
Other experiments show that density is scaled according to stimulus size and possibly perceived depth. It is also shown that information from density analysis can be combined with size estimates to produce highly accurate discriminations of image expansion or changes in object depth.
Data compression techniques applied to high resolution high frame rate video technology
An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey was conducted of video data compression methods described in the open literature, examining compression methods that employ digital computing. The results of the survey are presented, including a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term technology for implementing video data compression in high-speed imaging systems, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.
RGB-D Scene Representations for Prosthetic Vision
This thesis presents a new approach to scene representation for prosthetic vision. Structurally salient information from the scene is conveyed through the prosthetic vision display. Given the low resolution and dynamic range of the display, this enables robust identification and reliable interpretation of key structural features that are missed when using standard appearance-based scene representations. Specifically, two different types of salient structure are investigated: salient edge structure, for depiction of scene shape to the user; and salient object structure, for emulation of biological attention deployment when viewing a scene. This thesis proposes and evaluates novel computer vision algorithms for extracting salient edge and salient object structure from RGB-D input.
Extraction of salient edge structure from the scene is first investigated through low-level analysis of surface shape. Our approach is based on the observation that regions of irregular surface shape, such as the boundary between the wall and the floor, tend to be more informative of scene structure than uniformly shaped regions. We detect these surface irregularities through multi-scale analysis of iso-disparity contour orientations, providing a real time method that robustly identifies important scene structure. This approach is then extended by using a deep CNN to learn high level information for distinguishing salient edges from structural texture. A novel depth input encoding called the depth surface descriptor (DSD) is presented, which better captures scene geometry that corresponds to salient edges, improving the learned model. These methods provide robust detection of salient edge structure in the scene. The detection of salient object structure is first achieved by noting that salient objects often have contrasting shape from their surroundings. Contrasting shape in the depth image is captured through the proposed histogram of surface orientations (HOSO) feature. This feature is used to modulate depth and colour contrast in a saliency detection framework, improving the precision of saliency seed regions and through this the accuracy of the final detection. After this, a novel formulation of structural saliency is introduced based on the angular measure of local background enclosure (LBE). This formulation addresses fundamental limitations of depth contrast methods and is not reliant on foreground depth contrast in the scene. Saliency is instead measured through the degree to which a candidate patch exhibits foreground structure.
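The local background enclosure (LBE) idea can be sketched as follows. This is a simplified, hypothetical rendering (ray casting on a depth map with an arbitrary depth threshold) rather than the exact angular formulation evaluated in the thesis:

```python
import numpy as np

def local_background_enclosure(depth, y, x, radius=5, n_dirs=16, t=0.1):
    """Fraction of directions around (y, x) that contain a point deeper
    than the centre by more than t: the angular degree to which the
    background encloses the candidate foreground point."""
    angles = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    enclosed = 0
    for a in angles:
        for r in range(1, radius + 1):
            yy = int(round(y + r * np.sin(a)))
            xx = int(round(x + r * np.cos(a)))
            if 0 <= yy < depth.shape[0] and 0 <= xx < depth.shape[1]:
                if depth[yy, xx] - depth[y, x] > t:
                    enclosed += 1   # this direction reaches background
                    break
    return enclosed / n_dirs

# synthetic depth map: a near (small-depth) object on a far background
depth = np.full((21, 21), 5.0)
depth[8:13, 8:13] = 1.0
```

Because the measure asks whether background surrounds the patch, rather than how strongly the patch contrasts with it, it does not rely on foreground depth contrast in the scene.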
The effectiveness of the proposed approach is evaluated on standard datasets as well as through user studies that measure the contribution of structure-based representations. Our methods are found to measure salient structure in the scene more effectively than existing methods, and our approach improves performance compared to standard methods during practical use of an implant display.
Dynamic and Integrative Properties of the Primary Visual Cortex
The ability to derive meaning from complex, ambiguous sensory input requires the integration of information over both space and time, as well as cognitive mechanisms to dynamically shape that integration. We have studied these processes in the primary visual cortex (V1), where neurons have been proposed to integrate visual inputs along a geometric pattern known as the association field (AF). We first used cortical reorganization as a model to investigate the role that a specific network of V1 connections, the long-range horizontal connections, might play in temporal and spatial integration across the AF. When retinal lesions ablate sensory information from portions of the visual field, V1 undergoes a process of reorganization mediated by compensatory changes in the network of horizontal collaterals. The reorganization accompanies the brain's amazing ability to perceptually "fill in", or "see", the lost visual input. We developed a computational model to simulate cortical reorganization and perceptual fill-in mediated by a plexus of horizontal connections that encode the AF. The model reproduces the major features of the perceptual fill-in reported by human subjects with retinal lesions, and it suggests that V1 neurons, empowered by their horizontal connections, underlie both perceptual fill-in and normal integrative mechanisms that are crucial to our visual perception. These results motivated the second prong of our work, which was to experimentally study the normal integration of information in V1. Since psychophysical and physiological studies suggest that spatial interactions in V1 may be under cognitive control, we investigated the integrative properties of V1 neurons under different cognitive states. We performed extracellular recordings from single V1 neurons in macaques that were trained to perform a delayed-match-to-sample contour detection task.
We found that the ability of V1 neurons to summate visual inputs from beyond the classical receptive field (cRF) imbues them with selectivity for complex contour shapes, and that neuronal shape selectivity in V1 changed dynamically according to the shapes monkeys were cued to detect. Over the population, V1 encoded subsets of the AF, predicted by the computational model, that shifted as a function of the monkeys' expectations. These results support the major conclusions of the theoretical work; even more, they reveal a sophisticated mode of form processing, whereby the selectivity of the whole network in V1 is reshaped by cognitive state.
Visual recognition of objects: behavioral, computational, and neurobiological aspects
I surveyed work on visual object recognition and perception. In animals, vision has been studied mainly at the behavioral and neurobiological levels. Behavioral data typically show what the visual system, by itself or together with the rest of the organism, is capable of. They show, for example, that humans can recognize objects regardless of size and position, but that rotated objects pose problems. Important insights into the organization of behavior have also been provided by people who suffered localized brain damage. We have learned that the brain is divided into areas subserving different and relatively well-defined behaviors. The visual system itself is also organized into different subsystems; the visual cortex alone contains nearly twenty maps of the visual field. And individual neurons respond selectively to visual stimuli, e.g., the orientation of line segments, color, direction of motion, and, most intriguingly, faces. The question is how the actions of all these neurons produce the behavior we observe. How do neurons represent the shape of objects such that they can be recognized? Before we can answer that question, we have to understand the computational aspect of shape representation, the nature of the problem as it were. Many methods for representing shape have been explored, mainly by computer scientists, but so far no satisfactory answers have been found.
Analysis of Retinal Image Data to Support Glaucoma Diagnosis
The fundus camera is a widely available imaging device enabling fast and inexpensive examination of the posterior segment of the eye, the retina. Hence, many researchers focus on developing automatic methods for assessing various retinal diseases via fundus images. This dissertation summarizes the recent state of the art in glaucoma diagnosis using the fundus camera and proposes a novel methodology for assessing the retinal nerve fiber layer (RNFL) via texture analysis. Along with it, a method for retinal blood vessel segmentation is introduced as an additional valuable contribution to the state of the art in retinal image processing; segmentation of the blood vessels also serves as a necessary step preceding evaluation of the RNFL via the proposed methodology. In addition, a new publicly available high-resolution retinal image database with gold-standard data is introduced, giving other researchers an opportunity to evaluate their segmentation algorithms.
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
- âŠ