
    Registration and variability of side scan sonar imagery

    Submitted in partial fulfillment of the requirements for the degree of Ocean Engineer at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, August 1988. This thesis presents the results of several experiments performed on side scan sonar equipment and imagery with the aim of characterizing the acoustic variability of side scan sonar imagery and applying this information to image rectification and registration. A static test tank experiment is presented which analyzes the waveform, power spectral density, and temporal variability of the transmitted waveform. The results of a second static experiment, conducted from the Woods Hole Oceanographic Institution Pier in Woods Hole, Massachusetts, permit determination of the distribution and moments of intensity fluctuations of echoes from objects imaged in side scan sonograms. This experiment also characterizes the temporal and spatial coherence of intensity fluctuations. A third experiment is presented in which a side scan sonar towfish images the bottom adjacent to the pier while running along an underwater track which reduces towfish instability. Imagery from this experiment is used to develop a rectification and registration algorithm for side scan sonar images. Preliminary image processing is described and examples are presented, followed by favorable results for automated image rectification and registration. Funding was provided by the Massachusetts Commonwealth Centers of Excellence, Marine Imaging Systems, and the National Science Foundation.

    Drift and stabilization of cortical response selectivity

    Synaptic turnover and long-term functional stability are two seemingly contradictory features of neuronal networks, with varying expression across different brain regions. Recent studies have shown how both are strongly expressed in the hippocampus, raising the question of how this can be reconciled within a biological network. In this work, I use a data set of neuron activity from mice behaving within a virtual environment, recorded over up to several months, to extend and develop methods showing how the activity of hundreds of neurons per individual animal can be reliably tracked and characterized. I employ these methods to analyze network and individual-neuron behavior during the initial formation of a place map from the activity of individual place cells while the animal learns to navigate in a new environment, as well as under the condition of a constant environment over several weeks. In a published study included in this work, we find that map formation is driven by selective stabilization of place cells coding for salient regions, with distinct characteristics for neurons coding for landmark, reward, or other locations. Strikingly, we find that in mice lacking Shank2, an autism spectrum disorder (ASD)-linked gene encoding an excitatory postsynaptic scaffold protein, the characteristic overrepresentation of visual landmarks is missing while the overrepresentation of the reward location remains intact, suggesting different underlying stabilization mechanisms. Under the condition of a constant environment, I find that turnover dynamics largely decouple from the location of a place field and are governed by a strong decorrelation of population activity on short time scales (hours to days), followed by long-lasting correlations (days to months) above chance level. In agreement with earlier studies, I find a slow, constant drift in the population of active neurons, while, contrary to earlier results, place fields within the active population are taken up approximately at random. Place field movement across days is governed by periods of stability around an anchor position, interrupted by random, long-range relocation. The data do not suggest the existence of populations of neurons with distinct stability properties, but rather show a continuous range from highly unstable to very stable functional and non-functional activity. Average timescales of reliable contributions to the neural code are on the order of a few days, in agreement with earlier reported timescales of synaptic turnover in the hippocampus.

    Feature-based object tracking in maritime scenes.

    Monitoring the presence, location and activity of various objects on the sea is essential for maritime navigation and collision avoidance. Mariners normally rely on two complementary methods of monitoring: radar and satellite-based aids, and human observation. Though radar aids are relatively accurate at long distances, their capability to detect small, unmanned or non-metallic craft, which generally do not reflect radar waves sufficiently, is limited. Mariners therefore rely in such cases on visual observation. Visual observation is often facilitated by cameras overlooking the sea that can also provide intensified infra-red images. These systems nevertheless merely enhance the image, and the burden of the tedious and error-prone monitoring task still rests with the operator. This thesis addresses the drawbacks of both methods by presenting a framework consisting of a set of machine vision algorithms that facilitate monitoring tasks in the maritime environment. The framework detects and tracks objects in a sequence of images captured by a camera mounted either on board a vessel or on a static platform overlooking the sea. The detection of objects is independent of their appearance and of conditions such as weather and time of day. The output of the framework consists of the locations and motions of all detected objects with respect to a fixed point in the scene. All values are estimated in real-world units, i.e. location is expressed in metres and velocity in knots. The consistency of the estimates is maintained by compensating for spurious effects such as vibration of the camera. In addition, the framework continuously checks for predefined events such as collision threats or area intrusions, raising an alarm when any such event occurs. The development and evaluation of the framework is based on sequences captured under conditions corresponding to a designated application. The independence of the detection and tracking from the appearance of the scene and objects is confirmed by a final cross-validation of the framework on previously unused sequences. Potential applications of the framework in various areas of the maritime environment, including navigation, security and surveillance, are outlined. Limitations of the presented framework are identified and possible solutions suggested. The thesis concludes with suggestions for further directions of the presented research.
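    Since the framework reports positions in metres and velocities in knots and continuously checks for events such as collision threats, a check of that kind could be sketched as below. This is a hypothetical closest-point-of-approach (CPA) test, not the thesis code; the function name, thresholds and time horizon are assumptions.

```python
# Hypothetical sketch of a collision-threat check on the framework's outputs
# (positions in metres, velocities in knots). Thresholds are illustrative only.
import numpy as np

KNOTS_TO_MPS = 0.514444  # 1 knot = 0.514444 m/s

def collision_threat(own_pos_m, own_vel_kn, tgt_pos_m, tgt_vel_kn,
                     cpa_limit_m=200.0, horizon_s=600.0):
    """Return True if the target's closest point of approach within `horizon_s`
    seconds falls inside `cpa_limit_m` metres. Positions and velocities are 2D."""
    rel_pos = np.asarray(tgt_pos_m, float) - np.asarray(own_pos_m, float)
    rel_vel = (np.asarray(tgt_vel_kn, float) - np.asarray(own_vel_kn, float)) * KNOTS_TO_MPS
    speed_sq = float(rel_vel @ rel_vel)
    if speed_sq < 1e-9:                       # effectively no relative motion
        t_cpa = 0.0
    else:
        # Time minimizing |rel_pos + rel_vel * t|, clamped to [0, horizon_s].
        t_cpa = max(0.0, min(horizon_s, -float(rel_pos @ rel_vel) / speed_sq))
    dist_at_cpa = np.linalg.norm(rel_pos + rel_vel * t_cpa)
    return dist_at_cpa < cpa_limit_m
```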

    Surface-guided computing to analyze subcellular morphology and membrane-associated signals in 3D

    Signal transduction and cell function are governed by the spatiotemporal organization of membrane-associated molecules. Despite significant advances in visualizing molecular distributions by 3D light microscopy, cell biologists still have limited quantitative understanding of the processes implicated in the regulation of molecular signals at the whole-cell scale. In particular, complex and transient cell surface morphologies challenge the complete sampling of cell geometry, membrane-associated molecular concentration and activity, and the computation of meaningful parameters such as the cofluctuation between morphology and signals. Here, we introduce u-Unwrap3D, a framework to remap arbitrarily complex 3D cell surfaces and membrane-associated signals into equivalent lower-dimensional representations. The mappings are bidirectional, allowing the application of image processing operations in the data representation best suited for the task and the subsequent presentation of the results in any of the other representations, including the original 3D cell surface. Leveraging this surface-guided computing paradigm, we track segmented surface motifs in 2D to quantify the recruitment of Septin polymers by blebbing events; we quantify actin enrichment in peripheral ruffles; and we measure the speed of ruffle movement along topographically complex cell surfaces. Thus, u-Unwrap3D provides access to spatiotemporal analyses of cell biological parameters on unconstrained 3D surface geometries and signals. Comment: 49 pages, 10 figures
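    The surface-guided computing pattern described above can be illustrated with a minimal sketch. This is not the u-Unwrap3D API: it assumes a precomputed per-vertex (u, v) unwrapping and simply pushes a membrane-associated signal into a 2D image, processes it there, and maps the result back onto the 3D vertices.

```python
# Conceptual sketch of "unwrap, process in 2D, map back" under the assumption
# that per-vertex (u, v) coordinates from some surface unwrapping already exist.
import numpy as np
from scipy.ndimage import gaussian_filter

def process_on_unwrapped_surface(vertex_signal, uv, grid_shape=(256, 512), sigma=2.0):
    """vertex_signal: (N,) membrane-associated values; uv: (N, 2) coordinates in [0, 1]."""
    rows = np.clip((uv[:, 0] * (grid_shape[0] - 1)).astype(int), 0, grid_shape[0] - 1)
    cols = np.clip((uv[:, 1] * (grid_shape[1] - 1)).astype(int), 0, grid_shape[1] - 1)

    # Forward map: accumulate the signal into the 2D representation (mean per pixel).
    acc = np.zeros(grid_shape)
    cnt = np.zeros(grid_shape)
    np.add.at(acc, (rows, cols), vertex_signal)
    np.add.at(cnt, (rows, cols), 1.0)
    image2d = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

    # Run the operation in the representation best suited for it (here, 2D smoothing).
    image2d = gaussian_filter(image2d, sigma=sigma)

    # Backward map: read the processed values back onto the original 3D vertices.
    return image2d[rows, cols]
```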

    Knowledge-Based Measurement of Enhancing Brain Tissue in Anisotropic MR Imagery

    Medical image analysis has emerged as an important field in the computer vision community. In this thesis, two important issues in medical imaging are addressed, and a solution for each is derived and synergistically combined into one coherent system. Firstly, a novel approach is proposed for High Resolution Volume (HRV) construction by combining different frequency components at multiple levels, which are separated using a multi-resolution pyramid structure. Current clinical imaging protocols make use of multiple orthogonal low-resolution scans to measure the size of the tumor, and the highly anisotropic data result in difficulty and even errors in tumor assessment. In previous approaches, simple interpolation has been used to construct HRVs from multiple low-resolution volumes (LRVs), which fails when large inter-plane spacing is present. In our approach, Laplacian pyramids containing band-pass content are first computed from registered LRVs. The Laplacian images are expanded along their low-resolution axes separately and then fused at each level. A Gaussian pyramid is recovered from the fused Laplacian pyramid, and the volume at the bottom level of this Gaussian pyramid is the constructed HRV. The effectiveness of the proposed approach is validated using simulated images; the method has also been applied to real clinical data, and promising experimental results are demonstrated. Secondly, a new knowledge-based framework to automatically quantify the volume of enhancing tissue in brain MR images is proposed. Our approach provides an objective and consistent way to evaluate disease progression and assess the treatment plan. In our approach, enhanced regions are first located by comparing the difference between the aligned set of pre- and post-contrast T1 MR images. Since some normal tissues may also become enhanced by the administration of Gd-DTPA, using the intensity difference alone may not be able to distinguish normal tissue from the tumor. Thus, we propose a new knowledge-based method employing knowledge of anatomical structures from a probabilistic brain atlas and the prior distribution of brain tumors to identify the real enhancing tissue. Our approach has two main advantages: i) the results are invariant to image contrast changes due to the use of the probabilistic knowledge-based framework; ii) using segmented regions instead of independent pixels yields an approach that is much less sensitive to small registration errors and image noise. The obtained results are compared to the ground truth for validation, and it is shown that the proposed method achieves accurate and consistent measurements.
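    The pyramid-based fusion step lends itself to a short sketch. The version below is a simplified reading of the approach described above: it assumes the two registered LRVs have already been interpolated onto a common grid (whereas the thesis expands each Laplacian level along its low-resolution axis separately) and fuses band-pass levels by keeping the stronger response per voxel. All function and parameter names are illustrative.

```python
# Hedged sketch: multi-resolution fusion of two registered low-resolution volumes
# (LRVs) into one high-resolution volume (HRV) via Laplacian pyramids.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(vol, levels):
    pyr = [vol]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyr[-1], sigma=1.0)
        pyr.append(blurred[::2, ::2, ::2])          # decimate by 2 along every axis
    return pyr

def laplacian_pyramid(vol, levels):
    gp = gaussian_pyramid(vol, levels)
    lp = []
    for i in range(levels - 1):
        up = zoom(gp[i + 1], np.array(gp[i].shape) / np.array(gp[i + 1].shape), order=1)
        lp.append(gp[i] - up)                        # band-pass residual at this level
    lp.append(gp[-1])                                # coarsest (low-pass) Gaussian level
    return lp

def fuse_hrv(lrv_a, lrv_b, levels=3):
    """Assumes both LRVs are registered and resampled onto a common grid."""
    lp_a = laplacian_pyramid(lrv_a, levels)
    lp_b = laplacian_pyramid(lrv_b, levels)
    # Keep, per voxel and level, the stronger band-pass response; average the low-pass level.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lp_a[:-1], lp_b[:-1])]
    fused.append(0.5 * (lp_a[-1] + lp_b[-1]))
    # Collapse the pyramid: start from the coarsest level and add band-pass detail back.
    hrv = fused[-1]
    for band in reversed(fused[:-1]):
        hrv = zoom(hrv, np.array(band.shape) / np.array(hrv.shape), order=1) + band
    return hrv
```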

    Towards Real-time Remote Processing of Laparoscopic Video

    Laparoscopic surgery is a minimally invasive technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform the procedure. However, the benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic system is the daVinci-si robotic surgical vision system. Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend toward increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Real-time processing of this large data stream on a bedside PC, in a single- or dual-node setup, may be challenging, and a high-performance computing (HPC) environment is not typically available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB (1080p) video frame must be processed by a server and returned within the time the frame is displayed, i.e. 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual Nvidia graphics processing units (GPUs) per node and the compute unified device architecture (CUDA) programming model. We developed three separate applications that run simultaneously: video acquisition, image processing, and video display. The image processing application allows several algorithms to run simultaneously on different cluster nodes and transfers images through the message passing interface (MPI). Our segmentation and registration algorithms achieved acceleration factors of around 2 and 8, respectively. To achieve a higher frame rate, we also resized images and reduced the overall processing time. As a result, using a high-speed network to access computing clusters with GPUs and running these algorithms in parallel can improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
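    The stated data rate and real-time budget follow directly from these figures; as a quick check (assuming the 11.9 MB value for a single 1080p frame):

```latex
11.9\ \text{MB/frame} \times 30\ \text{frames/s} \approx 357\ \text{MB/s} \approx 360\ \text{MB/s},
\qquad
\text{per-frame budget} = \tfrac{1}{30}\ \text{s} \approx 33.3\ \text{ms}.
```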

    Optimización en GPU de algoritmos para la mejora del realce y segmentación en imágenes hepáticas [GPU optimization of algorithms for improving enhancement and segmentation in liver images]

    This doctoral thesis investigates GPU acceleration for liver image enhancement and segmentation. With this motivation, detailed research is carried out as a compendium of articles. The work is structured in three scientific contributions: the first addresses enhancement and tumor segmentation, the second explores vessel segmentation, and the last addresses liver segmentation. These works are implemented on the GPU with significant speedups and are of great scientific impact and relevance within this doctoral thesis. The first work proposes cross-modality based contrast enhancement for tumor segmentation on the GPU. It takes target and guidance images as input and enhances the low-quality target image by applying a two-dimensional histogram approach. It is further observed that the enhanced image yields more accurate tumor segmentation using GPU-based dynamic seeded region growing. The second contribution concerns fast parallel gradient-based seeded region growing, where a static approach is proposed and implemented on the GPU for accurate vessel segmentation. The third contribution describes GPU acceleration of the Chan-Vese model and cross-modality based contrast enhancement for liver segmentation.
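    As an illustration of the operation being accelerated, a sequential sketch of gradient-based seeded region growing is shown below; the thesis's GPU implementations parallelize this growth step. The intensity-step threshold and 4-connectivity used here are assumptions for demonstration only, not the thesis parameters.

```python
# Minimal sequential sketch of gradient-based seeded region growing on a 2D image.
from collections import deque
import numpy as np

def seeded_region_growing(image, seed, grad_threshold=0.05):
    """Grow a region from `seed` (row, col), adding 4-connected neighbors whose
    intensity step from the current pixel stays below `grad_threshold`."""
    h, w = image.shape
    segmented = np.zeros((h, w), dtype=bool)
    segmented[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not segmented[nr, nc]:
                # Local gradient criterion: accept the neighbor if the intensity
                # step across the edge is small enough.
                if abs(float(image[nr, nc]) - float(image[r, c])) < grad_threshold:
                    segmented[nr, nc] = True
                    queue.append((nr, nc))
    return segmented
```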

    Karakterizacija predkliničnega tumorskega ksenograftnega modela z uporabo multiparametrične MR [Characterization of a preclinical tumor xenograft model using multiparametric MRI]

    Introduction: In small animal studies, multiple imaging modalities can be combined to complement each other in providing information on anatomical structure and function. Non-invasive imaging studies on animal models are used to monitor progressive tumor development, which helps to better understand the efficacy of new medicines and to predict the clinical outcome. The aim was to construct a framework based on a longitudinal multi-modal parametric in vivo imaging approach to perform tumor tissue characterization in mice. Materials and Methods: The multi-parametric in vivo MRI dataset consisted of T1-, T2-, diffusion- and perfusion-weighted images. An image set of mice (n=3) imaged weekly for 6 weeks was used. Multimodal image registration was performed based on maximizing mutual information. The tumor region of interest was delineated in weeks 2 to 6. These regions were stacked together, and all modalities combined were used in unsupervised segmentation. Clustering methods such as K-means and fuzzy C-means, together with the blind source separation technique of non-negative matrix factorization (NMF), were tested. Results were visually compared with histopathological findings. Results: Clusters obtained with the K-means and fuzzy C-means algorithms coincided with the intensity levels observed in the T2 and ADC maps. Fuzzy C-means clusters and NMF abundance maps gave the most promising results when compared with the histological findings and appear to offer a complementary way to assess the tumor microenvironment. Conclusions: A workflow for multimodal MR parametric map generation, image registration and unsupervised tumor segmentation was constructed. Good segmentation results were achieved but require further extensive histological validation.
    Introduction: Animal experiments in preclinical studies are one of the important pillars of scientific research in medical diagnostics. In these studies, experiments are carried out to discover and test new therapeutic methods for treating human diseases. Ovarian cancer is one of the leading causes of cancer-related death, and new, more effective methods are needed to combat this disease more successfully. The time window in which a new therapeutic is administered is a key factor in the success of the investigated therapy, since tumor physiology evolves as the disease progresses. One of the goals of preclinical studies is therefore to monitor the development of the tumor microenvironment and thereby determine the optimal time window for administering the developed therapeutic so as to achieve maximum efficacy. Owing to their non-invasive nature, imaging modalities have become extremely popular research tools in biomedical and pharmacological research. Preclinical imaging has several advantages over the traditional approach: in line with research regulations, animals do not need to be sacrificed at intermediate time points in order to follow tumor development over a longer period, while the non-destructive, non-invasive approach provides a molecular and functional description of the studied subject in addition to anatomical information. To achieve this, different imaging modalities are typically used, and combinations of several modalities are common because they complement each other in providing the desired information. This work presents a framework for processing multiple magnetic resonance modalities of preclinical models for the purpose of tumor tissue characterization.
    Materials and Methods: In the study by Belderbos, Govaerts, Croitor Sava et al. [1], magnetic resonance imaging was used to determine the optimal time window for the successful administration of a newly developed therapeutic. In addition to conventional MR imaging methods (T1- and T2-weighted imaging), perfusion- and diffusion-weighted techniques were also used. Images were acquired weekly over a period of six weeks, and the datasets used in the present work were obtained as part of that study. The processing framework was built in Matlab (MathWorks, version R2019b) and supports both automatic and manual processing of the image data. In a first step, before the parametric maps of the used modalities are generated, the acquisition parameters of the protocols are extracted from the accompanying text files and the acquired images are sorted correctly according to the given anatomy. At this stage the images are also filtered and masked. Filtering improves the ratio between the useful signal (the imaged animal) and the background, since the scanner is typically subject to various sources of image noise; the non-local means filter of the Matlab image processing library was used. The benefit of masking becomes apparent in the next step, the generation of parametric maps, since with a properly masked subject the procedure is substantially accelerated by restricting the mapping to the region of interest. Parametric maps are computed with nonlinear least squares: by modeling the physical phenomena of the used modalities, the studied animal model is described by biological parameters, which complement one another in describing the physiological properties of the model at the level of individual voxels. A key building block for successfully combining the information of the individual modalities is proper registration of the parametric maps. The modalities are acquired sequentially, at different times, and scanning all modalities of one animal takes more than an hour in total, so small movements of the animal occur during acquisition despite anesthesia. If these movements are not properly accounted for, the combined information of the modalities is misinterpreted. Animal movement within a modality was modeled as a rigid transformation, and between modalities as an affine transformation; registration is performed with custom Matlab functions or with functions from the open-source image processing framework Elastix. For the characterization of tumor tissue, unsupervised segmentation methods were used. The essence of such segmentation is grouping individual voxels into segments whose elements are, by the chosen criterion, sufficiently similar to one another while differing from the elements of other segments. Three methods were selected: K-means, as one of the simpler methods; fuzzy C-means, with the advantage of soft clustering; and, finally, non-negative matrix factorization, which views the decomposition of the tissue as a product of typical multi-modal signatures and their abundances for each voxel. To validate the segmentations obtained with these methods, a visual comparison with the results of histopathological analysis was performed.
    Results: Registration within the individual modalities had a large influence on the generated parametric maps. Because the acquisition of T1-weighted images takes a long time, the animal frequently moves, which, without proper registration, negatively affects the parametric mapping and the subsequent segmentation. The generated maps deviate only slightly from those produced with commonly used open-source tools. Clusters obtained with the K-means and fuzzy C-means methods coincide well with the intensity levels observed in the T2 and ADC maps. Compared with the histological findings, fuzzy C-means and non-negative matrix factorization give the most promising results, and their segmentations complement each other in explaining the tumor microenvironment.
    Conclusions: With the construction of a framework for MR image processing and tumor tissue segmentation, the goal of the master's thesis was achieved. The design of the framework allows other modalities and other animal models to be added as desired. The tumor tissue segmentation results are promising but require further comparison with the results of histopathological analysis. A possible extension is to improve the robustness of the registration by using a non-rigid (elastic) transformation model; it would also be worthwhile to test additional unsupervised segmentation methods and compare the results with those presented here.
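    The unsupervised tumor-characterization step lends itself to a compact sketch: voxel-wise feature vectors are stacked from the co-registered parametric maps and clustered. The version below uses scikit-learn's K-means and NMF (the thesis works in Matlab and also evaluates fuzzy C-means, which would require a separate library such as scikit-fuzzy); the map names, normalization and component counts are assumptions for illustration.

```python
# Hedged sketch: cluster tumor voxels from stacked, co-registered parametric maps.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def cluster_tumor_voxels(param_maps, tumor_mask, n_clusters=4):
    """param_maps: dict of co-registered 3D arrays (e.g. 'T1', 'T2', 'ADC', 'perfusion');
    tumor_mask: boolean 3D array delineating the tumor region of interest."""
    # One feature vector per tumor voxel: shape (n_voxels, n_modalities).
    features = np.stack([m[tumor_mask] for m in param_maps.values()], axis=1)
    # Normalize each modality so no single parametric map dominates the distances.
    features = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)

    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

    # NMF needs non-negative input; shift features before factorizing into
    # typical multi-modal signatures (components) and per-voxel abundances.
    nonneg = features - features.min(axis=0)
    nmf = NMF(n_components=n_clusters, init="nndsvda", max_iter=500, random_state=0)
    abundances = nmf.fit_transform(nonneg)            # (n_voxels, n_components)

    # Scatter the cluster labels back into a volume for comparison with histology.
    label_vol = np.full(tumor_mask.shape, -1, dtype=int)
    label_vol[tumor_mask] = labels
    return label_vol, abundances, nmf.components_
```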

    Adaptive Methods for Color Vision Impaired Users

    Color plays a key role in the understanding of information in computer environments. About 5% of the world population is affected by color vision deficiency (CVD), also called color blindness. This visual impairment hampers color perception, ultimately limiting the overall perception that CVD people have of the surrounding environment, whether real or virtual. In fact, a CVD individual may not distinguish between two different colors, which often causes confusion or a biased understanding of reality, including web environments, whose pages are full of media elements such as text, still images, video, sprites, and so on. Aware of the difficulties that color-blind people may face in interpreting colored content, a significant number of recoloring algorithms have been proposed in the literature with the purpose of improving the visual perception of those people. However, most of those algorithms lack a systematic study of subjective assessment, which undermines their validity, not to say their usefulness. Thus, in the course of the research work behind this Ph.D. thesis, the central question that needs to be answered is whether recoloring algorithms are of any use and help to colorblind people. With this in mind, we conceived a few preliminary recoloring algorithms that were published in conference proceedings elsewhere. Except for the algorithm detailed in Chapter 3, these conference algorithms are not described in this thesis, though they were important in shaping those presented here. The first algorithm (Chapter 3) was designed and implemented to improve the color perception of people with dichromacy. The idea is to project the reddish hues onto other hues that are perceived more reliably by dichromat people. The second algorithm (Chapter 4) is also intended to improve color perception for people with dichromacy, but its applicability covers the adaptation of text and images in HTML5-compliant web environments. This enhancement of the color contrast of text and images in web pages is done while keeping the naturalness of color as much as possible. Also, to the best of our knowledge, this is the first web recoloring approach targeted at dichromat people that takes both text and image recoloring into consideration in an integrated manner. The third algorithm (Chapter 5) primarily focuses on the enhancement of some of the object contours in still images, instead of recoloring the pixels of the regions bounded by such contours. Enhancing contours is particularly suited to increasing contrast in images where adjacent regions are indistinguishable in color from a dichromat's point of view. To the best of our knowledge, this is one of the first algorithms that takes advantage of image analysis and processing techniques for region contours. After careful subjective assessment studies with color-blind people, we concluded that the CVD adaptation methods are useful in general. Nevertheless, no single method is efficient enough to adapt all sorts of images; that is, the adequacy of each method depends on the type of image (photographic images, graphical representations, etc.). Furthermore, we noted that the experience-based perceptual learning of colorblind people throughout their lives determines their visual perception. That is, color adaptation algorithms must satisfy requirements such as color naturalness and consistency, to ensure that dichromat people improve their visual perception without artifacts.
    On the other hand, CVD adaptation algorithms should be object-oriented, instead of pixel-oriented (as typically done), in order to judiciously select the pixels that should be adapted. This perspective opens a window of opportunity for future research on color accessibility in the field of human-computer interaction (HCI).
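    As a rough illustration of the idea behind the first algorithm (projecting reddish hues onto hues that dichromats perceive more reliably), a naive hue-rotation sketch is shown below. It is not the thesis algorithm, which additionally preserves naturalness and consistency; the hue window and rotation amount here are arbitrary assumptions.

```python
# Illustrative sketch only: shift reddish hues toward hues that dichromats
# discriminate more reliably, leaving other hues untouched.
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def remap_reddish_hues(rgb, red_width=30 / 360, shift=120 / 360):
    """rgb: float image in [0, 1]. Hues within +/- red_width of pure red are
    rotated by `shift` (toward green/blue); all other hues are left as they are."""
    hsv = rgb2hsv(rgb)
    hue = hsv[..., 0]
    # Distance from the red hue (hue 0), accounting for wrap-around at 1.0.
    dist_to_red = np.minimum(hue, 1.0 - hue)
    reddish = dist_to_red < red_width
    hsv[..., 0] = np.where(reddish, (hue + shift) % 1.0, hue)
    return hsv2rgb(hsv)
```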