2,240 research outputs found

    Content-Based Image Enhancement for Eye Fundus Images Visualization

Analyzing a patient’s retina makes it possible to check for an increasing number of diseases and conditions, especially at their early stages, when the patient may not notice any symptoms. Usually, the sooner a disease is treated the better, both for the patient’s health and their finances, as treatments tend to become more intrusive and costly the more advanced the condition is. For these reasons, regular screening of the population is a common recommendation to reduce the number of extreme cases. Given the objective of testing the maximum number of people each year, improving the screening process is important. The improvements proposed usually either automate the diagnosis or help the graders in making their diagnosis. This thesis focuses on improving the screening process by proposing an image enhancement method for the visualization of eye fundus photography images, to make it easier for the graders to make their diagnosis. Eye fundus images present many problems whose correction could make their diagnosis easier, such as blurred content, reflection artifacts, non-uniform luminosity and contrast, as well as variability in size, shape and colors.
With our method, we want to make the elements in the images more visible to facilitate their localization and recognition, while maintaining a plausible appearance for the whole image, especially in terms of colors, so as not to confuse the grader. We also want all the images to use the same color palette, to reduce the variability among images. Previous works on the enhancement of eye fundus images have focused on correcting artifacts such as blur or luminosity and contrast issues. While these methods do improve the visibility of the elements in the images, they are usually better suited as preprocessing steps for other automated methods that leverage these enhancements to improve their own results. Indeed, they fail to maintain the natural appearance of the color images and produce results that are difficult to analyze for a human expert, as they do not share enough visual resemblance with regular images. In particular, they tend to make some parts of the image disappear, such as the fovea, which may not be necessary for some automated algorithms but is used as a landmark by the graders. Also, the colors of the resulting images are unnatural, which bothers the graders when they make their diagnosis. We then consider another field of image enhancement, by-example color manipulation methods, which change a source image to use the colors of a given target image. This approach is usually used to change the tone or the style of regular photographs and has not yet been adapted to the context of eye fundus images. Recent works propose methods that transfer the colors differently depending on the content, and thus produce results where elements in the resulting image share the colors of the corresponding elements in the target image. As we want to modify only the appearance and not the content of the images, using a color manipulation method is appropriate.
This work expands on a color transfer method that uses the textural content of the images to guide the color transfer, and adapts it to the context of eye fundus images. The original algorithm computes a similarity metric for each pixel based on a texture descriptor and uses this metric to guide the color transfer. This produces results that have colors close to the target but that are generally too dark, and in which the vascular network in particular does not have the expected colors. To solve the first issue, we propose using a Region Of Interest (ROI) segmentation so that the black pixels outside of the camera field of view are not taken into account when applying the transfer. This improves the global tone of the results, which is then closer to the target, but the colors of the vessels are still not close enough to those of the target. We then propose a modification of the similarity metric that uses a segmentation of the vascular network, so that pixels from the vessels are considered more similar. This modification allows the vessels in our results to have colors closer to those of the vessels in the target. We conducted a user study with ophthalmologists to measure the effect of our method on grading performance in terms of speed and precision. While the experiment shows an increased grading speed for one of the ophthalmologists with our enhanced images, we cannot draw conclusions about the impact of our method on grading performance, as some biases related to our protocol prevent us from being sure of the origin of this acceleration. This however allowed us to identify the limits and biases of our protocol, which should be taken into consideration for a potential future iteration of the user study. The proposed method reaches both objectives: enhancing the readability of the images and reducing the variability in colors among images.
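The ROI-restricted idea above can be sketched in a deliberately simplified form. The sketch below is illustrative only: it reduces the texture-descriptor similarity of the actual method to a plain mean/standard-deviation matching of a single channel, restricted to an ROI mask, and all function names are hypothetical.

```python
# Illustrative sketch, not the thesis's actual algorithm: by-example
# transfer of one color channel, with statistics computed only inside
# the region of interest so the black border does not skew the tone.

def mean_std(values):
    """Mean and (population) standard deviation of a list of numbers."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5

def transfer_channel(source, target, roi_mask):
    """Match the mean/std of `source` to `target`, using only pixels
    where roi_mask is True, so black pixels outside the camera field
    of view are ignored when computing the statistics."""
    src_roi = [v for v, keep in zip(source, roi_mask) if keep]
    tgt_roi = [v for v, keep in zip(target, roi_mask) if keep]
    ms, ss = mean_std(src_roi)
    mt, st = mean_std(tgt_roi)
    scale = st / ss if ss > 0 else 1.0
    # Pixels outside the ROI are left untouched (they stay black).
    return [mt + (v - ms) * scale if keep else v
            for v, keep in zip(source, roi_mask)]
```

Restricting the statistics to the ROI is exactly what prevents the black background from dragging the global tone down, which is the "too dark" failure mode mentioned above.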
Using a multi-scale descriptor could further improve the color transfer on the smallest elements in the images, for which the current descriptor is not always adapted. To further improve the screening process, other visualization methods could also be considered, such as region highlighting to guide the grader’s eyes to suspect areas in the images.

    Think Unlimited and Compress Data Automatically

    National audience. Developing an application which, when unoptimized, consumes more memory resources than are physically or financially available demands a lot of expertise. In this work, we show that with the right tools and language abstractions, writing such programs for a given class of applications can stay within reach of non-expert developers. We explore the potential of a compiler-based data layout transformation from a dense array to a compressed tree data structure. This transformation allows easy application prototyping, provides compression, and carries information that can be used with more advanced optimizations, e.g., adaptive and approximate computing techniques. We primarily target partial differential equation solvers and signal processing applications. We evaluate the compression ratio and the error originating from this compressed representation. We suggest multiple exploration paths to produce an automatic adaptive code transformation with compression capabilities from the multiresolution information produced during the transformation.
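As a rough, hypothetical illustration of a dense-to-tree layout with compression (the paper's actual transformation and data structure are not detailed here), a 1-D array can be stored as a binary tree in which spans whose values all lie within a tolerance of their mean collapse into a single leaf:

```python
# Hypothetical sketch of a dense array -> compressed tree transformation.
# Uniform spans are stored as one leaf holding their mean, which yields
# both compression and a coarse "multiresolution" summary of the data.

def compress(arr, tol=0.0):
    mean = sum(arr) / len(arr)
    if all(abs(v - mean) <= tol for v in arr):
        return ("leaf", len(arr), mean)   # whole span stored as one value
    mid = len(arr) // 2
    return ("node", compress(arr[:mid], tol), compress(arr[mid:], tol))

def decompress(tree):
    if tree[0] == "leaf":
        _, n, value = tree
        return [value] * n
    _, left, right = tree
    return decompress(left) + decompress(right)
```

With `tol=0` the round trip is exact; a positive tolerance trades accuracy for a smaller tree, mirroring the compression-ratio/error trade-off the abstract evaluates.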

    Adaptive Code Refinement: A Compiler Technique and Extensions to Generate Self-Tuning Applications

    International audience. Compiler high-level automatic optimization and parallelization techniques are well suited for some classes of simulation or signal processing applications; however, they usually take into account neither domain-specific knowledge nor the possibility to change or to remove some computations to achieve "good enough" results. In contrast, production simulation and signal processing codes have adaptive capabilities: they are designed to compute precise results only where it matters, if the complete problem is not tractable or if computation time must be short. In this paper, we present a new way to provide adaptive capabilities to compute-intensive codes automatically. It relies on domain-specific knowledge, provided through special pragmas by the programmer in the input code, and on polyhedral compilation techniques to continuously regenerate at runtime a code that performs heavy computations only where it matters. We present experimental results on several applications where our strategy enables significant computation savings and speedups while maintaining good precision, with minimal effort from the programmer.
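The "precise only where it matters" idea can be illustrated with a toy example (this is not the paper's pragma syntax or polyhedral machinery, just the underlying principle): a 1-D stencil that applies the precise update only where the local variation is large, and takes a cheap path elsewhere.

```python
# Illustrative sketch of adaptive computation: the precise 3-point
# average is applied only where the neighborhood varies enough to
# matter; flat regions take the cheap path (keep the old value).

def adaptive_smooth(u, threshold):
    out = list(u)
    for i in range(1, len(u) - 1):
        variation = abs(u[i + 1] - u[i - 1])
        if variation > threshold:          # "where it matters"
            out[i] = (u[i - 1] + u[i] + u[i + 1]) / 3.0
        # otherwise keep u[i]: the approximate, low-cost path
    return out
```

In the paper's setting, the decision of where to run the heavy version is not a per-point branch like this but is baked into code regenerated at runtime, which avoids the branching overhead.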

    Automatic adaptive approximation for stencil computations

    International audience. Approximate computing is necessary to meet deadlines in some compute-intensive applications like simulation. Building such applications requires a high level of expertise from the application designers as well as a significant development effort. Some application programming interfaces greatly facilitate their conception, but they still heavily rely on the developer's domain-specific knowledge and require many modifications to successfully generate an approximate version of the program. In this paper we present new techniques to semi-automatically discover relevant approximate computing parameters. We believe that better compiler-user interaction is the key to improved productivity. After pinpointing the region of interest to optimize, the developer is guided by the compiler in making the best implementation choices. Static analysis and runtime monitoring are used to infer approximation parameter values for the application. We evaluated these techniques on multiple application kernels that support approximation and show that, with the help of our method, we achieve performance similar to a non-assisted, hand-tuned version while requiring minimal intervention from the user.
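The runtime-monitoring step can be sketched abstractly (names, the candidate parameter, and the error metric are all illustrative, not the paper's actual interface): run the precise version once as a reference, then keep the most aggressive approximation parameter whose monitored error stays under a tolerance.

```python
# Hypothetical sketch of monitoring-guided parameter inference: pick
# the coarsest subsampling step whose output stays within `tol` of the
# precise (step = 1) reference run.

def max_abs_error(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def pick_step(kernel, data, candidate_steps, tol):
    reference = kernel(data, 1)            # precise run
    best = 1
    for step in sorted(candidate_steps):
        if max_abs_error(kernel(data, step), reference) <= tol:
            best = step                    # coarser, still accurate enough
    return best

def hold(data, step):
    """Example approximable kernel: subsample-and-hold every `step`."""
    return [data[(i // step) * step] for i in range(len(data))]
```

A tighter tolerance forces a finer (more precise) parameter, while a looser one admits a cheaper approximation.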

    Semi-Automatic Generation of Adaptive Codes

    International audience. Compiler automatic optimization and parallelization techniques are well suited for some classes of simulation or signal processing applications; however, they usually take into account neither domain-specific knowledge nor the possibility to change or to remove some computations to achieve "good enough" results. Quite differently, production simulation and signal processing codes have adaptive capabilities: they are designed to compute precise results only where it matters, if the complete problem is not tractable or if computation time must be short. In this paper, we present a new way to provide adaptive capabilities to compute-intensive codes automatically. It relies on domain-specific knowledge, provided through special pragmas by the programmer in the input code, and on polyhedral compilation techniques to continuously regenerate at runtime a code that performs heavy computations only where it matters at every moment. We present a case study on a fluid simulation application where our strategy enables significant computation savings and speedup in the optimized portion of the application while maintaining good precision, with minimal effort from the programmer.

    Limited effect of thermal pruning on wild blueberry crop and its root-associated microbiota

    Thermal pruning was a common pruning method in the past but has progressively been replaced by mechanical pruning for economic reasons. Both practices are known to enhance and maintain high yields; however, thermal pruning was documented to have an additional sanitation effect by reducing weeds and fungal disease outbreaks. Nevertheless, there is no clear consensus on the optimal fire intensity required to observe these outcomes. Furthermore, fire is known to alter the soil microbiome, as it impacts the soil organic layer and chemistry. Thus far, no study has investigated the effect of thermal pruning intensity on the wild blueberry microbiome in agricultural settings. This project aimed to document the effects of four graded thermal pruning intensities on wild blueberry performance, weeds, and diseases, as well as on the rhizosphere fungal and bacterial communities. A field trial was conducted using a block design, where agronomic variables were documented throughout the 2-year growing period. MiSeq amplicon sequencing was used to determine the diversity as well as the structure of the bacterial and fungal communities. Overall, yield, fruit ripeness, and several other agronomic variables were not significantly impacted by the burning treatments. Soil phosphorus was the only soil chemistry parameter with a significant, albeit temporary, change (1 month after thermal pruning). Our results also showed that bacterial and fungal communities did not significantly change between burning treatments. The fungal community was dominated by ericoid mycorrhizal fungi, while the bacterial community was mainly composed of Acidobacteriales, Isosphaerales, Frankiales, and Rhizobiales. However, burning at high intensities temporarily reduced Septoria leaf spot disease in the season following thermal pruning.
According to our study, thermal pruning has a limited short-term influence on the wild blueberry ecosystem but may have a potential impact on pests (notably Septoria infection), which should be explored in future studies to determine the burning frequency necessary to control this disease.

    La sépulture collective mégalithique de Cabrials (Béziers, Hérault). Une petite allée sépulcrale enterrée du début du Néolithique final

The structure excavated in November 2007 at the locality “Cabrials” is a small monument dating from the end of the Neolithic. It is set in an oblong excavation of approximately 3 m by 1.50 m and is composed of 9 orthostats, all retouched, carefully fitted together and blocked by smaller stones. All the orthostats of the walls are rough stelae or re-used architectural elements. The covering slabs were torn off by ploughing. Only one slab was found in the vicinity, of the same size and shape as the others. On the other hand, there is no indication of a possible marker. The chamber has a rectangular plan of 1.50 m by 0.70 m and a height of approximately 0.90 m. The long axis is oriented north-west/south-east. The chamber is entered via a pit backing onto the short north-western side, corresponding to a short embryonic passage or to the end of a partly open-air passage. This access is separated from the chamber by a removable slab resting on two pillars. This configuration, consisting of a single long buried chamber with a frontal access that is also below ground level, is typologically close to an underground gallery grave. While the monument is very small, its megalithic character is undeniable, as is its collective function.
Indeed, this tomb holds at least 19 individuals, whose burials were spread out over time. The deposits were significantly rearranged in at least two main phases. The sealing of the burial is problematic, because it involved a procedure carried out a long time after the last deposit. Immature young individuals are over-represented, which is surprising for this type of burial. In addition, the period of use seems short, which is also suggested by the strong typological coherence of the grave goods, all components of which relate to the Final Neolithic 1. The 14C dates consistently indicate use of the site around 3300 B.C. The grave goods comprise a large storage vase derived from the domestic sphere, some lithic tools and various elements whose distribution is relatively consistent with that observed for larger collective burials, in particular in the north of France. Lastly, the age of this monument, slightly earlier than the full development of megalithic collective burials in Languedoc, along with its modest size and its particular mode of functioning, evokes features intermediate between the small stone cists of the Middle Neolithic and the larger burial structures of the Final Neolithic, which were used for longer periods of time.

    Loss of SATB2 Occurs More Frequently Than CDX2 Loss in Colorectal Carcinoma and Identifies Particularly Aggressive Cancers in High-Risk Subgroups

    BACKGROUND Special AT-rich sequence-binding protein 2 (SATB2) has emerged as an alternative immunohistochemical marker to CDX2 for colorectal differentiation. However, the distribution and prognostic relevance of SATB2 expression in colorectal carcinoma (CRC) have yet to be fully elucidated. METHODS SATB2 expression was analysed in 1039 CRCs and correlated with clinicopathological and morphological factors, CDX2 expression, as well as survival parameters within the overall cohort and in clinicopathological subgroups. RESULTS SATB2 loss was a strong prognosticator in univariate analyses of the overall cohort (p < 0.001 for all survival comparisons) and in numerous subcohorts, including high-risk scenarios (UICC stage III/high tumour budding). SATB2 retained its prognostic relevance in multivariate analyses of these high-risk scenarios (e.g., UICC stage III: DSS: p = 0.007, HR: 1.95), but not in the overall cohort (DSS: p = 0.1, HR: 1.25). SATB2 loss was more frequent than CDX2 loss (22.2% vs. 10.2%, p < 0.001) and of higher prognostic relevance, with only moderate overlap between SATB2/CDX2 expression groups. CONCLUSIONS SATB2 loss is able to identify especially aggressive CRCs in high-risk subgroups. While SATB2 is prognostically superior to CDX2 in univariate analyses, it appears to be the less sensitive marker for colorectal differentiation, as it is lost more frequently.