
    Spline-based dense medial descriptors for lossy image compression

    Medial descriptors are of significant interest for image simplification, representation, manipulation, and compression. B-splines, in turn, are well-known tools for specifying smooth curves in computer graphics and geometric design. In this paper, we integrate the two by modeling medial descriptors with stable and accurate B-splines for image compression. Representing medial descriptors with B-splines not only greatly improves compression but also yields an effective vector representation of raster images. A comprehensive evaluation shows that our Spline-based Dense Medial Descriptors (SDMD) method achieves much higher compression ratios at similar or even better quality than the well-known JPEG technique. We illustrate our approach with applications in generating super-resolution images and salient-feature-preserving image compression.
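
    As a rough illustration of the B-spline machinery behind such a vector representation (a minimal sketch only: the knot vector, control points, and function below are invented for the example and are not the SDMD pipeline itself), the following Python snippet evaluates a cubic B-spline curve with de Boor's algorithm:

        import numpy as np

        def de_boor(t, knots, ctrl, p=3):
            # Evaluate a degree-p B-spline at parameter t via de Boor's algorithm.
            # Find the knot span k with knots[k] <= t < knots[k+1].
            k = np.searchsorted(knots, t, side='right') - 1
            k = min(max(k, p), len(knots) - p - 2)
            # Copy the p+1 control points that influence this span.
            d = [np.array(ctrl[j], dtype=float) for j in range(k - p, k + 1)]
            # Triangular scheme of affine combinations.
            for r in range(1, p + 1):
                for j in range(p, r - 1, -1):
                    i = j + k - p
                    alpha = (t - knots[i]) / (knots[i + p - r + 1] - knots[i])
                    d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
            return d[p]

        # Example: a clamped cubic spline on four 2-D control points
        # (control polygon chosen arbitrarily for illustration).
        knots = [0, 0, 0, 0, 1, 1, 1, 1]
        ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
        print(de_boor(0.5, knots, ctrl))   # on-curve point [2.0, 1.5]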

    A survey, review, and future trends of skin lesion segmentation and classification

    The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently shown increasing interest in developing such CAD systems, with the intention of providing dermatologists with a user-friendly tool that reduces the challenges of manual inspection. This article provides a comprehensive literature survey and review of a total of 594 publications (356 on skin lesion segmentation and 238 on skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the development of CAD systems, including: relevant and essential definitions and theories; input data (dataset utilization, preprocessing, augmentation, and class-imbalance handling); method configuration (techniques, architectures, module frameworks, and losses); training tactics (hyperparameter settings); and evaluation criteria. We also investigate a variety of performance-enhancing approaches, including ensembling and post-processing, and discuss these dimensions to reveal current trends based on utilization frequency. In addition, we highlight the primary difficulties of evaluating skin lesion segmentation and classification systems using minimal datasets, as well as potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.
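
    Since evaluation criteria are one of the surveyed dimensions, a minimal sketch of the two most common overlap metrics for lesion segmentation may help fix ideas (plain NumPy on binary masks; an illustration only, not tied to any particular surveyed method):

        import numpy as np

        def dice_iou(pred, gt):
            # Dice coefficient and IoU between two binary lesion masks
            # (True = lesion pixel); a small epsilon avoids division by zero.
            pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
            inter = np.logical_and(pred, gt).sum()
            union = np.logical_or(pred, gt).sum()
            dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-9)
            iou = inter / (union + 1e-9)
            return dice, iou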

    Consensus or fusion of segmentations for selected detection and classification applications in imaging

    Recently, true metrics in the sense of a given criterion (with good asymptotic properties) were introduced between data partitions (clusterings), including spatially indexed data such as image segmentations. From these metrics, the notion of an average (or consensus) segmentation was proposed in image processing as the solution of an optimization problem and as a simple and effective way to improve the final segmentation or classification result, obtained by averaging (or fusing) different segmentations of the same scene roughly estimated by several simple segmentation models (or by the same model with different internal parameters). This principle, which can be viewed as denoising of high-abstraction data, has recently proved to be an effective and highly parallelizable alternative to methods based on ever more complex and computationally expensive segmentation models. The principle of a distance between segmentations, and of averaging or fusing segmentations in a criterion sense, can be exploited, directly or with easy adaptation, by any algorithm or method in digital imaging whose data can be substituted with segmented images. This thesis aims to demonstrate this assertion and to present original applications in various fields of digital imaging, such as visualization and indexing in large image databases based on the segmented content of each image rather than the usual color and texture descriptors; image processing, to appreciably and easily improve the performance of motion-detection methods in image sequences; and, finally, the analysis and classification of medical images, with an application enabling the automatic detection and quantification of Alzheimer's disease from magnetic resonance images of the brain.
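
    As a toy illustration of the consensus idea, a pixel-wise majority vote over several label maps can be written in a few lines of NumPy (this assumes labels are already aligned across segmentations, a strong simplification; the thesis instead minimises a true inter-partition distance, which this sketch does not implement):

        import numpy as np

        def majority_consensus(segmentations):
            # Pixel-wise majority vote over k label maps of identical shape.
            stack = np.stack(segmentations)              # shape (k, H, W)
            labels = np.unique(stack)
            # Count, for every pixel, how many maps assign each label.
            votes = np.stack([(stack == l).sum(axis=0) for l in labels])
            return labels[np.argmax(votes, axis=0)]      # consensus (H, W) map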

    Quality-of-Information Aware Sensing Node Characterisation for Optimised Energy Consumption in Visual Sensor Networks

    Energy consumption is one of the primary concerns in a resource-constrained visual sensor network (VSN) with wireless transceiving capability. Existing VSN design solutions for particular resource-constrained scenarios are application-specific, whereas the degree of sensitivity to resource constraints varies from one application to another. This limits the implementation of existing energy-efficient solutions within a VSN node that may be part of a heterogeneous network. This thesis aims to resolve the energy consumption issues faced within VSNs because of their resource-constrained nature by proposing energy-efficient solutions for sensing-node characterisation. The heterogeneity of image capture and processing within a VSN can be adaptively reflected with a dynamic field-of-view (FoV) realisation, which is expected to allow a generalised energy-efficient solution that adapts to the heterogeneity of the network. In this thesis, a FoV characterisation framework is proposed that can assist design engineers during the pre-deployment phase in developing energy-efficient VSNs. The proposed FoV characterisation framework provides efficient solutions for: 1) selecting a suitable sensing range; 2) maximising spatial coverage; 3) minimising the number of required nodes; and 4) adaptive task classification. The task classification scheme proposed in this thesis exploits the heterogeneity of the network and leads to an optimal distribution of tasks between visual sensing nodes. Soft decision criteria are exploited, and it is observed that, for a given detection reliability, the proposed FoV characterisation framework provides energy-efficient solutions that can be implemented within heterogeneous networks. In the post-deployment phase, the energy efficiency of a VSN for a given level of reliability can be enhanced by reconfiguring its nodes dynamically to achieve optimal configurations. Considering the dynamic realisation of quality-of-information (QoI), a strategy is devised for selecting suitable configurations of visual sensing nodes to reduce redundant visual content prior to transmission without sacrificing the expected information-retrieval reliability. By incorporating QoI awareness using a peak signal-to-noise ratio (PSNR)-based representative metric, the distributed nature of the proposed self-reconfiguration scheme accelerates the decision-making process. This thesis also proposes a unified framework for node classification and dynamic self-reconfiguration in VSNs. For a given application, the unified framework provides a feasible solution to classify and reconfigure visual sensing nodes based on their FoV by exploiting the heterogeneity of targeted QoI within the sensing region. The results show that, for the second degree of heterogeneity in targeted QoI, the unified framework outperforms its existing counterparts, yielding up to 72% energy savings at reliability as low as 94%. Within the context of resource-constrained VSNs, the substantial energy savings achieved by the proposed unified framework can extend network lifetime. Moreover, the reliability analysis demonstrates the suitability of the unified framework for applications that require a desired level of QoI.
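
    As a simple sketch of two ingredients named above, a sector-shaped FoV membership test and a PSNR-based QoI proxy might look as follows (the sector model and its parameters are illustrative assumptions, not the thesis' characterisation framework itself):

        import numpy as np

        def in_fov(node_xy, heading_rad, half_angle_rad, sensing_range, point_xy):
            # Illustrative FoV model: a 2-D circular sector. A point is covered
            # if it lies within sensing range and within the angular half-width.
            v = np.asarray(point_xy, float) - np.asarray(node_xy, float)
            if np.hypot(v[0], v[1]) > sensing_range:
                return False
            ang = np.arctan2(v[1], v[0]) - heading_rad
            ang = (ang + np.pi) % (2 * np.pi) - np.pi    # wrap to [-pi, pi]
            return abs(ang) <= half_angle_rad

        def psnr(ref, img, peak=255.0):
            # PSNR as a simple QoI proxy, echoing the PSNR-based metric above.
            mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
            return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)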

    A survey of the application of soft computing to investment and financial trading


    Towards the development of flexible, reliable, reconfigurable, and high-performance imaging systems

    Current FPGAs can implement large systems because of the high density of reconfigurable logic resources in a single chip. FPGAs are comprehensive devices that combine flexibility and high performance in the same platform, compared to other platforms such as General-Purpose Processors (GPPs) and Application-Specific Integrated Circuits (ASICs). The flexibility of modern FPGAs is further enhanced by the Dynamic Partial Reconfiguration (DPR) feature, which allows the functionality of part of the system to change while other parts keep functioning. Because of these features, FPGAs have become an important platform for digital image processing applications: they can fulfil the need for efficient and flexible platforms that execute imaging tasks efficiently and reliably, with low power, high performance, and high flexibility. The use of FPGAs as accelerators for image processing outperforms most current solutions. Current FPGA solutions can load the parts of an imaging application that need high computational power onto dedicated reconfigurable hardware accelerators while other parts run on the traditional solution, increasing system performance. Moreover, the DPR feature further enhances the flexibility of image processing by swapping accelerators in and out at run-time. The use of fault-mitigation techniques in FPGAs enables imaging applications to operate in harsh environments, despite the fact that FPGAs are sensitive to radiation and extreme conditions. The aim of this thesis is to present a platform for efficient implementations of imaging tasks. The research uses FPGAs as the key component of this platform and uses the concept of DPR to increase performance and flexibility, reduce power dissipation, and expand the range of possible imaging applications. In this context, it proposes the use of FPGAs to accelerate the Image Processing Pipeline (IPP) stages, the core part of most imaging devices. The thesis contains a number of novel concepts. The first is the use of the FPGA hardware environment and the DPR feature to increase parallelism and achieve high flexibility, while also increasing performance and reducing power consumption and area utilisation. Based on this concept, the following implementations are presented in this thesis: an implementation of the Adams-Hamilton demosaicing algorithm for camera colour interpolation, which exploits FPGA parallelism to outperform other equivalents; and an implementation of Automatic White Balance (AWB), another IPP stage, which employs the DPR feature to demonstrate the aforementioned novelty aspects. Another novel concept, presented in chapter 6, uses the DPR feature to develop a flexible imaging system that requires less logic and can be implemented in small FPGAs; the system can be employed as a template for any imaging application without limitation. Moreover, this thesis discusses a novel reliable version of the imaging system that adopts techniques including scrubbing, Built-In Self-Test (BIST), and Triple Modular Redundancy (TMR) to detect and correct errors using the Internal Configuration Access Port (ICAP) primitive. These techniques exploit the datapath-based nature of the implemented imaging system to improve its overall reliability. The thesis presents a proposal for integrating the imaging system with the Robust Reliable Reconfigurable Real-Time Heterogeneous Operating System (R4THOS) to get the best out of the system. The proposal shows the suitability of the proposed DPR imaging system for use as part of the core system of autonomous cars because of its unbounded flexibility. These novel works are presented in a number of publications, as listed in section 1.3 of this thesis.
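
    As a software reference for one of the IPP stages mentioned above, a gray-world Automatic White Balance can be sketched in a few lines of Python (an illustrative model only: the thesis implements AWB in FPGA hardware with DPR, and its algorithmic details may differ):

        import numpy as np

        def gray_world_awb(rgb):
            # Gray-world assumption: the average scene colour is neutral grey,
            # so each channel is scaled until all channel means coincide.
            img = np.asarray(rgb, float)                   # (H, W, 3) image
            means = img.reshape(-1, 3).mean(axis=0)        # per-channel means
            gains = means.mean() / np.maximum(means, 1e-9) # avoid divide-by-zero
            return np.clip(img * gains, 0.0, 255.0)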