
    Incremental refinement of image salient-point detection

    Low-level image analysis systems typically detect "points of interest", i.e., areas of natural images that contain corners or edges. Most of the robust and computationally efficient detectors proposed for this task use the autocorrelation matrix of localized image derivatives. Although the performance of such detectors and their suitability for particular applications have been studied in the relevant literature, their behavior under limited input source (image) precision or limited computational or energy resources is largely unknown. All existing frameworks assume that the input image is readily available for processing and that sufficient computational and energy resources exist for completing the result. Nevertheless, recent advances in incremental image sensors and compressed sensing, as well as the demand for low-complexity scene analysis in sensor networks, now challenge these assumptions. In this paper, we investigate an approach to compute salient points of images incrementally, i.e., the salient-point detector can operate with a coarsely quantized input image representation and successively refine the result (the derived salient points) as the image precision is refined by the sensor. This has the advantage that image sensing and salient-point detection can be terminated at any input image precision (e.g., a bound set by the sensory equipment, by the available computation, or by the salient-point accuracy required by the application), with the salient points obtained under that precision readily available. We focus on the popular detector proposed by Harris and Stephens and demonstrate how such an approach can operate when the image samples are refined in a bitwise manner, i.e., the image bitplanes are received one by one from the image sensor. We estimate the energy required for image sensing as well as the computation required for salient-point detection based on stochastic source modeling. The computation and energy required by the proposed incremental refinement approach are compared against a conventional salient-point detector realization that operates directly on each source precision and cannot refine its result. Our experiments demonstrate the feasibility of incremental approaches for salient-point detection in various classes of natural images. In addition, a first comparison between the results obtained by the intermediate detectors is given, and a novel application for adaptive low-energy image sensing based on points of saliency is presented.
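    As a rough illustration of the idea above, the sketch below runs a standard Harris-Stephens response on an image whose precision is limited to its most significant bitplanes and re-detects salient points as more bitplanes become available. It is a minimal, generic sketch (SciPy-based, with illustrative thresholds), not the authors' incremental-refinement algorithm, which reuses computation across precision levels rather than recomputing from scratch.

```python
# Minimal sketch: Harris-Stephens detection on a bitplane-truncated image.
# Function names, thresholds, and the stand-in image are illustrative only.
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def truncate_to_bitplanes(img8, n_planes):
    """Keep only the n_planes most significant bitplanes of an 8-bit image."""
    shift = 8 - n_planes
    return (img8 >> shift) << shift

def harris_response(img, sigma=1.0, k=0.05):
    """Harris corner response from the local autocorrelation (structure) matrix."""
    img = img.astype(np.float64)
    ix = sobel(img, axis=1)                  # horizontal derivative
    iy = sobel(img, axis=0)                  # vertical derivative
    # Smoothed derivative products are the entries of the autocorrelation matrix.
    sxx = gaussian_filter(ix * ix, sigma)
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2              # large positive values near corners

# Refine the detection as more bitplanes arrive from the sensor.
img8 = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
for planes in range(1, 9):
    coarse = truncate_to_bitplanes(img8, planes)
    r = harris_response(coarse)
    corners = np.argwhere(r > 0.01 * r.max())
    print(f"{planes} bitplane(s): {len(corners)} candidate salient points")
```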

    Learning to compress and search visual data in large-scale systems

    The problem of high-dimensional and large-scale representation of visual data is addressed from an unsupervised learning perspective. The emphasis is put on discrete representations, where the description length can be measured in bits and hence the model capacity can be controlled. The algorithmic infrastructure is developed based on the synthesis and analysis prior models, whose rate-distortion properties, as well as capacity vs. sample-complexity trade-offs, are carefully optimized. These models are then extended to multiple layers, namely the RRQ and ML-STC frameworks, where the latter is further evolved into a powerful deep neural network architecture with fast, sample-efficient training and discrete representations. For the developed algorithms, three important applications are presented. First, the problem of large-scale similarity search in retrieval systems is addressed, where a double-stage solution is proposed, leading to faster query times and more compact database storage. Second, the problem of learned image compression is targeted, where the proposed models can capture more redundancies from the training images than conventional compression codecs. Finally, the proposed algorithms are used to solve ill-posed inverse problems. In particular, the problems of image denoising and compressive sensing are addressed with promising results. Comment: PhD thesis dissertation.
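    As a loose illustration of the discrete, multi-layer representations mentioned above, the sketch below implements plain residual quantization with k-means codebooks. It is not the RRQ or ML-STC algorithms of the thesis; all names, layer counts, and codebook sizes are illustrative.

```python
# Minimal sketch of multi-layer (residual) quantization for discrete codes.
# Generic residual quantization only; not the thesis's RRQ/ML-STC methods.
import numpy as np
from sklearn.cluster import KMeans

def train_residual_quantizer(X, n_layers=3, n_codes=256, seed=0):
    """Train one k-means codebook per layer on the residual of the previous layers."""
    codebooks, residual = [], X.copy()
    for _ in range(n_layers):
        km = KMeans(n_clusters=n_codes, n_init=4, random_state=seed).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]
    return codebooks

def encode(x, codebooks):
    """Greedily pick the nearest codeword per layer; the code is a short list of ints."""
    codes, residual = [], x.copy()
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        codes.append(idx)
        residual = residual - cb[idx]
    return codes  # e.g. 3 layers x 8 bits = 24 bits per descriptor

X = np.random.randn(10000, 64).astype(np.float32)   # stand-in visual descriptors
cbs = train_residual_quantizer(X)
print(encode(X[0], cbs))
```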

    Multiresolution analysis as an approach for tool path planning in NC machining

    Wavelets permit multiresolution analysis of curves and surfaces. A complex curve can be decomposed using wavelet theory into lower-resolution curves. The low-resolution (coarse) curves are analogous to rough-cuts and the high-resolution (fine) curves to finish-cuts in numerically controlled (NC) machining. In this project, we investigate the applicability of multiresolution analysis using B-spline wavelets to NC machining of contoured 2D objects. High-resolution curves are used close to the object boundary, similar to conventional offsetting, while lower-resolution curves, straight lines, and circular arcs are used farther away from the object boundary. Experimental results indicate that wavelet-based multiresolution tool path planning improves machining efficiency: tool path length is reduced, sharp corners are smoothed out (thereby reducing uncut areas), and larger tools can be selected for rough-cuts.
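    As a small illustration of the decomposition described above, the sketch below uses PyWavelets' biorthogonal spline wavelets (standing in for the project's B-spline wavelets) to derive a coarse, low-resolution version of a sampled boundary curve of the kind that could guide rough cuts. It is illustrative only and not the project's tool-path planner.

```python
# Minimal sketch: multiresolution decomposition of a closed 2-D contour.
# The noisy ellipse and the 'bior2.2' wavelet are illustrative choices.
import numpy as np
import pywt

# Sample a contour (a noisy ellipse stands in for a part boundary).
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x = 40 * np.cos(t) + np.random.normal(0, 0.5, t.size)
y = 25 * np.sin(t) + np.random.normal(0, 0.5, t.size)

def coarse_contour(signal, wavelet="bior2.2", levels=3):
    """Drop the finest 'levels' detail bands and reconstruct a smoother curve."""
    coeffs = pywt.wavedec(signal, wavelet, mode="periodization")
    for i in range(1, levels + 1):
        coeffs[-i] = np.zeros_like(coeffs[-i])    # zero out fine-scale details
    return pywt.waverec(coeffs, wavelet, mode="periodization")

# Low-resolution curve: usable far from the boundary (rough cuts);
# the original high-resolution curve is kept near the boundary (finish cuts).
x_coarse, y_coarse = coarse_contour(x), coarse_contour(y)
print(x_coarse.shape, y_coarse.shape)
```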

    Compression Efficiency for Combining Different Embedded Image Compression Techniques with Huffman Encoding

    This thesis presents a technique for image compression which uses different embedded wavelet-based image coding schemes in combination with a Huffman encoder (for further compression). Among the algorithms available for lossy image compression, Embedded Zerotree Wavelet (EZW), Set Partitioning in Hierarchical Trees (SPIHT), and Modified SPIHT are some of the important compression techniques. The EZW algorithm is based on progressive encoding, compressing an image into a bit stream with increasing accuracy. The EZW encoder was originally designed to operate on 2D images, but it can also be applied to signals of other dimensions. Progressive encoding is also called embedded encoding. The main feature of the EZW algorithm is its capability of meeting an exact target bit rate with a corresponding rate-distortion function (RDF). Set Partitioning in Hierarchical Trees (SPIHT) is an improved version of EZW and has become the general standard in this family of coders. SPIHT is a very efficient image compression algorithm that is based on the idea of coding groups of wavelet coefficients as zerotrees. Since the order in which the subsets are tested for significance is important, in a practical implementation the significance information is stored in three ordered lists: the list of insignificant sets (LIS), the list of insignificant pixels (LIP), and the list of significant pixels (LSP). The Modified SPIHT algorithm and the preprocessing techniques provide significant quality improvement (both subjective and objective) in the reconstruction at the decoder with little additional computational complexity compared to the previous techniques. The proposed method can reduce redundancy to a certain extent. Simulation results show that these hybrid algorithms yield quite promising PSNR values at low bitrates.
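    To make the notion of embedded (progressive) coding concrete, the sketch below performs a bare bitplane significance scan over wavelet coefficients. It omits the zerotree and LIS/LIP/LSP structures of EZW/SPIHT as well as the Huffman stage, and only illustrates why such bit streams can be truncated at any target rate; all names and sizes are illustrative.

```python
# Minimal sketch of embedded (bitplane-progressive) coding of wavelet coefficients.
# Bare significance passes only; no zerotrees, no LIS/LIP/LSP, no entropy coding.
import numpy as np
import pywt

img = np.random.rand(64, 64) * 255                      # stand-in image
coeffs = pywt.wavedec2(img, "haar", level=3)
flat, _ = pywt.coeffs_to_array(coeffs)                  # all coefficients in one array

bits = []
threshold = 2 ** int(np.floor(np.log2(np.abs(flat).max())))
while threshold >= 1:
    significant = np.abs(flat) >= threshold             # significance pass at this bitplane
    bits.extend(significant.astype(np.uint8).ravel())   # 1 raw bit per coefficient
    threshold //= 2                                     # move to the next, finer bitplane
# 'bits' would then be entropy coded (e.g. with a Huffman encoder) for further compression.
print(len(bits), "raw significance bits before entropy coding")
```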

    Development of Low Power Image Compression Techniques

    The digital camera is the main medium for digital photography. The basic operation performed by a simple digital camera is to convert light energy to electrical energy; the signal is then converted to a digital format, and a compression algorithm is used to reduce the memory required to store the image. This compression algorithm is invoked every time an image is captured and stored, which motivates the development of an efficient compression algorithm that gives the same result as the existing algorithms with lower power consumption. As a result, a camera implementing the new algorithm can capture more images than the previous one. 1) Discrete Cosine Transform (DCT) based JPEG is an accepted standard for lossy compression of still images. Quantisation is mainly responsible for the loss in image quality in the process of lossy compression. A new Energy Quantisation (EQ) method is proposed for speeding up the coding and decoding procedure while preserving image quality.
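    For context on where the quantisation loss arises in DCT-based JPEG, the sketch below transforms and quantises a single 8x8 block using the baseline JPEG luminance table. It does not implement the Energy Quantisation (EQ) method proposed in the thesis; the stand-in block is random data.

```python
# Minimal sketch of JPEG-style lossy coding of one 8x8 block:
# level shift, 2-D DCT, quantisation, dequantisation, inverse DCT.
import numpy as np
from scipy.fft import dctn, idctn

Q_LUMA = np.array([                      # baseline JPEG luminance quantisation table
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

block = np.random.randint(0, 256, (8, 8)).astype(np.float64) - 128  # level shift
coeffs = dctn(block, norm="ortho")                 # 2-D DCT of the block
quantized = np.round(coeffs / Q_LUMA)              # most high-frequency values collapse to 0
reconstructed = idctn(quantized * Q_LUMA, norm="ortho") + 128
print("nonzero coefficients:", int(np.count_nonzero(quantized)))
```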

    Bifurcation analysis of the Topp model

    In this paper, we study the 3-dimensional Topp model for the dynamics of diabetes. We show that for suitable parameter values an equilibrium of this model bifurcates through a Hopf-saddle-node bifurcation. Numerical analysis suggests that near this point Shilnikov homoclinic orbits exist. In addition, chaotic attractors arise through period doubling cascades of limit cycles. Keywords: Dynamics of diabetes · Topp model · Reduced planar quartic Topp system · Singular point · Limit cycle · Hopf-saddle-node bifurcation · Period doubling bifurcation · Shilnikov homoclinic orbit · Chaos
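    For reference, the sketch below integrates the commonly cited form of the 3-dimensional Topp model (glucose G, insulin I, beta-cell mass B). The exact equations and the parameter values at which the Hopf-saddle-node bifurcation occurs may differ from those used in the paper; the numbers here are illustrative placeholders, not the bifurcation parameters.

```python
# Minimal sketch of the Topp glucose-insulin-beta-cell model in its commonly
# cited form; parameter values are illustrative placeholders only.
import numpy as np
from scipy.integrate import solve_ivp

def topp(t, y, R0=864.0, EG0=1.44, SI=0.72, sigma=43.2, alpha=20000.0,
         k=432.0, d0=0.06, r1=0.84e-3, r2=0.24e-5):
    G, I, B = y
    dG = R0 - (EG0 + SI * I) * G                      # glucose balance
    dI = B * sigma * G**2 / (alpha + G**2) - k * I    # insulin secretion and clearance
    dB = (-d0 + r1 * G - r2 * G**2) * B               # slow beta-cell mass dynamics
    return [dG, dI, dB]

# LSODA handles the fast glucose/insulin vs. slow beta-cell timescale separation.
sol = solve_ivp(topp, (0.0, 1000.0), [100.0, 10.0, 300.0],
                method="LSODA", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # long-time state for these illustrative parameter values
```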

    A machine learning approach to statistical shape models with applications to medical image analysis

    Statistical shape models have become an indispensable tool for image analysis. The use of shape models is especially popular in computer vision and medical image analysis, where they have been incorporated as a prior into a wide range of different algorithms. In spite of their great success, the study of statistical shape models has not received much attention in recent years. Shape models are often seen as an isolated technique, which merely consists of applying Principal Component Analysis to a set of example data sets. In this thesis we revisit statistical shape models and discuss their construction and applications from the perspective of machine learning and kernel methods. The shapes that belong to an object class are modeled as a Gaussian Process whose parameters are estimated from example data. This formulation puts statistical shape models in a much wider context and makes the powerful inference tools from learning theory applicable to shape modeling. Furthermore, the formulation is continuous and thus helps to avoid the discretization issues that often arise with discrete models. An important step in building statistical shape models is to establish surface correspondence. We discuss an approach based on kernel methods that allows us to integrate the statistical shape model as an additional prior, thus unifying the methods of registration and shape-model fitting. Using Gaussian Process regression we can integrate shape constraints into our model. These constraints can be used to enforce landmark matching in the fitting or correspondence problem. The same technique also leads directly to a new solution for shape reconstruction from partial data. In addition to experiments on synthetic 2D data sets, we show the applicability of our methods on real 3D medical data of the human head. In particular, we build a 3D model of the human skull and present its applications for the planning of cranio-facial surgeries.
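    As a minimal point of comparison for the Gaussian Process formulation described above, the sketch below builds a classical landmark-based shape model with PCA, the construction the thesis generalizes. Array shapes, names, and the random training data are illustrative.

```python
# Minimal sketch of a PCA-based statistical shape model over aligned landmarks.
# Illustrative only; the thesis develops a continuous Gaussian Process formulation.
import numpy as np

def build_shape_model(shapes):
    """shapes: (n_examples, n_landmarks * dim) array of aligned training shapes."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the main modes of shape variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s**2 / (shapes.shape[0] - 1)
    return mean, vt, variances

def sample_shape(mean, modes, variances, coeffs):
    """New shape from low-dimensional coefficients b: x = mean + sum_i b_i * sqrt(var_i) * mode_i."""
    return mean + (coeffs * np.sqrt(variances[: len(coeffs)])) @ modes[: len(coeffs)]

train = np.random.randn(50, 2 * 100)          # 50 example shapes, 100 2-D landmarks each
mean, modes, var = build_shape_model(train)
new_shape = sample_shape(mean, modes, var, coeffs=np.array([1.5, -0.5, 0.2]))
print(new_shape.shape)
```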

    EG-ICE 2021 Workshop on Intelligent Computing in Engineering

    The 28th EG-ICE International Workshop 2021 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolutions to support multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, searching in multi-dimensional solution spaces, accommodating uncertainty, including specialist domain knowledge, performing sensor-data interpretation, and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and, most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer goes both ways.