
    Improved Stroke Detection at Early Stages Using Haar Wavelets and Laplacian Pyramid

    Stroke is the third leading cause of death in the world, yet few methods exist for its early detection, so a method for doing so is needed. This study proposes a combined method to detect two types of stroke simultaneously: Haar wavelets to detect hemorrhagic stroke and the Laplacian pyramid to detect ischemic stroke. The stages of this study consist of preprocessing stages 1 and 2, Haar wavelets, the Laplacian pyramid, and image quality enhancement. Preprocessing removes the skull, reduces noise, improves contrast, and removes everything other than the brain region; image enhancement is then applied. Next, the Haar wavelet is used to extract hemorrhagic regions, while the Laplacian pyramid extracts ischemic regions. The final stage computes Grey Level Co-occurrence Matrix (GLCM) features for the classification step. The visualization results are further processed for feature extraction using GLCM with 12 features and then GLCM with 4 features. SVM and KNN are used for classification, and performance is measured by accuracy. The hemorrhagic and ischemic data comprise 45 images divided into two parts: 28 images for testing and 17 images for training. The final results show that the highest accuracy achieved is 82% with SVM and 88% with KNN.
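    A minimal sketch of the final stage described above (GLCM texture features feeding SVM and KNN classifiers), assuming the preprocessed, wavelet/pyramid-extracted regions are available as 8-bit arrays; the particular 4-feature subset, the scikit-image/scikit-learn calls, and the label encoding are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: GLCM texture features + SVM/KNN classification (illustrative, not the authors' exact pipeline).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(img_u8):
    """Compute a small set of GLCM properties from an 8-bit grayscale region."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]  # assumed 4-feature subset
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

def classify(train_imgs, y_train, test_imgs):
    """y_train: 0 = ischemic, 1 = hemorrhagic (assumed encoding)."""
    X_train = np.stack([glcm_features(im) for im in train_imgs])
    X_test = np.stack([glcm_features(im) for im in test_imgs])
    svm = SVC(kernel="rbf").fit(X_train, y_train)
    knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
    return svm.predict(X_test), knn.predict(X_test)
```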

    Generic Feasibility of Perfect Reconstruction with Short FIR Filters in Multi-channel Systems

    We study the feasibility of short finite impulse response (FIR) synthesis for perfect reconstruction (PR) in generic FIR filter banks. Among all PR synthesis banks, we focus on the one with the minimum filter length. For filter banks with oversampling factors of at least two, we provide prescriptions for the shortest filter length of the synthesis bank that would guarantee PR almost surely. The prescribed length is as short as or shorter than that of the analysis filters and has an approximate inverse relationship with the oversampling factor. Our results are in the form of necessary and sufficient statements that hold generically, and hence fail only for elaborately designed nongeneric examples. We provide extensive numerical verification of the theoretical results and demonstrate that the gap between the derived filter length prescriptions and the true minimum is small. The results have potential applications in synthesis filter bank design problems, where the analysis bank is given, and in the analysis of fundamental limitations in blind signal reconstruction from data collected by unknown subsampled multi-channel systems. Comment: Manuscript submitted to IEEE Transactions on Signal Processing
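    A small numerical sketch in the spirit of the feasibility tests mentioned above: draw a generic (random) FIR analysis bank, pick a candidate synthesis length, and check by linear least squares whether a perfect-reconstruction synthesis bank of that length exists. The polyphase bookkeeping, the specific sizes (M = 4 channels, decimation N = 2, analysis length L = 8), and the delay sweep are illustrative assumptions, not the paper's construction.

```python
# Sketch: test whether a generic FIR analysis bank admits a short FIR PR synthesis bank.
import numpy as np

M, N, L = 4, 2, 8          # channels, decimation (oversampling M/N = 2), analysis filter length
K = 3                      # candidate number of synthesis polyphase taps (time length ~ K*N)
rng = np.random.default_rng(0)

h = rng.standard_normal((M, L))                        # generic (random) analysis filters
P = L // N
E = [h[:, k * N:(k + 1) * N] for k in range(P)]        # analysis polyphase matrices E_k (M x N)

# PR requires R(z) E(z) = z^{-d} I for some delay d, with R(z) = sum_j R_j z^{-j} (N x M).
# Stack the product coefficients into one linear system in the unknown entries of R_0..R_{K-1}.
T = K + P - 1
A = np.zeros((T * N * N, K * N * M))
for t in range(T):
    for j in range(K):
        k = t - j
        if 0 <= k < P:
            for a in range(N):
                for b in range(N):
                    row = (t * N + a) * N + b
                    for m in range(M):
                        A[row, (j * N + a) * M + m] += E[k][m, b]

best = np.inf
for d in range(T):                                     # sweep the reconstruction delay
    target = np.zeros(T * N * N)
    for a in range(N):
        target[(d * N + a) * N + a] = 1.0
    x, *_ = np.linalg.lstsq(A, target, rcond=None)
    best = min(best, np.linalg.norm(A @ x - target))
print("smallest PR residual over delays:", best)       # near zero => a synthesis bank of this length exists
```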

    Fast Compression of Imagery with High Frequency Content

    Image compression is an active research area owing to the many applications involving electronic media. Much research has focused on image quality versus bit rate and/or algorithm speed. Here, we seek an effective image coder with a weighted constraint on speed. However, the compression must not taint the quality of impulsive features in the image. Moreover, the camera is operated in a mode that creates a dominant fixed pattern noise across the image array, degrading visual quality and disrupting compression performance. We propose a method that efficiently compresses such an image. We begin by characterizing and removing the fixed pattern noise from the image, thereby dramatically improving its visual quality.
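    A minimal sketch of the noise-removal step described above, assuming the fixed pattern noise can be modeled as a static per-column offset estimated from a stack of frames; both the column-offset model and the median estimator are illustrative assumptions, not the paper's characterization.

```python
# Sketch: estimate and subtract an (assumed column-wise) fixed pattern noise before compression.
import numpy as np

def estimate_fpn(frames):
    """Estimate a static per-column offset from a stack of frames (shape: n_frames, rows, cols)."""
    col_profile = np.median(frames, axis=(0, 1))       # robust per-column level
    return col_profile - col_profile.mean()            # zero-mean so overall brightness is preserved

def remove_fpn(frame, fpn):
    """Subtract the estimated pattern from a single 8-bit frame."""
    return np.clip(frame.astype(np.float32) - fpn, 0, 255).astype(np.uint8)

# The corrected frame can then be handed to any fast coder, without the fixed-pattern
# stripes consuming bits or masking the impulsive features the application cares about.
```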

    WAVELET BASED DATA HIDING OF DEM IN THE CONTEXT OF REALTIME 3D VISUALIZATION (Visualisation 3D Temps-Réel à Distance de MNT par Insertion de Données Cachées Basée Ondelettes)

    The use of aerial photographs, satellite images, scanned maps and digital elevation models necessitates setting up strategies for the storage and visualization of these data. To obtain a three-dimensional visualization, the images, called textures, must be draped onto the terrain geometry, called the Digital Elevation Model (DEM). In practice, all this information is stored in three different files: DEM, texture, and position/projection of the data in a geo-referenced system. In this paper we propose to store all this information in a single file so that it stays synchronized. To this end we have developed a wavelet-based embedding method for hiding the data in a color image. The texture images containing hidden DEM data can then be sent from the server to a client in order to perform 3D terrain visualization. The embedding method is integrable with the JPEG2000 coder to accommodate compression and multi-resolution visualization.
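    A minimal sketch of the kind of wavelet-domain embedding the abstract describes, assuming a single-level Haar transform of one texture channel and a simple "replace the low bit of quantized detail coefficients" rule; the subband choice, the bit-replacement rule, and the PyWavelets calls are illustrative stand-ins for the paper's actual scheme (which is designed to integrate with JPEG2000).

```python
# Sketch: hide DEM payload bits in the wavelet domain of one channel of a texture image.
import numpy as np
import pywt

def embed_dem(texture_channel, dem_bits):
    """texture_channel: 2D float array; dem_bits: 1D int array of 0/1 payload bits."""
    LL, (LH, HL, HH) = pywt.dwt2(texture_channel, "haar")
    coeffs = np.round(HH).astype(np.int32).ravel()
    if dem_bits.size > coeffs.size:
        raise ValueError("payload larger than subband capacity")
    coeffs[:dem_bits.size] = (coeffs[:dem_bits.size] & ~1) | dem_bits   # replace LSBs
    HH_marked = coeffs.reshape(HH.shape).astype(float)
    return pywt.idwt2((LL, (LH, HL, HH_marked)), "haar")

def extract_dem(marked_channel, n_bits):
    """Recover the first n_bits payload bits from a marked texture channel."""
    _, (_, _, HH) = pywt.dwt2(marked_channel, "haar")
    return np.round(HH).astype(np.int32).ravel()[:n_bits] & 1
```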

    State-of-the-Art and Trends in Scalable Video Compression with Wavelet Based Approaches

    Scalable Video Coding (SVC) differs from traditional single-point approaches mainly because it allows encoding, in a single bit stream, several working points corresponding to different qualities, picture sizes and frame rates. This work describes the current state of the art in SVC, focusing on wavelet-based motion-compensated approaches (WSVC). It reviews individual components that have been designed to address the problem over the years and how such components are typically combined to achieve meaningful WSVC architectures. Coding schemes which mainly differ in the space-time order in which the wavelet transforms operate are compared here, discussing strengths and weaknesses of the resulting implementations. An evaluation of the achievable coding performance is provided considering the reference architectures studied and developed by ISO/MPEG in its exploration of WSVC. The paper also attempts to draw a list of major differences between wavelet-based solutions and the SVC standard jointly targeted by ITU and ISO/MPEG. A major emphasis is devoted to a promising WSVC solution, named STP-tool, which presents architectural similarities with the SVC standard. The paper ends by drawing some evolution trends for WSVC systems and giving insight into video coding applications which could benefit from a wavelet-based approach.
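    For readers unfamiliar with the temporal transform at the heart of WSVC schemes, a minimal sketch of the Haar lifting step used in motion-compensated temporal filtering (MCTF); motion compensation is deliberately omitted, so this is only the un-compensated special case, not a full WSVC architecture.

```python
# Sketch: one temporal Haar lifting step on a pair of frames (the MCTF core, without motion).
import numpy as np

def temporal_haar_lift(frame_even, frame_odd):
    """Return (low-pass 'average' frame, high-pass 'detail' frame) for a frame pair."""
    high = frame_odd - frame_even          # predict step: temporal residual
    low = frame_even + 0.5 * high          # update step: temporal average
    return low, high

def temporal_haar_inverse(low, high):
    """Invert the lifting step exactly, recovering the original frame pair."""
    frame_even = low - 0.5 * high
    frame_odd = high + frame_even
    return frame_even, frame_odd
```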

    Wavelet Decompositions of Nonrefinable Shift Invariant Spaces

    The motivation for this work is a recently constructed family of generators of shift invariant spaces with certain optimal approximation properties, but which are not refinable in the classical sense. We try to see whether, once the classical refinability requirement is removed, it is still possible to construct meaningful wavelet decompositions of dilates of the shift invariant space that are well suited for applications.
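    For context, a short reminder of what "refinable in the classical sense" means; the notation below is standard textbook material and is not taken from the paper itself.

```latex
% Shift invariant space generated by \varphi and its dyadic dilate:
S(\varphi) \;=\; \overline{\operatorname{span}}\,\{\varphi(\cdot - k) : k \in \mathbb{Z}\},
\qquad
S^{1}(\varphi) \;=\; \overline{\operatorname{span}}\,\{\varphi(2\,\cdot - k) : k \in \mathbb{Z}\}.
% Classical refinability: a two-scale relation
\varphi(x) \;=\; \sum_{k \in \mathbb{Z}} c_k\, \varphi(2x - k),
% which forces S(\varphi) \subset S^{1}(\varphi) and underlies the usual wavelet
% decomposition of S^{1}(\varphi) into S(\varphi) plus a wavelet complement.
```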

    Mining sequences in distributed sensors data for energy production.

    Brief Overview of the Problem: The Environmental Protection Agency (EPA), a government funded agency, provides both legislative and judicial powers for emissions monitoring in the United States. The agency crafts laws based on self-made regulations to force companies to operate within the limits of the law, resulting in environmentally safe operation. Specifically, power companies operate electric generating facilities under guidelines drawn up and enforced by the EPA. Acid rain and other harmful factors require that electric generating facilities report hourly emissions recorded via a Supervisory Control and Data Acquisition (SCADA) system. SCADA is a control and reporting system, present in all power plants, consisting of sensors and control mechanisms that monitor all equipment within the plants. The data recorded by a SCADA system is collected by the EPA and allows them to enforce proper plant operation relating to emissions. This data includes many generating-unit- and power-plant-specific details, including hourly generation. This hourly generation (termed grossunitload by the EPA) is the actual hourly average output of the generator on a per unit basis. The questions to be answered are: do any of these units operate in tandem, and do any of the units start, stop, or change operation as a result of another's change in generation? These questions will be answered for the period April 2002 through April 2003 for facilities that operate pipeline natural-gas-fired generating units.

    Purpose of Research: The research conducted has two uses if fruitful. First, a local model relating generating units would be highly profitable to energy traders. Betting that a plant will operate a unit based on another's current characteristics would be extremely profitable to energy traders. This profitability varies with fuel type. For instance, if the price of coal is extremely high due to shortages, knowledge of a semi-operating characteristic of two generating units is highly valuable. Second, this known characteristic can also be used in regulation and operational modeling. The second use is of great importance to government agencies. If regulatory committees are aware of past (or current) similarities between power producers, they may be able to avoid a power struggle in a region caused by greedy traders or companies. Setting profit motives aside, the Department of Energy could use something similar to build a model of power grid generation availability based on previous data for reliability purposes.

    Type of Problem: The problem tackled within this Master's thesis is one of multiple-time-series pattern recognition. This field is expansive and well studied, so the research performed benefits from previously known techniques. The author has chosen to experiment with conventional techniques such as correlation, principal component analysis, and k-means clustering for feature and, eventually, pattern extraction. For the primary analysis performed, the author chose a conventional sequence discovery algorithm. The sequence discovery algorithm has no prior knowledge of space limitations, so it searches over the entire space, resulting in an expensive but complete process. Prior to sequence discovery the author applies a uniform coding schema to the raw data, which is an adaptation of a coding schema presented by Keogh. This coding and discovery process is termed USD, or Uniform Sequence Discovery.
    The data is high-dimensional as well as extremely dynamic and sporadic with regard to magnitude. The energy market that demands power generation is profit-driven and, to some extent, reliability-driven. The obvious factors are more reliability-based; for instance, to keep system frequency at 60 Hz, units may operate in an idle state, resulting in a constant or very low value for a period of time (idle time). Also, to avoid large frequency swings on the power grid, companies are required…
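    A minimal sketch of the uniform coding step and the kind of cross-unit co-occurrence question the thesis poses, assuming an equal-width binning of each unit's hourly grossunitload series; the bin count, window width, and exact-match criterion are illustrative assumptions, not the thesis's USD algorithm.

```python
# Sketch: code an hourly generation series into symbols, then look for hours where two
# units' symbol subsequences coincide (illustrative, not the thesis's USD algorithm).
import numpy as np

def uniform_code(series, n_symbols=8):
    """Map a 1D series to integer symbols using equal-width bins over its observed range."""
    x = np.asarray(series, dtype=float)
    edges = np.linspace(x.min(), x.max(), n_symbols + 1)
    return np.digitize(x, edges[1:-1])                 # symbols 0 .. n_symbols-1

def cooccurring_windows(code_a, code_b, width=6):
    """Hours at which two units' coded subsequences of the given width match exactly."""
    return [t for t in range(len(code_a) - width + 1)
            if np.array_equal(code_a[t:t + width], code_b[t:t + width])]
```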

    Genetic algorithm and tabu search approaches to quantization for DCT-based image compression

    Today there are several formal and experimental methods for image compression, some of which have grown to be incorporated into the Joint Photographic Experts Group (JPEG) standard. Of course, many compression algorithms are still used only for experimentation, mainly due to various performance issues. Lack of speed while compressing or expanding an image, poor compression rate, and poor image quality after expansion are a few of the most common reasons for skepticism about a particular compression algorithm. This paper discusses current methods used for image compression. It also gives a detailed explanation of the discrete cosine transform (DCT) used by JPEG, and of the efforts that have recently been made to optimize related algorithms. Some interesting articles regarding possible compression enhancements will be noted, and in association with these methods a new implementation of a JPEG-like image coding algorithm will be outlined. This new technique involves adapting between one and sixteen quantization tables to a specific image using either a genetic algorithm (GA) or tabu search (TS) approach. First, a few schemes, including pixel neighborhood and Kohonen self-organizing map (SOM) algorithms, will be examined to find their effectiveness at classifying blocks of edge-detected image data. Next, the GA and TS algorithms will be tested to determine their effectiveness at finding the optimum quantization table(s) for a whole image. A comparison of the techniques utilized will be thoroughly explored.
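    A minimal sketch of the kind of fitness evaluation a GA or tabu search could drive when adapting quantization tables, assuming an 8x8 block DCT and a simple distortion-plus-rate-proxy objective; the weighting, the nonzero-coefficient rate proxy, and the mutation operator are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: score a candidate JPEG-style quantization table on one 8x8 block and mutate it
# (illustrative fitness/mutation for a GA or tabu search, not the paper's exact scheme).
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(block): return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def fitness(q_table, block, rate_weight=0.1):
    """Lower is better: reconstruction error plus a proxy for rate (count of nonzero coefficients)."""
    coeffs = np.round(dct2(block - 128.0) / q_table)
    recon = idct2(coeffs * q_table) + 128.0
    distortion = np.mean((block - recon) ** 2)
    rate_proxy = np.count_nonzero(coeffs)
    return distortion + rate_weight * rate_proxy

def mutate(q_table, rng, step=4):
    """One GA-style mutation: perturb a random table entry, keeping it in a valid range."""
    q = q_table.copy()
    i, j = rng.integers(0, 8, size=2)
    q[i, j] = np.clip(q[i, j] + rng.integers(-step, step + 1), 1, 255)
    return q
```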