
    Bio-signals compression using auto-encoder

    Recent developments in wearable devices permit a harmless and inexpensive way to gather medical data such as bio-signals (ECG, respiration, blood pressure, etc.). Collecting and analyzing such biomarkers can provide anticipatory healthcare through customized medical applications. Because wearable devices are constrained by size, resources, and battery capacity, a novel algorithm is needed to manage the device's memory and energy robustly. Rapid technological growth has produced numerous autoencoders that efficiently extract and select features from the time and frequency domains; the core idea is to train the hidden layer to reconstruct data similar to the input. Previous works required all features to accomplish compression, whereas our proposed framework, bio-signals compression using auto-encoder (BCAE), performs the task by selecting only the important features and compressing them. This reduces power consumption at the source and hence extends battery life. Performance is compared on three parameters: compression ratio, reconstruction error, and power consumption. Our proposed work outperforms the SURF method.
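
    Since the abstract describes BCAE only at a high level, the following is a minimal illustrative sketch of the general idea in PyTorch, not the paper's actual architecture: the window length, layer sizes, and bottleneck width are assumed values, and random data stands in for real ECG windows.

        # Illustrative sketch of a bio-signal autoencoder (not the paper's exact
        # BCAE architecture): a narrow bottleneck forces the network to keep only
        # the most informative features, which is what enables compression at the
        # sensor.
        import torch
        import torch.nn as nn

        WINDOW = 256   # assumed samples per ECG window
        CODE = 16      # assumed bottleneck size -> compression ratio 256/16 = 16:1

        class BioSignalAE(nn.Module):
            def __init__(self):
                super().__init__()
                # Encoder runs on the wearable: only the CODE-value vector is sent.
                self.encoder = nn.Sequential(
                    nn.Linear(WINDOW, 64), nn.ReLU(),
                    nn.Linear(64, CODE),
                )
                # Decoder runs at the receiver and reconstructs the window.
                self.decoder = nn.Sequential(
                    nn.Linear(CODE, 64), nn.ReLU(),
                    nn.Linear(64, WINDOW),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = BioSignalAE()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()  # reconstruction error, one of the three metrics

        x = torch.randn(32, WINDOW)      # stand-in for a batch of ECG windows
        for _ in range(100):             # minimal training loop
            opt.zero_grad()
            loss = loss_fn(model(x), x)  # train hidden layers to reproduce input
            loss.backward()
            opt.step()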

    Recent Developments in Video Surveillance

    With surveillance cameras installed everywhere, continuously streaming thousands of hours of video, how can that huge amount of data be analyzed or even be useful? Is it possible to search those countless hours of video for subjects or events of interest? Shouldn't the presence of a car stopped at a railroad crossing trigger an alarm system to prevent a potential accident? In the chapters selected for this book, experts in video surveillance provide answers to these questions and other interesting problems, skillfully blending research experience with practical real-life applications. Academic researchers will find a reliable compilation of relevant literature in addition to pointers to current advances in the field. Industry practitioners will find useful hints about state-of-the-art applications. The book also provides directions for open problems where further advances can be pursued.

    Compressed Image Quality Measurement

    The strict requirement of the Nyquist criterion imposes acquiring a large amount of data. When converted to the compressed domain, these data can be represented by very few points, and most of the samples are discarded. Consequently, in a typical signal processing system the use of sensors, the memory requirements, and the computational cost are not optimal: power requirements, computational complexity, and memory usage all increase, which indirectly raises the cost of the system. Data are generally stored in the compressed domain to reduce memory requirements, but calculating the compressed coefficients requires processing time that depends on the number of samples acquired. In most digital systems, only the estimation of certain signal parameters is required. These parameters are usually computed in the spatial or time domain, which in turn requires inverting the compressed coefficients. If the parameters were instead calculated in the compressed domain itself, the time for the inverse conversion could be avoided. Compressive measurement (CM) theory can further reduce the time and storage requirements: it states that the acquired compressed samples can be used directly for certain parameter estimation, reducing the number of computations required while keeping the estimation error small. One such parameter is the quality of an image. Quality estimation provides an objective score for an image; SSIM is the quality score considered in this thesis, and the implementation of compressive measurement with SSIM is its main objective. This combination reduces computation and thus helps in developing a real-time quality estimation system for data streams such as HD video. The thesis provides statistical results in support of the developed quality estimation metric.
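
    The abstract does not spell out how SSIM is combined with compressive measurements, so the sketch below only contrasts the two routes it describes: conventional SSIM computed on full pixel arrays versus a parameter estimated directly from random compressive samples, with no inverse transform. The random-projection correlation used here is an illustrative stand-in, not the thesis's metric; the image size and measurement count are assumptions.

        # Conventional SSIM needs the full pixel arrays; a compressed-domain
        # estimate works directly on the measurements, skipping reconstruction.
        import numpy as np
        from skimage.metrics import structural_similarity

        rng = np.random.default_rng(0)
        x = rng.random((64, 64))                                      # reference
        y = np.clip(x + 0.05 * rng.standard_normal((64, 64)), 0, 1)  # distorted

        # Conventional route: both full images available in the pixel domain.
        ssim_full = structural_similarity(x, y, data_range=1.0)

        # Compressed route: keep only m << n random projections of each image.
        n, m = x.size, 512
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # measurement matrix
        cx, cy = Phi @ x.ravel(), Phi @ y.ravel()

        # Random projections approximately preserve norms and inner products
        # (Johnson-Lindenstrauss), so correlation survives compression.
        corr = (cx @ cy) / (np.linalg.norm(cx) * np.linalg.norm(cy))

        print(f"pixel-domain SSIM: {ssim_full:.3f}, "
              f"compressed-domain correlation: {corr:.3f}")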

    Fast diffusion MRI based on sparse acquisition and reconstruction for long-term population imaging

    Diffusion-weighted magnetic resonance imaging (dMRI) is a unique MRI modality to probe the diffusive molecular transport in biological tissue. Due to its noninvasiveness and its ability to investigate the living human brain at submillimeter scale, dMRI is frequently performed in clinical and biomedical research to study the brain's complex microstructural architecture. Over the last decades, large prospective cohort studies have been set up with the aim to gain new insights into the development and progression of brain diseases across the life span and to discover biomarkers for disease prediction and potentially prevention. To allow for diverse brain imaging using different MRI modalities, stringent scan time limits are typically imposed in population imaging. Nevertheless, population studies aim to apply advanced and thereby time-consuming dMRI protocols that deliver high-quality data with great potential for future analysis. To allow for time-efficient but also versatile diffusion imaging, this thesis contributes to the investigation of accelerating diffusion spectrum imaging (DSI), an advanced dMRI technique that acquires imaging data with high intra-voxel resolution of tissue microstructure. Combining state-of-the-art parallel imaging and the theory of compressed sensing (CS) enables the acceleration of spatial encoding and diffusion encoding in dMRI. In this way, the otherwise long acquisition times of DSI can be reduced significantly. In this thesis, first, suitable q-space sampling strategies and basis functions are explored that fulfill the requirements of CS theory for accurate sparse DSI reconstruction. Novel 3D q-space sample distributions are investigated for CS-DSI. Moreover, conventional CS-DSI based on the discrete Fourier transform is compared for the first time to CS-DSI based on the continuous SHORE (simple harmonic oscillator based reconstruction and estimation) basis functions. Based on these findings, a CS-DSI protocol is proposed for application in a prospective cohort study, the Rhineland Study. A pilot study was designed and conducted to evaluate the CS-DSI protocol in comparison with state-of-the-art 3-shell dMRI and dedicated protocols for diffusion tensor imaging (DTI) and for the combined hindered and restricted model of diffusion (CHARMED). Population imaging requires processing techniques, preferably with low computational cost, to process and analyze the acquired big data within a reasonable time frame. Therefore, a pipeline for automated processing of CS-DSI acquisitions was implemented, including both in-house developed and existing state-of-the-art processing tools. The last contribution of this thesis is a novel method for automatic detection and imputation of signal dropout due to fast bulk motion during the diffusion encoding in dMRI. Subject motion is a common source of artifacts, especially when conducting clinical or population studies with children, the elderly, or patients. Related artifacts degrade image quality and adversely affect data analysis. It is thus highly desired to detect and then exclude or potentially impute defective measurements prior to dMRI analysis. Our proposed method applies dMRI signal modeling in the SHORE basis and determines outliers based on the weighted model residuals. Signal imputation reconstructs corrupted and therefore discarded measurements from the sparse set of inliers.
    This approach allows for fast and robust correction of imaging artifacts in dMRI, which is essential to estimate accurate and precise model parameters that reflect the diffusive transport of water molecules and the underlying microstructural environment in brain tissue.
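
    A schematic rendering of the residual-based detection-and-imputation idea, under stated assumptions: a generic polynomial design matrix stands in for the SHORE basis, and a robust MAD threshold stands in for the weighted-residual criterion; signal shape, noise level, and threshold factor are all invented for illustration.

        # Fit a linear basis to the measurements, flag points with large
        # residuals as motion-corrupted, refit on inliers, impute the rest.
        import numpy as np

        rng = np.random.default_rng(1)
        q = np.linspace(0, 1, 60)             # stand-in encoding coordinate
        y = np.exp(-3.0 * q**2) + 0.01 * rng.standard_normal(q.size)
        y[[10, 35]] -= 0.5                    # inject motion-like signal dropout

        B = np.vander(q, 6)                   # stand-in basis (SHORE in the paper)

        # 1) Fit on all measurements and compute residuals.
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        resid = y - B @ coef

        # 2) Flag outliers whose residuals are far outside the noise level.
        mad = np.median(np.abs(resid - np.median(resid)))
        outlier = np.abs(resid) > 5 * 1.4826 * mad  # robust threshold (assumed)

        # 3) Refit using inliers only, then impute the discarded measurements.
        coef_in, *_ = np.linalg.lstsq(B[~outlier], y[~outlier], rcond=None)
        y_imputed = y.copy()
        y_imputed[outlier] = (B @ coef_in)[outlier]

        print("flagged indices:", np.where(outlier)[0])  # should flag 10 and 35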

    Hierarchical Objective Quality Assessment for CS Video in WMSN

    Algorithms for Reconstruction of Undersampled Atomic Force Microscopy Images

    Probabilistic models for structured sparsity

    Generative adversarial networks review in earthquake-related engineering fields

    Within seismology, geology, and civil and structural engineering, deep learning (DL), especially via generative adversarial networks (GANs), represents an innovative and advantageous way to generate reliable synthetic data that reproduce the characteristics of actual samples, providing a handy data augmentation tool. Indeed, in many practical applications, obtaining a significant amount of high-quality data is demanding. Data augmentation is generally based on artificial intelligence (AI) and machine-learning data-driven models, and the DL GAN-based approach for generating synthetic seismic signals has revolutionized the current data augmentation paradigm. This study delivers a critical state-of-the-art review of recent research into AI-based GAN synthesis of ground motion signals and seismic events, together with a comprehensive insight into related seismic geophysical studies. It is relevant to earth and planetary science, geology and seismology, and oil and gas exploration, as well as to assessing the seismic response of buildings and infrastructure, seismic detection tasks, and general structural and civil engineering applications. Furthermore, highlighting the strengths and limitations of current studies on adversarial learning applied to seismology may help guide future research efforts toward the most promising directions.
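
    As a hedged illustration of the adversarial-learning setup the review surveys, here is a minimal 1D GAN in PyTorch. It reproduces none of the reviewed architectures; the trace length, latent size, layer widths, and training schedule are arbitrary assumptions, with a sine wave standing in for real seismic records.

        # Generator and discriminator trained adversarially: G learns to map
        # latent noise to waveform-like traces that D cannot tell from real ones.
        import torch
        import torch.nn as nn

        SIG_LEN, NOISE = 128, 32

        G = nn.Sequential(  # generator: noise -> synthetic "seismic" trace
            nn.Linear(NOISE, 128), nn.ReLU(),
            nn.Linear(128, SIG_LEN), nn.Tanh(),
        )
        D = nn.Sequential(  # discriminator: trace -> probability it is real
            nn.Linear(SIG_LEN, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid(),
        )
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCELoss()

        real = torch.sin(torch.linspace(0, 20, SIG_LEN)).repeat(64, 1)  # stand-in

        for step in range(200):
            # Discriminator: real traces labeled 1, generated traces labeled 0.
            fake = G(torch.randn(64, NOISE)).detach()
            loss_d = (bce(D(real), torch.ones(64, 1))
                      + bce(D(fake), torch.zeros(64, 1)))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            # Generator: fool the discriminator into labeling fakes as real.
            loss_g = bce(D(G(torch.randn(64, NOISE))), torch.ones(64, 1))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()

        synthetic = G(torch.randn(8, NOISE))  # augmented samples for training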