355 research outputs found

    Optical image compression and encryption methods

    Over the years, extensive studies have been carried out to apply coherent-optics methods to real-time communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. Recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. However, transmitted data can be intercepted by unauthorized parties, which is why considerable effort is currently devoted to data encryption and secure transmission. In addition, for many applications only a small part of the overall information is really useful, so these applications can tolerate compression, which demands significant processing when the transmission bit rate is taken into account. To enable efficient and secure information exchange, it is often necessary to reduce the amount of transmitted information. In this context, much work has been undertaken using the principle of coherent-optics filtering to select relevant information and encrypt it. Compression and encryption are often carried out separately, although they are strongly related and can influence each other. Optical processing methodologies based on filtering are described that are applicable to transmission and/or data storage. Finally, the advantages and limitations of a set of optical compression and encryption methods are discussed.

    An Extended Review on Fabric Defects and Its Detection Techniques

    In the textile industry, fabric quality is the most important factor. Identifying and preventing fabric faults/defects at an early stage is essential, since human visual inspection consumes considerable time and cost. Nowadays, automated inspection systems based on computer vision and image processing techniques are widely used to reduce fault-detection time and improve visual clarity. This paper presents an extended review of the quality parameters in the fiber-to-fabric process and of defect-detection techniques applied to the three major clusters of fabric defects: knitting, woven, and sewing defects. It also explains the statistical performance measures used to analyze the defect-detection process, and compares the methods proposed in the field of fabric defect detection.
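As a toy illustration of the statistical (non-learning) family of detectors covered by such reviews, one can flag fabric patches whose intensity statistics deviate from the fabric-wide norm. The function name and the threshold k are illustrative assumptions, not taken from the paper:

```python
import statistics

def flag_defective_patches(patches, k=3.0):
    # compute per-patch mean intensity, then flag patches whose mean deviates
    # from the global mean by more than k population standard deviations
    means = [statistics.fmean(p) for p in patches]
    mu = statistics.fmean(means)
    sigma = statistics.pstdev(means)
    return [i for i, m in enumerate(means) if abs(m - mu) > k * sigma]
```

For example, twenty uniform patches plus one bright patch yield exactly the bright patch's index.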

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th, to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.

    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Design of a Simulator for Neonatal Multichannel EEG: Application to Time-Frequency Approaches for Automatic Artifact Removal and Seizure Detection

    The electroencephalogram (EEG) is used to noninvasively monitor brain activity; it is the most utilized tool for detecting abnormalities such as seizures. In recent studies, detection of neonatal EEG seizures has been automated to assist neurophysiologists, as manual detection is time-consuming and subjective; however, automated detection still lacks the robustness required for clinical implementation. Moreover, although the EEG is intended to record cerebral activity, extra-cerebral activities external to the brain are also recorded; these are called "artifacts" and can seriously degrade the accuracy of seizure detection. Seizures are one of the most common neurologic problems managed by hospitals, occurring in 0.1%-0.5% of live births. Neonates with seizures are at higher risk of mortality and are reported to be 55-70 times more likely to develop severe cerebral palsy. Early and accurate detection of neonatal seizures is therefore important to prevent long-term neurological damage. Several attempts at modelling the neonatal EEG and its artifacts have been made, but most did not consider the multichannel case; furthermore, these models were used to test artifact or seizure detection separately, but not together. This study aims to design synthetic models that generate clean or corrupted multichannel EEG, to test the accuracy of available artifact and seizure detection algorithms in a controlled environment. In this thesis, a synthetic neonatal EEG model is constructed using single-channel EEG simulators, a head model, 21 electrodes, and propagation equations to produce clean multichannel EEG. A neonatal EEG artifact model is then designed, using synthetic signals to corrupt the EEG waveforms. After that, an automated EEG artifact detection and removal system is designed in both the time and time-frequency domains; artifact detection is optimised and removal performance is evaluated.
Finally, an automated seizure detection technique is developed, utilising fused and extended multichannel features alongside a cross-validated SVM classifier. Results show that the synthetic EEG model mimics real neonatal EEG with an average correlation of 0.62, and that corrupted EEG can degrade average seizure detection accuracy from 100% to 70.9%. They also show that artifact detection and removal raises the average accuracy to 89.6%, and that utilising the extended features raises it to 97.4% and strengthens its robustness.

    Principled methods for mixtures processing

    This document is my thesis for obtaining the habilitation à diriger des recherches, the French diploma required to fully supervise Ph.D. students. It summarizes the research I did over the last 15 years and also lays out the short-term research directions and applications I want to investigate. Regarding my past research, I first describe my work on probabilistic audio modeling, including the separation of Gaussian and α-stable stochastic processes. Then, I mention my work on deep learning applied to audio, which rapidly turned into a large effort of community service. Finally, I present my contributions to machine learning, with some work on hardware compressed sensing and probabilistic generative models. My research programme involves a theoretical part that revolves around probabilistic machine learning, and an applied part that concerns the processing of time series arising in both audio and the life sciences.

    Elastic image registration using parametric deformation models

    The main topic of this thesis is elastic image registration for biomedical applications. We start with an overview and classification of existing registration techniques. We revisit the landmark interpolation that appears in landmark-based registration techniques and add some generalizations. We then develop a general elastic image registration algorithm. It uses a grid of uniform B-splines to describe the deformation, and also uses B-splines for image interpolation. Multiresolution in both the image and deformation-model spaces yields robustness and speed. We first describe a version of this algorithm targeted at finding unidirectional deformations in EPI magnetic resonance images, then present an enhanced and generalized version which is significantly faster and capable of treating multidimensional deformations. We apply this algorithm to the registration of SPECT data and to motion estimation in ultrasound image sequences. A semi-automatic version of the registration algorithm accepts expert hints in the form of soft landmark constraints; far fewer landmarks are needed, and the results are far superior to those of pure landmark registration. In the second part of this thesis, we deal with the problem of generalized sampling and variational reconstruction. We explain how to reconstruct an object from several measurements obtained through arbitrary linear operators, which comprises the case of traditional as well as generalized sampling. Among all possible reconstructions, we choose the one minimizing an a priori given quadratic variational criterion. We give an overview of the method and present several examples of applications. We also provide the mathematical details of the theory and discuss the choice of the variational criterion to be used.
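The uniform cubic B-spline basis underlying both the deformation grid and the image interpolation model has a simple closed form; a minimal sketch (the function name is an illustrative choice):

```python
def bspline3(x):
    # centred cubic B-spline beta^3(x): piecewise cubic with support (-2, 2),
    # C^2-continuous, and a partition of unity on the integer grid
    x = abs(x)
    if x < 1:
        return 2.0 / 3.0 - x * x + x ** 3 / 2.0
    if x < 2:
        return (2.0 - x) ** 3 / 6.0
    return 0.0
```

A 1-D deformation on a uniform grid with spacing h can then be written as g(x) = x + Σ_j c_j · bspline3(x/h − j), with the coefficients c_j optimized by the registration algorithm.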

    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve increasingly advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in DWT algorithms and applications. The book covers a wide range of methods (e.g., lifting, shift invariance, multi-scale analysis) for constructing DWTs, and its chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image-processing algorithms such as a multiresolution approach for edge detection, low-bit-rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters consist of both tutorial and highly advanced material; the book is therefore intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
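The lifting construction mentioned above can be shown in its simplest case, a one-level Haar DWT: a predict step forms the details from neighbouring samples, and an update step forms the octave-scale averages. A plain-Python sketch (names are illustrative):

```python
def haar_lifting(x):
    # one level of the Haar DWT via lifting:
    # predict: detail = odd - even; update: approx = even + detail / 2
    evens, odds = list(x[0::2]), list(x[1::2])
    details = [o - e for e, o in zip(evens, odds)]
    approx = [e + d / 2 for e, d in zip(evens, details)]
    return approx, details

def haar_lifting_inverse(approx, details):
    # lifting steps are trivially invertible: undo them in reverse order
    evens = [a - d / 2 for a, d in zip(approx, details)]
    odds = [e + d for e, d in zip(evens, details)]
    out = []
    for e, o in zip(evens, odds):
        out += [e, o]
    return out
```

Each lifting step is invertible by construction, which is one reason lifting is attractive for the VLSI and FPGA implementations discussed in Part I.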

    Waveform Advancements and Synchronization Techniques for Generalized Frequency Division Multiplexing

    To enable a new level of connectivity among machines as well as between people and machines, future wireless applications will demand higher requirements on data rates, response time, and reliability from the communication system. This will lead to a different system design, comprising a wide range of deployment scenarios. One important aspect is the evolution of the physical layer (PHY), specifically the waveform modulation. The novel generalized frequency division multiplexing (GFDM) technique is a prominent proposal for a flexible block-filtered multicarrier modulation. This thesis introduces an advanced GFDM concept that enables the emulation of other prominent waveform candidates in the scenarios where they perform best. Hence, a unique modulation framework is presented that is capable of addressing a wide range of scenarios and of upgrading the PHY for 5G networks. In particular, for a subset of system parameters of the modulation framework, the problem of symbol time offset (STO) and carrier frequency offset (CFO) estimation is investigated, and synchronization approaches that can operate in burst and continuous transmissions are designed. The first part of this work presents the modulation principles of prominent 5G candidate waveforms and then focuses on the basic and advanced attributes of GFDM. The GFDM concept is extended towards the use of OQAM, introducing the novel frequency-shift OQAM-GFDM and a new low-complexity model based on signal processing carried out in the time domain. A new prototype filter proposal highlights the benefits obtained in terms of reduced out-of-band (OOB) radiation and a more attractive hardware implementation cost. With proper parameterization of the advanced GFDM, the achieved gains are applicable to other filtered OFDM waveforms. In the second part, a search approach for estimating STO and CFO in GFDM is evaluated.
A self-interference metric is proposed to quantify the effective SNR penalty caused by residual time and frequency misalignment or by the intrinsic inter-symbol interference (ISI) and inter-carrier interference (ICI) of arbitrary pulse-shape designs in GFDM. In particular, the ICI can be used as a non-data-aided approach for frequency estimation. Then, GFDM training sequences, defined either as an isolated preamble or embedded as a midamble or pseudo-circular pre/post-amble, are designed. Simulations show lower OOB emission and estimation results either comparable or superior to those of state-of-the-art OFDM systems in wireless channels.