
    Automatic classification of nuclear physics data via a Constrained Evolutionary Clustering approach

    This paper presents an automatic method for data classification in nuclear physics experiments based on evolutionary computing and vector quantization. The major novelties of our approach are the fully automatic mechanism and the use of analytical models to provide physics constraints, yielding a fast and physically reliable classification with nearly zero human supervision. Our method is successfully validated on experimental data produced by stacks of semiconductor detectors. The resulting classification is highly satisfactory for all explored cases and is particularly robust to noise. The algorithm is suitable for integration into the online and offline analysis programs of existing large, complex detection arrays for the study of nucleus-nucleus collisions at low and intermediate energies.
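
    A minimal sketch of the general idea follows: a vector-quantization codebook whose prototypes are constrained to an analytical model curve, refined by a simple evolutionary loop. The model function, parameter ranges and all names are illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def model_curve(t, params):
            # Hypothetical analytical constraint: every prototype must lie on
            # this curve, so all candidate codebooks are physically admissible.
            a, b = params
            return np.stack([t, a * t**b], axis=-1)

        def quantization_error(data, prototypes):
            # Mean distance from each sample to its nearest prototype.
            d = np.linalg.norm(data[:, None, :] - prototypes[None, :, :], axis=-1)
            return d.min(axis=1).mean()

        def evolve(data, pop_size=20, generations=50, n_protos=8):
            # Each individual is a parameter vector of the analytical model.
            t = np.linspace(data[:, 0].min() + 1e-3, data[:, 0].max(), n_protos)
            pop = rng.uniform(0.5, 2.0, size=(pop_size, 2))
            for _ in range(generations):
                fit = [quantization_error(data, model_curve(t, p)) for p in pop]
                parents = pop[np.argsort(fit)[: pop_size // 2]]            # selection
                children = parents + rng.normal(0.0, 0.05, parents.shape)  # mutation
                pop = np.vstack([parents, children])
            best = min(pop, key=lambda p: quantization_error(data, model_curve(t, p)))
            return model_curve(t, best)

        # Samples are then classified by their nearest constrained prototype.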

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.
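
    For instance, the predict-and-quantize loop at the core of differential pulse code modulation (one of the topics above) can be sketched as follows; the recursively indexed variant replaces the plain uniform quantizer below with a recursively indexed one, which is not shown.

        def dpcm_encode(samples, step=4):
            # Predict each sample from the previous *reconstructed* value so
            # that encoder and decoder stay in sync, then quantize the residual.
            prediction, codes = 0, []
            for s in samples:
                q = round((s - prediction) / step)
                codes.append(q)
                prediction += q * step
            return codes

        def dpcm_decode(codes, step=4):
            prediction, out = 0, []
            for q in codes:
                prediction += q * step
                out.append(prediction)
            return out

        signal = [10, 12, 15, 15, 13, 9]
        print(dpcm_decode(dpcm_encode(signal)))  # [8, 12, 16, 16, 12, 8]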

    Enhanced Multicarrier Techniques for Professional Ad-Hoc and Cell-Based Communications (EMPhAtiC), Document Number D3.3: Reduction of PAPR and nonlinearity effects

    Deliverable of the European project EMPhAtiC. Like other multicarrier modulation techniques, FBMC suffers from a high peak-to-average power ratio (PAPR), which impacts its performance in the presence of a nonlinear high power amplifier (HPA) in two ways. The first impact is an in-band distortion affecting the error rate performance of the link. The second is an out-of-band effect appearing as power spectral density (PSD) regrowth, making coexistence between FBMC-based broadband Professional Mobile Radio (PMR) systems and existing narrowband systems difficult to achieve. This report first addresses the theoretical analysis of in-band HPA distortions in terms of bit error rate. The out-of-band impact of HPA nonlinearities is then studied in terms of PSD regrowth prediction. Furthermore, the problem of PAPR reduction is addressed, along with some HPA linearization techniques and nonlinearity compensation approaches.
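
    As a rough illustration of the PAPR problem, the sketch below measures the peak-to-average power ratio of one multicarrier block and passes it through a Rapp-model solid-state HPA; for brevity the block is generated with a plain IFFT rather than an FBMC filter bank, so the figures are indicative only.

        import numpy as np

        rng = np.random.default_rng(1)

        def papr_db(x):
            # Peak power over mean power of the complex baseband block, in dB.
            power = np.abs(x) ** 2
            return 10 * np.log10(power.max() / power.mean())

        def rapp_hpa(x, saturation=1.0, p=2.0):
            # Rapp model of a solid-state HPA: smooth AM/AM compression, no AM/PM.
            return x / (1.0 + (np.abs(x) / saturation) ** (2 * p)) ** (1.0 / (2 * p))

        n_sub = 64
        qpsk = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)
        block = np.fft.ifft(qpsk) * np.sqrt(n_sub)  # unit-power multicarrier block
        print(f"PAPR before HPA: {papr_db(block):.1f} dB")
        print(f"PAPR after  HPA: {papr_db(rapp_hpa(block)):.1f} dB")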

    Reliability of Trigonometric Transform-based Multi-Carrier Scheme

    This work seeks a new physical layer for a multicarrier wireless communication system that can be implemented with low complexity by resorting to a suitable fast transform. It presents and assesses a scheme based on the Discrete Trigonometric Transform, with symmetric redundancy appended either to each transformed block or to multiple consecutive blocks. A receiver front-end filter is proposed to enforce whole symmetry in the channel impulse response, and a bank of one-tap filters, one per sub-carrier, is applied as an equalizer in the transform domain. The behaviour of the transceiver is studied in the context of practical impairments such as fading channels, carrier frequency offset and narrowband interference. Moreover, the performance is compared with state-of-the-art methods by means of computer simulations; the new scheme is found to improve the robustness and reliability of the communication signal and to record a lower peak-to-average power ratio. The study demonstrates that the front-end matched filter effectively performs frequency synchronization, compensating the carrier frequency offset in the received signal.
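
    The one-tap-per-sub-carrier principle can be sketched as follows; for brevity the sketch uses the familiar cyclic-prefix/DFT case, where the transform diagonalizes the circular channel, whereas the scheme above achieves the analogous diagonalization with a trigonometric transform, symmetric redundancy and a symmetrizing front-end filter.

        import numpy as np

        rng = np.random.default_rng(2)
        N, cp = 64, 8
        h = np.array([1.0, 0.5, 0.2])  # toy channel impulse response

        tx_freq = rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)
        tx_time = np.fft.ifft(tx_freq)
        tx_cp = np.concatenate([tx_time[-cp:], tx_time])  # append cyclic prefix

        rx = np.convolve(tx_cp, h)[cp : cp + N]  # channel, then prefix removal
        rx_freq = np.fft.fft(rx)

        H = np.fft.fft(h, N)        # channel frequency response
        equalized = rx_freq / H     # one-tap filter per sub-carrier
        assert np.allclose(equalized, tx_freq)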

    Advances in Syndrome Coding based on Stochastic and Deterministic Matrices for Steganography

    Steganography is the art of covert communication. Unlike in cryptography, where the exchange of confidential data is obvious to third parties, in a steganographic system the confidential data are embedded into other, inconspicuous cover data (e.g. images) and transmitted to the receiver in this form. The goal of a steganographic algorithm is to change the cover data only slightly, so as to preserve their statistical properties, and to embed preferably into inconspicuous parts of the cover. To achieve this goal, several approaches to so-called minimum-embedding-impact steganography based on syndrome coding are presented, distinguishing between approaches based on stochastic matrices and those based on deterministic matrices. Finally, the algorithms are evaluated in order to highlight the advantages of applying syndrome coding.
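
    The principle behind syndrome coding can be illustrated with the classical binary [7,4] Hamming code: 3 message bits are embedded into 7 cover bits by flipping at most one of them. This toy sketch is only the simplest instance of the matrix constructions studied in the thesis.

        import numpy as np

        # Parity-check matrix whose columns are the numbers 1..7 in binary.
        H = np.array([[0, 0, 0, 1, 1, 1, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [1, 0, 1, 0, 1, 0, 1]])

        def embed(cover, message):
            # Flip at most one cover bit so the syndrome equals the message.
            diff = ((H @ cover) + message) % 2
            if diff.any():
                position = int("".join(map(str, diff)), 2) - 1  # matching column
                cover = cover.copy()
                cover[position] ^= 1
            return cover

        def extract(stego):
            return (H @ stego) % 2

        cover = np.array([1, 0, 1, 1, 0, 0, 1])
        message = np.array([1, 0, 1])
        stego = embed(cover, message)
        assert np.array_equal(extract(stego), message)
        assert np.sum(stego != cover) <= 1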

    Enhanced Computation Time for Fast Block Matching Algorithm

    Video compression is the process of reducing the amount of data required to represent digital video while preserving acceptable video quality. Recent studies on video compression have focused on multimedia transmission, videophones, teleconferencing, high-definition television (HDTV), CD-ROM storage, etc. The idea of compression techniques is to remove the redundant information that exists in video sequences. Motion-compensated predictive coding is the main coding tool for removing the temporal redundancy of video sequences, and it typically accounts for 50-80% of the video encoding complexity. This technique has been adopted by all of the existing international video coding standards. It assumes that the current frame can be locally modelled as a translation of the reference frames. The practical and widely used method to carry out motion-compensated prediction is the block matching algorithm. In this method, video frames are divided into a set of non-overlapping macroblocks; each target macroblock of the current frame is compared with the search area in the reference frame in order to find the best matching macroblock. This yields displacement vectors that describe the movement of the macroblocks from one location to another in the reference frame. Checking all these locations is called full search, which provides the best result but suffers from a long computational time, necessitating improvement. Several fast block matching algorithms have been developed to reduce the computational complexity. This thesis focuses on two classes: the first, the lossless block matching process, decreases the computational time required to determine the matching macroblock while keeping the resolution of the predicted frames the same as for the full search; the second, the lossy block matching process, reduces the computational complexity effectively, but the search result's quality is not the same as for the full search.
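
    A minimal sketch of the full-search baseline described above: the target macroblock is compared, using the sum of absolute differences (SAD), against every candidate position in the search window of the reference frame.

        import numpy as np

        def full_search(current, reference, top, left, block=16, search=7):
            target = current[top : top + block, left : left + block].astype(int)
            best_sad, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = top + dy, left + dx
                    if (y < 0 or x < 0 or
                            y + block > reference.shape[0] or
                            x + block > reference.shape[1]):
                        continue  # candidate block falls outside the frame
                    candidate = reference[y : y + block, x : x + block].astype(int)
                    sad = np.abs(target - candidate).sum()  # matching criterion
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            return best_mv, best_sad

        # Full search visits (2 * search + 1) ** 2 = 225 candidates per macroblock,
        # which is exactly the cost that fast (lossless and lossy) variants cut.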