23 research outputs found

    Comparative study and performance evaluation of MC-CDMA and OFDM over AWGN and fading channels environment

    The demand for high-data-rate wireless multimedia applications has increased significantly in the past few years. Users' pressure for faster communications, whether mobile, nomadic, or fixed, at no extra cost is nowadays a reality. To meet these demands, a new scheme combining digital modulation and multiple access was proposed in recent years, namely Multi-Carrier Code Division Multiple Access (MC-CDMA). Fourier-based OFDM uses complex exponential basis functions; these are replaced by wavelets in order to reduce the level of interference. Haar-based wavelets are found to be capable of reducing the ISI and ICI caused by the loss of orthogonality between the carriers. Further performance gains can be made by looking at alternative orthogonal basis functions and finding a better transform than the Fourier and wavelet transforms. In this thesis, three models are proposed: Model '1' (OFDM based on the In-Place Wavelet Transform), Model '2' (MC-CDMA based on the In-Place Wavelet Transform and a Phase Matrix) and Model '3' (MC-CDMA based on the Multiwavelet Transform). Their performance was compared with that of the traditional single-user models under different channel characteristics (AWGN channel, flat fading and selective fading). Model '1' achieved much lower bit error rates than the traditional FFT-based models, so the proposed models can be considered an alternative to conventional FFT-based MC-CDMA. The main advantage of the In-Place wavelet transform in the proposed models is that, unlike the ordered fast Haar wavelet transform, it does not require an additional array at each sweep, which makes it simpler to implement than the FFT. Model '2' contributes a new algorithm based on the In-Place wavelet transform whose first-level output is multiplied by the Phase Matrix. Model '3' gave a much lower bit error rate than the other two models as well as the traditional ones.
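The implementation claim in this abstract, that an In-Place Haar wavelet transform overwrites its input pair by pair and therefore needs no auxiliary array per sweep, unlike the ordered fast Haar transform, can be illustrated with a short sketch. This is a generic normalized Haar transform written in place, not the thesis's actual code; the function names are ours.

```python
import numpy as np

def haar_in_place(x):
    """Full in-place Haar transform.

    At each sweep, the pair (x[i], x[i+s]) is replaced by its
    normalized sum and difference directly in the input array,
    so no auxiliary array is needed at any sweep.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 1
    while s < n:
        for i in range(0, n, 2 * s):
            a = (x[i] + x[i + s]) / np.sqrt(2.0)  # smooth (average) part
            d = (x[i] - x[i + s]) / np.sqrt(2.0)  # detail part
            x[i], x[i + s] = a, d
        s *= 2
    return x

def haar_in_place_inverse(x):
    """Invert the sweeps in reverse order, again in place."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = n // 2
    while s >= 1:
        for i in range(0, n, 2 * s):
            a, d = x[i], x[i + s]
            x[i] = (a + d) / np.sqrt(2.0)
            x[i + s] = (a - d) / np.sqrt(2.0)
        s //= 2
    return x
```

Note that the coefficients stay interleaved in their natural positions rather than being gathered per scale, which is exactly what removes the need for the extra array used by the ordered variant.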

    A Multiscale Guide to Brownian Motion

    We revisit Lévy's construction of Brownian motion as a simple yet rigorous approach to operating with various Gaussian processes. A Brownian path is explicitly constructed as a linear combination of wavelet-based "geometrical features" at multiple length scales with random weights. Such a wavelet representation gives a closed-formula mapping of the unit interval onto the functional space of Brownian paths. This formula elucidates many classical results about Brownian motion (e.g., non-differentiability of its path), providing an intuitive feel for non-mathematicians. The illustrative character of the wavelet representation, along with the simple structure of the underlying probability space, is different from the usual presentation of most classical textbooks. Similar concepts are discussed for fractional Brownian motion, the Ornstein-Uhlenbeck process, the Gaussian free field, and fractional Gaussian fields. Wavelet representations and dyadic decompositions form the basis of many highly efficient numerical methods to simulate Gaussian processes and fields, including Brownian motion and other diffusive processes in confining domains.
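    The "linear combination of geometrical features with random weights" can be made concrete via Lévy's midpoint (Lévy-Ciesielski) construction that the abstract revisits: each refinement level adds an independent Gaussian tent-shaped correction at every dyadic midpoint. A minimal sketch, with our own function names:

```python
import numpy as np

def brownian_levy(levels, rng=None):
    """Levy's midpoint construction of Brownian motion on [0, 1].

    Start from the straight line B(0) = 0, B(1) = Z with Z ~ N(0, 1);
    each level then adds an independent Gaussian correction at every
    dyadic midpoint, with the exact conditional variance dt / 4 of a
    Brownian bridge over an interval of length dt.
    """
    rng = np.random.default_rng(rng)
    n = 2 ** levels
    t = np.linspace(0.0, 1.0, n + 1)
    b = np.zeros(n + 1)
    b[-1] = rng.standard_normal()            # endpoint B(1) ~ N(0, 1)
    step = n
    while step > 1:
        half = step // 2
        sd = np.sqrt(half / (2.0 * n))       # sqrt(dt / 4), dt = step / n
        for i in range(0, n, step):
            mid = i + half
            mean = 0.5 * (b[i] + b[i + step])
            b[mid] = mean + sd * rng.standard_normal()
        step = half
    return t, b
```

Because each midpoint uses the exact bridge variance, the dyadic samples have the exact Brownian joint distribution at every truncation level, which is what makes the representation both illustrative and numerically useful.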

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines are typically dependent on the images captured in real-world environments. This means that images might be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks. This challenge has hence attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. Such an operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. It turned out that the proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.

    Multidimensional Wavelets and Computer Vision

    This report deals with the construction and the mathematical analysis of multidimensional nonseparable wavelets and their efficient application in computer vision. In the first part, the fundamental principles and ideas of multidimensional wavelet filter design, such as the existence of good scaling matrices and sensible design criteria, are presented and extended in various directions. Afterwards, the analytical properties of these wavelets are investigated in some detail. It turns out that they are especially well suited to represent (discretized) data as well as large classes of operators in a sparse form, a property that directly yields efficient numerical algorithms. The final part of this work is dedicated to the application of the developed methods to the typical computer vision problems of nonlinear image regularization and the computation of optical flow in image sequences. It is demonstrated how the wavelet framework leads to stable and reliable results for these problems of generally ill-posed nature. Furthermore, all the algorithms are of order O(n), leading to fast processing.

    Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2018

    This open access book features a selection of high-quality papers from the presentations at the International Conference on Spectral and High-Order Methods 2018, offering an overview of the depth and breadth of the activities within this important research area. The carefully reviewed papers provide a snapshot of the state of the art, while the extensive bibliography helps initiate new research directions

    Efficient compression of motion compensated residuals

    EThOS - Electronic Theses Online Service, United Kingdom

    Structural reliability and stochastic finite element methods

    Purpose: This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long been researching uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature of relevant research, UQ is not an easy subject for novice researchers and practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.
    Design/methodology/approach: This confusing status significantly hampers the research progress and practical application of UQ methods in engineering. In the context of engineering analysis, the research efforts of UQ are most focused in two largely separate fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.
    Findings: Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, the response surface method, surrogate methods, polynomial chaos expansion, the perturbation method, the stochastic collocation method, etc. The review and comparison tests comment and conclude not only on the accuracy and efficiency of each method but also on their applicability to different types of uncertainty propagation problems.
    Originality/value: The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.
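As a toy illustration of two of the reviewed SRA methods, crude Monte Carlo and importance sampling, consider the standard reliability problem Pf = P(g(X) <= 0) with X standard normal. The limit state below is our own linear example with reliability index beta = 3 (so Pf = Phi(-3) exactly), not one of the paper's test cases:

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def g(x):
    # Linear limit state with reliability index beta = 3:
    # failure when g(x) <= 0, i.e. x[..., 0] >= 3.  Exact Pf = Phi(-3).
    return 3.0 - x[..., 0]

def crude_mc(n, rng):
    """Crude Monte Carlo: fraction of samples landing in the failure set."""
    x = rng.standard_normal((n, 2))
    return np.mean(g(x) <= 0.0)

def importance_sampling(n, rng):
    """Importance sampling centered at the design point mu = (3, 0)."""
    mu = np.array([3.0, 0.0])
    x = rng.standard_normal((n, 2)) + mu
    # Likelihood ratio phi(x) / phi(x - mu) of the original vs shifted density
    w = np.exp(-x @ mu + 0.5 * mu @ mu)
    return np.mean((g(x) <= 0.0) * w)
```

Centering the sampling density at the design point concentrates draws in the failure region, while the exact likelihood-ratio weight keeps the estimate unbiased; at Pf around 1.3e-3 this gives a far smaller variance than crude Monte Carlo for the same sample size.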

    Vol. 15, No. 2 (Full Issue)


    Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems

    Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in natural sciences. Today, AI has started to advance natural sciences by improving, accelerating, and enabling our understanding of natural phenomena at a wide range of spatial and temporal scales, giving rise to a new area of research known as AI for science (AI4Science). Being an emerging research paradigm, AI4Science is unique in that it is an enormous and highly interdisciplinary area. Thus, a unified and technical treatment of this field is needed yet challenging. This work aims to provide a technically thorough account of a subarea of AI4Science; namely, AI for quantum, atomistic, and continuum systems. These areas aim at understanding the physical world from the subatomic (wavefunctions and electron density), atomic (molecules, proteins, materials, and interactions), to macro (fluids, climate, and subsurface) scales and form an important subarea of AI4Science. A unique advantage of focusing on these areas is that they largely share a common set of challenges, thereby allowing a unified and foundational treatment. A key common challenge is how to capture physics first principles, especially symmetries, in natural systems by deep learning methods. We provide an in-depth yet intuitive account of techniques to achieve equivariance to symmetry transformations. We also discuss other common technical challenges, including explainability, out-of-distribution generalization, knowledge transfer with foundation and large language models, and uncertainty quantification. To facilitate learning and education, we provide categorized lists of resources that we found to be useful. We strive to be thorough and unified and hope this initial effort may trigger more community interests and efforts to further advance AI4Science
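    The "key common challenge" of capturing symmetries can be made concrete with a toy numerical check, entirely our own construction rather than anything from the survey: a scalar built from pairwise distances is invariant under rotations, while a vector field built from relative positions is equivariant (it rotates with the input).

```python
import numpy as np

def random_orthogonal(rng):
    """Random 3x3 orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.sign(np.diag(r))  # fix column signs for a well-defined Q

def invariant_energy(pos):
    """Scalar built from pairwise squared distances only.

    Rotations leave all pairwise distances unchanged, so this
    'energy' is rotation invariant by construction.
    """
    diff = pos[:, None, :] - pos[None, :, :]
    d2 = (diff ** 2).sum(-1)
    return np.exp(-d2).sum()

def equivariant_forces(pos):
    """One vector per particle, built from relative positions.

    Scalar weights exp(-d2) are invariant and the directions diff
    rotate with the input, so the output rotates with the input.
    """
    diff = pos[:, None, :] - pos[None, :, :]
    d2 = (diff ** 2).sum(-1, keepdims=True)
    return (np.exp(-d2) * diff).sum(axis=1)
```

Equivariant deep learning architectures enforce exactly this kind of transformation behavior layer by layer, rather than hoping the network learns it from data.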