109 research outputs found

    Signal processing algorithms for digital hearing aids

    Hearing loss is a problem that severely affects speech communication and prevents most hearing-impaired people from leading a normal life. Although the vast majority of hearing loss cases could be corrected by using hearing aids, only a small fraction of the hearing-impaired people who could benefit from hearing aids actually purchase one. This limited use of hearing aids arises from a problem that, to date, has not been solved effectively and comfortably: the automatic adaptation of the hearing aid to the changing acoustic environment that surrounds its user. There are two approaches to this problem. The first, "manual" approach, in which the user has to identify the acoustic situation and choose the adequate amplification program, has been found to be very uncomfortable. The second approach includes an automatic program selection system within the hearing aid. This latter approach is deemed very useful by most hearing aid users, even if its performance is not perfect. Although the necessity of such a sound classification system seems clear, its implementation is a very difficult matter. The development of an automatic sound classification system in a digital hearing aid is a challenging goal because of the inherent limitations of the Digital Signal Processor (DSP) on which the hearing aid is based: most digital hearing aids have very strong constraints in terms of computational capacity, memory and battery, which seriously limit the implementation of advanced algorithms. With this in mind, this thesis focuses on the design and implementation of a prototype digital hearing aid able to automatically classify the acoustic environments that hearing aid users face daily and to select the amplification program best adapted to that environment, aiming to enhance the speech intelligibility perceived by the user.
The most important contribution of this thesis is the implementation of a prototype digital hearing aid that automatically classifies the acoustic environment surrounding its user and selects the most appropriate amplification program for that environment, aiming to enhance the sound quality perceived by the user. The battery life of this hearing aid is 140 hours, very similar to that of hearing aids on the market, and, of key importance, about 30% of the DSP resources remain available for implementing other algorithms.
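The program selection described above hinges on a sound classifier cheap enough for a hearing-aid DSP. As a rough, purely illustrative sketch (the thesis's actual feature set, classes and classifier are not reproduced here), a frame-based classifier thresholding two inexpensive features might look like:

```python
import numpy as np

# Hypothetical program names; a real hearing aid would map classes
# to fitted amplification programs.
PROGRAMS = ["speech_in_quiet", "speech_in_noise", "music"]

def classify_frame(frame, sr=16000):
    """Toy acoustic-environment classifier using short-time energy and
    the spectral centroid; thresholds are illustrative only."""
    energy = np.mean(frame ** 2)
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    if energy < 1e-4:            # very quiet frame
        return "speech_in_quiet"
    # brighter spectra (high centroid) treated as music-like here
    return "music" if centroid > 2000 else "speech_in_noise"
```

A production classifier would smooth decisions over many frames to avoid rapid program switching, which users perceive as annoying.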

    A Taylor polynomial expansion line search for large-scale optimization

    In trying to cope with the Big Data deluge, the landscape of distributed computing has changed. Large commodity hardware clusters, typically operating in some form of MapReduce framework, are becoming prevalent for organizations that require both tremendous storage capacity and fault tolerance. However, the high cost of communication can dominate the computation time in large-scale optimization routines in these frameworks. This thesis considers the problem of how to efficiently conduct univariate line searches in commodity clusters in the context of gradient-based batch optimization algorithms, like the staple limited-memory BFGS (LBFGS) method. In it, a new line search technique is proposed for cases where the underlying objective function is analytic, as in logistic regression and low rank matrix factorization. The technique approximates the objective function by a truncated Taylor polynomial along a fixed search direction. The coefficients of this polynomial may be computed efficiently in parallel with far less communication than needed to transmit the high-dimensional gradient vector, after which the polynomial may be minimized with high accuracy in a neighbourhood of the expansion point without distributed operations. This Polynomial Expansion Line Search (PELS) may be invoked iteratively until the expansion point and minimum are sufficiently accurate, and can provide substantial savings in time and communication costs when multiple iterations in the line search procedure are required. Three applications of the PELS technique are presented herein for important classes of analytic functions: (i) logistic regression (LR), (ii) low-rank matrix factorization (MF) models, and (iii) the feedforward multilayer perceptron (MLP). In addition, for LR and MF, implementations of PELS in the Apache Spark framework for fault-tolerant cluster computing are provided. 
These implementations conferred significant convergence enhancements on their respective algorithms, and will be of interest to Spark and Hadoop practitioners. For instance, the Spark PELS technique reduced the number of iterations and the time required by LBFGS to reach terminal training accuracies for LR models by factors of 1.8--2. Substantial acceleration was also observed for the Nonlinear Conjugate Gradient algorithm for MLP models, which is an interesting case for future study in optimization for neural networks. The PELS technique is applicable to a broad class of models for Big Data processing and large-scale optimization, and can be a useful component of batch optimization routines.

    An investigation into adaptive power reduction techniques for neural hardware

    In light of the growing applicability of Artificial Neural Networks (ANNs) in the signal processing field [1] and the present thrust of the semiconductor industry towards low-power SoCs for mobile devices [2], the power consumption of ANN hardware has become a very important implementation issue. Adaptability is a powerful and useful feature of neural networks, yet all current low-power ANN hardware techniques are ‘non-adaptive’ with respect to the power consumption of the network (i.e. power reduction is not an objective of the adaptation/learning process). The research work presented in this thesis investigates possible adaptive power reduction techniques that attempt to exploit the adaptability of neural networks in order to reduce power consumption. Three separate approaches to such adaptive power reduction are proposed: adaptation of network size, adaptation of network weights and adaptation of calculation precision. Initial case studies exhibit promising results with significant power reduction.
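Of the three approaches, adaptation of calculation precision is the simplest to illustrate: reducing the bit-width used for network weights trades a small accuracy loss for lower arithmetic power. A minimal uniform-quantisation sketch (not the thesis's actual scheme, which adapts precision during learning) is:

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniformly quantise a weight array to a signed `bits`-bit grid.
    Illustrative stand-in for hardware reduced-precision arithmetic."""
    scale = np.max(np.abs(w)) or 1.0          # avoid division by zero
    levels = 2 ** (bits - 1) - 1              # e.g. 127 for 8 bits
    return np.round(w / scale * levels) / levels * scale
```

An adaptive scheme could then lower `bits` layer by layer while retraining, stopping when the task accuracy degrades beyond a tolerance.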

    Automatic control of a multirotor

    The objective of this thesis is to describe the design and realisation phases of a multirotor to be used for low-risk, low-cost aerial observation. The starting point of this activity was a wide literature study of the technological evolution of multirotor design and of the state of the art. First, the most common multirotor configurations were defined and, according to a size- and performance-based evaluation, the most suitable one was chosen. A detailed computer-aided design model was drawn as the basis for the realisation of two prototypes. The realised multirotors were “X-shaped” octorotors with eight coaxially coupled motors. The mathematical model of the multirotor dynamics was studied. “Proportional Integral Derivative” and “Linear Quadratic” algorithms were chosen as techniques to regulate the attitude dynamics of the multirotor. These methods were tested with a nonlinear model simulation developed in the Matlab Simulink environment. Meanwhile, the Arduino board was selected as the best compromise between cost and performance, and the above-mentioned algorithms were implemented on this platform thanks to its main characteristic of being completely “open source”. Indeed, the multirotor was conceived to be a serviceable tool for public utility and, at the same time, an accessible device for research and study. The behaviour of the physical multirotor was evaluated with a test bench designed to isolate the rotation about one body axis at a time. The data from the experimental tests were gathered in real time using custom Matlab code, and several indoor tests allowed the “fine tuning” of the controller gains. Afterwards, a portable “ground station” was conceived and realised in adherence with the needs of users in real scenarios.
Several outdoor experimental flights were executed with successful results, and the data gathered during the outdoor tests were used to evaluate key performance indicators such as the endurance and the maximum allowable payload mass. The fault tolerance of the control system was then evaluated by simulating and physically testing the loss of one motor; even in this critical condition the system exhibited acceptable behaviour. The project readiness reached made it possible to meet potential users such as the “Turin Fire Department” and to cooperate with them in a simulated emergency, during which the multirotor was used to gather and transmit real-time aerial images for improved “situation awareness”. Finally, the study was extended to more innovative control techniques such as those based on neural networks. Simulation results demonstrated their effectiveness; nevertheless, their inherent complexity and unreliability outside the training ranges could have a catastrophic impact on airworthiness, a factor that cannot be neglected, especially in applications involving flying platforms. In summary, this research work addressed mainly the operating procedures for implementing automatic control algorithms on real platforms. All the design aspects, from the preliminary multirotor configuration choice to tests in possible real scenarios, were covered, obtaining performance comparable with other commercial off-the-shelf platforms.
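The single-axis attitude regulation described above can be illustrated with a minimal discrete PID controller of the kind typically run on an Arduino-class board. This is a generic sketch: the gains, time step and structure are illustrative, not the tuned values from the thesis's test bench:

```python
class PID:
    """Discrete PID regulator for one attitude axis.
    error = desired angle - measured angle; output drives the motor mix."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the integral term and approximate the derivative
        # with a backward difference over one control period dt.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

In practice the loop runs at a fixed rate (e.g. a few hundred Hz), and anti-windup limits on `integral` are needed on a real platform; the test-bench isolation of one axis at a time makes tuning each such loop independently straightforward.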

    Quantum Neural Networks with Qutrits

    Quantum computers, leveraging the principles of quantum physics, have the potential to revolutionize various domains by utilizing quantum bits (qubits) that can exist in superposition and entanglement, allowing for parallel exploration of solutions. Recent advancements in quantum hardware have enabled the realization of high-dimensional quantum states on chip-scale platforms, proposing another potential avenue. The utilization of qudits, quantum systems with more than two levels, not only offers increased information capacity but also exhibits improved resilience against noise and errors. Experimental implementations have successfully showcased the potential of high-dimensional quantum systems in efficiently encoding complex quantum circuits, further highlighting their promise for the future of quantum computing. In this thesis, the potential of qutrits to enhance machine learning tasks in quantum computing is explored. The expanded state space offered by qutrits enables richer data representation, capturing intricate patterns and relationships. To this end, employing the mathematical framework of SU(3), the Gell-Mann feature map is introduced to encode information within an 8-dimensional space. This empowers quantum computing systems to process and represent larger amounts of data within a single qutrit.
The primary focus of this thesis centers on classification tasks utilizing qutrits, where a comparative analysis is conducted between the proposed Gell-Mann feature map, well-established qubit feature maps, and classical machine learning models. Furthermore, optimization techniques within expanded Hilbert spaces are explored, addressing challenges such as vanishing gradients and barren plateau landscapes. This work covers foundational concepts and principles in quantum computing and machine learning to ensure a solid understanding of the subject, and highlights recent advancements in quantum hardware, specifically focusing on qutrit-based systems. The main objective is to explore the feasibility of the Gell-Mann encoding for multiclass classification in the SU(3) space, demonstrate the viability of expanded Hilbert spaces for machine learning tasks, and establish a robust foundation for working with geometric feature maps. By delving into the design considerations and experimental setups in detail, this research aims to contribute to the broader understanding of the capabilities and limitations of qutrit-based systems in the context of quantum machine learning, contributing to the advancement of quantum computing and its applications in practical domains.
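The idea behind a Gell-Mann feature map can be sketched as follows: a classical 8-dimensional feature vector supplies the coefficients of the eight Gell-Mann generators of SU(3), and the resulting Hermitian matrix is exponentiated into a unitary that rotates a qutrit initialised in |0⟩. This is a minimal numerical sketch of that encoding, not the thesis's full variational circuit:

```python
import numpy as np

# The eight Gell-Mann matrices, the standard generators of SU(3).
GELL_MANN = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
]] + [np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)]

def gell_mann_encode(x):
    """Encode an 8-dimensional feature vector x as the qutrit state
    |psi> = exp(-i * sum_k x_k lambda_k) |0>."""
    H = sum(xk * lk for xk, lk in zip(x, GELL_MANN))   # Hermitian
    # exp(-iH) via the eigendecomposition of H, so U is exactly unitary
    vals, vecs = np.linalg.eigh(H)
    U = vecs @ np.diag(np.exp(-1j * vals)) @ vecs.conj().T
    return U @ np.array([1, 0, 0], dtype=complex)
```

Because the unitary is generated by all eight directions of the SU(3) Lie algebra, a single qutrit carries all eight feature components at once, which is the "richer representation" the abstract refers to.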