    Channel Estimation for UAV Communication Systems Using Deep Neural Networks

    Channel modeling for unmanned aerial vehicle (UAV) wireless communications has gained great interest with the rapid deployment of UAVs in wireless networks. The UAV channel has its own distinctive characteristics compared to satellite and cellular channels. Many proposed techniques formulate UAV channel modeling as a classification problem, where the key is to extract discriminative features from the UAV wireless signal. To this end, we propose a framework of multiple Gaussian–Bernoulli restricted Boltzmann machines (GBRBMs) for dimension reduction and pre-training, incorporated into an autoencoder-based deep neural network. The developed system was trained and validated on UAV measurements collected over a town's existing commercial cellular network. To further evaluate the proposed approach, we ran ray-tracing simulations in Remcom Wireless InSite at a frequency of 28 GHz and used them for training and validation as well. The results demonstrate that the proposed method is accurate in channel acquisition for various UAV flying scenarios and outperforms conventional DNNs.
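
    To make the pre-training stage concrete, a minimal numpy sketch of one Gaussian–Bernoulli RBM layer trained with single-step contrastive divergence (CD-1) is given below; the layer sizes, learning rate, unit-variance Gaussian visible units and the placeholder feature matrix are assumptions for illustration only, and several such layers would be stacked to initialize the autoencoder-based DNN rather than reflecting the authors' exact configuration.

    import numpy as np

    class GBRBM:
        """Gaussian-Bernoulli RBM: real-valued visible units (unit variance assumed), binary hidden units."""
        def __init__(self, n_visible, n_hidden, lr=1e-3, seed=0):
            self.rng = np.random.default_rng(seed)
            self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
            self.b = np.zeros(n_visible)   # visible (Gaussian mean) biases
            self.c = np.zeros(n_hidden)    # hidden biases
            self.lr = lr

        def hidden_prob(self, v):
            return 1.0 / (1.0 + np.exp(-(v @ self.W + self.c)))

        def cd1_update(self, v0):
            """One contrastive-divergence step on a mini-batch v0 (rows are samples)."""
            h0 = self.hidden_prob(v0)
            h0_sample = (self.rng.random(h0.shape) < h0).astype(v0.dtype)
            v1 = h0_sample @ self.W.T + self.b          # mean of the Gaussian reconstruction
            h1 = self.hidden_prob(v1)
            n = v0.shape[0]
            self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
            self.b += self.lr * (v0 - v1).mean(axis=0)
            self.c += self.lr * (h0 - h1).mean(axis=0)
            return np.mean((v0 - v1) ** 2)              # reconstruction error for monitoring

    # Placeholder for standardized channel features; each stacked layer would reduce the dimension further.
    X = np.random.default_rng(1).standard_normal((256, 64))
    rbm = GBRBM(n_visible=64, n_hidden=16)
    for epoch in range(10):
        err = rbm.cd1_update(X)
    print("reconstruction MSE:", err)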

    Deep belief network based audio classification for construction sites monitoring

    In this paper, we propose a Deep Belief Network (DBN) based approach for the classification of audio signals to improve work activity identification and remote surveillance of construction projects. The aim of the work is to obtain an accurate and flexible tool for consistently executing and managing unmanned monitoring of construction sites using distributed acoustic sensors. Recordings of ten classes of construction equipment and tools, frequently and broadly used on construction sites, have been collected and examined to validate the proposed approach. The input provided to the DBN consists of the concatenation of several statistics computed from a set of spectral features, such as MFCCs and the mel-scaled spectrogram. The proposed architecture, along with the preprocessing and feature extraction steps, is described in detail, and the effectiveness of the proposed idea is demonstrated by numerical results evaluated on real-world recordings. The final overall accuracy on the test set is up to 98%, a significant improvement over other state-of-the-art approaches. A practical, real-time application of the presented method is also proposed in order to apply the classification scheme to sound data recorded in different environmental scenarios.
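
    As a rough sketch of the kind of input described above, the following Python snippet concatenates per-coefficient statistics (mean and standard deviation) of MFCCs and a mel-scaled spectrogram computed with librosa; the sample rate, number of coefficients, choice of statistics and the hypothetical file name are assumptions rather than the paper's exact configuration, and the resulting fixed-length vector would then be fed to the DBN.

    import numpy as np
    import librosa

    def audio_feature_vector(path, sr=16000, n_mfcc=13, n_mels=40):
        """Concatenate per-coefficient statistics of MFCCs and a mel spectrogram."""
        y, sr = librosa.load(path, sr=sr, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)           # shape (n_mfcc, frames)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)  # shape (n_mels, frames)
        mel_db = librosa.power_to_db(mel)
        stats = []
        for feat in (mfcc, mel_db):
            stats.append(feat.mean(axis=1))
            stats.append(feat.std(axis=1))
        return np.concatenate(stats)  # fixed-length vector for the classifier

    # Hypothetical usage: one vector per recording, later fed to the DBN.
    # x = audio_feature_vector("excavator_001.wav")
    # print(x.shape)   # (2*n_mfcc + 2*n_mels,) = (106,)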

    Face Mining in Wikipedia Biographies

    This thesis presents a number of research contributions related to the theme of creating an automated system for extracting faces from Wikipedia biography pages. The first major contribution of this work is the formulation of a solution to the problem based on a novel probabilistic graphical modeling technique. We use probabilistic inference to make structured predictions in dynamically constructed models so as to identify true examples of faces corresponding to the subject of a biography among all detected faces. Our probabilistic model takes into account information from multiple sources, including: visual comparisons between detected faces, meta-data about facial images and their detections, parent images, image locations, image file names, and caption texts. We believe this research is also unique in that we are the first to present a complete system and an experimental evaluation for the task of mining wild human faces on the scale of over 50,000 identities. The second major contribution of this work is the development of a new class of discriminative probabilistic models based on a novel generalized Beta-Bernoulli logistic function. Through our generalized Beta-Bernoulli formulation, we provide both a new smooth 0-1 loss approximation method and a new class of probabilistic classifiers. We present experiments using this technique for: 1) a new form of Logistic Regression which we call generalized Beta-Bernoulli Logistic Regression, 2) a kernelized version of the aforementioned technique, and 3) our probabilistic face mining model, which can be regarded as a structured prediction technique that combines information from multimedia sources. Through experiments, we show that the different forms of this novel Beta-Bernoulli formulation improve upon the performance of both widely used Logistic Regression methods and state-of-the-art linear and non-linear Support Vector Machine techniques for binary classification. To evaluate our technique, we have performed tests using a number of widely used benchmarks with different properties, ranging from those that are comparatively small to those that are comparatively large in size, as well as problems with both sparse and dense features. Our analysis shows that the generalized Beta-Bernoulli model improves upon the analogous forms of classical Logistic Regression and Support Vector Machine models and that, when our evaluations are performed on larger-scale datasets, the results are statistically significant. Another finding is that the approach is also robust when dealing with outliers. Furthermore, our face mining model achieves its best performance when its sub-component consisting of a discriminative Maximum Entropy Model is replaced with our generalized Beta-Bernoulli Logistic Regression model. This shows the general applicability of our proposed approach for a structured prediction task. To the best of our knowledge, this represents the first time that a smooth approximation to the 0-1 loss has been used for structured prediction. Finally, we have explored an important problem related to our face extraction task in more depth: the localization of dense keypoints on human faces. We have developed a complete pipeline that solves the keypoint localization problem using an adaptively estimated, locally linear subspace technique. Our keypoint localization model performs on par with state-of-the-art methods.
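
    The thesis's generalized Beta-Bernoulli logistic function is not reproduced here, but the general idea of training a linear classifier against a smooth, differentiable surrogate of the 0-1 loss can be illustrated with a generic sharpened-sigmoid approximation; the sharpness parameter k, the learning rate and the toy data below are assumptions for illustration only and do not represent the authors' exact formulation.

    import numpy as np

    def smooth01_loss_grad(w, X, y, k=10.0):
        """Mean sharpened-sigmoid approximation of the 0-1 loss and its gradient.
        Labels y must be in {-1, +1}; larger k brings the surrogate closer to the true 0-1 loss."""
        m = y * (X @ w)                          # signed margins
        s = 1.0 / (1.0 + np.exp(k * m))          # ~1 for wrong sign, ~0 for confident correct
        loss = s.mean()
        grad = (-k * s * (1.0 - s) * y) @ X / len(y)
        return loss, grad

    def fit_smooth01(X, y, k=10.0, lr=0.1, epochs=500):
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            _, g = smooth01_loss_grad(w, X, y, k)
            w -= lr * g
        return w

    # Toy usage on a linearly separable problem (bias folded in as a constant column).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 2))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 1e-9)
    Xb = np.hstack([X, np.ones((200, 1))])
    w = fit_smooth01(Xb, y)
    print("training accuracy:", (np.sign(Xb @ w) == y).mean())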

    Numerical methods for drift-diffusion models

    The van Roosbroeck system describes the semi-classical transport of free electrons and holes in a self-consistent electric field using a drift-diffusion approximation. It has become the standard model for describing current flow in semiconductor devices at the macroscopic scale. Typical devices modeled by these equations range from diodes, transistors, LEDs, solar cells and lasers to quantum nanostructures and organic semiconductors. The report provides an introduction to numerical methods for the van Roosbroeck system. The main focus lies on the Scharfetter-Gummel finite volume discretization scheme and recent efforts to generalize this approach to general statistical distribution functions.
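
    For orientation, a minimal Python sketch of the classical Scharfetter-Gummel flux on a single edge of a one-dimensional mesh is shown below; Boltzmann statistics, the thermal voltage value, the unit mobility and the sign and scaling conventions are assumptions of this illustration and may differ from those used in the report.

    import numpy as np

    def bernoulli(x):
        """B(x) = x / (exp(x) - 1), evaluated stably near x = 0."""
        x = np.asarray(x, dtype=float)
        small = np.abs(x) < 1e-8
        return np.where(small, 1.0 - x / 2.0, x / np.expm1(np.where(small, 1.0, x)))

    def sg_electron_flux(n_left, n_right, psi_left, psi_right, h, mu_n=1.0, U_T=0.0259):
        """Scharfetter-Gummel electron flux (per unit charge) across an edge of length h.
        psi_* are nodal electrostatic potentials, n_* nodal electron densities; Boltzmann statistics assumed."""
        d = (psi_right - psi_left) / U_T
        return mu_n * U_T / h * (bernoulli(d) * n_right - bernoulli(-d) * n_left)

    # Example: flux between two nodes of a toy 1D grid (arbitrary units).
    print(sg_electron_flux(n_left=1e16, n_right=5e15, psi_left=0.0, psi_right=0.05, h=1e-6))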

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.
    Comment: 46 pages, 22 figures

    Feature regularization and learning for human activity recognition.

    Doctoral Degree. University of KwaZulu-Natal, Durban. Feature extraction is an essential component in the design of a human activity recognition model. However, relying on extracted features alone for learning often yields a suboptimal model. Therefore, this research work seeks to address this potential problem by investigating feature regularization. Feature regularization is used to encapsulate the discriminative patterns that are needed for better and more efficient model learning. Firstly, a within-class subspace regularization approach is proposed for eigenfeature extraction and regularization in human activity recognition. In this approach, the within-class subspace is modelled using more eigenvalues from the reliable subspace to obtain a four-parameter modelling scheme. This model enables a better and truer estimation of the eigenvalues that are distorted by the small-sample-size effect. The regularization is done in one piece, thereby avoiding the undue complexity of modelling the eigenspectrum differently. The whole eigenspace is used for performance evaluation because feature extraction and dimensionality reduction are done at a later stage of the evaluation process. Results show that the proposed approach has better discriminative capacity than several other subspace approaches for human activity recognition. Secondly, with the use of a likelihood prior probability, a new regularization scheme that improves the loss function of a deep convolutional neural network is proposed. The results obtained from this work demonstrate that a well-regularized feature yields better class discrimination in human activity recognition. The major contribution of the thesis is the development of feature extraction strategies for determining the discriminative patterns needed for efficient model learning.
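
    The thesis's four-parameter eigenspectrum model is not specified in the abstract, but the underlying idea of stabilizing the small, noise-dominated eigenvalues of the within-class scatter matrix can be sketched generically; the simple decay model, the number of reliable eigenvalues and the toy data below are assumptions for illustration only, not the thesis's actual scheme.

    import numpy as np

    def regularized_within_class_eigenspectrum(X, y, n_reliable):
        """Eigendecompose the within-class scatter and replace the unreliable tail of the
        spectrum with a simple 1/rank decay fitted to the reliable leading part. This is a
        generic stand-in for the four-parameter eigenspectrum model described in the thesis."""
        d = X.shape[1]
        Sw = np.zeros((d, d))
        for c in np.unique(y):
            Xc = X[y == c]
            Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
        evals, evecs = np.linalg.eigh(Sw)
        evals, evecs = evals[::-1], evecs[:, ::-1]        # sort descending
        k = np.arange(1, n_reliable + 1)
        a = np.median(evals[:n_reliable] * k)             # crude fit of lambda_k ~ a / k
        model = a / np.arange(1, d + 1)
        reg_evals = np.concatenate([evals[:n_reliable], model[n_reliable:]])
        return reg_evals, evecs

    # Toy usage with random data standing in for activity features.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 20))
    y = rng.integers(0, 3, size=60)
    lam, V = regularized_within_class_eigenspectrum(X, y, n_reliable=10)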