
    Design and theoretical analysis of advanced power based positioning in RF system

    Accurate locating and tracking of people and resources has become a fundamental requirement for many applications. Global navigation satellite systems (GNSS) are widely used, but their accuracy suffers from signal obstruction by buildings, multipath fading, and disruption due to jamming and spoofing. Hence, GPS must be supplemented with inertial sensors and indoor localization schemes that make use of WiFi APs or beacon nodes. In GPS-challenged or fault scenarios, radio-frequency (RF) infrastructure-based localization schemes can serve as a fallback solution for robust navigation. For the indoor/outdoor transition scenario, we propose a hypothesis-test-based fusion method to integrate multi-modal localization sensors. In the first paper, a ubiquitous tracking using motion and location sensor (UTMLS) scheme is proposed. As a fallback approach, power-based schemes are cost-effective compared with existing ToA or AoA schemes. However, traditional power-based positioning methods suffer from low accuracy and are vulnerable to environmental fading. Moreover, the expected accuracy of power-based localization is not well understood, yet it is needed to derive the hypothesis test for the fusion scheme. Hence, in papers 2-5, we focus on developing more accurate power-based localization schemes. The second paper improves power-based range estimation accuracy by estimating the LoS component, and the ranging error model in a fading channel is derived. The third paper introduces the LoS-based positioning method with corresponding theoretical limits and error models. In the fourth and fifth papers, a novel antenna radiation-pattern-aware power-based positioning (ARPAP) system and power contour circle fitting (PCCF) algorithm are proposed to address the effect of antenna directivity on power-based localization. Overall, a complete LoS-signal-power-based positioning system has been developed that can be included in the fusion scheme --Abstract, page iv
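
    The abstract does not spell out its ranging model, but power-based range estimation is conventionally built on the log-distance path-loss model. Below is a minimal sketch under that standard model; the reference power p0_dbm, reference distance d0_m, and path-loss exponent n are illustrative assumptions, not values from the papers. Fading perturbs the received power, which is why the thesis estimates the LoS component before inverting a model like this one.

```python
import math

def rss_range_estimate(p_rx_dbm: float, p0_dbm: float = -40.0,
                       d0_m: float = 1.0, n: float = 2.0) -> float:
    """Invert the log-distance path-loss model
    P_rx = P0 - 10*n*log10(d/d0) to estimate range d.

    All defaults are illustrative assumptions, not values from the thesis.
    """
    return d0_m * 10 ** ((p0_dbm - p_rx_dbm) / (10.0 * n))

# A -70 dBm reading with these assumed parameters maps to about 31.6 m.
print(rss_range_estimate(-70.0))
```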

    Advanced Trends in Wireless Communications

    Physical limitations of wireless communication channels impose huge challenges to reliable communication. Bandwidth limitations, propagation loss, noise, and interference make the wireless channel a narrow pipe that does not readily accommodate rapid flow of data. Thus, researchers aim to design systems suited to operating in such channels in order to deliver a high quality of service. The mobility of communication systems also requires further investigation to reduce the complexity and power consumption of the receiver. This book aims to provide highlights of current research in the field of wireless communications. The subjects discussed are valuable to communication researchers as well as researchers in related wireless areas. The book chapters cover a wide range of wireless communication topics
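
    The "narrow pipe" the blurb describes is bounded by the Shannon capacity C = B log2(1 + SNR). A small illustration with assumed, arbitrary numbers follows; nothing here is taken from the book.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# An assumed 20 MHz channel at 10 dB SNR (linear SNR = 10) caps out near
# 69 Mbit/s, no matter how clever the coding or modulation.
print(shannon_capacity_bps(20e6, 10.0) / 1e6, "Mbit/s")
```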

    Computational Methods for Protein Inference in Shotgun Proteomics Experiments

    Since the beginning of this millennium, the advent of high-throughput methods in numerous fields of the life sciences has led to a shift in paradigms. A broad variety of technologies emerged that allow comprehensive quantification of molecules involved in biological processes. Simultaneously, a major increase in data volume has been recorded with these techniques through enhanced instrumentation and other technical advances. By supplying computational methods that automatically process raw data to obtain biological information, the field of bioinformatics plays an increasingly important role in the analysis of the ever-growing mass of data. Computational mass spectrometry in particular is a bioinformatics field of research which provides means to gather, analyze, and visualize data from high-throughput mass spectrometric experiments. For the study of the entirety of proteins in a cell or an environmental sample, even current techniques reach limitations that need to be circumvented by simplifying the samples subjected to the mass spectrometer. These pre-digested (so-called bottom-up) proteomics experiments then pose an even bigger computational burden during analysis, since complex ambiguities need to be resolved during protein inference, grouping, and quantification. In this thesis, we present several developments in the pursuit of our goal to provide means for a fully automated analysis of complex and large-scale bottom-up proteomics experiments. Firstly, due to prohibitive computational complexities in state-of-the-art discrete Bayesian protein inference techniques, convolution trees have recently been employed; however, they so far offered no accurate and numerically stable way to perform max-product inference. We therefore describe a new piecewise, extrapolating method that enables this. Based on the integration of this method into a co-developed library for Bayesian inference, we then present an OpenMS tool for protein inference. This tool performs efficient protein inference on a discrete Bayesian network using a loopy belief propagation algorithm. Despite the strictly probabilistic formulation of the problem, our approach outperforms most established methods in computational efficiency, and its interface offers unique input and output options, such as regularizing the number of proteins in a group, protein-specific priors, or recalibrated peptide posteriors. Finally, this work presents a complete, easy-to-use, yet scalable workflow for protein inference and quantification built around the new tool. The pipeline is implemented in nextflow and is part of a set of standardized, well-tested, and community-maintained workflows by the nf-core collective. Our workflow runs on large-scale data with complex experimental designs and allows a one-command (re-)analysis of local and publicly available data sets with state-of-the-art accuracy and excellent performance on various high-performance computing environments or the cloud
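
    In the discrete case, the "inference on sums of random variables" mentioned above amounts to convolving probability mass functions, which convolution trees organize into a balanced tree of pairwise convolutions. The sketch below shows only that baseline sum-product step; the thesis's actual contribution, a numerically stable max-product variant, is not reproduced here.

```python
import numpy as np

def convolution_tree(pmfs: list[np.ndarray]) -> np.ndarray:
    """PMF of the sum of independent discrete random variables.

    PMFs are merged pairwise in a balanced tree rather than a linear
    chain, which is the structural idea behind convolution trees. Only
    the sum-product case is sketched here, not stabilized max-product.
    """
    layer = [np.asarray(p, dtype=float) for p in pmfs]
    while len(layer) > 1:
        nxt = [np.convolve(layer[i], layer[i + 1])   # merge adjacent pairs
               for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:                           # odd one carries upward
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Count of "present" peptides among three with P(present) = 0.5, 0.2, 0.9:
print(convolution_tree([np.array([0.5, 0.5]),
                        np.array([0.8, 0.2]),
                        np.array([0.1, 0.9])]))  # -> [0.04 0.41 0.46 0.09]
```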

    Development and application of quantitative proteomics strategies to analyze molecular mechanisms of neurodegeneration

    Neurodegenerative disorders such as Alzheimer’s disease, Huntington’s disease, Parkinson’s disease, Amyotrophic Lateral Sclerosis, or Prion diseases are chronic, incurable, and often fatal. A cardinal feature of all neurodegenerative disorders is the accumulation of misfolded and aggregated proteins. Depending on the disease, these aggregated proteins are cell-type specific and have distinct cellular localizations, compositions, and structures. Despite intensive research, the contribution of protein misfolding and aggregation to cell-type-specific toxicity is not completely understood. In recent years, quantitative proteomics has matured into an exceptionally powerful technology providing accurate quantitative information on almost all cellular proteins as well as protein interactions, modifications, and subcellular localizations, which cannot be addressed by other omics technologies. The aim of this thesis is to investigate key features of neurodegeneration, such as misfolded proteins and toxic protein aggregates, with cutting-edge proteomics, presenting a technological “proof of concept” and novel insights into the (patho)physiology of neurodegeneration

    On the Combination of Game-Theoretic Learning and Multi Model Adaptive Filters

    This paper casts the coordination of a team of robots within the framework of game-theoretic learning algorithms. In particular, a novel variant of fictitious play is proposed that uses multi-model adaptive filters to estimate the other players’ strategies. The proposed algorithm can serve as a coordination mechanism between players when they must make decisions under uncertainty. Each player chooses an action after taking into account the actions of the other players as well as the uncertainty, which can arise either from noisy observations or from various types of other players. In addition, in contrast to other game-theoretic and heuristic algorithms for distributed optimisation, it is not necessary to find the optimal parameters a priori: various parameter values can be used initially as inputs to different models, so the resulting decisions aggregate over all the parameter values. Simulations are used to test the performance of the proposed methodology against other game-theoretic learning algorithms.
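
    For orientation, the baseline the paper builds on is classic fictitious play, in which each player best-responds to the empirical frequencies of the opponent's past actions. The sketch below implements only this baseline; the paper's variant replaces the frequency counts with multi-model adaptive filter estimates, which is not modeled here.

```python
import numpy as np

def fictitious_play(payoff_a: np.ndarray, payoff_b: np.ndarray,
                    rounds: int = 1000):
    """Two-player fictitious play with empirical-frequency beliefs."""
    counts_a = np.ones(payoff_a.shape[0])  # pseudo-counts of A's actions
    counts_b = np.ones(payoff_a.shape[1])  # pseudo-counts of B's actions
    for _ in range(rounds):
        # Each player best-responds to the opponent's empirical mixture.
        a = int(np.argmax(payoff_a @ (counts_b / counts_b.sum())))
        b = int(np.argmax((counts_a / counts_a.sum()) @ payoff_b))
        counts_a[a] += 1
        counts_b[b] += 1
    return counts_a / counts_a.sum(), counts_b / counts_b.sum()

# Matching pennies: empirical play drifts toward the (0.5, 0.5) mixed equilibrium.
A = np.array([[1., -1.], [-1., 1.]])
print(fictitious_play(A, -A))
```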

    Designing Data Spaces

    This open access book provides a comprehensive view of data ecosystems and platform economics, from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, “Foundations and Contexts,” provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, “Data Space Technologies,” subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the usage of blockchain technologies, or semantic data integration and interoperability. Next, Part III describes various “Use Cases and Data Ecosystems” from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV finally offers an overview of several “Solutions and Applications,” including, e.g., products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay, and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims at proliferating the vision of a social data market economy based on data spaces which embrace trust and data sovereignty

    Machine Learning Applications for the Study and Control of Quantum Systems

    In this thesis, I consider the three main paradigms of machine learning – supervised, unsupervised, and reinforcement learning – and explore how each can be employed as a tool to study or control quantum systems. To this end, I adopt classical machine learning methods, but also illustrate how present-day quantum devices and concepts from condensed matter physics can be harnessed to adapt the machine learning models to the physical system being studied. In the first project, I use supervised learning techniques from classical object detection to locate quantum vortices in rotating Bose–Einstein condensates. The machine learning model achieves high accuracy even in the presence of noise, which makes it especially suitable for experimental settings. I then move on to the field of unsupervised learning and introduce a quantum anomaly detection framework based on parameterized quantum circuits to map out phase diagrams of quantum many-body systems. The proposed algorithm allows a quantum system to be analyzed directly on a quantum computer without any prior knowledge about its phases. Lastly, I consider two reinforcement learning applications for quantum control. In the first example, I use Q-learning to maximize the entanglement in discrete-time quantum walks. In the final study, I introduce a novel approach for controlling quantum many-body systems by leveraging matrix product states as a trainable machine learning ansatz for the reinforcement learning agent. This framework enables us to reach far larger system sizes than conventional neural network-based approaches. Okinawa Institute of Science and Technology Graduate University
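
    As a reference point for the quantum-walk study, the update rule behind tabular Q-learning is Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). The sketch below applies that generic rule to a toy chain environment; the thesis's quantum-walk entanglement setting is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.1, max_steps=100):
    """Tabular Q-learning with epsilon-greedy exploration."""
    q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):                   # cap episode length
            if rng.random() < eps:                   # explore
                a = int(rng.integers(n_actions))
            else:                                    # greedy, random tie-break
                a = int(rng.choice(np.flatnonzero(q[s] == q[s].max())))
            s_next, r, done = step(s, a)
            # one-step temporal-difference update
            q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
            s = s_next
            if done:
                break
    return q

def chain_step(s, a):
    """Toy 5-state chain: action 1 moves right, action 0 left; reaching
    state 4 pays reward 1.0 and ends the episode."""
    s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == 4), s_next == 4

# The learned greedy policy moves right in every non-terminal state.
print(np.argmax(q_learning(5, 2, chain_step), axis=1))
```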

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome this complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Game of Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
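
    A minimal sketch of the MR streaming pattern for one generation of Life follows: the mapper emits a count of 1 to each of a live cell's eight neighbors plus a liveness marker for the cell itself, and the reducer applies the B3/S23 rule to the grouped pairs. This illustrates the general pattern only, not the paper's optimized strip-partitioned algorithms or the Elastic MR deployment.

```python
from collections import defaultdict

def mapper(live_cells):
    """Emit (neighbor, 1) for every live cell, plus a (cell, 'live') marker."""
    for x, y in live_cells:
        yield (x, y), "live"
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    yield (x + dx, y + dy), 1

def reducer(pairs):
    """Apply Conway's B3/S23 rule to the grouped (cell, value) pairs."""
    counts, alive = defaultdict(int), set()
    for cell, value in pairs:
        if value == "live":
            alive.add(cell)
        else:
            counts[cell] += value
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in alive)}

# One generation of a blinker: the vertical triple flips to horizontal.
gen0 = {(1, 0), (1, 1), (1, 2)}
print(sorted(reducer(mapper(gen0))))  # [(0, 1), (1, 1), (2, 1)]
```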