
    Map-aided fingerprint-based indoor positioning

    The objective of this work is to investigate potential accuracy improvements in fingerprint-based indoor positioning by imposing map constraints on the positioning algorithms in the form of a-priori knowledge. In our approach, we propose the introduction of a Route Probability Factor (RPF), which reflects the probability of a user being located at one position rather than any other. The RPF not only affects the probabilities of the points along the pre-defined frequent routes, but also influences all neighbouring points that lie in the proximity of each frequent route. The outcome of the evaluation process indicates the validity of the RPF approach, demonstrated by a significant reduction of the positioning error.
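    The RPF idea above can be sketched as a prior over candidate positions that is boosted on and near frequent routes, then combined with the fingerprint likelihood in a MAP estimate. This is a minimal illustrative sketch, not the paper's implementation; the decay shape, `route_factor` and `neighbor_radius` values are assumptions.

    ```python
    import math

    def rpf_prior(points, frequent_route, route_factor=3.0, neighbor_radius=2.0):
        """Assign a Route Probability Factor (RPF) prior to each candidate point.

        Points on a pre-defined frequent route get the full factor; points near
        the route get a factor that decays linearly with distance; all weights
        are then normalized into a probability distribution.
        """
        weights = []
        for p in points:
            d = min(math.dist(p, r) for r in frequent_route)
            if d == 0:
                w = route_factor
            elif d <= neighbor_radius:
                w = 1.0 + (route_factor - 1.0) * (1.0 - d / neighbor_radius)
            else:
                w = 1.0
            weights.append(w)
        total = sum(weights)
        return [w / total for w in weights]

    def localize(points, prior, likelihood):
        """MAP estimate: pick the point maximizing likelihood x RPF prior."""
        posterior = [l * p for l, p in zip(likelihood, prior)]
        return points[posterior.index(max(posterior))]
    ```

    With equal fingerprint likelihoods, the RPF prior alone breaks the tie in favour of on-route points, which is the intended bias.
    
    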

    Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for Improved Indoor Localization

    Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT), and particularly in Body Sensor Networks (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K-Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) with the popular 802.11 infrastructure to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which is able to filter the initial fingerprint dataset (i.e., the radiomap) after considering the proximity of RSS fingerprints with respect to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, based on which it finally estimates the user position. The proposed methodology achieves fast positioning estimation due to the utilization of a fragment of the initial fingerprint dataset, while at the same time improving positioning accuracy by minimizing calculation errors.
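    The two-stage idea can be sketched as follows: first restrict the radiomap to fingerprints associated with the user's nearest BLE beacon, then run plain KNN in Wi-Fi RSS space on that subset. This is a hedged sketch of the concept only; the radiomap tuple layout, the beacon-tagging scheme and the fallback behaviour are assumptions, not the paper's i-KNN specification.

    ```python
    import math

    def i_knn(radiomap, wifi_rss, nearest_beacon, k=3):
        """Two-stage positioning: BLE proximity filter, then KNN on Wi-Fi RSS.

        radiomap: list of (location, wifi_rss_vector, beacon_id) tuples,
        where beacon_id tags the BLE beacon closest to that fingerprint.
        """
        # Stage 1: keep only fingerprints near the same BLE beacon as the user.
        subset = [(loc, vec) for loc, vec, b in radiomap if b == nearest_beacon]
        if not subset:                      # fall back to the full radiomap
            subset = [(loc, vec) for loc, vec, _ in radiomap]
        # Stage 2: ordinary KNN in Wi-Fi RSS space on the reduced subset.
        subset.sort(key=lambda e: math.dist(e[1], wifi_rss))
        neighbors = subset[:k]
        n = len(neighbors)
        return tuple(sum(loc[i] for loc, _ in neighbors) / n for i in range(2))
    ```

    Because stage 2 searches only the beacon-filtered fragment of the radiomap, the distance computations scale with the subset size rather than the full dataset, which is the source of the claimed speed-up.
    
    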

    RSS Indoor Localization Based on a Single Access Point

    This research work investigates how RSS information fusion from a single, multi-antenna access point (AP) can be used to perform device localization in indoor RSS-based localization systems. The proposed approach demonstrates that different RSS values can be obtained by carefully modifying each AP antenna's orientation and polarization, allowing the generation of unique, low-correlation fingerprints for the area of interest. Each AP antenna can be used to generate a set of fingerprint radiomaps for different antenna orientations and/or polarizations. The RSS fingerprints generated from all antennas of the single AP can then be combined to create a multi-layer fingerprint radiomap. In order to select the optimum fingerprint layers in the multi-layer radiomap, the proposed methodology evaluates the obtained localization accuracy for each fingerprint radiomap combination, using various well-known deterministic and probabilistic algorithms (Weighted k-Nearest-Neighbor (WKNN) and Minimum Mean Square Error (MMSE)). The optimum candidate multi-layer radiomap is then examined by calculating the correlation level of each fingerprint pair using the "Tolerance Based-Normal Probability Distribution (TBNPD)" algorithm. Both steps take place during the offline phase, and it is demonstrated that this approach results in selecting the optimum multi-layer fingerprint radiomap combination. The proposed approach can be used to provide localization services in areas served by only a single AP.
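    Once the antenna layers are selected, positioning over the multi-layer radiomap reduces to standard WKNN, with each fingerprint concatenating the RSS values of the chosen layers into one vector. A minimal WKNN sketch, assuming inverse-distance weighting (the paper does not specify the weighting function here):

    ```python
    import math

    def wknn(fingerprints, observed, k=3, eps=1e-6):
        """Weighted k-Nearest-Neighbor over a multi-layer fingerprint radiomap.

        fingerprints: list of (location, rss_vector) pairs, where rss_vector
        concatenates the RSS readings of all selected antenna layers.
        The estimate is the inverse-distance weighted mean of the k closest
        fingerprint locations; eps guards against division by zero on an
        exact RSS match.
        """
        ranked = sorted(fingerprints,
                        key=lambda e: math.dist(e[1], observed))[:k]
        weights = [1.0 / (math.dist(vec, observed) + eps) for _, vec in ranked]
        total = sum(weights)
        x = sum(w * loc[0] for w, (loc, _) in zip(weights, ranked)) / total
        y = sum(w * loc[1] for w, (loc, _) in zip(weights, ranked)) / total
        return (x, y)
    ```
    
    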

    Predictive no-reference assessment of video quality

    Among the various means to evaluate the quality of video streams, lightweight No-Reference (NR) methods have low computational cost and can be executed on thin clients. Thus, these methods are perfect candidates for real-time quality assessment, automated quality control and adaptive mobile streaming. Yet, existing real-time NR approaches are not typically designed to tackle network-distorted streams, and thus perform poorly when compared to Full-Reference (FR) algorithms. In this work, we present a generic NR method whereby machine learning (ML) is used to construct a quality metric trained on simplistic NR metrics. Testing our method on nine representative ML algorithms allows us to show the generality of our approach, whilst finding the best-performing algorithms. We use an extensive video dataset (960 video samples), generated under a variety of lossy network conditions, thus verifying that our NR metric remains accurate under realistic streaming scenarios. In this way, we achieve a quality index that is as computationally efficient as typical NR metrics and as accurate as the FR algorithm Video Quality Metric (97% correlation).
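    The core idea, training a regressor that maps cheap NR features to an FR-grade score, can be sketched in miniature. This toy uses a single hypothetical feature and ordinary least squares in place of the paper's nine ML algorithms; the feature name and data are illustrative assumptions.

    ```python
    def fit_linear(xs, ys):
        """Ordinary least squares: one cheap NR feature -> FR quality score.

        Stand-in for the paper's ML training stage, which regresses several
        simplistic NR metrics against a Full-Reference target (VQM).
        """
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    def predict(model, nr_feature):
        """Apply the trained model to a new NR measurement (runs in real time)."""
        slope, intercept = model
        return slope * nr_feature + intercept
    ```

    Training happens offline on videos with known FR scores; at run time only the cheap NR feature extraction and this linear prediction are needed, which is what keeps the method thin-client friendly.
    
    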

    Deep Learning for Quality Assessment in Live Video Streaming

    Video content providers put stringent requirements on the quality assessment methods realized on their services. They need to be accurate, real-time, adaptable to new content, and scalable as the video set grows. In this letter, we introduce a novel automated and computationally efficient video assessment method. It enables accurate real-time (online) analysis of delivered quality in an adaptable and scalable manner. Offline deep unsupervised learning processes are employed at the server side and inexpensive no-reference measurements at the client side. This provides both real-time assessment and performance comparable to the full-reference counterpart, while maintaining its no-reference characteristics. We tested our approach on the LIMP Video Quality Database (an extensive packet-loss-impaired video set), obtaining a correlation between 78% and 91% with the FR benchmark (the Video Quality Metric). Due to its unsupervised learning essence, our method is flexible, dynamically adaptable to new content, and scalable with the number of videos. Index Terms: Deep learning (DL), multimedia video services, unsupervised learning (UL), video quality assessment.
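    The server/client split described above can be illustrated with a deliberately tiny stand-in: an offline clustering stage that groups videos by a feature learned from the corpus, and an online stage that maps a client's cheap NR measurement to the nearest cluster. A 1-D k-means here is only a placeholder for the paper's deep unsupervised learning; the cluster labels and data are assumptions.

    ```python
    def kmeans_1d(values, k=2, iters=25):
        """Offline (server-side) stage: cluster a 1-D NR feature.

        Placeholder for the paper's deep unsupervised learning; centers are
        initialized evenly across the feature range, then refined by
        alternating assignment and mean-update steps.
        """
        lo, hi = min(values), max(values)
        centers = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for v in values:
                idx = min(range(k), key=lambda i: abs(v - centers[i]))
                clusters[idx].append(v)
            centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        return centers

    def assess(centers, labels, nr_measurement):
        """Online (client-side) stage: map a cheap no-reference measurement
        to the quality label of the nearest learned cluster center."""
        idx = min(range(len(centers)),
                  key=lambda i: abs(nr_measurement - centers[i]))
        return labels[idx]
    ```

    The expensive learning runs once at the server; the client-side lookup is a handful of comparisons, which mirrors the real-time/offline division the letter argues for.
    
    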

    Recent trends in molecular diagnostics of yeast infections : from PCR to NGS

    The incidence of opportunistic yeast infections in humans has been increasing over recent years. These infections are difficult to treat and diagnose, in part due to the large number and broad diversity of species that can underlie the infection. In addition, resistance to one or several antifungal drugs in infecting strains is increasingly being reported, severely limiting therapeutic options and showcasing the need for rapid detection of the infecting agent and its drug susceptibility profile. Current methods for species and resistance identification lack satisfactory sensitivity and specificity, and often require prior culturing of the infecting agent, which delays diagnosis. Recently developed high-throughput technologies such as next-generation sequencing or proteomics are opening completely new avenues for more sensitive, accurate and fast diagnosis of yeast pathogens. These approaches are the focus of intensive research, but translation into the clinic requires overcoming important challenges. In this review, we provide an overview of existing and recently emerged approaches that can be used in the identification of yeast pathogens and their drug resistance profiles. Throughout the text we highlight the advantages and disadvantages of each methodology and discuss the most promising developments in their path from bench to bedside.

    Quantitative Characterization of the Software Layer of a HW/SW Co-Designed Processor

    HW/SW co-designed processors currently enjoy renewed interest due to their capability to boost performance without running into the power and complexity walls. By employing a software layer that performs dynamic binary translation and applies aggressive optimizations by exploiting runtime application behavior, these hybrid architectures provide better performance per watt. However, a poorly designed software layer can result in significant translation/optimization overheads that may offset its benefits. This work presents a detailed characterization of the software layer of a HW/SW co-designed processor using a variety of benchmark suites. We observe that the performance of the software layer is very sensitive to the characteristics of the emulated application, with a variance of more than 50%. We also show that the interaction between the software layer and the emulated application, while sharing the microarchitectural resources, can have a 0-20% impact on performance. Finally, we identify some key elements which should be further investigated to reduce the observed variations in performance. The paper provides critical insights to improve the software layer design.

    HW/SW Co-designed Processors: Challenges, Design Choices and a Simulation Infrastructure for Evaluation

    Improving single-thread performance is a key challenge in modern microprocessors, especially because the traditional approach of increasing clock frequency and deepening pipelines cannot be pushed further due to power constraints. Therefore, researchers have been looking at unconventional architectures to boost single-thread performance without running into the power wall. HW/SW co-designed processors, like Nvidia Denver, are emerging as a promising alternative. However, HW/SW co-designed processors need to address some key challenges, such as startup delay, providing high performance with simple hardware, and translation/optimization overhead, before they can become mainstream. A fundamental requirement for evaluating the different design choices and trade-offs needed to meet these challenges is a simulation infrastructure. Unfortunately, no such infrastructure is available today. Building this infrastructure itself poses significant challenges, as it encompasses the complexities of not only an architectural framework but also a compilation one. This paper identifies the key challenges that HW/SW co-designed processors face and the basic requirements for a simulation infrastructure targeting these architectures. Furthermore, the paper presents DARCO, a simulation infrastructure to enable research in this domain.

    Microflora activity as an indicator of the toxicity of marine bottom sediments in the shelf zone of the Black Sea and the Kerch Strait

    The potential activity of bottom microflora was studied at sites where residues of chemical toxicants, dumped during the Second World War, are leaking. Distinctive features of the recovery of microflora activity were observed at different levels of bottom-sediment contamination with arsenic and chlorinated organic sulfides. The results obtained are promising for assessing the ecological state of bottom sediments in polluted coastal waters.