
    Optimizing Sparse Matrix-Matrix Multiplication on a Heterogeneous CPU-GPU Platform

    Sparse matrix-matrix multiplication (SpMM) is a fundamental operation over irregular data and is widely used in graph algorithms such as finding minimum spanning trees and shortest paths. In this work, we present a hybrid CPU-GPU parallel SpMM algorithm to improve the performance of SpMM. First, we improve data locality through element-wise multiplication. Second, we exploit the ordered property of row indices to perform partial sorting instead of fully sorting all triples by row and column indices. Finally, through a hybrid CPU-GPU approach using a two-level pipelining technique, our algorithm is able to better exploit a heterogeneous system. Compared with the state-of-the-art SpMM methods in the cuSPARSE and CUSP libraries, our approach achieves average speedups of 1.6x and 2.9x, respectively, on nine representative matrices from the University of Florida Sparse Matrix Collection.
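
    The two sequential ideas the abstract names, element-wise accumulation and row-local (partial) sorting, can be illustrated in a few lines. The following is a minimal Python/SciPy sketch of those ideas only, not the authors' CPU-GPU implementation; the test matrices are arbitrary.

    from collections import defaultdict
    import numpy as np
    import scipy.sparse as sp

    def spmm_partial_sort(A: sp.csr_matrix, B: sp.csr_matrix) -> sp.csr_matrix:
        # Traversing A row by row in CSR order means the output triples are already
        # grouped by row, so only the columns inside each row bucket need sorting,
        # rather than globally sorting all (row, col, val) triples.
        n_rows, _ = A.shape
        _, n_cols = B.shape
        rows, cols, vals = [], [], []
        for i in range(n_rows):
            acc = defaultdict(float)          # element-wise accumulation for output row i
            for idx in range(A.indptr[i], A.indptr[i + 1]):
                k, a_ik = A.indices[idx], A.data[idx]
                for jdx in range(B.indptr[k], B.indptr[k + 1]):
                    acc[B.indices[jdx]] += a_ik * B.data[jdx]
            for j in sorted(acc):             # partial sort: columns of row i only
                rows.append(i)
                cols.append(j)
                vals.append(acc[j])
        return sp.csr_matrix((vals, (rows, cols)), shape=(n_rows, n_cols))

    # quick check against SciPy's own sparse matrix product
    A = sp.random(50, 40, density=0.05, format="csr", random_state=0)
    B = sp.random(40, 30, density=0.05, format="csr", random_state=1)
    assert np.allclose(spmm_partial_sort(A, B).toarray(), (A @ B).toarray())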

    Energy Efficient Route Planning Using VANET

    One of the key challenges in dynamic route planning is collecting and disseminating instantaneous travel data in real time. Recent studies evaluate VANETs (Vehicular Ad Hoc Networks) and the associated WAVE (Wireless Access in Vehicular Environments) standards to facilitate this process. In these studies, travel data accumulated by vehicle OBUs (on-board units) are shared with other vehicles over the DSRC (dedicated short-range communication) medium using a centralized or a distributed approach. In most studies, the data collection and dissemination processes are not scalable enough for high-density traffic environments. Specifically, with a centralized approach, if a traffic management center (TMC) or roadside unit (RSU) performs route planning for vehicles, there will be many bidirectional communications between the centralized entity and the vehicles, leading to higher channel congestion in heavy-traffic areas. With a distributed approach, information shared by other vehicles might not be useful or pertinent for some vehicles, wasting channel bandwidth. Data collection methods also need to be intelligent enough to account for nontraditional circumstances in order to achieve accuracy. In this thesis, we propose a three-tiered architecture for data collection, analysis and dissemination. In addition, 1) we demonstrate the concept of queuing delay at intersections for travel-time calculation and develop a hybrid metric that considers average travel time and occupancy rate, 2) we offload the computation of route planning to vehicle OBUs, and 3) we develop an algorithm that determines the area of propagation for data that needs to be disseminated. We evaluated the performance of our approach progressively using the VEINS, SUMO and OMNeT++ simulators.
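
    The hybrid travel-time/occupancy metric and the idea of letting each OBU run its own route planning can be sketched as follows. This is a minimal, hypothetical Python illustration using networkx; the weighting factor alpha, the edge fields and the toy road network are illustrative assumptions, not the thesis' exact formulation.

    import networkx as nx

    ALPHA = 0.7  # assumed relative weight of travel time vs. occupancy

    def hybrid_cost(avg_travel_time_s: float, occupancy_rate: float,
                    free_flow_time_s: float) -> float:
        """Blend normalized travel time with occupancy rate (0..1) into one edge weight."""
        norm_time = avg_travel_time_s / max(free_flow_time_s, 1e-6)
        return ALPHA * norm_time + (1.0 - ALPHA) * occupancy_rate

    # toy road network: each edge carries data an OBU might have accumulated
    G = nx.DiGraph()
    G.add_edge("A", "B", avg_tt=90.0, occ=0.80, fft=60.0)
    G.add_edge("A", "C", avg_tt=120.0, occ=0.20, fft=100.0)
    G.add_edge("B", "D", avg_tt=60.0, occ=0.75, fft=50.0)
    G.add_edge("C", "D", avg_tt=70.0, occ=0.30, fft=65.0)

    for u, v, d in G.edges(data=True):
        d["w"] = hybrid_cost(d["avg_tt"], d["occ"], d["fft"])

    # the OBU itself runs the shortest-path computation (offloaded from the RSU/TMC)
    print(nx.shortest_path(G, "A", "D", weight="w"))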

    Towards Transfer Learning for Large-Scale Image Classification Using Annealing-based Quantum Boltzmann Machines

    Quantum Transfer Learning (QTL) has recently gained popularity as a hybrid quantum-classical approach to image classification tasks, efficiently combining the feature-extraction capabilities of large Convolutional Neural Networks with the potential benefits of Quantum Machine Learning (QML). Existing approaches, however, only utilize gate-based Variational Quantum Circuits for the quantum part of these procedures. In this work we present an approach that employs Quantum Annealing (QA) in QTL-based image classification. Specifically, we propose using annealing-based Quantum Boltzmann Machines as part of a hybrid quantum-classical pipeline to learn the classification of real-world, large-scale data such as medical images through supervised training. We demonstrate our approach by applying it to the three-class COVID-CT-MD dataset, a collection of lung Computed Tomography (CT) scan slices. Using Simulated Annealing as a stand-in for actual QA, we compare our method to classical transfer learning with a neural network of the same order of magnitude and show its improved classification performance. We find that our approach consistently outperforms its classical baseline in terms of test accuracy and AUC-ROC score and needs fewer training epochs to do so. Comment: 7 pages, 3 figures (5 counting subfigures), 1 table. To be published in the proceedings of the 2023 IEEE International Conference on Quantum Computing and Engineering (QCE).
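
    The classical half of such a hybrid pipeline, a frozen pretrained CNN used purely as a feature extractor with a small trainable head on top, can be sketched as below. This is a minimal PyTorch sketch in which a plain linear layer stands in for the annealing-based Quantum Boltzmann Machine; the three-class output loosely mirrors the COVID-CT-MD setting, and the batch data are random placeholders rather than real CT slices.

    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Identity()          # keep only the 512-d feature extractor
    for p in backbone.parameters():
        p.requires_grad = False          # classical feature extraction is frozen

    head = nn.Linear(512, 3)             # stand-in for the QBM-based classifier (3 classes)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        with torch.no_grad():
            features = backbone(images)  # only the head is trained
        loss = criterion(head(features), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # toy batch standing in for preprocessed CT slices
    loss = train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 3, (4,)))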

    Hybrid Software Architecture For Doctor-Patient Consultation

    The aim of this research is to address the inadequate performance of the conventional approach to capturing clinical findings during doctor-patient consultation by designing and implementing the proposed hybrid software architecture. Doctor-patient consultation is a crucial process in diagnosing and capturing clinical findings of the patient's problem. Currently, most doctor-patient consultations rely on conventional ways of capturing clinical findings, such as paper notes, notebooks and manually entered digital records. With these conventional methods, the number of patients treated properly in the consultation process is lower than the number of patients registered per day. This problem is most probably caused by slow processing and system response times, system interruptions, and inadequately integrated systems that make patients' health records difficult to access seamlessly across other modules of the health information system. The proposed architecture incorporates a hybrid technique that can operate in both online and offline situations by utilizing local and central data storage. The architecture also provides a fast-track search over the International Classification of Diseases version 10 (ICD-10) and Read Clinical Terms version 3 (CTV3) for doctors to clerk clinical findings such as diagnoses, symptoms, medications and other related clinical notes. The research was conducted through a case study approach using structured and semi-structured interviews at the Health Centre of UTeM. The findings from the data collection and validation showed that the proposed architecture is suitable for use but requires minor modification. Applying this hybrid architecture dramatically reduces the time taken and improves the response time for doctors to capture patient health records during the consultation process.
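
    The online/offline idea with local and central data storage can be illustrated with a store-locally-then-synchronize pattern. The following is a minimal Python sketch of that pattern only, not the thesis' actual design; the central_api callable, the table schema and the record fields are hypothetical placeholders.

    import sqlite3, json, time

    local = sqlite3.connect(":memory:")          # local storage on the consultation workstation
    local.execute("""CREATE TABLE findings (
        id INTEGER PRIMARY KEY, patient_id TEXT, icd10 TEXT,
        note TEXT, created REAL, synced INTEGER DEFAULT 0)""")

    def capture_finding(patient_id: str, icd10: str, note: str) -> None:
        """Store the consultation record locally, regardless of connectivity."""
        local.execute("INSERT INTO findings (patient_id, icd10, note, created) VALUES (?,?,?,?)",
                      (patient_id, icd10, note, time.time()))
        local.commit()

    def sync_to_central(central_api, online: bool) -> int:
        """Push unsynced records to the central storage when online; return the count synced."""
        if not online:
            return 0
        rows = local.execute(
            "SELECT id, patient_id, icd10, note FROM findings WHERE synced = 0").fetchall()
        for row_id, patient_id, icd10, note in rows:
            central_api(json.dumps({"patient": patient_id, "icd10": icd10, "note": note}))
            local.execute("UPDATE findings SET synced = 1 WHERE id = ?", (row_id,))
        local.commit()
        return len(rows)

    capture_finding("P001", "J06.9", "acute upper respiratory infection, mild fever")
    print(sync_to_central(central_api=print, online=True))   # print stands in for the central endpoint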

    Hybrid XML Retrieval: Combining Information Retrieval and a Native XML Database

    This paper investigates the impact of three approaches to XML retrieval: using Zettair, a full-text information retrieval system; using eXist, a native XML database; and using a hybrid system that takes full-article answers from Zettair and uses eXist to extract elements from those articles. For the content-only topics, we undertake a preliminary analysis of the INEX 2003 relevance assessments in order to identify the types of highly relevant document components. Further analysis identifies two complementary sub-cases of relevance assessments ("General" and "Specific") and two categories of topics ("Broad" and "Narrow"). We develop a novel retrieval module that, for a content-only topic, utilises the information from the resulting answer list of a native XML database and dynamically determines the preferable units of retrieval, which we call "Coherent Retrieval Elements". The results of our experiments show that -- when each of the three systems is evaluated against different retrieval scenarios (such as different cases of relevance assessments, different topic categories and different choices of evaluation metrics) -- the XML retrieval systems exhibit varying behaviour, and the best performance can be reached for different values of the retrieval parameters. In the case of the INEX 2003 relevance assessments for the content-only topics, our newly developed hybrid XML retrieval system is substantially more effective than either Zettair or eXist, and yields robust and very effective XML retrieval. Comment: Postprint version; the publisher's version can be accessed through the DOI.
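
    The hybrid strategy, ranking whole articles first and then pulling element-level answers out of the top-ranked articles, can be sketched as a simple two-stage pipeline. This is a minimal, hypothetical Python illustration: a naive term-overlap ranker stands in for Zettair, a plain XPath query stands in for eXist's element retrieval, and none of the Coherent Retrieval Elements logic is reproduced.

    from lxml import etree

    ARTICLES = {
        "a1": "<article><sec><title>XML retrieval</title><p>hybrid XML retrieval systems</p></sec></article>",
        "a2": "<article><sec><title>Databases</title><p>native XML databases store documents</p></sec></article>",
    }

    def rank_articles(query: str, articles: dict) -> list:
        """Naive full-article ranking by term overlap (full-text stage)."""
        terms = set(query.lower().split())
        scores = {aid: sum(t in xml.lower() for t in terms) for aid, xml in articles.items()}
        return sorted(scores, key=scores.get, reverse=True)

    def extract_elements(article_xml: str, xpath: str = "//sec") -> list:
        """Pull element-level answers out of a ranked article (element-retrieval stage)."""
        root = etree.fromstring(article_xml.encode())
        return [etree.tostring(el, encoding="unicode") for el in root.xpath(xpath)]

    for aid in rank_articles("hybrid XML retrieval", ARTICLES)[:1]:
        for element in extract_elements(ARTICLES[aid]):
            print(aid, element)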

    Searching digital music libraries

    There has been a recent explosion of interest in digital music libraries. In particular, interactive melody retrieval is a striking example of a search paradigm that differs radically from standard full-text search. Many different techniques have been proposed for melody matching, but the area lacks standard databases that allow them to be compared on common ground, and copyright issues have stymied attempts to develop such a corpus. This paper focuses on methods for evaluating different symbolic music matching strategies and describes a series of experiments that compare and contrast results obtained using three dominant paradigms. Combining two of these paradigms yields a hybrid approach which is shown to have the best overall combination of efficiency and effectiveness.

    Motion Capture Benchmark of Real Industrial Tasks and Traditional Crafts for Human Movement Analysis

    Human movement analysis is a key area of research in robotics, biomechanics, and data science. It encompasses tracking, posture estimation, and movement synthesis. While numerous methodologies have evolved over time, a systematic and quantitative evaluation of these approaches using verifiable ground-truth data of three-dimensional human movement is still required to define the current state of the art. This paper presents seven datasets recorded using inertial-based motion capture. The datasets contain professional gestures carried out by industrial operators and skilled craftsmen in real conditions in situ. The datasets were created with the intention of being used for research in human motion modeling, analysis, and generation. The protocols for data collection are described in detail, and a preliminary analysis of the collected data is provided as a benchmark. The Gesture Operational Model, a hybrid stochastic-biomechanical approach based on kinematic descriptors, is utilized to model the dynamics of the experts' movements and to create mathematical representations of their motion trajectories for analyzing and quantifying their body dexterity. The models allowed the accurate generation of professional human poses and an intuitive description of how body joints cooperate and change over time throughout the performance of the task.

    A roadside units positioning framework in the context of vehicle-to-infrastructure based on integrated AHP-entropy and group-VIKOR

    The positioning of roadside units (RSUs) in a vehicle-to-infrastructure (V2I) communication system may have an impact on network performance. Optimal RSU positioning is required to reduce cost and maintain quality of service. However, RSU positioning is considered a difficult task because numerous criteria, such as the cost of RSUs, the intersection area and communication strength, affect the positioning process and must be considered. Furthermore, the conflicts and trade-offs amongst these criteria, and the significance of each criterion, are reflected in the RSU positioning process. Towards this end, a four-stage methodology for a new RSU positioning framework using multi-criteria decision-making (MCDM) in the V2I communication system context has been designed. In the first phase, real-time V2I hardware was developed for data collection; it consisted of multiple mobile nodes (in the cars) and RSUs, connected via nRF24L01+ PA/LNA transceiver modules attached to microcontrollers. In the second phase, different testing scenarios were identified to acquire the required data from the V2I devices. These scenarios were evaluated based on three evaluation attributes, and a decision matrix consisting of the scenarios as alternatives and their assessments per criterion was constructed. In the third phase, the alternatives were ranked using a hybrid of MCDM techniques, specifically the Analytic Hierarchy Process (AHP), Entropy and Vlsekriterijumska Optimizacija I Kompromisno Resenje (VIKOR), and the individual rankings were aggregated into a final group ranking using the Borda voting approach. Finally, a validation process was carried out to ensure that the ranking results were systematic and valid. The results indicate the following: (1) the ranking of scenarios obtained from group VIKOR suggested that the second scenario, with four RSUs, a maximum distance of 200 meters between RSUs and an antenna height of two meters, is the best positioning scenario; and (2) in the objective validation, significant differences between the scores of the groups were reported, indicating that the ranking results are valid. Overall, the integration of AHP, Entropy and VIKOR effectively solved the RSU positioning problem.
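
    As a rough illustration of the ranking stage, the sketch below computes Entropy-based criterion weights and a VIKOR ranking over a small, made-up decision matrix in Python/NumPy. The AHP weighting and the Borda aggregation steps are omitted, the 4x3 matrix is purely illustrative, and all criteria are treated as benefit criteria, which is an assumption rather than the paper's exact setup.

    import numpy as np

    # rows: candidate RSU positioning scenarios, columns: evaluation attributes (illustrative)
    X = np.array([[0.70, 0.60, 0.80],    # scenario 1
                  [0.90, 0.85, 0.75],    # scenario 2
                  [0.55, 0.70, 0.65],    # scenario 3
                  [0.60, 0.50, 0.90]])   # scenario 4

    def entropy_weights(X: np.ndarray) -> np.ndarray:
        """Objective criterion weights from Shannon entropy of the normalized columns."""
        P = X / X.sum(axis=0)                            # column-wise normalization
        E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
        d = 1.0 - E                                      # degree of divergence per criterion
        return d / d.sum()

    def vikor(X: np.ndarray, w: np.ndarray, v: float = 0.5) -> np.ndarray:
        """VIKOR Q scores; lower Q means a better alternative (benefit criteria assumed)."""
        f_best, f_worst = X.max(axis=0), X.min(axis=0)
        gap = w * (f_best - X) / (f_best - f_worst)      # weighted normalized distance to the ideal
        S, R = gap.sum(axis=1), gap.max(axis=1)          # group utility and individual regret
        return v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

    w = entropy_weights(X)
    Q = vikor(X, w)
    print("weights:", np.round(w, 3))
    print("ranking (best first):", (np.argsort(Q) + 1).tolist())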