
    Efficient And Scalable Evaluation Of Continuous, Spatio-temporal Queries In Mobile Computing Environments

    A variety of research exists on the processing of continuous queries in large, mobile environments, each method trying, in its own way, to address the computational bottleneck of constantly reprocessing so many queries. In this research, we present a two-pronged approach to this problem. First, we introduce an efficient and scalable system for monitoring traditional continuous queries by leveraging the parallel processing capability of the Graphics Processing Unit (GPU). We examine a naive CPU-based solution for continuous range-monitoring queries and then extend this system using the GPU. Additionally, with mobile communication devices becoming commodities, location-based services will become ubiquitous. To cope with the very high intensity of location-based queries, we propose a view-oriented approach to the location database, reducing computation costs by exploiting computation sharing among queries that require the same view. Our studies show that by exploiting the parallel processing power of the GPU, we are able to significantly scale the number of mobile objects while maintaining an acceptable level of performance.

    Our second approach was to view this research problem as one belonging to the domain of data streams. Several works have convincingly argued that the two research fields of spatio-temporal data streams and the management of moving objects can naturally come together [IlMI10, ChFr03, MoXA04]. For example, the output of a GPS receiver monitoring the position of a mobile object can be viewed as a data stream of location updates. This stream, along with those from the plausibly many other mobile objects, is received at a centralized server, which processes the streams upon arrival, effectively updating the answers to the currently active queries in real time. For this second approach, we present GEDS, a scalable, GPU-based framework for the evaluation of continuous spatio-temporal queries over spatio-temporal data streams. Specifically, GEDS employs the computation-sharing and parallel-processing paradigms to deliver scalability in the evaluation of continuous spatio-temporal range queries and continuous spatio-temporal kNN queries. The GEDS framework utilizes the parallel processing capability of the GPU, a stream processor by trade, to handle the computation required by this application. Experimental evaluation shows promising performance and demonstrates the scalability and efficacy of GEDS in spatio-temporal data streaming environments. Additional performance studies demonstrate that, even in light of the costs associated with memory transfers, the parallel processing power provided by GEDS clearly outweighs those costs.

    Finally, in an effort to move beyond the analysis of specific algorithms over the GEDS framework, we take a broader approach in our analysis of GPU computing. What algorithms are appropriate for the GPU? What types of applications can benefit from its parallel and stream processing power? And can we identify a class of algorithms that is best suited for GPU computing? To answer these questions, we develop an abstract performance model detailing the relationship between the CPU and the GPU. From this model, we extrapolate a list of attributes common to successful GPU-based applications, providing insight into which algorithms and applications are best suited for the GPU, along with an estimated theoretical speedup for such applications.
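
    To make the two ideas above concrete: the check "does object o fall within the range of query q" is independent for every (query, object) pair, so all pairs can be evaluated in parallel and results can be shared across queries. Below is a minimal, hypothetical NumPy sketch of that data-parallel formulation, standing in for the GPU kernels; it is not the GEDS implementation, and the array names, sizes, and shared radius are illustrative assumptions.

```python
import numpy as np

# Sketch of the data-parallel formulation behind GPU-based range monitoring:
# every (query, object) distance check is independent, so a GPU can evaluate
# them simultaneously. NumPy broadcasting stands in for the per-thread work.

rng = np.random.default_rng(0)
objects = rng.uniform(0, 1000, size=(10_000, 2))  # mobile-object positions
queries = rng.uniform(0, 1000, size=(128, 2))     # focal points of range queries
radius = 50.0                                     # shared query range (assumption)

# (num_queries, num_objects) squared distances, all pairs "at once"
d2 = ((queries[:, None, :] - objects[None, :, :]) ** 2).sum(axis=-1)
answers = d2 <= radius ** 2                       # boolean result set per query

# A location update only touches one column, so the work is shared
# across all queries instead of re-evaluating each query from scratch.
moved = 42
objects[moved] += rng.normal(0, 5, size=2)
d2[:, moved] = ((queries - objects[moved]) ** 2).sum(axis=-1)
answers[:, moved] = d2[:, moved] <= radius ** 2
```

    One common form of the CPU/GPU performance model sketched in the abstract estimates the speedup as t_CPU / (t_transfer + t_kernel): the offload pays off only while the kernel-time savings exceed the memory-transfer costs, which is why those costs are weighed explicitly in the evaluation.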

    Classification of road users detected and tracked with LiDAR at intersections

    Data collection is a necessary component of transportation engineering. Manual data collection methods have proven to be inefficient and limited in terms of the data required for comprehensive traffic and safety studies. Automatic methods are being introduced to characterize the transportation system more accurately and to provide more information for better understanding the dynamics between road users. Video data collection is an inexpensive and widely used automated method, but the accuracy of video-based algorithms is known to be affected by obstacles and shadows, and the third dimension is lost with video camera data collection. The impressive progress in sensing technologies has encouraged the development of new methods for measuring the movements of road users. The Center for Road Safety at Purdue University proposed the application of a LiDAR-based algorithm for tracking vehicles at intersections from a roadside location. LiDAR provides a three-dimensional characterization of the sensed environment for better detection and tracking results. The feasibility of this system was analyzed in this thesis using an evaluation methodology to determine the accuracy of the algorithm when tracking vehicles at intersections. According to the implemented method, the LiDAR-based system provides successful detection and tracking of vehicles, and its accuracy is comparable to the results provided by frame-by-frame extraction of trajectory data from video images by human observers. After establishing the suitability of the system for tracking, the second component of this thesis focused on proposing a classification methodology to discriminate between vehicles, pedestrians, and two-wheelers. Four different methodologies were applied to identify the best method for implementation. The kNN algorithm, which is capable of creating adaptive decision boundaries based on the characteristics of similar observations, provided better performance when evaluating new locations. The multinomial logit model did not allow the inclusion of collinear variables into the model. Overfitting of the training data was indicated in the classification tree and boosting methodologies, producing lower performance when those models were applied to the test data. Although the ANOVA analysis did not show that any single competitor performed significantly better, the kNN algorithm achieved the objective of classifying movements at intersections under diverse conditions and was chosen as the method to implement alongside the existing tracking algorithm.
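
    As a concrete illustration of the classification approach the thesis selects, the sketch below trains a kNN classifier on hypothetical per-track features (length, width, height, speed) that one might derive from LiDAR trajectories; the feature set, neighbor count, and data are assumptions for illustration, not the thesis's actual design.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-track features: [length m, width m, height m, speed m/s].
X_train = np.array([
    [4.5, 1.8, 1.5, 13.0],  # vehicle
    [0.5, 0.5, 1.7,  1.4],  # pedestrian
    [1.8, 0.6, 1.6,  5.0],  # two-wheeler
    [5.0, 2.0, 1.6, 11.0],  # vehicle
    [0.6, 0.5, 1.6,  1.2],  # pedestrian
    [1.9, 0.7, 1.7,  6.5],  # two-wheeler
])
y_train = ["vehicle", "pedestrian", "two-wheeler"] * 2

# Standardizing matters for kNN: the distance metric mixes units, and the
# adaptive decision boundary comes from voting among the nearest neighbors.
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
clf.fit(X_train, y_train)

print(clf.predict([[1.7, 0.6, 1.6, 5.5]]))  # expected: ['two-wheeler']
```

    Because kNN stores the training observations and decides from local similarity rather than from a fitted global boundary, it tolerates collinear features and adapts to new locations, which matches the comparative behavior reported above.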

    Development of a simulation tool for measurements and analysis of simulated and real data to identify ADLs and behavioral trends through statistics techniques and ML algorithms

    With a growing population of elderly people, the number of subjects at risk of pathology is rapidly increasing. Many research groups are studying pervasive solutions to continuously and unobtrusively monitor fragile subjects in their homes, reducing health-care costs and supporting medical diagnosis. Anomalous behaviors while performing activities of daily living (ADLs), or variations in behavioral trends, are of great importance. Measuring ADLs requires considering a significant number of parameters that affect the measurement, such as sensor and environment characteristics or sensor placement. Because the best sensor configuration, the one that minimizes costs and maximizes accuracy, cannot be studied directly in the real context, simulation tools are being developed as a powerful alternative. This thesis presents several contributions on this topic. In the following research work, a measurement chain for ADLs, consisting of PIR sensors and an ML algorithm, is studied, and a simulation tool in the form of a web application has been developed to generate datasets and to simulate how the measurement chain reacts as the sensor configuration varies. Building on the results of the eWare project, the simulation tool is intended to support technicians, developers, and installers: it speeds up analysis and monitoring, allows rapid identification of changes in behavioral trends, guarantees system-performance monitoring, and supports the study of the best sensor-network configuration for a given environment. The UNIVPM Home Care Web App offers the chance to create ad hoc datasets related to ADLs and to conduct analyses using statistical algorithms applied to the data. To measure ADLs, machine learning algorithms have been implemented in the tool, and five different tasks have been identified. To test the validity of the developed instrument, six case studies divided into two categories were considered. To the first category belong the studies that 1) discover the best sensor configuration while keeping environmental characteristics and user behavior constant, and 2) identify the best-performing ML algorithms. The second category aims to prove the stability of the implemented algorithm, and to find its collapse condition, by varying user habits. Noise perturbation of the data was applied in all case studies. Results show the validity of the generated datasets. By maximizing the sensor network it is possible to reduce the ML error to 0.8%. Because cost is a key factor in this scenario, the fourth case study showed that by minimizing the sensor configuration it is possible to reduce the cost drastically while keeping a more than reasonable ML error of around 11.8%. The results in ADL measurement can be considered more than satisfactory.
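
    The pipeline described above (generate a synthetic ADL dataset, perturb it with noise, and measure the ML error for a given sensor configuration) can be summarized in a toy sketch. Everything below is an illustrative assumption: the sensor count, the ADL labels, the Poisson activation model, and the classifier are stand-ins for the web application's actual simulator and algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_sensors, n_windows = 6, 2000          # vary n_sensors to mimic configurations
adls = ["sleeping", "cooking", "toileting", "watching_tv"]

# Each ADL concentrates activity on different PIR sensors (one profile per ADL).
profiles = rng.uniform(0.1, 5.0, size=(len(adls), n_sensors))
labels = rng.integers(0, len(adls), size=n_windows)
X = rng.poisson(profiles[labels]).astype(float)  # simulated PIR activation counts
X += rng.normal(0, 0.5, X.shape)                 # noise perturbation of the data

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
err = 1 - accuracy_score(y_te, model.predict(X_te))
print(f"ML error: {err:.1%}")  # tends to shrink as n_sensors grows
```

    Re-running such a sketch while sweeping n_sensors reproduces the trade-off the case studies examine: a richer sensor network drives the error down, while a minimal one trades accuracy for cost.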

    Data and resource management in wireless networks via data compression, GPS-free dissemination, and learning

    This research proposes several innovative approaches to collecting data efficiently from large-scale wireless sensor networks (WSNs). First, a Z-compression algorithm is proposed that exploits the temporal locality of multi-dimensional sensing data and adapts Z-order encoding to map multi-dimensional data to a one-dimensional data stream. An extended version of Z-compression adapts itself to low-power WSNs running under low-power listening (LPL) mode, and its compression performance is analyzed comprehensively on both real-world and synthetic datasets. Second, an efficient geospatial data collection scheme for IoT is proposed that reduces redundant rebroadcasts by up to 95% by collecting only the data of interest. Because most low-cost wireless sensors are not equipped with a GPS module, virtual coordinates are used to estimate node locations. The proposed work utilizes an anchor-based virtual coordinate system and DV-Hop (distance vector of hops to anchors) to estimate the position of nodes relative to the anchors. It also uses circle and hyperbola constraints to encode the position of interest (POI) and any user-defined trajectory into a data-request message, so that only the sensors in the POI and along the routing trajectory collect and route data. In addition, the scheme provides location anonymity by avoiding the use and transmission of GPS location information. This approach has been extended to heterogeneous WSNs, and the encoding algorithm has been refined by replacing the circle constraints with ellipse constraints. Last, the work proposes a framework that predicts the trajectory of a moving object using a sequence-to-sequence (Seq2Seq) learning model and wakes up only the sensors that fall within the predicted trajectory, using a specially designed control packet. It reduces the computation time of encoding a geospatial trajectory by more than 90% and preserves location anonymity for the local edge servers.
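
    Since the abstract names Z-order encoding as the core of Z-compression, a minimal sketch may help: interleaving the bits of each dimension yields a single integer code, and because consecutive sensor readings are similar (temporal locality), consecutive codes tend to differ only in their low bits, which is what makes the resulting one-dimensional stream compressible. The quantized readings and the 16-bit width below are illustrative assumptions, not the paper's parameters.

```python
def interleave_bits(*coords: int, bits: int = 16) -> int:
    """Morton/Z-order encode: interleave the bits of each dimension so that
    an n-dimensional point maps to one integer on the Z-order curve."""
    code = 0
    for bit in range(bits - 1, -1, -1):   # most-significant bit first
        for c in coords:
            code = (code << 1) | ((c >> bit) & 1)
    return code

# A 3-dimensional reading (e.g., temperature, humidity, light) quantized
# to 16-bit integers; the scaling is a stand-in for real sensor output.
z_now  = interleave_bits(21_500, 47_000, 12_034)
z_next = interleave_bits(21_503, 46_998, 12_035)  # temporally close reading

# Small deltas between consecutive Z-codes are cheap to encode, which is
# the locality Z-compression exploits before streaming the data out.
print(z_now, z_next - z_now)
```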

    Location-Dependent Query Processing Under Soft Real-Time Constraints
