    Performance map of a cluster detection test using extended power.

    BACKGROUND: Conventional power studies possess limited ability to assess the performance of cluster detection tests. In particular, they cannot evaluate the accuracy of the cluster location, which is essential in such assessments. Furthermore, they usually estimate power for one or a few particular alternative hypotheses and thus cannot assess performance over an entire region. Takahashi and Tango developed the concept of extended power, which indicates both the rate of null hypothesis rejection and the accuracy of the cluster location. We propose a systematic assessment method, based on extended power, that produces a map showing the performance of cluster detection tests over an entire region. METHODS: To explore the behavior of a cluster detection test on identical cluster types at any possible location, we successively applied four different sets of spatial and epidemiological parameters. These parameter sets determined four cluster collections, each covering the entire study region. We simulated 1,000 datasets for each cluster and analyzed them with Kulldorff's spatial scan statistic. From the area under the extended power curve, we constructed a map for each parameter set showing the performance of the test across the entire region. RESULTS: Consistent with previous studies, the performance of the spatial scan statistic increased with the baseline incidence of disease, the size of the at-risk population and the strength of the cluster (i.e., the relative risk). Performance was heterogeneous, however, even for very similar clusters (i.e., similar with respect to the aforementioned factors), suggesting the influence of other factors. CONCLUSIONS: The area under the extended power curve is a single measure of performance and, although it needs further exploration, it is suitable for conducting a systematic spatial evaluation of performance. The performance map we propose enables epidemiologists to assess cluster detection tests across an entire study region.

    Sensor-Based Safety Performance Assessment of Individual Construction Workers

    Over the last decade, researchers have explored various technologies and methodologies to enhance worker safety at construction sites. The use of advanced sensing technologies has mainly focused on detecting and warning about safety issues by relying directly on the detection capabilities of these technologies. Until now, very little research has explored methods to quantitatively assess individual workers’ safety performance. To this end, this study uses a tracking system to collect individuals’ location data and use it in the proposed safety framework. A computational and analytical procedure/model was developed to quantify the safety performance of individual workers beyond detection and warning. The framework defines parameters for zone-based safety risks and establishes a zone-based safety risk model to quantify potential risks to workers. To demonstrate the safety-analysis model, the study conducted field tests at different construction sites, using various interaction scenarios. Probabilistic evaluation showed slight underestimation and overestimation in certain cases; however, the model represented the overall safety performance of a subject quite well. Test results showed clear evidence of the model’s ability to capture the safety conditions of workers in pre-identified hazard zones. The developed approach presents a way to provide visualized and quantified information in the form of a safety index, which has not been available in the industry. In addition, such an automated method may offer a suitable safety monitoring approach that can eliminate human deployment, which is expensive, error-prone, and time-consuming.
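A zone-based risk index of the kind described can be illustrated with a toy sketch. It assumes hazard zones are axis-aligned rectangles carrying risk weights and that location traces are sampled at a fixed rate; both are simplifying assumptions for illustration, and the paper's actual model is richer than this.

```python
def zone_risk_index(trace, zones):
    """trace: list of (x, y) worker positions sampled at fixed intervals.
    zones: list of (xmin, xmax, ymin, ymax, weight) hazard rectangles.
    Returns the weighted fraction of samples spent inside hazard zones."""
    total = 0.0
    for x, y in trace:
        for xmin, xmax, ymin, ymax, w in zones:
            if xmin <= x <= xmax and ymin <= y <= ymax:
                total += w        # count this sample at the zone's weight
                break             # a sample contributes to one zone at most
    return total / len(trace)

# A worker spends 2 of 4 samples inside a unit-weight hazard zone.
trace = [(0, 0), (5, 5), (5, 5), (10, 10)]
zones = [(4, 6, 4, 6, 1.0)]
print(zone_risk_index(trace, zones))   # -> 0.5
```

Because samples are taken at a fixed rate, the index is proportional to weighted exposure time, which is the kind of quantified safety score the abstract describes.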

    Performance evaluation on optimisation of 200 dimensional numerical tests - results and issues

    Abstract: Many tasks in science and technology require optimisation, and resolving them could bring great benefits to the community. Multidimensional problems with hundreds of optimisation parameters or more face unusual computational limitations. Algorithms that perform well in low dimensions suffer insuperable difficulties when applied to high-dimensional spaces. This article presents an investigation of 200-dimensional scalable, heterogeneous, real-value numerical tests. For some of these tests the optimal values depend on the number of dimensions and are virtually unknown for many dimensionalities. Dependence on initialisation for successful identification of optimal values is analysed by comparing experiments that start from random initial locations with experiments that start from a single location. The aims are to: (1) assess dependence on initialisation in optimisation of 200-dimensional tests; (2) evaluate the tests’ complexity and the time required to resolve them; (3) analyse adaptation to tasks with unknown solutions; (4) identify specific peculiarities that could support performance in high dimensions; (5) identify computational limitations that numerical methods could face in high dimensions. The presented and analysed experimental results can be used for further comparison and evaluation of real-value methods.
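The fixed-versus-random initialisation comparison can be sketched with a simple local random search on a scalable test function. The sphere function here is a stand-in chosen for brevity; the article's benchmark suite is heterogeneous, and the search procedure below is not claimed to be the one the authors used.

```python
import random

DIM = 200

def sphere(x):
    """Scalable test function with known optimum 0 at the origin."""
    return sum(v * v for v in x)

def local_search(start, steps=1000, sigma=0.1):
    """Greedy random search: accept a Gaussian perturbation only if
    it improves the objective. Returns the best value found."""
    best, fbest = list(start), sphere(start)
    for _ in range(steps):
        cand = [v + random.gauss(0, sigma) for v in best]
        fc = sphere(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return fbest

random.seed(0)
from_fixed  = local_search([5.0] * DIM)                              # one fixed start
from_random = local_search([random.uniform(-5, 5) for _ in range(DIM)])  # random start
print(from_fixed, from_random)
```

Repeating the random-start run many times and comparing the spread of outcomes against the fixed-start run is the kind of initialisation-dependence analysis the abstract describes, and even this toy shows why 200 dimensions are costly: every candidate evaluation touches all 200 coordinates.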
    Using Non-Parametric Tests to Evaluate Traffic Forecasting Performance.

    This paper proposes the use of a number of nonparametric comparison methods for evaluating traffic flow forecasting techniques. The advantage of these methods is that they are free of any distributional assumptions and can be legitimately used on small datasets. To demonstrate the applicability of these tests, a number of models for forecasting traffic flows are developed. The one-step-ahead forecasts produced are then assessed using nonparametric methods. Consideration is given to whether a method is universally good or good at reproducing a particular aspect of the original series. That choice will be dictated, to a degree, by the user’s purpose in assessing traffic flow.
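One distribution-free comparison in this spirit is the paired sign test on absolute forecast errors: it asks only whether one model beats the other on more time points than chance would allow, making no distributional assumptions and remaining valid on small samples. This sketch is illustrative and not necessarily one of the specific tests used in the paper.

```python
from math import comb

def sign_test(errors_a, errors_b):
    """Two-sided paired sign test on absolute one-step-ahead errors.
    Returns a p-value for the null that neither model is better."""
    diffs = [abs(a) - abs(b) for a, b in zip(errors_a, errors_b)]
    diffs = [d for d in diffs if d != 0]          # drop exact ties
    n = len(diffs)
    k = sum(d > 0 for d in diffs)                 # times model A was worse
    # exact two-sided binomial tail probability under p = 0.5
    tail = sum(comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Model A has larger errors on 9 of 10 time points -> small p-value.
print(sign_test([2] * 9 + [0.5], [1] * 10))   # -> 0.021484375
```

Because the test only uses signs of paired differences, it can legitimately compare forecasting models on the short series that traffic studies often have.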

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is key to developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure is performed by means of either a verbal or a gestural dialogue. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so themselves. The overall system, which is composed of NAO and Wifibot robots, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, which allows correct performance to be assessed in terms of recognition rates, ease of use and response times.
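The dynamic-gesture matching step can be illustrated with the textbook Dynamic Time Warping distance. The sketch uses 1-D sequences for brevity, whereas the actual system compares sequences of gesture-specific features computed from depth maps; the recursion, however, is the standard one.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences, allowing
    elastic alignment so gestures performed at different speeds match."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A slowed-down copy of the same gesture still matches perfectly.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))   # -> 0.0
```

Classifying a live gesture then amounts to computing this distance against stored templates (waving, nodding, ...) and picking the nearest one, which is the usual way a DTW-based recognizer is deployed.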

    An adaptive multi-fidelity sampling framework for safety analysis of connected and automated vehicles

    Testing and evaluation are expensive but critical steps in the development of connected and automated vehicles (CAVs). In this paper, we develop an adaptive sampling framework to efficiently evaluate the accident rate of CAVs, particularly for scenario-based tests where the probability distribution of input parameters is known from Naturalistic Driving Data. Our framework relies on a surrogate model to approximate the CAV performance and on a novel acquisition function, formulated through an information-theoretic consideration, that maximizes the benefit of the next sample in terms of information gained about the accident rate. In addition to the standard application with only a single high-fidelity model of CAV performance, we also extend our approach to the bi-fidelity context, where an additional low-fidelity model can be used at lower computational cost to approximate the CAV performance. Accordingly, for the second case, our approach is formulated so that it allows the choice of the next sample in terms of both fidelity level (i.e., which model to use) and sampling location to maximize the benefit per cost. Our framework is tested on a widely considered two-dimensional cut-in problem for CAVs, where the Intelligent Driver Model (IDM) with different time resolutions is used to construct the high- and low-fidelity models. We show that our single-fidelity method outperforms the existing approach for the same problem, and that the bi-fidelity method can further save half of the computational cost to reach similar accuracy in estimating the accident rate.
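The adaptive-sampling loop can be caricatured as: evaluate a few scenarios, pick the next scenario by an acquisition rule, repeat, then estimate the accident rate from the samples. The sketch below deliberately substitutes a crude space-filling acquisition (sample farthest from everything evaluated so far) for the paper's information-theoretic one, and a plain average for its surrogate-based estimator; it shows the loop structure only, not the method itself.

```python
import random

def adaptive_sampling(f, candidates, n_init=5, n_iter=20):
    """f(x) -> 1.0 if scenario x ends in an accident, else 0.0.
    candidates: 1-D scenario parameters to choose samples from.
    Returns a crude accident-rate estimate from the sampled scenarios."""
    random.seed(1)
    values = {x: f(x) for x in random.sample(candidates, n_init)}
    for _ in range(n_iter):
        # stand-in acquisition: farthest unevaluated candidate from all
        # samples so far (the paper maximizes information gain instead)
        x_next = max((c for c in candidates if c not in values),
                     key=lambda c: min(abs(c - s) for s in values))
        values[x_next] = f(x_next)
    return sum(values.values()) / len(values)

# Toy cut-in-like scenario space: accidents occur for parameters < 0.2.
crash = lambda x: 1.0 if x < 0.2 else 0.0
print(adaptive_sampling(crash, [i / 100 for i in range(100)]))
```

The bi-fidelity extension would additionally let each iteration choose between a cheap and an expensive `f`, dividing the acquisition score by the chosen model's cost, as the abstract describes.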

    Full Reference Objective Quality Assessment for Reconstructed Background Images

    With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied to evaluate the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings of existing metrics and propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality of the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark to evaluate the performance of future metrics that are developed to evaluate the perceived quality of reconstructed background images. Comment: associated source code: https://github.com/ashrotre/RBQI; associated database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (email ashrotre@asu.edu for permissions).
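The probability-summation pooling mentioned in the abstract can be illustrated in miniature: under an independent-detectors assumption, a reconstruction artifact is perceived if it is detected at any scale, so per-scale detection probabilities combine multiplicatively through the miss probabilities. The actual RBQI pooling over color and structure is more elaborate than this sketch.

```python
def probability_summation(detect_probs):
    """Pool per-scale probabilities of detecting a visible difference.
    Assuming independent detectors, the overall miss probability is the
    product of per-scale miss probabilities, so:
        P(seen) = 1 - prod(1 - p_s)  over scales s."""
    p_miss = 1.0
    for p in detect_probs:
        p_miss *= 1.0 - p
    return 1.0 - p_miss

# Two scales, each giving the artifact a 50% chance of being noticed.
print(probability_summation([0.5, 0.5]))   # -> 0.75
```

A consequence worth noting is that pooling is dominated by the worst scale: a single scale with detection probability near 1 makes the artifact visible regardless of how clean the other scales are.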