
    Performance testing of lidar receivers

    In addition to the considerations about the different types of noise sources, dynamic range, and linearity of a lidar receiver, one requires information about the receiver's pulse-shape-retaining capabilities. For this purpose, relatively precise information is required about the height resolution as well as the recovery time of the receiver, due both to large transients and to fast changes in the received signal. Since future lidar systems will increasingly use analog receivers with fast analog-to-digital converters and transient recorders, methods to test these devices are essential. A method proposed for this purpose is presented. Tests were carried out using LCW-10, LT-20, and FTVR-2 as optical parts of the optical pulse generator circuits. A commercial optical receiver, LNOR, and a transient recorder, VK 220-4, were parts of the receiver system.
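
    As a rough illustration of the recovery-time measurement discussed above, the following Python sketch estimates how long a digitized receiver output takes to settle back to baseline after a large transient. The function name, tolerance band, and synthetic signal are illustrative assumptions, not the paper's actual test procedure with calibrated optical pulse generators.

```python
import numpy as np

def recovery_time(signal, t, baseline, tolerance=0.02):
    """Estimate receiver recovery time: the time after the peak of a
    large transient until the output settles permanently inside a
    tolerance band around the baseline. Illustrative sketch only."""
    peak = np.argmax(np.abs(signal - baseline))
    band = tolerance * np.max(np.abs(signal - baseline))
    # First sample after the peak from which the output stays in the band
    for i in range(peak, len(signal)):
        if np.all(np.abs(signal[i:] - baseline) <= band):
            return t[i] - t[peak]
    return None  # receiver never settles within the recorded window

# Synthetic example: exponential recovery after a saturating transient
t = np.linspace(0, 10e-6, 1000)   # 10 microseconds, 1000 samples
sig = 1.0 * np.exp(-t / 1e-6)     # decaying transient, tau = 1 us
print(recovery_time(sig, t, baseline=0.0))
```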

    Testing for changes in forecasting performance

    We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospectively whether a given forecasting model provides forecasts which show evidence of changes with respect to some loss function. We adapt the classical structural change tests to the forecast failure context. First, we recommend that all tests should be carried out with a fixed scheme to have best power. This ensures a maximum difference between the fitted in-sample and out-of-sample means of the losses and avoids contamination issues under the rolling and recursive schemes. With a fixed scheme, Giacomini and Rossi's (2009) (GR) test is simply a Wald test for a one-time change in the mean of the total (the in-sample plus out-of-sample) losses at a known break date, say m, the value that separates the in- and out-of-sample periods; it therefore has little power against changes occurring away from m. To alleviate this problem, we consider a variety of tests: maximizing the GR test over values of m within a pre-specified range; a Double sup-Wald (DSW) test, which for each m performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over some range; we also propose to work directly with the total loss series to define the Total Loss sup-Wald (TLSW) and Total Loss UDmax (TLUD) tests. Using theoretical analyses and simulations, we show that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all data-generating processes considered are the DSW and TLUD tests, constructed with a fixed forecasting window scheme. Some explanations are provided, and empirical applications illustrate the relevance of our findings in practice.
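
    The common building block behind the tests named above is a sup-Wald statistic for a one-time change in the mean of a loss series, maximized over candidate break dates in a trimmed range. The Python sketch below shows that building block under simplifying assumptions: it uses the plain sample variance where the literature would require a HAC long-run variance estimate, and the function name and trimming fraction are illustrative, not from the paper.

```python
import numpy as np

def sup_wald_mean_change(losses, trim=0.15):
    """Sup-Wald statistic for a single change in the mean of a loss
    series, maximized over break dates in [trim*T, (1-trim)*T].
    Minimal sketch: sample variance stands in for the HAC long-run
    variance a rigorous implementation would use."""
    x = np.asarray(losses, dtype=float)
    T = len(x)
    lo, hi = int(trim * T), int((1 - trim) * T)
    sigma2 = np.var(x, ddof=1)
    best_stat, best_k = -np.inf, None
    for k in range(lo, hi):
        m1, m2 = x[:k].mean(), x[k:].mean()
        # Wald statistic for H0: equal means before and after date k
        w = (m1 - m2) ** 2 / (sigma2 * (1.0 / k + 1.0 / (T - k)))
        if w > best_stat:
            best_stat, best_k = w, k
    return best_stat, best_k

# Example: a loss series whose mean shifts upward two-thirds of the way in
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(1.0, 1.0, 200),
                         rng.normal(1.8, 1.0, 100)])
stat, k = sup_wald_mean_change(losses)
print(f"sup-Wald = {stat:.2f} at break date {k}")
```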

    Performance Testing of Distributed Component Architectures

    Performance characteristics, such as response time, throughput and scalability, are key quality attributes of distributed applications. Current practice, however, rarely applies systematic techniques to evaluate performance characteristics. We argue that evaluation of performance is particularly crucial in early development stages, when important architectural choices are made. At first glance, this contradicts the use of testing techniques, which are usually applied towards the end of a project. In this chapter, we assume that many distributed systems are built with middleware technologies, such as the Java 2 Enterprise Edition (J2EE) or the Common Object Request Broker Architecture (CORBA). These provide services and facilities whose implementations are available when architectures are defined. We also note that it is the middleware functionality, such as transaction and persistence services, remote communication primitives and threading policy primitives, that dominates distributed system performance. Drawing on these observations, this chapter presents a novel approach to performance testing of distributed applications. We propose to derive application-specific test cases from architecture designs so that the performance of a distributed application can be tested based on the middleware software at early stages of a development process. We report empirical results that support the viability of the approach.
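
    To show the shape such an early, architecture-derived performance test can take, here is a minimal Python sketch: a stub stands in for a component whose business logic does not yet exist, and the harness measures response-time percentiles against it. The chapter targets J2EE/CORBA middleware; the stub, its simulated overhead, and all names here are assumptions for illustration.

```python
import time
import statistics

def stub_order_service(payload):
    """Stand-in for a component that is not yet implemented; only a
    faked middleware-level cost is exercised, mirroring the idea of
    testing against middleware before the application exists."""
    time.sleep(0.005)  # stands in for transaction + remote-call overhead
    return {"status": "ok", "echo": payload}

def measure_response_times(service, payload, iterations=200):
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        service(payload)
        samples.append(time.perf_counter() - start)
    return samples

samples = measure_response_times(stub_order_service, {"order_id": 42})
print(f"median = {statistics.median(samples) * 1e3:.2f} ms, "
      f"p95 = {statistics.quantiles(samples, n=20)[18] * 1e3:.2f} ms")
```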

    DiPerF: an automated DIstributed PERformance testing Framework

    We present DiPerF, a distributed performance testing framework aimed at simplifying and automating service performance evaluation. DiPerF coordinates a pool of machines that test a target service, collects and aggregates performance metrics, and generates performance statistics. The aggregate data collected provide information on service throughput, on service "fairness" when serving multiple clients concurrently, and on the impact of network latency on service performance. Furthermore, using these data, it is possible to build predictive models that estimate service performance given the service load. We have tested DiPerF on 100+ machines on two testbeds, Grid3 and PlanetLab, and explored the performance of the job submission services (pre-WS GRAM and WS GRAM) included with the Globus Toolkit 3.2.
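
    The aggregation and prediction steps described above can be sketched in a few lines of Python. The measurements below are hypothetical placeholders (in DiPerF they would come from the distributed tester machines after time synchronization), and the simple linear fit is an assumed stand-in for whatever predictive model one actually builds.

```python
import numpy as np

# Hypothetical aggregated measurements: (concurrent clients, response time in s)
observations = np.array([
    (1, 0.11), (5, 0.14), (10, 0.19), (20, 0.31),
    (40, 0.55), (60, 0.82), (80, 1.10), (100, 1.41),
])
load, rt = observations[:, 0], observations[:, 1]

# Aggregate service throughput at each load level (requests per second,
# assuming each client issues requests back to back)
throughput = load / rt

# Simple predictive model: least-squares linear fit of response time
# against offered load, usable to estimate performance at unobserved loads
slope, intercept = np.polyfit(load, rt, deg=1)
print(f"predicted response time at 150 clients: {slope * 150 + intercept:.2f} s")
```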

    Does opportunistic testing bias cognitive performance in primates? Learning from drop-outs

    Dropouts are a common issue in cognitive tests with non-human primates. One main reason for dropouts is that researchers often face a trade-off between obtaining a sufficiently large sample size and logistic restrictions, such as limited access to testing facilities. The commonly-used opportunistic testing approach deals with this trade-off by only testing those individuals who readily participate and complete the cognitive tasks within a given time frame. All other individuals are excluded from further testing and data analysis. However, it is unknown if this approach merely excludes subjects who are not consistently motivated to participate, or if these dropouts systematically differ in cognitive ability. If the latter holds, the selection bias resulting from opportunistic testing would systematically affect performance scores and thus comparisons between individuals and species. We assessed the potential effects of opportunistic testing on cognitive performance in common marmosets (Callithrix jacchus) and squirrel monkeys (Saimiri sciureus) with a test battery consisting of six cognitive tests: two inhibition tasks (Detour Reaching and A-not-B), one cognitive flexibility task (Reversal Learning), one quantity discrimination task, and two memory tasks. Importantly, we used a full testing approach in which subjects were given as much time as they required to complete each task. For each task, we then compared the performance of subjects who completed the task within the expected number of testing days with those subjects who needed more testing time. We found that the two groups did not differ in task performance, and therefore opportunistic testing would have been justified without risking biased results. If our findings generalise to other species, maximising sample sizes by only testing consistently motivated subjects will be a valid alternative whenever full testing is not feasible.
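
    The core comparison here, per task, is between the performance of on-time completers and subjects who needed extra testing time. The abstract does not specify the statistical test used, so the Python sketch below uses a generic non-parametric group comparison (Mann-Whitney U) on hypothetical per-subject scores purely for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-subject scores on one task (e.g., proportion correct),
# split by whether the subject finished within the expected testing days.
# NOT the study's actual data.
on_time  = np.array([0.82, 0.75, 0.90, 0.68, 0.77, 0.85, 0.73])
overtime = np.array([0.80, 0.71, 0.88, 0.79, 0.66, 0.84])

# Two-sided non-parametric comparison of the two groups' performance;
# a non-significant result would be consistent with the paper's finding
# that slower subjects do not systematically differ in ability
stat, p = mannwhitneyu(on_time, overtime, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```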

    The Effect of Workload, Work Stress, and Work Environment on Employee Performance (A Study of Employees of Toko Besi Musthafa in Pasuruan Regency)

    Each company responds differently to employee performance results, and these responses are related to workload, work stress, and the work environment. This research analyzes the influence of workload, work stress, and the work environment on employee performance at the Musthafa Iron Shop. The research is quantitative and descriptive. To examine these three factors, the researchers administered a questionnaire to 41 respondents, comprising all employees. The data were processed with the Statistical Package for the Social Sciences (SPSS) version 26. The test instruments used include validity tests and reliability tests. The data analysis method uses multiple linear regression analysis, classical assumption testing, and hypothesis testing. The research results show that workload has a negative and significant effect on employee performance, work stress has a negative and significant effect on employee performance, the work environment has a positive and significant effect on employee performance, and workload has the greatest influence on employee performance.
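
    The model family the study estimates in SPSS is a multiple linear regression of employee performance on workload, work stress, and work environment. As a minimal sketch of that analysis, the Python example below fits the same model by ordinary least squares on synthetic stand-in data; the generated values are NOT the study's questionnaire data.

```python
import numpy as np

# Illustrative stand-in for the questionnaire composites (41 respondents);
# NOT the study's actual data.
rng = np.random.default_rng(1)
n = 41
workload    = rng.normal(3.5, 0.6, n)
work_stress = rng.normal(3.2, 0.7, n)
environment = rng.normal(3.8, 0.5, n)
performance = (4.0 - 0.4 * workload - 0.3 * work_stress
               + 0.5 * environment + rng.normal(0, 0.3, n))

# Multiple linear regression: performance ~ workload + stress + environment,
# fitted by ordinary least squares (the same model family as in SPSS 26)
X = np.column_stack([np.ones(n), workload, work_stress, environment])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(dict(zip(["intercept", "workload", "work_stress", "environment"],
               np.round(beta, 3))))
```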