3,279,113 research outputs found
Performance testing of lidar receivers
In addition to considerations about the different types of noise sources, dynamic range, and linearity of a lidar receiver, one requires information about the pulse-shape-retaining capability of the receiver. For this purpose, relatively precise information about the height resolution as well as the recovery time of the receiver, due both to large transients and to fast changes in the received signal, is required. As more and more analog receivers using fast analog-to-digital converters and transient recorders will be used in future lidar systems, methods to test these devices are essential. A method proposed for this purpose is presented. Tests were carried out using LCW-10, LT-20, and FTVR-2 as optical parts of the optical pulse generator circuits. A commercial optical receiver, LNOR, and a transient recorder, VK 220-4, were parts of the receiver system.
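The abstract does not spell out the test procedure, so the following is a minimal sketch, assuming a single digitized transient from the recorder, of how a receiver's recovery time could be estimated as the time the signal takes to settle back into a tolerance band around the pre-trigger baseline. The function name, window size, and tolerance are illustrative choices, not values from the paper.

```python
import numpy as np

def recovery_time(samples, sample_rate_hz, baseline_window=200, tolerance=0.02):
    """Estimate receiver recovery time from one digitized transient.

    samples: 1-D array of digitizer counts (a single recorded transient).
    baseline_window: number of pre-trigger samples used to estimate the baseline.
    tolerance: fractional band around the baseline within which the signal
               counts as recovered (illustrative value, not from the paper).
    """
    samples = np.asarray(samples, dtype=float)
    baseline = samples[:baseline_window].mean()
    peak_idx = int(np.argmax(np.abs(samples - baseline)))
    band = tolerance * abs(samples[peak_idx] - baseline)

    # First sample after the peak that re-enters the tolerance band.
    settled = np.where(np.abs(samples[peak_idx:] - baseline) <= band)[0]
    if settled.size == 0:
        return None  # the receiver never settled within this record
    return settled[0] / sample_rate_hz  # seconds after the peak
```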
Performance Testing of Distributed Component Architectures
Performance characteristics, such as response time, throughput and scalability, are key quality attributes of distributed applications. Current practice, however, rarely applies systematic techniques to evaluate performance characteristics. We argue that evaluation of performance is particularly crucial in early development stages, when important architectural choices are made. At first glance, this contradicts the use of testing techniques, which are usually applied towards the end of a project. In this chapter, we assume that many distributed systems are built with middleware technologies, such as the Java 2 Enterprise Edition (J2EE) or the Common Object Request Broker Architecture (CORBA). These provide services and facilities whose implementations are available when architectures are defined. We also note that it is the middleware functionality, such as transaction and persistence services, remote communication primitives and threading policy primitives, that dominates distributed system performance. Drawing on these observations, this chapter presents a novel approach to performance testing of distributed applications. We propose to derive application-specific test cases from architecture designs so that the performance of a distributed application can be tested based on the middleware software at early stages of a development process. We report empirical results that support the viability of the approach.
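As a rough illustration of the approach sketched in this abstract, deriving a test case that exercises middleware-backed operations before the full application exists, the snippet below drives a placeholder remote operation with a fixed number of concurrent clients and reports latency percentiles. The endpoint URL, client counts, and the use of plain HTTP are assumptions for illustration; the chapter itself targets J2EE/CORBA middleware services.

```python
import concurrent.futures
import statistics
import time
import urllib.request

SERVICE_URL = "http://localhost:8080/stub-operation"  # placeholder endpoint, not from the chapter
CLIENTS = 20            # concurrent clients simulated by this test case
CALLS_PER_CLIENT = 50   # calls each client issues

def one_client(_client_id):
    """Issue a series of calls against the middleware-backed stub and time each one."""
    latencies = []
    for _ in range(CALLS_PER_CLIENT):
        start = time.perf_counter()
        with urllib.request.urlopen(SERVICE_URL) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

def run_test_case():
    with concurrent.futures.ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        latencies = [lat for client in pool.map(one_client, range(CLIENTS)) for lat in client]
    latencies.sort()
    print(f"median latency:  {statistics.median(latencies):.4f} s")
    print(f"95th percentile: {latencies[int(0.95 * len(latencies)) - 1]:.4f} s")

if __name__ == "__main__":
    run_test_case()
```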
Testing for changes in forecasting performance
We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospectively whether a given forecasting model provides forecasts which show evidence of changes with respect to some loss function. We adapt the classical structural change tests to the forecast failure context. First, we recommend that all tests should be carried out with a fixed scheme to have best power. This ensures a maximum difference between the fitted in-sample and out-of-sample means of the losses and avoids contamination issues under the rolling and recursive schemes. With a fixed scheme, Giacomini and Rossi’s (2009) (GR) test is simply a Wald test for a one-time change in the mean of the total (the in-sample plus out-of-sample) losses at a known break date, say m, the value that separates the in-sample and out-of-sample periods. Since the actual date of forecast failure need not coincide with m, such a test can lack power. To alleviate this problem, we consider a variety of tests: maximizing the GR test over values of m within a pre-specified range; a Double sup-Wald (DSW) test, which for each m performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over some range; and working directly with the total loss series to define the Total Loss sup-Wald (TLSW) and Total Loss UDmax (TLUD) tests. Using theoretical analyses and simulations, we show that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all data-generating processes considered are the DSW and TLUD tests, constructed with a fixed forecasting window scheme. Some explanations are provided, and empirical applications illustrate the relevance of our findings in practice.
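A worked sketch of the common building block behind these procedures may help: a sup-Wald statistic for a one-time shift in the mean of a loss series, maximized over candidate break dates in a trimmed range. The sketch uses a plain sample-variance estimate, so it ignores the serial correlation in losses that the paper's tests account for; the DSW test would apply such a statistic to the out-of-sample losses for each candidate m and take the maximum.

```python
import numpy as np

def sup_wald_mean_shift(losses, trim=0.15):
    """Sup-Wald statistic for a one-time change in the mean of a loss series.

    Candidate break dates are restricted to [trim*n, (1-trim)*n]. A plain
    sample-variance estimate is used here; serially correlated losses would
    require a long-run (HAC) variance, as in the paper.
    """
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    lo, hi = int(trim * n), int((1.0 - trim) * n)
    sigma2 = losses.var(ddof=1)

    stats = []
    for k in range(lo, hi + 1):
        diff = losses[k:].mean() - losses[:k].mean()
        stats.append(diff**2 / (sigma2 * (1.0 / k + 1.0 / (n - k))))
    best = int(np.argmax(stats))
    return stats[best], lo + best  # (statistic, estimated break date)
```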
Does opportunistic testing bias cognitive performance in primates? Learning from drop-outs
Dropouts are a common issue in cognitive tests with non-human primates. One main reason for dropouts is that researchers often face a trade-off between obtaining a sufficiently large sample size and logistic restrictions, such as limited access to testing facilities. The commonly used opportunistic testing approach deals with this trade-off by only testing those individuals who readily participate and complete the cognitive tasks within a given time frame. All other individuals are excluded from further testing and data analysis. However, it is unknown whether this approach merely excludes subjects who are not consistently motivated to participate, or whether these dropouts systematically differ in cognitive ability. If the latter holds, the selection bias resulting from opportunistic testing would systematically affect performance scores and thus comparisons between individuals and species. We assessed the potential effects of opportunistic testing on cognitive performance in common marmosets (Callithrix jacchus) and squirrel monkeys (Saimiri sciureus) with a test battery consisting of six cognitive tests: two inhibition tasks (Detour Reaching and A-not-B), one cognitive flexibility task (Reversal Learning), one quantity discrimination task, and two memory tasks. Importantly, we used a full testing approach in which subjects were given as much time as they required to complete each task. For each task, we then compared the performance of subjects who completed the task within the expected number of testing days with that of subjects who needed more testing time. We found that the two groups did not differ in task performance, and therefore opportunistic testing would have been justified without risking biased results. If our findings generalise to other species, maximising sample sizes by only testing consistently motivated subjects will be a valid alternative whenever full testing is not feasible.
Nonparametric Bayesian multiple testing for longitudinal performance stratification
This paper describes a framework for flexible multiple hypothesis testing of autoregressive time series. The modeling approach is Bayesian, though a blend of frequentist and Bayesian reasoning is used to evaluate procedures. Nonparametric characterizations of both the null and alternative hypotheses will be shown to be the key robustification step necessary to ensure reasonable Type-I error performance. The methodology is applied to part of a large database containing up to 50 years of corporate performance statistics on 24,157 publicly traded American companies, where the primary goal of the analysis is to flag companies whose historical performance is significantly different from that expected due to chance. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/), http://dx.doi.org/10.1214/09-AOAS252, by the Institute of Mathematical Statistics (http://www.imstat.org).
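The paper's nonparametric Bayesian machinery is not reproduced here; as a rough frequentist stand-in for the screening idea, the sketch below fits an AR(1) with intercept to each company's series and flags companies whose intercept differs from zero after a Benjamini-Hochberg correction. Every name and threshold is illustrative, and this substitute makes none of the robustness guarantees discussed in the abstract.

```python
import numpy as np
from scipy import stats

def ar1_intercept_pvalue(series):
    """p-value for H0: a = 0 in the AR(1) regression y_t = a + b*y_{t-1} + e_t."""
    y, x = series[1:], series[:-1]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
    t_stat = beta[0] / np.sqrt(cov[0, 0])
    return 2.0 * stats.t.sf(abs(t_stat), dof)

def flag_companies(series_by_company, fdr=0.10):
    """Benjamini-Hochberg screening across companies -- a crude stand-in for
    the paper's nonparametric Bayesian multiple-testing procedure."""
    names = list(series_by_company)
    pvals = np.array([ar1_intercept_pvalue(np.asarray(series_by_company[n], dtype=float))
                      for n in names])
    order = np.argsort(pvals)
    m = len(pvals)
    passed = pvals[order] <= fdr * np.arange(1, m + 1) / m
    n_reject = passed.nonzero()[0].max() + 1 if passed.any() else 0
    return [names[i] for i in order[:n_reject]]
```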
Fluid infusion system
Performance testing carried out in the development of the prototype zero-g fluid infusion system is described and summarized. Engineering tests were performed in the course of development, both on the original breadboard device and on the prototype system. This testing was aimed at establishing baseline system performance parameters and facilitating improvements. Acceptance testing was then performed on the prototype system to verify functional performance. Acceptance testing included a demonstration of the fluid infusion system on a laboratory animal
DiPerF: an automated DIstributed PERformance testing Framework
We present DiPerF, a distributed performance testing framework, aimed at simplifying and automating service performance evaluation. DiPerF coordinates a pool of machines that test a target service, collects and aggregates performance metrics, and generates performance statistics. The aggregate data collected provide information on service throughput, on service "fairness" when serving multiple clients concurrently, and on the impact of network latency on service performance. Furthermore, using these data, it is possible to build predictive models that estimate a service's performance given the service load. We have tested DiPerF on 100+ machines on two testbeds, Grid3 and PlanetLab, and explored the performance of job submission services (pre-WS GRAM and WS GRAM) included with Globus Toolkit 3.2. To appear in IEEE/ACM Grid2004.
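DiPerF itself is not shown in the abstract; the sketch below illustrates, under an assumed record format, the aggregation step it describes: per-client latency samples reported by the test machines are merged into per-interval throughput, a median latency, and a simple fairness measure (Jain's index) across clients.

```python
import statistics
from collections import defaultdict

def aggregate(records, bucket_s=1.0):
    """Merge (client_id, start_time_s, latency_s) samples reported by the
    test machines into throughput, latency, and fairness summaries.
    The record format is assumed for illustration."""
    per_bucket = defaultdict(int)      # completed requests per time bucket
    per_client = defaultdict(list)     # latencies observed by each client
    for client_id, start, latency in records:
        per_bucket[int(start / bucket_s)] += 1
        per_client[client_id].append(latency)

    throughput = {b: n / bucket_s for b, n in sorted(per_bucket.items())}

    # Jain's fairness index over per-client request counts: 1.0 means every
    # client was served equally; lower values indicate unfair service.
    counts = [len(lats) for lats in per_client.values()]
    fairness = sum(counts) ** 2 / (len(counts) * sum(c * c for c in counts))

    return {
        "throughput_per_bucket": throughput,
        "median_latency": statistics.median(l for lats in per_client.values() for l in lats),
        "fairness_index": fairness,
    }
```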
Testing the RESTful API of the Santri Card Monitoring Website Using the Equivalence Partitions Method
Pesantren Go Digital is an initiative by PT Telkom Indonesia to digitize the boarding school segment in Indonesia. One of the digital solutions developed is Kartu Santri, a cashless transaction service integrated with electronic money. However, the development of the Santri Card Monitoring website is still ongoing, and the API's performance is crucial for ensuring the website's features function properly. API performance sometimes encounters challenges, such as server failures, accuracy issues with retrieved data, or responses that do not align with the sent requests. This research aims to test the API's quality and functionality on the Santri Card Monitoring website using the Black Box Testing Equivalence Partitions method. This method allows detailed testing by determining valid and invalid data boundaries. The testing was conducted using the Postman tool, with results showing the API's effectiveness at 71.25% in the first iteration, which increased to 100% after improvements. Consequently, the RESTful API of the Santri Card Monitoring website was deemed "Very Good," with the improvements significantly enhancing its effectiveness. In conclusion, the Black Box Testing Equivalence Partitions method proves highly effective for testing API performance, contributing to the improved quality and functionality of the Santri Card Monitoring website in Pesantren Go Digital.
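The paper runs its cases in Postman; as a language-neutral illustration of the same equivalence-partitions idea, the sketch below defines valid and invalid input partitions for a hypothetical card-balance endpoint and checks that each partition yields the expected status code. The endpoint, field names, and status codes are assumptions, not the actual Kartu Santri API.

```python
import requests  # third-party HTTP client (pip install requests)

BASE_URL = "https://example.invalid/api/santri"  # placeholder, not the real service

# Each case pairs one equivalence partition of the input with the status
# code the API is expected to return for that partition (assumed values).
TEST_CASES = [
    {"name": "valid card id",     "params": {"card_id": "S-000123"}, "expected": 200},
    {"name": "empty card id",     "params": {"card_id": ""},         "expected": 400},
    {"name": "malformed card id", "params": {"card_id": "??!"},      "expected": 400},
    {"name": "unknown card id",   "params": {"card_id": "S-999999"}, "expected": 404},
]

def run_cases():
    passed = 0
    for case in TEST_CASES:
        resp = requests.get(f"{BASE_URL}/balance", params=case["params"], timeout=10)
        ok = resp.status_code == case["expected"]
        passed += ok
        print(f"{case['name']}: expected {case['expected']}, got {resp.status_code} "
              f"-> {'PASS' if ok else 'FAIL'}")
    print(f"effectiveness: {100.0 * passed / len(TEST_CASES):.2f}%")

if __name__ == "__main__":
    run_cases()
```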
