Performance limitations of subband adaptive filters
In this paper, we evaluate the performance limitations of subband adaptive filters in terms of the achievable final error terms. The limiting factors are the aliasing level in the subbands, which acts as a distortion and thus sets a lower bound on the minimum mean squared error in each subband, and the distortion function of the overall filter bank, which in a system identification setup restricts the accuracy of the equivalent fullband model. Using a generalized DFT modulated filter bank for the subband decomposition, both errors can be stated in terms of the underlying prototype filter. If a source model for coloured input signals is available, it is also possible to calculate the power spectral densities in both the subbands and the reconstructed fullband. The predicted error limits compare favourably with the simulations presented.
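As a rough illustration of how the aliasing floor follows from the prototype filter, the sketch below estimates the energy of a prototype lowpass response that folds back into a subband after decimation. This is not the paper's derivation: the filter bank size K, decimation ratio N, filter length and the firwin design are all assumed values chosen for illustration.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Minimal sketch (not the paper's exact derivation): estimate the aliasing
# floor of an oversampled DFT-modulated filter bank from its prototype
# lowpass filter.  The prototype energy that folds back into a subband after
# decimation by N lower-bounds the achievable MMSE in that subband.
K = 16    # number of subbands (assumed value)
N = 12    # decimation ratio, N < K for oversampling (assumed value)
Lp = 192  # prototype filter length (assumed value)

# Prototype lowpass with cut-off at the subband edge 1/K (1 = Nyquist).
p = firwin(Lp, 1.0 / K)

w, P = freqz(p, worN=8192)
mag2 = np.abs(P) ** 2

# Frequencies below pi/N stay in band after decimation by N; energy above
# that folds back on top of the subband signal and appears as aliasing.
inband = w < np.pi / N
aliasing_ratio = mag2[~inband].sum() / mag2[inband].sum()
print(f"approx. aliasing floor: {10 * np.log10(aliasing_ratio):.1f} dB")
```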
Adobe Flash as a medium for online experimentation: a test of reaction time measurement capabilities
Adobe Flash can be used to run complex psychological experiments over the Web. We examined the reliability of using Flash to measure reaction times (RTs) using a simple binary-choice task implemented both in Flash and in a Linux-based system known to record RTs with millisecond accuracy. Twenty-four participants were tested in the laboratory using both implementations; they also completed the Flash version on computers of their own choice outside the lab. RTs from the trials run in Flash outside the lab were approximately 20 msec slower than those from trials run in Flash in the lab, which in turn were approximately 10 msec slower than RTs from the trials run on the Linux-based system (baseline condition). RT SDs were similar in all conditions, suggesting that although Flash may overestimate RTs slightly, it does not appear to add significant noise to the recorded data.
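The pattern reported above (shifted means, similar spreads) can be illustrated with made-up numbers; the sketch below uses hypothetical RT samples, not the study's data.

```python
import numpy as np

# Illustrative sketch only: simulated RT samples in milliseconds, with means
# offset as in the reported results but identical underlying noise.
rng = np.random.default_rng(0)
conditions = {
    "linux_lab":  rng.normal(480, 60, 1000),  # baseline (hypothetical values)
    "flash_lab":  rng.normal(490, 60, 1000),  # ~10 msec slower than baseline
    "flash_home": rng.normal(510, 60, 1000),  # ~20 msec slower than flash_lab
}

for name, rts in conditions.items():
    # A constant timing offset shifts the mean but leaves the SD unchanged,
    # which is the pattern reported for Flash versus the baseline system.
    print(f"{name:10s}  mean = {rts.mean():6.1f} ms   SD = {rts.std(ddof=1):5.1f} ms")
```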
A Generalised Sidelobe Canceller Architecture Based on Oversampled Subband Decompositions
Adaptive broadband beamforming can be performed on oversampled subband signals, whereby an independent beamformer is operated in each frequency band. This has been shown to result in a considerably reduced computational complexity. In this paper, we primarily investigate the convergence behaviour of the generalised sidelobe canceller (GSC) based on the normalised least mean squares (NLMS) algorithm when operated in subbands. The minimum mean squared error can be limited, amongst other factors, by the aliasing present in the subbands. With regard to convergence speed, there is strong indication that the subband GSC converges faster than a fullband counterpart of similar modelling capabilities. Simulations are presented.
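For reference, a minimal single-channel NLMS sketch is given below; it shows only the adaptation rule that would run in each subband, with the beamforming and filter bank stages omitted, and all signal parameters are invented for illustration.

```python
import numpy as np

# Minimal fullband, single-channel NLMS sketch; the array processing and
# subband decomposition stages of the actual GSC are omitted.
def nlms(x, d, num_taps=32, mu=0.5, eps=1e-8):
    """Adapt a filter mapping x to d; return the error signal and weights."""
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]          # current input regressor
        y = w @ u                            # filter output
        e[n] = d[n] - y                      # error against the desired signal
        w += mu * e[n] * u / (u @ u + eps)   # normalised LMS update
    return e, w

# Hypothetical system identification example: unknown 32-tap system plus noise.
rng = np.random.default_rng(1)
x = rng.standard_normal(10000)
h = rng.standard_normal(32) * np.exp(-0.2 * np.arange(32))
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
e, w_hat = nlms(x, d)
print(f"residual error power: {10 * np.log10(np.mean(e[-2000:] ** 2)):.1f} dB")
```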
Dilettante, Venturesome, Tory and Crafts: Drivers of Performance Among Taxonomic Groups
Empirical research has failed to cumulate into a coherent taxonomy of small firms. This may be because the method adapted from biology by Bill McKelvey has almost never been adopted. His approach calls for extensive variables and a focused sample of organizations, contrary to most empirical studies, which are specialized. Comparing general- and special-purpose approaches, we find that some of the latter have more explanatory power than others and that general-purpose taxonomies have the greatest explanatory power. Examining performance, we find that the types do not display significantly different levels of performance, but they display highly varied drivers of performance.
Black hole pair creation and the stability of flat space
We extend the Gross-Perry-Yaffe approach to hot flat space instability to Minkowski space. This is done by a saddle point approximation of the partition function in a Schwarzschild wormhole background which is coincident with an eternal black hole. The appearance of an instability in the whole manifold is here interpreted as black hole pair creation.
Comment: 11 pages, RevTeX4, 2 figures. Accepted for publication in Int. J. Mod. Phys.
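For orientation, the generic saddle-point and negative-mode structure behind Gross-Perry-Yaffe-type instability arguments can be sketched as follows (a schematic only; the paper's specific background, boundary terms and normalisation may differ):

```latex
% Schematic saddle-point / negative-mode structure (illustrative only).
\begin{align}
  Z &\;\simeq\; \sum_{\text{saddles}} e^{-I_E[g_{\mathrm{cl}}]}
       \bigl(\det{}'\,\delta^2 I_E[g_{\mathrm{cl}}]\bigr)^{-1/2}, \\
  F &\;=\; -\beta^{-1}\ln Z, \qquad
  \Gamma \;\sim\; 2\,\bigl|\operatorname{Im} F\bigr| .
\end{align}
```

A single negative eigenvalue of the fluctuation operator $\delta^2 I_E$ makes the Gaussian integral formally divergent; analytic continuation then gives $Z$ an imaginary part, and the resulting $\operatorname{Im} F$ is read as a decay rate, interpreted in this setting as the black hole pair-creation rate.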
Digital Signal Processing Education: Technology and Tradition
In this paper we discuss a DSP course presented both to university students and to participants on industrial short courses. The "traditional" DSP course will typically run over one to two semesters and usually covers the fundamental mathematics of the z-, Laplace and Fourier transforms, followed by algorithm and application detail. In the course we discuss here, the use of advanced DSP software and integrated support software allows the presentation time to be greatly shortened and more focussed algorithm and application learning to be introduced. Combining the traditional lecture with the use of advanced DSP software, all harnessed by the web, we report on the course's objectives, syllabus, and mode of teaching.
Performance of a Combustion Driven Shock Tunnel with Application to the Tailored-Interface Operating Conditions
Performance characteristics of the driven-tube portion of a combustion-driven wind tunnel designed for quasi-steady reservoir conditions.
Hiding the complexity: building a distributed ATLAS Tier-2 with a single resource interface using ARC middleware
Since their inception, Grids for high energy physics have found management of data to be the most challenging aspect of operations. This problem has generally been tackled by the experiment's data management framework controlling in fine detail the distribution of data around the grid and the careful brokering of jobs to sites with co-located data. This approach, however, presents experiments with a difficult and complex system to manage, as well as introducing a rigidity into the framework which is very far from the original conception of the grid.
In this paper we describe how the ScotGrid distributed Tier-2, which has sites in Glasgow, Edinburgh and Durham, was presented to ATLAS as a single, unified resource using the ARC middleware stack. In this model the ScotGrid 'data store' is hosted at Glasgow and presented as a single ATLAS storage resource. As jobs are taken from the ATLAS PanDA framework, they are dispatched to the computing cluster with the fastest response time. An ARC compute element at each site then asynchronously stages the data from the data store into a local cache hosted at each site. The job is then launched in the batch system and accesses data locally.
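The dispatch-and-stage pattern just described can be caricatured in a few lines. The site names below match the text, but every function, probe and path is invented for illustration and is not the ARC or PanDA API.

```python
import asyncio

# Toy sketch of the dispatch-and-stage pattern; all names are hypothetical.
SITES = ["glasgow", "edinburgh", "durham"]

def probe_response_time(site: str) -> float:
    """Stand-in for querying a compute element's queue responsiveness."""
    return {"glasgow": 0.12, "edinburgh": 0.35, "durham": 0.20}[site]

async def stage_to_local_cache(site: str, dataset: str) -> str:
    """Asynchronously copy a dataset from the central data store to the site cache."""
    await asyncio.sleep(0.1)                 # placeholder for the real transfer
    return f"/cache/{site}/{dataset}"

async def run_job(dataset: str) -> None:
    # 1. Dispatch to the cluster with the fastest response time.
    site = min(SITES, key=probe_response_time)
    # 2. Stage input data from the central 'data store' into the site-local cache.
    local_path = await stage_to_local_cache(site, dataset)
    # 3. Launch in the local batch system; the job reads its input locally.
    print(f"running on {site}, input at {local_path}")

asyncio.run(run_job("example_dataset"))
```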
We discuss the merits of this system compared to other operational models, considering it from the point of view of both the resource providers (sites) and the resource consumers (experiments), and we consider the issues involved in transitioning to this model.