Investigating the impact of image content on the energy efficiency of hardware-accelerated digital spatial filters
Battery-operated low-power portable computing devices are becoming an inseparable part of human daily life. One of the major goals is to achieve the longest battery life in such a device. Additionally, the need for performance in processing multimedia content is ever increasing, and processing image and video content consumes more power than other applications. A widely used approach to improving energy efficiency is to implement the computationally intensive functions as digital hardware accelerators. Spatial filtering is one of the most commonly used methods of digital image processing. As per the Fourier theory, an image can be considered as a two-dimensional signal that is composed of spatially extended two-dimensional sinusoidal patterns called gratings. Spatial frequency theory states that sinusoidal gratings can be characterised by their spatial frequency, phase, amplitude, and orientation. This article presents results from our investigation into assessing the impact of these characteristics of a digital image on the energy efficiency of hardware-accelerated spatial filters employed to process that image. Two greyscale images, each of size 128 × 128 pixels, comprising two-dimensional sinusoidal gratings at a maximum spatial frequency of 64 cycles per image orientated at 0° and 90°, respectively, were processed in a hardware-implemented Gaussian smoothing filter. The energy efficiency of the filter was compared with the baseline energy efficiency of processing a featureless plain black image. The results show that the energy efficiency of the filter drops to 12.5% when the gratings are orientated at 0°, whilst it rises to 72.38% when they are orientated at 90°.
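The test stimuli described above can be reproduced in software. The following sketch generates a 128 × 128 sinusoidal grating at a chosen orientation and applies a separable Gaussian smoothing filter; the function names, the kernel radius, and the demo parameters (8 cycles, σ = 2) are illustrative choices, not taken from the paper, which used a hardware filter and gratings up to the 64-cycle maximum.

```python
import numpy as np

def grating(size=128, cycles=8, orientation_deg=0.0):
    """Two-dimensional sinusoidal grating characterised, as in the text,
    by its spatial frequency (cycles per image) and orientation."""
    y, x = np.mgrid[0:size, 0:size] / size
    theta = np.deg2rad(orientation_deg)
    phase = 2 * np.pi * cycles * (x * np.cos(theta) + y * np.sin(theta))
    return 0.5 + 0.5 * np.sin(phase)          # greyscale values in [0, 1]

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian smoothing filter: a software stand-in for the
    hardware accelerator discussed above."""
    radius = int(3 * sigma)                    # truncate kernel at 3 sigma
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()                               # normalise to unit gain
    # Convolve along rows, then along columns (separability)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

img = grating(size=128, cycles=8, orientation_deg=90.0)
out = gaussian_smooth(img, sigma=2.0)
```

Smoothing attenuates the sinusoidal component, so the filtered image has lower contrast (standard deviation) than the input grating.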
Reconstructing the calibrated strain signal in the Advanced LIGO detectors
Advanced LIGO's raw detector output needs to be calibrated to compute
dimensionless strain h(t). Calibrated strain data is produced in the time
domain using both a low-latency, online procedure and a high-latency, offline
procedure. The low-latency h(t) data stream is produced in two stages, the
first of which is performed on the same computers that operate the detector's
feedback control system. This stage, referred to as the front-end calibration,
uses infinite impulse response (IIR) filtering and performs all operations at a
16384 Hz digital sampling rate. Due to several limitations, this procedure
currently introduces certain systematic errors in the calibrated strain data,
motivating the second stage of the low-latency procedure, known as the
low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses
finite impulse response (FIR) filtering to apply corrections to the output of
the front-end calibration. It applies time-dependent correction factors to the
sensing and actuation components of the calibrated strain to reduce systematic
errors. The gstlal calibration pipeline is also used in high latency to
recalibrate the data, which is necessary due mainly to online dropouts in the
calibrated data and identified improvements to the calibration models or
filters.
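The two-path structure described above can be illustrated schematically. In the sketch below, strain is assembled from sensing and actuation contributions with time-dependent scale factors applied via FIR filtering; the filter taps, kappa factors, and signal names are placeholders for illustration, not the Advanced LIGO pipeline's actual sensing and actuation models.

```python
import numpy as np

def apply_fir(x, taps):
    """Apply a finite impulse response (FIR) filter by direct convolution."""
    return np.convolve(x, taps, mode="same")

def calibrated_strain(d_err, d_ctrl, inv_sensing_taps, actuation_taps,
                      kappa_c=1.0, kappa_a=1.0):
    """Toy version of the two-path calibration described above:
    h = kappa_c * C^-1(d_err) + kappa_a * A(d_ctrl),
    where C^-1 and A stand for inverse-sensing and actuation filters.
    All taps and kappa values here are illustrative placeholders."""
    sensing = kappa_c * apply_fir(d_err, inv_sensing_taps)
    actuation = kappa_a * apply_fir(d_ctrl, actuation_taps)
    return sensing + actuation

fs = 16384                               # Hz, the front-end rate quoted above
t = np.arange(fs) / fs
d_err = np.sin(2 * np.pi * 100 * t)      # stand-in error signal
d_ctrl = np.sin(2 * np.pi * 10 * t)      # stand-in control signal
h = calibrated_strain(d_err, d_ctrl,
                      inv_sensing_taps=np.array([1.0]),
                      actuation_taps=np.array([0.5]))
```

With single-tap "filters" the paths reduce to simple scalings, which makes the combination easy to verify; real calibration filters have many taps and frequency-dependent responses.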
Practical considerations regarding results from static and dynamic load testing of bridges
Bridge tests are a helpful tool for bridge assessment and evaluation. In both static and dynamic load testing, each element of the test: the load selection and application, the creation of a numerical model to follow the progress of the test or to check the validity of its results, the measurement process itself, and the comparative analysis of experimental results and calculations, can be a source of errors in the final bridge evaluation if these errors and uncertainties are not properly considered. The article presents some of the most important factors that may introduce errors into the interpretation of the test results and their comparison with target values or values derived from a numerical model, which may ultimately lead to decisions that are neither accurate nor appropriate. The selected sources of feasible errors are presented separately for static and dynamic loading tests. The presented examples of bridge load testing show how the use of improper test methods can lead to significant errors in bridge assessment and evaluation and, consequently, to wrong decisions.
Online Store Locator: An Essential Resource for Retailers in the 21st Century
Most retailers use their websites and social media to increase their visibility, while potential customers get information about these retailers using the Internet on electronic devices. Many papers have previously studied online marketing strategies used by retailers, but little attention has been paid to how these companies provide information through the Internet about the location and characteristics of their stores. This paper aims to obtain evidence about the inclusion of interactive web maps on retailers’ websites to provide information about the location of their stores. To this end, the store locator interactive tools of the specialty retailers’ websites included in the report “Global Powers of Retailing 2015” are studied in detail using different procedures, such as frequency analysis and word clouds. From the results obtained, it was concluded that most of these firms use interactive maps to provide information about their offline stores, but some of them still use non-interactive (static) maps or plain text to present this information. Moreover, some differences were observed among the search filters used in the store locator services, according to the retailer’s specialty. These results provide insight into the important role of online store locator tools on retailers’ websites.
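The frequency analysis mentioned above can be sketched in a few lines. The filter labels below are invented sample data for illustration; the study worked with labels collected from the actual store-locator pages of the retailers in the 2015 report.

```python
from collections import Counter

# Hypothetical sample of search-filter labels gathered from store-locator
# tools (invented for illustration, not data from the study).
filter_labels = [
    "city", "postcode", "city", "opening hours", "services",
    "postcode", "city", "brand", "services", "opening hours",
]

# Frequency analysis of the kind used in the study: count how often each
# search filter appears across retailers' store-locator tools.
freq = Counter(filter_labels)
for label, count in freq.most_common():
    print(f"{label}: {count}")
```

The same counts can feed a word cloud, where label size is proportional to frequency.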
Fast recursive filters for simulating nonlinear dynamic systems
A fast and accurate computational scheme for simulating nonlinear dynamic
systems is presented. The scheme assumes that the system can be represented by
a combination of components of only two different types: first-order low-pass
filters and static nonlinearities. The parameters of these filters and
nonlinearities may depend on system variables, and the topology of the system
may be complex, including feedback. Several examples taken from neuroscience
are given: phototransduction, photopigment bleaching, and spike generation
according to the Hodgkin-Huxley equations. The scheme uses two slightly
different forms of autoregressive filters, with an implicit delay of zero for
feedforward control and an implicit delay of half a sample distance for
feedback control. On a fairly complex model of the macaque retinal horizontal
cell it computes, for a given level of accuracy, 1-2 orders of magnitude faster
than 4th-order Runge-Kutta. The computational scheme has minimal memory
requirements, and is also suited for computation on a stream processor, such as
a GPU (Graphics Processing Unit).
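The building block of the scheme above, a first-order low-pass filter realised as a one-pole autoregressive recursion, can be sketched as follows. This is the standard exponential discretisation (exact for an input held constant over each sample interval); the paper's two filter variants differ from it only in their implicit delay, which is not modelled here.

```python
import numpy as np

def lowpass_recursive(x, tau, dt, y0=0.0):
    """First-order low-pass filter with time constant tau, implemented as
    the autoregressive recursion y[n] = a*y[n-1] + (1-a)*x[n]."""
    a = np.exp(-dt / tau)          # pole of the recursion, 0 < a < 1
    y = np.empty(len(x))
    prev = y0
    for n in range(len(x)):
        prev = a * prev + (1 - a) * x[n]
        y[n] = prev
    return y

# Step response: starting from 0, the output relaxes toward 1 with
# time constant tau (here 10 samples).
dt, tau = 1e-3, 10e-3
x = np.ones(100)
y = lowpass_recursive(x, tau, dt)
```

Because the filter state is a single value per component, a complex system built from such filters and static nonlinearities needs almost no memory per time step, which is what makes the scheme stream-processor friendly.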