The application of compressive sampling to radio astronomy I: Deconvolution
Compressive sampling (CS) is a new paradigm for sampling, based on the sparsity of
signals or signal representations. It is much less restrictive than
Nyquist-Shannon sampling theory and thus explains and systematises the
widespread experience that methods such as the Högbom CLEAN can violate the
Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution
method for extended sources is introduced. This method can reconstruct both
point sources and extended sources (using the isotropic undecimated wavelet
transform as a basis function for the reconstruction step). We compare this
CS-based deconvolution method with two CLEAN-based deconvolution methods: the
Högbom CLEAN and the multiscale CLEAN. This new method shows the best
performance in deconvolving extended sources for both uniform and natural
weighting of the sampled visibilities. Both visual and numerical results of the
comparison are provided.
Comment: Published by A&A; Matlab code can be found at
http://code.google.com/p/csra/download
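The core idea of CS-based deconvolution, recovering a sparse sky model from undersampled measurements, can be illustrated with a generic sparse-recovery sketch. The iterative soft-thresholding algorithm (ISTA) below is a standard CS solver, not the paper's actual method (which additionally uses the isotropic undecimated wavelet transform for extended sources); the matrix sizes, the `ista` helper, and the toy point-source scene are illustrative assumptions:

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative soft-thresholding: recover sparse x from y ~= A @ x."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of 0.5 * ||A x - y||^2
        z = x - grad / L                   # gradient step
        # soft-threshold promotes sparsity (the l1 proximal operator)
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Toy scene: three point sources observed through a random measurement matrix,
# standing in for the undersampled visibility sampling of an interferometer.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[20, 80, 150]] = [1.0, -0.7, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, y)
```

With far fewer measurements (60) than unknowns (200), the l1 penalty still localises the three sources, which is the sub-Nyquist behaviour the abstract refers to.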
Model-Based Calibration of Filter Imperfections in the Random Demodulator for Compressive Sensing
The random demodulator is a recent compressive sensing architecture providing
efficient sub-Nyquist sampling of sparse band-limited signals. The compressive
sensing paradigm requires an accurate model of the analog front-end to enable
correct signal reconstruction in the digital domain. In practice, hardware
devices such as filters deviate from their desired design behavior due to
component variations. Existing reconstruction algorithms are sensitive to such
deviations, which fall into the more general category of measurement matrix
perturbations. This paper proposes a model-based technique that aims to
calibrate filter model mismatches to facilitate improved signal reconstruction
quality. The mismatch is considered to be an additive error in the discretized
impulse response. We identify the error by sampling a known calibrating signal,
enabling least-squares estimation of the impulse response error. The error
estimate and the known system model are used to calibrate the measurement
matrix. Numerical analysis demonstrates the effectiveness of the calibration
method even for highly deviating low-pass filter responses. The proposed method
performance is also compared to a state-of-the-art method based on discrete
Fourier transform trigonometric interpolation.
Comment: 10 pages, 8 figures, submitted to IEEE Transactions on Signal Processing
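The calibration step the abstract describes, identifying an additive impulse-response error by least squares from a known calibration signal, can be sketched as follows. This is an illustrative simplification (noiseless, discrete-time convolution model); the helper names, filter taps, and signal lengths are assumptions, not the paper's setup:

```python
import numpy as np

def conv_matrix(x, n):
    """Full-convolution matrix X such that X @ h == np.convolve(x, h) for len(h) == n."""
    m = len(x) + n - 1
    X = np.zeros((m, n))
    for k in range(n):
        X[k:k + len(x), k] = x
    return X

def estimate_impulse_error(x_cal, y, h_nominal):
    """Least-squares estimate of the additive impulse-response error e from the
    response y of the real filter to a known calibration signal x_cal:
        y = conv(x_cal, h_nominal + e)  =>  y - X @ h_nominal = X @ e
    """
    X = conv_matrix(x_cal, len(h_nominal))
    e_hat, *_ = np.linalg.lstsq(X, y - X @ h_nominal, rcond=None)
    return e_hat

# Toy demo: a nominal low-pass FIR filter whose taps deviate slightly
# (standing in for component variations in the analog front-end).
rng = np.random.default_rng(1)
h_nominal = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
e_true = 0.02 * rng.standard_normal(5)       # unknown deviation
x_cal = rng.standard_normal(50)              # known calibration signal
y = np.convolve(x_cal, h_nominal + e_true)   # measured response
e_hat = estimate_impulse_error(x_cal, y, h_nominal)
h_calibrated = h_nominal + e_hat             # used to rebuild the measurement matrix
```

The error estimate then calibrates the measurement matrix used for reconstruction, as in the paper's pipeline.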
A framework to experiment optimizations for real-time and embedded software
Typical constraints on embedded systems include code size limits, upper
bounds on energy consumption and hard or soft deadlines. To meet these
requirements, it may be necessary to improve the software by applying various
kinds of transformations like compiler optimizations, specific mapping of code
and data in the available memories, code compression, etc. However, a
transformation that aims at improving the software with respect to a given
criterion might engender side effects on other criteria and these effects must
be carefully analyzed. For this purpose, we have developed a common framework
that makes it possible to experiment with various code transformations and to
evaluate their impact on various criteria. This work has been carried out
within the French ANR MORE project.
Comment: International Conference on Embedded Real Time Software and Systems
(ERTS2), Toulouse, France (2010)
Performance assessment methods for boilers and heat pump systems in residential buildings
According to the European Commission, the buildings sector accounts for 40% of total energy use, corresponding to 36% of CO2 emissions in the European Union alone. HVAC systems are currently the major energy users in the building sector, so there is a need to assess the performance of different energy and comfort systems in buildings. In particular, mitigating the gap between calculated and real energy use in dwellings is of great importance. In Flanders, the Energy Performance and indoor climate regulation (EPB) dates back to 2006. Since the building context related to energy demand has changed significantly over the past years, it is indispensable to investigate how the building energy assessment framework in the Flemish EPB regulation should evolve to address these issues. In 2017, the new EN52000 series of standards was published, containing extensive methods for assessing the overall energy performance of buildings.
The main focus of this article is to analyze the assessment methods for the energy performance of boilers and heat pumps in residential applications by comparing the methodologies stated in the Flemish Energy Performance and indoor climate regulation (EPB), the EcoDesign regulations, and the EN52000 standard series. The aim of future research is to determine the parameters that most influence performance and, in a next step, to compare the predicted performance with real energy use.
Wireless sensor network as a distributed database
Wireless sensor networks (WSNs) have played a role in various fields. In-network data processing is one of the most important and challenging techniques, as it affects the key features of WSNs: energy consumption, node life cycles and network performance. In in-network processing, an intermediate node or aggregator fuses or aggregates sensor data collected from a group of sensors before transferring them to the base station. The advantage of this approach is that it minimizes the amount of information transferred, which matters given the nodes' limited computational and energy resources.
This thesis introduces a hybrid in-network data processing scheme for WSNs that satisfies these constraints. An architecture for in-network data processing is proposed at three levels: clustering, data compression and data mining. At the clustering level, the Neighbour-aware Multipath Cluster Aggregation (NMCA) scheme combines cluster-based and multipath approaches to cope with different packet loss rates. At the compression level, data compression schemes and an Optimal Dynamic Huffman (ODH) algorithm compress data at the cluster head. At the data mining level, a semantic data-mining model for fire detection extracts information from the raw data, improving data accuracy and detecting fire events in simulation. A demonstration indoor location system using the in-network data processing approach is built to test the energy reduction achieved by the designed strategy. In conclusion, the added benefits that this technical work can provide for in-network data processing are discussed, and specific contributions and future work are highlighted.
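The aggregator role described above, a cluster head fusing a group of readings into one small summary before transmission, can be sketched minimally. The summary fields, node IDs, and temperature values below are illustrative assumptions, not the thesis's NMCA protocol:

```python
from statistics import mean

def aggregate(readings):
    """Cluster-head aggregation: fuse a group of (node_id, value) sensor
    readings into one small summary, so a single packet travels to the
    base station instead of one packet per sensor."""
    values = [v for _, v in readings]
    return {
        "count": len(values),          # how many nodes reported
        "min": min(values),
        "max": max(values),
        "mean": round(mean(values), 2)
    }

# Hypothetical temperature readings (node_id, celsius) from one cluster;
# the outlier on n4 would survive aggregation via the max field.
cluster = [("n1", 21.4), ("n2", 22.0), ("n3", 21.7), ("n4", 35.9)]
summary = aggregate(cluster)
```

A real cluster head would also attach sequence numbers and handle lost packets (the multipath aspect of NMCA), but the energy argument is already visible: one summary replaces four raw transmissions.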