Computer vision techniques for forest fire perception
This paper presents computer vision techniques for forest fire perception involving measurement of forest fire properties (fire front, flame height, flame inclination angle, fire base width) required for the implementation of advanced forest fire-fighting strategies. The system computes a 3D perception model of the fire and could also be used for visualizing the fire evolution in remote computer systems. The presented system integrates the processing of images from visual and infrared cameras. It applies sensor fusion techniques that also involve telemetry sensors and GPS. The paper also includes some results of forest fire experiments. Funding: European Commission EVG1-CT-2001-00043; European Commission IST-2001-34304; Ministerio de Educación y Ciencia DPI2005-0229.
Mobile graphics: SIGGRAPH Asia 2017 course
Peer reviewed. Postprint (published version).
Synthesis of Multiresolution Scenes with Global Illumination on a GPU
[Abstract] The radiosity computation has the important feature of producing view-independent results, but these results are mesh dependent and, in consequence, are attached to a specific level of detail in the input mesh. Therefore, rendering at interactive frame rates would benefit from the utilization of multiresolution models. In this paper we focus on the rendering stage of a solution for hierarchical radiosity for multiresolution systems. This method is based on the application of an enriched hierarchical radiosity algorithm to an input scene with low-resolution objects (represented by coarse meshes), and the efficient data management of the resulting values. The proposed encoding makes it possible to apply the color values obtained for the coarse objects to detailed versions of these objects during the rendering phase. These finer meshes are obtained by a standard mesh subdivision strategy, such as the Loop subdivision scheme. Our solution performs the whole rendering stage of this multiresolution approach on the GPU, implementing it in the geometry shader using Microsoft HLSL. Results of our implementation show an important reduction in computational costs.
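The color-carrying idea described above — computing radiosity values on a coarse mesh and applying them to finer versions of the same objects — can be sketched on the CPU. The toy below uses simple midpoint (1-to-4) triangle subdivision rather than the Loop scheme, which would additionally reposition vertices with weighted masks; all names are illustrative and nothing here reflects the paper's HLSL geometry-shader implementation.

```python
import numpy as np

def subdivide_with_colors(V, C, F):
    """One 1-to-4 midpoint subdivision step that interpolates per-vertex
    colors (e.g. radiosity values computed on the coarse mesh) to the new
    vertices. Loop subdivision would additionally smooth positions with
    weighted masks; the color-propagation idea is the same."""
    V = np.asarray(V, dtype=float)
    C = np.asarray(C, dtype=float)
    new_V, new_C = list(V), list(C)
    edge_mid = {}  # edge (a, b) -> index of its midpoint vertex

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in edge_mid:
            new_V.append((V[a] + V[b]) / 2)   # new position on the edge
            new_C.append((C[a] + C[b]) / 2)   # interpolated coarse color
            edge_mid[key] = len(new_V) - 1
        return edge_mid[key]

    new_F = []
    for a, b, c in F:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_F += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(new_V), np.array(new_C), np.array(new_F)

# One coarse triangle with red/green/blue vertex colors becomes 4 triangles
# and 6 vertices; each edge midpoint receives the averaged color.
V = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
C = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
F = [(0, 1, 2)]
fine_V, fine_C, fine_F = subdivide_with_colors(V, C, F)
```

Shared edges are subdivided only once thanks to the `edge_mid` cache, so colors remain continuous across adjacent triangles.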
A Hybrid Global Minimization Scheme for Accurate Source Localization in Sensor Networks
We consider the localization problem of multiple wideband sources in a multi-path environment, coherently taking into account both the attenuation characteristics and the time delays in the reception of the signal. Our proposed method accommodates the unavailability of an accurate signal attenuation model by treating the model as an unknown function with reasonable prior assumptions about its functional space. Such an approach can enhance localization performance compared to using only the signal attenuation information or only the time delays. In this paper, the localization problem is modeled as a cost function in terms of the source locations, the attenuation model parameters, and the multi-path parameters. To perform the minimization globally, we propose a hybrid algorithm combining the differential evolution algorithm with the Levenberg-Marquardt algorithm. Besides the proposed combination of optimization schemes, supporting technical details, such as closed forms of the cost function sensitivity matrices, are provided. Finally, the validity of the proposed method is examined in several localization scenarios that take into account noise in the environment, the multi-path phenomenon, and unsynchronized sensors.
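The two-stage idea — a population-based global search handing its best candidate to a derivative-based local refiner — can be sketched with SciPy on a toy single-source, range-based cost. The paper's actual cost function involves attenuation-model and multi-path parameters; everything below (sensor layout, noise level, bounds) is purely illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

# Toy stand-in for the paper's cost: localize one source at unknown (x, y)
# from noisy range measurements at four fixed sensors.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_src = np.array([3.0, 7.0])
rng = np.random.default_rng(0)
ranges = np.linalg.norm(sensors - true_src, axis=1) + rng.normal(0, 0.01, 4)

def residuals(p):
    # Difference between predicted and measured sensor-to-source distances.
    return np.linalg.norm(sensors - p, axis=1) - ranges

def cost(p):
    return np.sum(residuals(p) ** 2)

# Stage 1: differential evolution explores the whole search area, avoiding
# local minima that a purely gradient-based start could fall into.
de = differential_evolution(cost, bounds=[(0, 10), (0, 10)],
                            seed=1, polish=False)

# Stage 2: Levenberg-Marquardt refines the global candidate to high accuracy.
lm = least_squares(residuals, de.x, method="lm")
```

`lm.x` lands close to the true source position; in the paper's setting the Levenberg-Marquardt stage would additionally exploit the closed-form sensitivity matrices via an analytic Jacobian.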
Assessment of 3D mesh watermarking techniques
With the increasing use of three-dimensional meshes in Computer-Aided Design (CAD), medical imaging, and entertainment fields such as virtual reality, authentication problems and awareness of intellectual property protection have risen over the last decade. Numerous watermarking schemes have been suggested to protect ownership and prevent the threat of data piracy. This paper begins with the potential difficulties that arise when dealing with three-dimensional entities in comparison to two-dimensional entities, and also lists the algorithms suggested hitherto along with a comprehensive analysis. Attacks also play a crucial role in the choice of a watermarking algorithm, so an attack-based analysis is presented to assess the resilience of watermarking algorithms under several attacks. Finally, some evaluation measures and potential solutions for designing robust and oblivious watermarking schemes in the future are discussed.
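As a concrete baseline of the kind of scheme such surveys assess, a minimal additive spread-spectrum watermark on vertex radii with a blind correlation detector can be sketched. This is a generic illustration, not any specific algorithm from the paper; the key, the embedding strength `alpha` (exaggerated here for clarity), and the detection threshold are all arbitrary choices.

```python
import numpy as np

def embed(V, key, alpha=0.1):
    # Pseudo-random +/-1 sequence derived from the secret key.
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=len(V))
    c = V.mean(axis=0)
    d = V - c
    # Modulate each vertex's distance from the centroid by +/- alpha.
    return c + d * (1.0 + alpha * w)[:, None]

def detect(V, key):
    # Blind detector: correlate vertex radii with the key's sequence;
    # the original (unmarked) mesh is not needed.
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=len(V))
    r = np.linalg.norm(V - V.mean(axis=0), axis=1)
    return float(np.corrcoef(w, r)[0, 1])

# Random "mesh" vertices: the correct key yields a clearly positive
# correlation, a wrong key yields correlation near zero.
V = np.random.default_rng(0).random((2000, 3))
marked = embed(V, key=42)
```

A radius-based carrier of this sort survives rotation and translation by construction (both leave centroid distances unchanged), which is one reason attack-based analyses of the kind the paper presents focus on noise addition, smoothing, simplification, and cropping instead.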
The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch
Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning, and have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex datasets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other. Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".
Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering
This study introduces a new method for detecting and sorting spikes from multiunit recordings. The method combines the wavelet transform, which localizes distinctive spike features, with superparamagnetic clustering, which allows automatic classification of the data without assumptions such as low variance or Gaussian distributions. Moreover, an improved method for setting amplitude thresholds for spike detection is proposed. We describe several criteria for implementation that render the algorithm unsupervised and fast. The algorithm is compared to other conventional methods using several simulated data sets whose characteristics closely resemble those of in vivo recordings. For these data sets, we found that the proposed algorithm outperformed conventional methods.
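The improved amplitude threshold alluded to above can be illustrated with the median-based noise estimate associated with this line of work: the constant 0.6745 is the 75th percentile of the standard normal distribution, so the median of the absolute signal is a spike-robust estimator of the noise standard deviation. A minimal sketch (variable names and the demo signal are illustrative):

```python
import numpy as np

def detection_threshold(x, k=4.0):
    """Robust amplitude threshold for spike detection.
    sigma_n = median(|x|) / 0.6745 estimates the noise standard deviation;
    unlike the raw s.d. of the signal, the median is largely unaffected by
    the (sparse, high-amplitude) spikes themselves."""
    sigma_n = np.median(np.abs(x)) / 0.6745
    return k * sigma_n

# On pure unit-variance Gaussian noise the estimate recovers the true
# standard deviation, so the threshold comes out close to k * 1.0.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 100_000)
thr = detection_threshold(noise)
```

On a real recording, a conventional `k * x.std()` threshold would be inflated by the spikes; the median-based estimate is what keeps the detector usable without supervision.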
Recent Advances in Signal Processing
Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily at students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand; these five categories address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.