A software system for laboratory experiments in image processing
Laboratory experiments for image processing courses are usually software implementations of processing algorithms, but students of image processing come from diverse backgrounds with widely differing software experience. To avoid learning overhead, the software system should be easy to learn and use, even for those with no exposure to mathematical programming languages or object-oriented programming. The Class Library for Image Processing (CLIP) supports users with knowledge of C by providing three C++ types with small public interfaces, including natural and efficient operator overloading. CLIP programs are compact and fast. Experience in using the system in undergraduate and graduate teaching indicates that it supports subject-matter learning with little distraction from language/system learning.
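The "natural and efficient operator overloading" the abstract mentions can be illustrated with a short sketch. The `Image` class below is hypothetical and only mirrors the idea of a small public interface with arithmetic operators; the real CLIP types are C++ and are not reproduced here.

```python
import numpy as np

class Image:
    """Minimal image wrapper with arithmetic operators, in the spirit of
    a small public interface (hypothetical sketch, not the CLIP API)."""

    def __init__(self, pixels):
        self.pixels = np.asarray(pixels, dtype=np.float64)

    def __add__(self, other):
        # Accept either another Image or a plain scalar/array.
        other = other.pixels if isinstance(other, Image) else other
        return Image(self.pixels + other)

    def __mul__(self, scalar):
        return Image(self.pixels * scalar)

# With operators overloaded, blending two images reads like the math itself:
a = Image([[10, 20], [30, 40]])
b = Image([[100, 100], [100, 100]])
blended = (a * 0.5) + (b * 0.5)   # pixel-wise average
print(blended.pixels)             # [[55. 60.] [65. 70.]]
```

The design point is that a student who knows only C-style arithmetic can write `(a * 0.5) + (b * 0.5)` without learning a framework-specific call syntax.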
Determining the relative position of vehicles considering bidirectional traffic scenarios in VANETs
Researchers in both academia and industry have shown strong interest in developing and improving critical ITS solutions. In some of the existing solutions, especially the ones that aim at providing context-aware services, knowledge of the relative position of one node with respect to other nodes becomes crucial. In this paper we explore, apart from the conventional use of GPS data, the applicability of image processing to aid in determining the relative positions of nodes in a vehicular network. Our experiments show that both the use of location information and image processing work well and can be deployed depending on the requirements of the application; the results that used location information were affected by GPS errors, while the use of image processing, although producing more accurate results, requires significantly more processing power.
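As a hedged illustration of the GPS-based side of this comparison, the helper below computes an east/north offset between two vehicles from their coordinates. The function name and the local equirectangular projection are assumptions for the sketch, not the paper's actual method.

```python
import math

def relative_position(lat1, lon1, lat2, lon2):
    """Approximate east/north offset (metres) of vehicle 2 relative to
    vehicle 1, using a local equirectangular projection (illustrative
    sketch; accurate only over short inter-vehicle distances)."""
    R = 6371000.0  # mean Earth radius in metres
    lat1r, lat2r = math.radians(lat1), math.radians(lat2)
    east = R * math.radians(lon2 - lon1) * math.cos((lat1r + lat2r) / 2)
    north = R * math.radians(lat2 - lat1)
    return east, north

# Two vehicles about 111 m apart along a meridian:
e, n = relative_position(48.0000, 11.0000, 48.0010, 11.0000)
print(round(n))  # ~111 m north, 0 m east
```

Note that the abstract's observation about GPS error applies directly here: metre-level receiver error in either coordinate pair propagates into the computed offset, which is why the image-processing alternative can be more accurate at higher computational cost.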
Multi-modal Image Processing based on Coupled Dictionary Learning
In real-world scenarios, many data processing problems involve heterogeneous images associated with different imaging modalities. Since these multimodal images originate from the same phenomenon, it is realistic to assume that they share common attributes or characteristics. In this paper, we propose a multi-modal image processing framework based on coupled dictionary learning to capture similarities and disparities between different image modalities. In particular, our framework can capture favorable structural similarities across different image modalities, such as edges, corners, and other elementary primitives, in a learned sparse transform domain instead of the original pixel domain, which can be used to improve a number of image processing tasks such as denoising, inpainting, or super-resolution. Practical experiments demonstrate that incorporating multimodal information using our framework brings notable benefits.
Comment: SPAWC 2018, 19th IEEE International Workshop on Signal Processing Advances in Wireless Communication
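The core coupling idea, two modality-specific dictionaries forced to share one set of codes, can be sketched on synthetic data. This toy uses plain alternating least squares on exactly factorizable data; the paper's actual algorithm, including its sparsity constraints, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: two "modalities" generated from the same latent codes,
# mimicking multimodal images of one phenomenon (synthetic illustration).
n_atoms, n_samples = 4, 200
A_true = rng.standard_normal((n_atoms, n_samples))
D1_true = rng.standard_normal((8, n_atoms))   # modality-1 dictionary
D2_true = rng.standard_normal((6, n_atoms))   # modality-2 dictionary
X1, X2 = D1_true @ A_true, D2_true @ A_true

# Coupled learning, simplest form: alternate between solving for shared
# codes A (given both dictionaries stacked) and for each dictionary
# (given A), so both modalities are tied to one representation.
D1 = rng.standard_normal((8, n_atoms))
D2 = rng.standard_normal((6, n_atoms))
for _ in range(50):
    D = np.vstack([D1, D2])                    # stacked coupled dictionary
    X = np.vstack([X1, X2])
    A = np.linalg.lstsq(D, X, rcond=None)[0]   # shared codes for both modalities
    D1 = X1 @ np.linalg.pinv(A)                # update each dictionary separately
    D2 = X2 @ np.linalg.pinv(A)

err = np.linalg.norm(np.vstack([D1 @ A, D2 @ A]) - X) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.2e}")
```

Because the shared codes `A` must explain both modalities at once, structure common to the pair (edges, corners in the real-image setting) ends up encoded in `A`, while modality-specific appearance lives in `D1` and `D2`.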
A Research and Strategy of Remote Sensing Image Denoising Algorithms
Most raw data downloaded from satellites is useless, resulting in transmission waste. One solution is to process the data directly on satellites and then transmit only the processed results to the ground. Image processing is the main form of data processing on satellites; in this paper, we focus on image denoising, a basic image processing task. There are many high-performance denoising approaches at present; however, most of them rely on advanced computing resources or rich image collections on the ground. Considering the limited computing resources of satellites and the characteristics of remote sensing images, we study these high-performance ground image denoising approaches and compare them in simulation experiments to analyze whether they are suitable for satellites. According to the analysis results, we propose two feasible image denoising strategies for satellites based on the satellite TianZhi-1.
Comment: 9 pages, 4 figures, ICNC-FSKD 201
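The compute constraint the abstract describes is why classical low-cost filters remain relevant on-board. The sketch below is an illustrative baseline of that kind, a plain 3x3 median filter; it is not one of the paper's evaluated approaches or proposed strategies.

```python
import numpy as np

def median_denoise(img, k=3):
    """k x k median filter: a low-cost denoiser of the kind one might run
    on-board a satellite with limited compute (illustrative sketch only)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")   # replicate borders
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Impulse ("salt") noise on a flat patch is removed entirely by the median:
clean = np.full((5, 5), 10.0)
noisy = clean.copy()
noisy[2, 2] = 255.0   # single corrupted pixel
print(np.allclose(median_denoise(noisy), clean))  # True
```

Filters like this need no training data and constant memory per pixel, which is exactly the trade-off against ground-side learned denoisers that the paper's simulation comparison weighs.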
Physical-based optimization for non-physical image dehazing methods
Images captured under hazy conditions (e.g., fog, air pollution) usually present faded colors and loss of contrast. To improve their visibility, a process called image dehazing can be applied. Some of the most successful image dehazing algorithms are based on image processing methods but do not follow any physical image formation model, which limits their performance. In this paper, we propose a post-processing technique to alleviate this limitation by enforcing the original method to be consistent with a popular physical model for image formation under haze. Our results improve upon those of the original methods qualitatively and according to several metrics, and they have also been validated via psychophysical experiments. These results are particularly striking in terms of avoiding over-saturation and reducing color artifacts, which are the most common shortcomings faced by image dehazing methods.
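The "popular physical model" referenced here is, in standard dehazing work, the atmospheric scattering model I = J*t + A*(1 - t), where I is the observed image, J the scene radiance, t the transmission, and A the airlight. A minimal sketch of inverting it, assuming t and A are already estimated by some dehazing method:

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert the haze formation model I = J*t + A*(1 - t) to recover the
    scene radiance J (minimal sketch; transmission t and airlight A are
    assumed given, e.g. estimated by any upstream dehazing method)."""
    t = np.maximum(t, t_min)   # avoid division blow-up in dense haze
    return (I - A) / t + A

# Hazing then dehazing a toy "image" recovers it exactly when t, A are known:
J = np.array([[0.2, 0.8], [0.5, 0.1]])
t = np.array([[0.6, 0.9], [0.7, 0.5]])
A = 1.0
I = J * t + A * (1 - t)        # synthesize haze with the forward model
print(np.allclose(dehaze(I, t, A), J))  # True
```

Enforcing consistency with this model, as the paper proposes, amounts to requiring that the dehazed output and some plausible (t, A) pair reproduce the hazy input through the forward equation.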
Documentation of procedures for textural/spatial pattern recognition techniques
A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on these data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing of a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatial post-processing of the classified image using combined spectral and textural features produces a resulting image with less error than post-processing a classified image using only spectral features, and (3) classification without spatial post-processing using the combined spectral-textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.
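Spatial post-processing of a classified image is commonly realized as a neighbourhood majority vote over the per-pixel labels. The sketch below illustrates that general idea and is an assumption for illustration, not the report's documented procedure.

```python
import numpy as np

def majority_postprocess(labels, k=3):
    """Replace each pixel's class label with the majority label in its
    k x k neighbourhood: the kind of spatial post-processing credited
    with cutting classification error (illustrative sketch)."""
    pad = k // 2
    padded = np.pad(labels, pad, mode="edge")
    out = np.empty_like(labels)
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]   # most frequent label wins
    return out

# An isolated mislabelled pixel inside a uniform timber class is corrected:
classified = np.zeros((5, 5), dtype=int)
classified[2, 2] = 1                              # lone misclassification
print(majority_postprocess(classified).sum())     # 0
```

The intuition matches result (1) above: isolated per-pixel errors are unlikely to dominate their neighbourhood, so a spatial vote removes them while leaving large homogeneous timber regions unchanged.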