Remote sensing of tidal networks and their relation to vegetation
The study of the morphology of tidal networks and their relation to salt marsh vegetation is currently an active area of research, and a number of theories have been developed that require validation using extensive observations. Conventional methods of measuring networks and associated vegetation can be cumbersome and subjective. Recent advances in remote sensing mean that it can now often reduce measurement effort while at the same time increasing measurement scale. The status of remote sensing of tidal networks and their relation to vegetation is reviewed. The measurement of network planforms and their associated variables is possible at sufficient resolution using digital aerial photography and airborne scanning laser altimetry (LiDAR), with LiDAR also being able to measure channel depths. A multi-level knowledge-based technique is described for extracting networks from LiDAR in a semi-automated fashion. This allows objective and detailed geomorphological information on networks to be obtained over large areas of the inter-tidal zone. It is illustrated using LiDAR data of the River Ems, Germany, the Venice lagoon, and Carnforth Marsh, Morecambe Bay, UK. Examples of geomorphological variables of networks extracted from LiDAR data are given. Associated marsh vegetation can be classified into its component species using airborne hyperspectral and satellite multispectral data. Other potential applications of remote sensing for network studies include determining spatial relationships between networks and vegetation, measuring marsh platform vegetation roughness, in-channel velocities and sediment processes, studying salt pans, and supporting marsh restoration schemes.
An Adaptive Threshold for the Canny Edge Detection with Actor-Critic Algorithm
Visual surveillance aims to perform robust foreground object detection regardless of time and place. Object detection achieves good results using only spatial information, but foreground object detection in visual surveillance requires proper processing of both temporal and spatial information. Deep learning-based foreground object detection algorithms outperform classical background subtraction (BGS) algorithms in environments similar to their training data, but perform worse than classical BGS algorithms in environments that differ from training. This paper proposes a spatio-temporal fusion network (STFN) that extracts temporal and spatial information using a temporal network and a spatial network. We suggest a method using a semi-foreground map for stable training of the proposed STFN. The proposed algorithm shows excellent performance in environments different from training, which we demonstrate through experiments on various public datasets. STFN can also generate a compliant background image in a semi-supervised manner, and it can operate in real time on a desktop with a GPU. The proposed method achieves 11.28% and 18.33% higher FM than the latest deep learning method on the LASIESTA and SBI datasets, respectively.
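The FM score quoted above is the F-measure commonly used to evaluate foreground masks: the harmonic mean of precision and recall over the detected foreground pixels. A minimal sketch of that computation (the counts are hypothetical, not taken from the paper):

```python
def f_measure(tp, fp, fn):
    """F-measure: harmonic mean of precision and recall for a binary mask.

    tp/fp/fn are counts of true-positive, false-positive, and
    false-negative foreground pixels (illustrative values below).
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 80 correct foreground pixels, 10 false alarms, 20 misses -> FM ~0.842
print(round(f_measure(80, 10, 20), 4))
```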
Biomimetic Design for Efficient Robotic Performance in Dynamic Aquatic Environments - Survey
This manuscript is a review of the published articles on edge detection. It first provides theoretical background and then reviews a wide range of edge detection methods in different categories. The review also studies the relationships between categories and presents evaluations regarding their application, performance, and implementation. Structurally, edge detection methods are a combination of image smoothing and image differentiation, plus post-processing for edge labelling. Image smoothing involves filters that reduce noise, regularize the numerical computation, and provide a parametric representation of the image that works as a mathematical microscope to analyse it at different scales and increase the accuracy and reliability of edge detection. Image differentiation provides information on intensity transitions in the image, which is necessary to represent the position and strength of the edges and their orientation. Edge labelling calls for post-processing to suppress false edges, link dispersed ones, and produce uniform object contours.
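The three-stage structure identified in the review (smoothing, differentiation, labelling) can be sketched end to end. This is a generic illustration, not any specific method from the surveyed literature; the box filter, Sobel kernels, and threshold value are illustrative choices:

```python
def convolve(img, kernel):
    """3x3 convolution over a 2D list image, with replicate (clamped) padding."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx] * kernel[dy + 1][dx + 1]
            out[y][x] = acc
    return out

def edge_map(img, thresh=1.0):
    # 1. smoothing: 3x3 box filter to suppress noise
    box = [[1 / 9.0] * 3 for _ in range(3)]
    smooth = convolve(img, box)
    # 2. differentiation: Sobel gradients in x and y
    sx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    sy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    gx, gy = convolve(smooth, sx), convolve(smooth, sy)
    # 3. labelling: threshold the gradient magnitude
    return [[1 if (gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5 > thresh else 0
             for x in range(len(img[0]))] for y in range(len(img))]

# A vertical step edge: each output row is [0, 1, 1, 1, 1, 0].
img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
for row in edge_map(img):
    print(row)
```

Real detectors replace the box filter with a Gaussian and the plain threshold with non-maximum suppression and hysteresis, but the stage structure is the same.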
Enhanced Non-Linear Filtering Scheme for Edge Detection
Edge detection plays an important role in image processing. Edge detectors have always involved a compromise between information and noise: since edge detection is a derivative operation, it tends to amplify noise, which means that increasing the amount of information may increase the noise as well. There is a variety of edge detectors, or operators, with different kernel sizes. In general, many established edge detectors focus on the gradient of a grayscale image to detect edges. This paper proposes an improved edge detection algorithm that considers two edge features: gradient and length. In the proposed algorithm, the threshold value of the gradient is set to a value similar to the default used in other existing edge detectors, while the edge-length feature is used to increase the robustness of the algorithm to noise. The proposed algorithm was validated on synthetic and natural images with three types of noise: additive, multiplicative, and impulsive. Results were compared with other established edge detectors, and the proposed algorithm demonstrated its superiority in handling edges in low-contrast regions and its lower sensitivity to noise.
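The length criterion described above can be illustrated as a post-processing pass over a thresholded gradient map: keep only connected edge segments that exceed a minimum pixel count, discarding short fragments that are likely noise. This is a sketch of the general idea, not the paper's implementation; the connectivity rule and `min_len` value are assumptions:

```python
from collections import deque

def filter_short_edges(edge, min_len=3):
    """Keep only 8-connected edge components of at least min_len pixels.

    `edge` is a 2D list of 0/1 values from a gradient threshold;
    short components are treated as noise and removed.
    """
    h, w = len(edge), len(edge[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if edge[y][x] and not seen[y][x]:
                # Breadth-first search to collect one connected component.
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and edge[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                q.append((ny, nx))
                if len(comp) >= min_len:
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out

# The 4-pixel vertical edge survives; the two isolated noise pixels do not.
edge = [[0, 1, 0, 0, 1],
        [0, 1, 0, 0, 0],
        [0, 1, 0, 1, 0],
        [0, 1, 0, 0, 0]]
print(filter_short_edges(edge))
```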
Real-Time Edge Detection using Sundance Video and Image Processing System
Edge detection is one of the most important concerns in digital image and video processing. With developments in technology, edge detection has benefited greatly and new avenues for research have opened up, one such field being real-time video and image processing. The work consists of the implementation of various image processing algorithms for edge detection, such as the Sobel, Prewitt, Canny, and Laplacian operators, and a different technique is reported to increase the performance of edge detection. Algorithmic computations in real time can have a high level of time complexity, and hence the use of the Sundance video and image processing system is proposed here for the implementation of such algorithms. The module is based on the Sundance SMT339 processor, a dedicated high-speed image processing module for use in a wide range of image analysis systems, which combines a DSP and an FPGA. The image processing engine is based upon the Texas Instruments TMS320DM642 video digital signal processor, and a powerful Virtex-4 FPGA (XC4VFX60-10) is used onboard as the FPGA processing unit for image data. It is observed that techniques which follow the staged process of detecting noise and then filtering the noisy pixels achieve better performance than others. In this thesis, such schemes for the Sobel, Prewitt, Canny, and Laplacian detectors are proposed.
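The operators named above differ mainly in their convolution kernels. For reference, the standard 3x3 forms (x-direction kernels shown for Sobel and Prewitt), applied to a single patch containing a vertical step edge:

```python
# Standard 3x3 kernels behind the named operators.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def respond(patch, kernel):
    """Correlation of a 3x3 patch with a 3x3 kernel (one output value)."""
    return sum(patch[i][j] * kernel[i][j] for i in range(3) for j in range(3))

# A vertical step edge: Sobel and Prewitt give strong first-derivative
# responses; the Laplacian gives a smaller second-derivative response.
step = [[0, 0, 9], [0, 0, 9], [0, 0, 9]]
print(respond(step, SOBEL_X), respond(step, PREWITT_X), respond(step, LAPLACIAN))
# prints: 36 27 9
```

The Canny detector uses the same Sobel gradients internally but adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding around them.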
Adaptive detection and tracking using multimodal information
This thesis describes work on fusing data from multiple sources of information and focuses on two main areas: adaptive detection and adaptive object tracking in automated vision scenarios. The work on adaptive object detection explores a new paradigm in dynamic parameter selection by selecting thresholds for object detection that maximise agreement between pairs of sources. Object tracking, a complementary technique to object detection, is also explored in a multi-source context, and an efficient framework for robust tracking, termed the Spatiogram Bank tracker, is proposed as a means to overcome the difficulties of traditional histogram tracking. As well as performing theoretical analysis of the proposed methods, specific example applications are given for both the detection and the tracking aspects, using thermal infrared and visible spectrum video data, as well as other multi-modal information sources.
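The threshold-selection idea described above can be sketched as a search over candidate thresholds, scoring each by how well the binary detections from two sources agree. The thesis's exact agreement criterion is not given here, so this sketch uses intersection-over-union as an illustrative score, and all response values are hypothetical:

```python
def best_threshold(src_a, src_b, thresholds):
    """Pick the threshold whose binary detections from two sources agree most.

    Agreement is measured as intersection-over-union of the two
    thresholded detection maps (an illustrative choice of criterion).
    """
    def iou(t):
        a = [v > t for v in src_a]
        b = [v > t for v in src_b]
        inter = sum(x and y for x, y in zip(a, b))
        union = sum(x or y for x, y in zip(a, b))
        return inter / union if union else 0.0
    return max(thresholds, key=iou)

# Visible and thermal detector responses for the same pixels (hypothetical).
visible = [0.9, 0.8, 0.2, 0.1, 0.7]
thermal = [0.8, 0.9, 0.6, 0.1, 0.6]
print(best_threshold(visible, thermal, [0.3, 0.5, 0.7]))
# prints: 0.7 (both maps reduce to the same two detections there)
```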
Intelligent laser scanning for computer-aided manufacture
Reverse engineering requires the acquisition of large amounts of data describing the surface of an object, sufficient to replicate that object accurately using appropriate fabrication techniques. This is important within a wide range of commercial and scientific fields where CAD models may be unavailable for parts that must be duplicated or modified, or where a physical model is used as a prototype. The three-dimensional digitisation of objects is an essential first step in reverse engineering. Optical triangulation laser sensors are among the most popular and common non-contact methods used in the data acquisition process today. They provide the means for high-resolution scanning of complex objects, although multiple scans are usually required to capture the full 3D profile of an object. A number of factors, including scan resolution, system optics, and the precision of the mechanical parts comprising the system, may affect the accuracy of the process. A single-perspective optical triangulation sensor provides an inexpensive method for the acquisition of 3D range image data.
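The triangulation principle behind such sensors reduces, in the simplest 2D case, to a known baseline between the laser emitter and the camera plus the two viewing angles to the illuminated spot. A minimal sketch of that geometry (the specific numbers are illustrative, not from any particular sensor):

```python
import math

def triangulated_height(baseline, alpha_deg, beta_deg):
    """Perpendicular distance from the baseline to the laser spot.

    alpha_deg and beta_deg are the angles to the spot measured at the
    two ends of the baseline (classic 2D triangulation geometry).
    """
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    return baseline * math.sin(a) * math.sin(b) / math.sin(a + b)

# 100 mm baseline, 60-degree angles at both ends: an equilateral
# triangle, so the spot sits ~86.6 mm from the baseline.
print(round(triangulated_height(100.0, 60.0, 60.0), 1))
# prints: 86.6
```

Real sensors recover the angles from the laser spot's position on the image sensor, so calibration of the optics and baseline directly limits the achievable accuracy, as noted above.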