6 research outputs found

    Automatic Detection and Quantification of Bluff Erosion Events in Single Image Series

    Many communities along coastlines and riverbanks are threatened by water erosion, so an accurate model for predicting erosion events is needed in order to plan mitigation strategies. Such models must rely on readily available meteorological data that may or may not be correlated with the occurrence of erosion events. To study these potential correlations accurately, researchers need a quantified time-series index indicating the occurrence and magnitude of erosion in the studied area. We show that such an index can be obtained by creating and analyzing a single image series captured with relatively cheap, consumer-grade digital cameras. These image series are naturally of lower quality and subject to substantial variability as environmental conditions change over time. We initially analyze each image as a whole and subsequently demonstrate the advantages of segmenting each image, which allows independent parallel processing of segments while preventing cross-contamination between them. Finally, we are able to automatically detect 67% of all erosion events while accepting only a small number of false positives.
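    The segment-wise change analysis described above could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the vertical-strip segmentation, the mean-absolute-difference statistic, and the `threshold` value are all assumptions made for the example.

```python
import numpy as np

def erosion_index(series, n_segments=4, threshold=30.0):
    """Compute a per-frame erosion index from a time series of grayscale
    images by comparing each frame to its predecessor segment by segment.

    series: list of 2-D numpy arrays of equal shape. threshold is a
    hypothetical change magnitude above which a segment counts as eroded.
    """
    indices = []
    for prev, curr in zip(series, series[1:]):
        # Split both frames into vertical strips so each strip can be
        # analyzed independently, preventing change in one strip from
        # contaminating the statistics of another.
        prev_segs = np.array_split(prev, n_segments, axis=1)
        curr_segs = np.array_split(curr, n_segments, axis=1)
        # Mean absolute difference per segment; count segments that changed.
        diffs = [np.abs(c.astype(float) - p.astype(float)).mean()
                 for p, c in zip(prev_segs, curr_segs)]
        indices.append(sum(d > threshold for d in diffs))
    return indices
```

    The resulting list of per-frame counts plays the role of the quantified time-series index: a nonzero entry flags a candidate erosion event, and its magnitude reflects how many segments changed.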

    Identification of Weather Conditions Related to Roadside LiDAR Data

    Traffic data collection is essential for traffic safety and operations studies and has been recognized as a fundamental component in the development of intelligent transportation systems. In recent years, growing interest has been shown by both industrial and academic communities in high-resolution data that can portray traffic operations for all transportation participants, such as connected or conventional vehicles, transit buses, and pedestrians. Roadside Light Detection and Ranging (LiDAR) sensors can be deployed to collect such high-resolution traffic data sets. However, LiDAR sensing can be negatively affected by rain, snow, and wind, as the collected 3D point clouds of surrounding objects may drift. These weather-caused impacts can complicate data processing and even compromise accuracy. To date, affected data sets have been identified through a labor-intensive and time-consuming manual process, so an automated solution is desired. In this research, a methodology is proposed for automatically identifying LiDAR data sets that are affected by rain, snow, and wind conditions. First, the impacts of rain, snow, and wind are characterized using statistical measures. The detection distance offset (DDO) and the detection distance offset for wind (DDOW) are calculated and investigated, and the results show that rain or snow conditions can be differentiated according to the standard deviation of the DDOs. Snow conditions can be additionally identified using the sum of the DDOs. Unlike rain and snow, wind conditions are recognized by the difference between the upper and lower boundaries of the DDOs, so a separate analysis is developed for them. Based upon these analyses, an automatic identification process is designed, with thresholds set up for identifying rain, snow, and wind conditions, respectively. The process is validated using real-world roadside LiDAR data collected at the intersection of McCarran Blvd and Evans Ave in Reno, Nevada. The validation demonstrated that the proposed identification can precisely detect affected data sets under rain, snow, and wind conditions.
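    The decision logic above — wind from the DDOW boundary gap, rain/snow from the DDO standard deviation, snow from the DDO sum — could be sketched as follows. The threshold constants are placeholders invented for this example; the paper derives its own values empirically.

```python
import numpy as np

# Hypothetical thresholds; the actual values are calibrated from data.
STD_RAIN_SNOW = 0.5   # DDO standard deviation above this suggests rain or snow
SUM_SNOW = 50.0       # DDO sum above this distinguishes snow from rain
RANGE_WIND = 2.0      # upper-lower DDOW boundary gap indicating wind

def classify_weather(ddo, ddow):
    """Classify the weather condition affecting a roadside LiDAR data set
    from its detection distance offsets (DDO) and the wind-specific
    offsets (DDOW), following the decision logic described above."""
    ddo = np.asarray(ddo, dtype=float)
    ddow = np.asarray(ddow, dtype=float)
    # Wind shows up as a wide spread between the upper and lower DDOW bounds.
    if ddow.max() - ddow.min() > RANGE_WIND:
        return "wind"
    # Rain and snow both inflate the standard deviation of the DDOs...
    if ddo.std() > STD_RAIN_SNOW:
        # ...but snow additionally inflates their sum.
        return "snow" if ddo.sum() > SUM_SNOW else "rain"
    return "clear"
```

    Each branch mirrors one of the paper's separate analyses; checking wind first reflects that it is handled by a distinct statistic (DDOW) rather than the DDO moments.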

    A two-step approach to see-through bad weather for surveillance video quality enhancement

    No full text

    Real-time image dehazing by superpixels segmentation and guidance filter

    Haze and fog have a great influence on the quality of images, and dehazing and defogging are applied to eliminate their effects. For this purpose, an effective and automatic dehazing method is proposed. To dehaze a hazy image, two important parameters must be estimated: the atmospheric light and the transmission map. For atmospheric light estimation, the superpixel segmentation method is used to segment the input image. The intensities within each superpixel are then summed and compared across superpixels to extract the maximum-intensity superpixel. Extracting the maximum-intensity superpixel from the outdoor hazy image automatically selects the hazy region (atmospheric light); we therefore take the individual channel intensities of this superpixel as the atmospheric light for the proposed algorithm. Secondly, on the basis of the estimated atmospheric light, an initial transmission map is computed. The transmission map is further refined through a rolling guidance filter, which preserves much of the image information, such as textures, structures, and edges, in the final dehazed output. Finally, the haze-free image is produced by combining the atmospheric light and the refined transmission map through the haze imaging model. Through detailed experimentation on several publicly available datasets, we show that the proposed model achieves higher accuracy and can restore higher-quality dehazed images than state-of-the-art models. The proposed model could be deployed in real-time applications such as image processing, remote sensing, underwater image enhancement, video-guided transportation, outdoor surveillance, and autonomous driving systems.
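    The final step — producing the haze-free image from the atmospheric light and the refined transmission map — inverts the standard haze imaging model I = J·t + A·(1 − t). A minimal sketch of that inversion is shown below; the lower clamp on t is a common heuristic (an assumption here, not a detail confirmed by the abstract) to avoid amplifying noise where haze is dense.

```python
import numpy as np

def recover_scene(I, A, t, t_min=0.1):
    """Invert the haze imaging model I = J*t + A*(1 - t) to recover the
    haze-free image J, given atmospheric light A and transmission map t.

    I: HxWx3 float image in [0, 1]; A: per-channel atmospheric light (3,);
    t: HxW transmission map. t is clamped below by t_min so division by
    near-zero transmission does not blow up noise in dense-haze regions.
    """
    t = np.clip(t, t_min, 1.0)[..., np.newaxis]   # broadcast over channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```

    With t = 1 everywhere (no haze), the formula returns the input unchanged; smaller t values push the estimate further away from the atmospheric light.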

    Seguimento de pessoas em sistemas multi-câmara (Tracking people in multi-camera systems)

    Final project for the Master's degree in Communication Networks and Multimedia Engineering. This thesis addresses the study, analysis, and implementation of systems for coherently tracking people across multiple cameras. The system is intended to detect and follow people in the images acquired by each (fixed) surveillance camera. These per-camera trajectories are then correlated to determine the total trajectory a person travels while remaining in the surveilled zone. Notably, the areas covered by the various cameras may overlap or have no common area. The system relates the cameras' fields of view to a map of the monitored area, so that trajectories are represented in a coordinate system common to all cameras (the world coordinate system). The tasks developed in this thesis involve the calibration of the multi-camera system, the detection and tracking of objects, the matching of the local trajectories detected in each camera, and the representation of these trajectories on a floor plan formed from the various camera views.
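    The key geometric step — projecting each camera's local trajectories into the shared world coordinate system — is commonly done with a per-camera ground-plane homography obtained during calibration. The sketch below is an assumed illustration of that mapping, not the thesis's actual implementation:

```python
import numpy as np

def to_world(H, pts):
    """Map image-plane points to ground-plane (world) coordinates using a
    3x3 homography H, so trajectories from different cameras can be
    expressed in one shared coordinate system.

    H: 3x3 homography (image -> world), estimated during calibration;
    pts: Nx2 array of pixel coordinates along a detected trajectory.
    """
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T                              # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian
```

    Once every camera's trajectory segments live in world coordinates, matching them into one total trajectory reduces to associating segments that are close in space and time.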