Background Subtraction via Generalized Fused Lasso Foreground Modeling
Background Subtraction (BS) is one of the key steps in video analysis. Many
background models have been proposed and have achieved promising performance on
public data sets. However, due to challenges such as illumination changes and
dynamic backgrounds, the resulting foreground segmentation often contains holes
as well as background noise. In this regard, we apply generalized fused lasso
regularization to recover intact, structured foregrounds. Together with certain
assumptions about the background, such as the low-rank assumption or the
sparse-composition assumption (depending on whether pure background frames are
provided), we formulate BS as a matrix decomposition problem with
regularization terms for both the foreground and background matrices. Moreover,
under the proposed formulation, the two generally distinct background
assumptions can be handled in a unified manner. The optimization is carried out
with the augmented Lagrange multiplier (ALM) method, in which a fast
parametric-flow algorithm updates the foreground matrix. Experimental results
on several popular BS data sets demonstrate the advantage of the proposed model
over state-of-the-art methods.
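Under the low-rank background assumption, the decomposition has the same shape
as robust PCA. The sketch below is an illustration only, not the paper's
method: it substitutes plain entrywise l1 sparsity for the generalized fused
lasso (so no parametric-flow update is needed) and solves
min ||B||_* + lam*||F||_1 subject to D = B + F with inexact ALM:

```python
import numpy as np

def svd_shrink(M, tau):
    # Singular-value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_shrink(M, tau):
    # Entrywise soft thresholding: proximal operator of the l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose(D, lam=None, n_iter=200, tol=1e-7):
    """Split the frame matrix D (one frame per column) into a low-rank
    background B and a sparse foreground F via inexact ALM."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # standard RPCA default
    mu = 1.25 / np.linalg.norm(D, 2)     # penalty parameter
    mu_max, rho = mu * 1e7, 1.5
    Y = np.zeros_like(D)                 # Lagrange multipliers
    F = np.zeros_like(D)
    for _ in range(n_iter):
        B = svd_shrink(D - F + Y / mu, 1.0 / mu)
        F = soft_shrink(D - B + Y / mu, lam / mu)
        R = D - B - F                    # constraint residual
        Y += mu * R
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return B, F
```

With each video frame vectorized into a column of D, the columns of B give the
estimated background and the nonzero entries of F the foreground pixels.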
BAT Algorithm-Based Multi-Class Crop Leaf Disease Prediction Bootstrap Model
In the task of identifying infected agricultural plants, leaf-based disease identification is especially effective at characterizing crop disease among the various techniques for detecting infection. Recognizing an infected leaf image among healthy images becomes harder when the model is also required to detect the type of leaf disease. This paper presents a BAT-based crop disease prediction bootstrap model (BCDPBM) that identifies the health of the leaf and performs disease prediction. The BAT algorithm in the proposed model increases the capability of the Gaussian mixture model for foreground region detection. Furthermore, co-occurrence matrix features and histogram features are extracted for training the bootstrap model. Hence, leaf foreground detection by the BAT algorithm with the Gaussian mixture model improves the feature extraction quality for bootstrap learning. The proposed model uses a dataset of real leaf images for its experiments, and its results are compared with those of different existing models across various parameters. The results show improved prediction accuracy for multiclass leaf disease using the BCDPBM model.
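The foreground-detection step can be illustrated with a per-pixel background
model. The sketch below is a simplification: it keeps a single Gaussian per
pixel rather than the BAT-optimized Gaussian mixture described in the paper,
and the learning rate and threshold are assumed values:

```python
import numpy as np

class RunningGaussianBG:
    """Per-pixel single-Gaussian background model: a pixel is foreground
    when it lies more than k standard deviations from its mean."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 16.0)  # initial variance guess
        self.alpha = alpha   # learning rate
        self.k = k           # threshold in standard deviations

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var        # foreground mask
        bg = ~fg
        # Update the model only where the pixel matches the background,
        # so foreground objects do not get absorbed into it.
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg
```

A full mixture model would keep several (mean, variance, weight) triples per
pixel and match each frame against the most probable component.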
Pedestrian flow analysis at an avenue intersection using image processing
At various avenue crossings in the city of Lima, abnormal patterns of
pedestrian behavior, inadequate traffic-signal timing, and road designs that
are not ideal for pedestrians can be observed. Pedestrians are the most
vulnerable parties in traffic accidents and should receive the highest priority
in intersection design. Improving intersection design requires pedestrian flow
data, which can be measured in several ways. Currently, pedestrians are counted
manually, either by teams of people working from a street corner or from a
recording; this has low efficiency due to human error, in addition to the
hourly cost it entails. For this reason, this thesis presents new, more
effective counting methods, applied at a known intersection, and compares their
results with expected figures.
The first chapter presents the problem statement that justifies the importance
of this thesis, the usual general methods for pedestrian detection, and the
objectives of the thesis.
The second chapter describes the theoretical foundations of image processing
used in the later chapters and the specific methods for each stage of the
algorithm.
The third chapter lists the steps of the proposed pedestrian counting approach,
the functions implemented, and the libraries used for each stage of the
application.
Finally, the fourth chapter reviews the results of the proposal for each stage
across different videos and analyzes them to recommend their use in future
applications.
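A core step in such a counting pipeline is turning the binary foreground mask
into a count of candidate pedestrian blobs. The sketch below is a minimal
illustration (4-connected flood fill with an assumed noise threshold), not the
thesis's implementation:

```python
from collections import deque

def count_blobs(mask, min_size=3):
    """Count connected foreground regions (candidate pedestrians) in a
    binary mask using breadth-first flood fill with 4-connectivity."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill this component and measure its size.
                size, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    size += 1
                    for ny, nx in ((cy+1, cx), (cy-1, cx),
                                   (cy, cx+1), (cy, cx-1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if size >= min_size:   # discard small noise blobs
                    count += 1
    return count
```

Blobs smaller than `min_size` pixels are treated as segmentation noise; in a
real deployment this threshold would depend on camera height and resolution.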
Automatic Extraction of Vehicle, Bicycle, and Pedestrian Traffic From Video Data
SPR No. 742. This project investigated the use of traffic cameras to count and classify vehicles. The intent is to provide an alternative to pneumatic tubes for collecting traffic data at high-volume locations and to eliminate safety risks to SCDOT personnel and contractors. The objective is to develop algorithms to post-process the 48-hour videos to determine the number of vehicles in each of four categories: motorcycles, passenger cars and light trucks, buses/campers/tow trucks, and small to large trucks. To this end, background subtraction and foreground detection algorithms were implemented to detect moving vehicles, and a Convolutional Neural Network (CNN) model was developed to classify vehicles using thermal images obtained from a custom-built thermal camera and solar-powered trailer. Additionally, to overcome false detections of vehicles due to either camera motion or erratic light reflection from the pavement surface, an algorithm was developed to keep track of each vehicle's trajectory, and the vehicle trajectories were used to determine the presence of an actual vehicle. The developed algorithms and CNN model were incorporated into a Windows-based application named DECAF (detection and classification by functional class), which enables users to specify the folder that contains the video files to be processed, the region for which traffic should be analyzed, and the time interval for which the data should be aggregated, and to view the detection and classification results in two report formats: 1) a spreadsheet with vehicle-by-vehicle information, and 2) a PDF summary report with totals aggregated for the user-specified interval. DECAF was tested using videos collected from five different sites in Columbia, SC, and the overall detection and classification accuracy for the hours evaluated was found to be 95% or higher.
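The trajectory-based filtering idea can be illustrated with a minimal
nearest-centroid tracker. This is a sketch under assumed thresholds, not the
DECAF implementation: a detection counts as a real vehicle only if its track
persists for several consecutive frames, so one-frame glints and camera-shake
artifacts are discarded:

```python
import math

class CentroidTracker:
    """Greedy nearest-centroid tracker that confirms a detection as a
    vehicle only after its track has lasted `min_frames` frames."""
    def __init__(self, max_dist=30.0, min_frames=3):
        self.max_dist = max_dist     # max centroid jump between frames
        self.min_frames = min_frames
        self.tracks = {}             # track id -> (centroid, age)
        self.next_id = 0
        self.confirmed = set()       # ids accepted as actual vehicles

    def update(self, centroids):
        new_tracks = {}
        for c in centroids:
            # Match the detection to the nearest unclaimed track.
            best, best_d = None, self.max_dist
            for tid, (tc, _age) in self.tracks.items():
                d = math.dist(c, tc)
                if d < best_d and tid not in new_tracks:
                    best, best_d = tid, d
            if best is None:         # no match: start a new track
                best, age = self.next_id, 0
                self.next_id += 1
            else:
                age = self.tracks[best][1]
            new_tracks[best] = (c, age + 1)
            if age + 1 >= self.min_frames:
                self.confirmed.add(best)
        self.tracks = new_tracks     # unmatched tracks are dropped
        return self.confirmed
```

Production trackers typically add motion prediction and a few grace frames
before dropping a track, but the confirm-after-N-frames rule is the part that
suppresses spurious one-frame detections.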
Development of a Vision-based Monitoring System for Quality Assessment of 3D Printing
Additive Manufacturing (AM), also known as 3D printing, is a process of manufacturing parts and components by adding successive layers of material on top of each other until the final shape is achieved. The research target of this project is Fused Filament Fabrication (FFF), which is a specific type of Additive Manufacturing technology. FFF uses a filament of thermoplastic material, which is melted and extruded, then deposited layer by layer to create a 3D object. However, FFF has some limitations that need to be considered. For instance, the printing process can be time-consuming, and errors such as misalignment and incorrect slicing can occur, leading to complete failure and wasted time and material.
This thesis presents a vision-based monitoring system for FFF 3D printing quality assessment. The proposed system includes a simulation tool that generates simulated images of printed layers, along with feature extraction methods for assessing the size, shape, and infill density of printed objects. The system uses background subtraction to isolate the printed object from the background and estimates its size through pixel length analysis and bounding box calculation. The shape analysis of the printed objects is performed using the Fourier-Mellin transform (FMT) method. Moreover, the infill density is computed by combining foreground extraction and image thresholding, using both camera and simulated images. The proposed system can analyse and examine the quality of 3D printing during the printing process, identify a defective printed object when a deviation of 5 percent is detected in its size, shape, or density, and alert the user to terminate the process, saving time and cost. This new monitoring system provides an effective solution for improving the quality and efficiency of FFF 3D printing.
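The size and infill-density measurements can be sketched as follows. This is a
minimal illustration, not the thesis's code: the pixel-to-millimeter scale is
an assumed calibration constant, and the mask is taken to be the output of the
background-subtraction step:

```python
import numpy as np

def bbox_and_density(mask, mm_per_pixel=0.1):
    """From a binary foreground mask of a printed layer, estimate the
    part's bounding-box size in millimeters (pixel length analysis) and
    its infill density (fraction of filled pixels inside the box)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                    # nothing detected in this layer
    y0, y1 = ys.min(), ys.max()
    x0, x1 = xs.min(), xs.max()
    width_mm = (x1 - x0 + 1) * mm_per_pixel
    height_mm = (y1 - y0 + 1) * mm_per_pixel
    box = mask[y0:y1 + 1, x0:x1 + 1]
    density = box.sum() / box.size     # filled fraction of bounding box
    return width_mm, height_mm, density
```

Comparing these measurements against the simulated reference layer, and
flagging relative deviations above 5 percent, gives the defect test described
in the abstract.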