22 research outputs found

    Recent Progress in Wide-Area Surveillance: Protecting Our Pipeline Infrastructure

    The pipeline industry has millions of miles of pipe buried across the length and breadth of the country. Since the corridors through which pipelines run must not be used for other activities, they need to be monitored to determine whether the right-of-way (RoW) of the pipeline is encroached upon at any point in time. Rapid advances in sensor technology have enabled the use of high-end video acquisition systems to monitor the RoW of pipelines. The images captured by aerial data acquisition systems are affected by a host of factors, including light sources, camera characteristics, geometric positions, and environmental conditions. We present a multistage framework for the analysis of aerial imagery for automatic detection and identification of machinery threats along the pipeline RoW, one that accounts for the constraints of aerial imagery such as low resolution, low frame rate, large variations in illumination, and motion blur. The proposed framework consists of three parts. In the first part, a method is developed to eliminate regions of the imagery that are not considered a threat to the pipeline; it feeds monogenic phase features into a cascade of pre-trained classifiers to discard unwanted regions. The second part is a part-based object detection model for locating specific targets that are considered threat objects. The third part assesses the severity of threats to the pipeline by computing the geolocation and temperature of the threat objects. The proposed scheme is tested on real-world data captured along the pipeline RoW.
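The region-elimination stage can be sketched as a classifier cascade: a region survives only if every stage accepts it, so cheap stages discard most of the imagery before expensive ones run. The variance test below is a hypothetical stand-in for the abstract's pre-trained monogenic-phase classifiers, not the authors' actual stages.

```python
import numpy as np

def make_variance_test(threshold):
    """Hypothetical stage: keep a region only if its intensity variance
    exceeds a threshold (flat regions are unlikely to contain machinery)."""
    return lambda region: region.var() > threshold

def cascade_filter(regions, stages):
    """Pass each candidate region through a cascade of classifiers.
    A region is eliminated as soon as any stage rejects it."""
    survivors = []
    for region in regions:
        if all(stage(region) for stage in stages):
            survivors.append(region)
    return survivors

flat = np.full((8, 8), 0.5)                          # uniform patch: no structure
textured = np.random.default_rng(0).random((8, 8))   # structured patch
stages = [make_variance_test(0.01)]
kept = cascade_filter([flat, textured], stages)
print(len(kept))  # 1 (the flat patch is rejected)
```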

    Automatic Building Change Detection in Wide Area Surveillance

    We present an automated mechanism that detects and characterizes building changes by analyzing airborne or satellite imagery. The proposed framework comprises three stages: building detection, boundary extraction, and change identification. To detect buildings, we use the local phase and local amplitude of the monogenic signal to extract building features that are robust to varying illumination. A support vector machine with a radial basis function kernel is then used for classification. In the boundary extraction stage, a level-set function combined with a self-organizing-map-based segmentation method is used to find the building boundary and compute the physical area of each building segment. In the last stage, a change to a detected building is identified by computing the area difference of the same building captured at different times. Experiments are conducted on a set of real-world aerial imagery to show the effectiveness of the proposed method.
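The final change-identification step reduces to comparing the physical areas of the same building segment across acquisitions. A minimal sketch follows; the masks, pixel area, and relative threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def segment_area(mask, pixel_area_m2):
    """Physical area of a building segment from its binary mask."""
    return mask.sum() * pixel_area_m2

def detect_change(mask_t1, mask_t2, pixel_area_m2, rel_threshold=0.1):
    """Flag a change when the relative area difference between two
    acquisitions of the same building exceeds a threshold."""
    a1 = segment_area(mask_t1, pixel_area_m2)
    a2 = segment_area(mask_t2, pixel_area_m2)
    return abs(a1 - a2) / max(a1, a2) > rel_threshold

before = np.zeros((10, 10), dtype=bool); before[2:8, 2:8] = True  # 36 px
after = np.zeros((10, 10), dtype=bool);  after[2:8, 2:5] = True   # 18 px
print(detect_change(before, after, pixel_area_m2=0.25))  # True
```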

    Robust feature based reconstruction technique to remove rain from video

    In the context of extracting information from video, especially surveillance video, bad weather conditions pose a significant challenge. They affect feature extraction and hence the performance of subsequent post-processing algorithms. In general, bad weather can be classified into static and dynamic conditions. Static weather conditions like haze, fog, and smoke cause blurring of features and saturation of intensities in the image; the temporal derivatives of the scene intensities are very low. Dynamic weather conditions like rain and snow have effects that vary from frame to frame: the temporal derivative of the scene intensity at any pixel will not be zero in the presence of rain. In essence, the actual scene content is not occluded by rain or snow at all instants in the video sequence. In this research, a new framework is presented to achieve robust reconstruction of videos affected by rain. The main challenge is to model the location of rain streaks in a frame, because the location of rain streaks at any particular instant is essentially random. However, the changes in scene intensity caused by rain streaks exhibit a generalized behavior, and the instants in which the actual scene is not occluded are sufficient to enable an efficient technique for robust reconstruction of the scene. The first part of the proposed framework is a novel technique to detect rain streaks based on phase congruency features. These features capture the structural edges that are conspicuous to the human visual system, and their variation from frame to frame is used to estimate the candidate rain pixels in a frame. To reduce the number of false candidates due to global motion, frames are registered using phase correlation. Local motion components are ignored in this part of the framework.
The second part of the proposed framework is a novel reconstruction technique that combines information from three sources: the intensity of the rain-affected pixel, its spatial neighbors, and its temporal neighbors. An optimal estimate of the actual intensity of the rain-affected pixel is obtained by minimizing the registration error between frames; an optical flow technique based on local phase information is adopted for registration. This part of the framework is modeled so that the presence of local motion does not distort features in the reconstructed video. The proposed framework is evaluated quantitatively and qualitatively on a variety of videos of varying complexity. The effectiveness of the algorithm is quantitatively verified by computing a no-reference image quality measure on individual frames of the reconstructed video. A variety of experiments on output videos show that the proposed technique outperforms state-of-the-art techniques. The method is also evaluated for removing snow from videos and is capable of removing light snow streaks. As part of ongoing research, attempts are being made to run the algorithm in real time.
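The global-motion registration step relies on phase correlation. A minimal sketch of that standard technique (not the authors' implementation) follows, assuming pure translation between frames:

```python
import numpy as np

def phase_correlation_shift(f1, f2):
    """Estimate the global translation between two frames from the peak
    of the inverse FFT of the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame into negative offsets
    if dy > f1.shape[0] // 2: dy -= f1.shape[0]
    if dx > f1.shape[1] // 2: dx -= f1.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(shifted, frame))  # (3, -5)
```

Once the shift is known, the frames can be aligned before comparing per-pixel temporal variations, which suppresses false rain candidates caused by camera motion.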

    Automated Whale Blow Detection in Infrared Video

    In this chapter, solutions to the problem of whale blow detection in infrared video are presented. The solutions are intended as assistive technology that could help whale researchers sift through hours or days of video without manual intervention. Video is captured from an elevated position along the shoreline using an infrared camera, and the presence of whales is inferred from blows detected in the video. Three solutions are proposed: the first uses a neural network (multi-layer perceptron) for classification, the second uses fractal features, and the third uses convolutional neural networks. The central idea of all three algorithms is to model the spatio-temporal characteristics of a whale blow accurately using appropriate mathematical models. We provide a detailed description and analysis of the proposed solutions, the challenges involved, and some possible directions for future research.
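The chapter's three models learn spatio-temporal blow signatures. As a much simpler illustration of the underlying cue (a blow appears in infrared as a brief, bright plume), the following toy detector flags frames whose maximum local brightness spikes well above the sequence statistics; it is an assumption-laden sketch, not one of the three proposed algorithms:

```python
import numpy as np

def blow_candidates(frames, block=3, z_thresh=4.0):
    """Toy spatio-temporal cue: score each frame by the brightest
    block-averaged region, then flag frames whose score is a strong
    outlier relative to the whole sequence."""
    scores = []
    for f in frames:
        h, w = f.shape
        # mean brightness over non-overlapping block x block tiles
        tiles = f[:h - h % block, :w - w % block]
        tiles = tiles.reshape(h // block, block, w // block, block)
        scores.append(tiles.mean(axis=(1, 3)).max())
    scores = np.array(scores)
    z = (scores - scores.mean()) / (scores.std() + 1e-9)
    return np.nonzero(z > z_thresh)[0]

# 20 dim background frames with one bright plume in frame 10
frames = [np.full((12, 12), 0.1) for _ in range(20)]
frames[10][3:6, 3:6] = 1.0
print(blow_candidates(frames))  # [10]
```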

    Utilizing Local Phase Information to Remove Rain from Video

    In the context of extracting information from video, bad weather conditions like rain can have a detrimental effect. In this paper, a novel framework to detect and remove rain streaks from video is proposed. The first part of the framework is a technique to detect rain streaks based on phase congruency features; the variation of these features from frame to frame is used to estimate the candidate rain pixels in a frame. To reduce the number of false candidates due to global motion, frames are registered using phase correlation. The second part of the framework is a novel reconstruction technique that combines information from three sources: the intensity of the rain-affected pixel, its spatial neighbors, and its temporal neighbors. An optimal estimate of the actual intensity of the rain-affected pixel is obtained by minimizing the registration error between frames; an optical flow technique using local phase information is adopted for registration. This part of the framework is modeled so that the presence of local motion does not distort features in the reconstructed video. The proposed framework is evaluated quantitatively and qualitatively on a variety of videos of varying complexity. The effectiveness of the algorithm is quantitatively verified by computing a no-reference image quality measure on individual frames of the reconstructed video. A variety of experiments on output videos show that the proposed technique outperforms state-of-the-art techniques.
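The three-source reconstruction can be illustrated with a simplified stand-in: a median over the rain-marked pixel, its spatial neighbors in the current frame, and the same pixel in adjacent frames, assuming the frames are already registered and free of local motion. The paper instead computes an optimal estimate weighted by registration error; this sketch only shows why combining the three sources suppresses a rain streak:

```python
import numpy as np

def reconstruct_pixel(frames, t, y, x):
    """Replace a rain-marked pixel with the median of (a) its own
    intensity and 3x3 spatial neighborhood in frame t, and (b) the
    same pixel in the temporal neighbors t-1 and t+1."""
    spatial = frames[t][max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].ravel()
    temporal = np.array([frames[t - 1][y, x], frames[t + 1][y, x]])
    return float(np.median(np.concatenate([spatial, temporal])))

# static scene of intensity 0.4 with a bright rain streak pixel at t=1
frames = [np.full((5, 5), 0.4) for _ in range(3)]
frames[1][2, 2] = 0.9
print(reconstruct_pixel(frames, 1, 2, 2))  # 0.4
```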

    Biopharmaceutical potentials of Prosopis spp. (Mimosaceae, Leguminosa)

    Prosopis is a commercially important plant genus that has been used since ancient times, particularly for medicinal purposes. Traditionally, paste, gum, and smoke from the leaves and pods are applied for anticancer, antidiabetic, anti-inflammatory, and antimicrobial purposes. Components of Prosopis such as flavonoids, tannins, alkaloids, quinones, and phenolic compounds demonstrate potential in various biofunctions, such as analgesic, anthelmintic, antibiotic, antiemetic, antimicrobial, antioxidant, antimalarial, antiprotozoal, antipustule, and antiulcer activities; enhancement of H+, K+-ATPases; oral disinfection; and probiotic and nutritional effects; as well as in other biopharmaceutical applications, such as binding abilities for tablet production. The compound juliflorine shows promise against Alzheimer disease by inhibiting acetylcholinesterase at cholinergic brain synapses. Some indirect medicinal applications of Prosopis spp. are also indicated, including mosquito larvicidal activity, chemical synthesis by associated fungal or bacterial symbionts, cyanobacterial degradation products, and "mesquite" honey and pollens with high antioxidant activity. This review covers the origins, distribution, folk uses, chemical components, biological functions, and applications of different representatives of Prosopis.

    Bioactive Efficacy of Novel Carboxylic Acid from Halophilic Pseudomonas aeruginosa against Methicillin-Resistant Staphylococcus aureus

    Methicillin-resistant Staphylococcus aureus (MRSA) infections are an increasing cause of morbidity and mortality; thus, drugs with multifunctional efficacy against MRSA are needed. We extracted a novel compound from halophilic Pseudomonas aeruginosa using ethyl acetate (HPAEtOAcE), followed by purification and structure elucidation through HPLC, LC-MS, and 1H and 13C NMR, which revealed the novel 5-(1H-indol-3-yl)-4-pentyl-1,3-oxazole-2-carboxylic acid (Compound 1). Molecular docking of the compound against the MRSA pantothenate synthetase (PS) protein using the CDOCKER algorithm in BDS software showed specific binding to the amino acids Arg (B:188) and Lys (B:150) through hydrogen bonding. Molecular dynamics simulation (RMSD) showed that the compound-protein complex was stable. The HPAEtOAcE exhibited strong bioactivity against MRSA, with MIC and MBC values of 0.64 and 1.24 µg/mL, respectively; 100% biomass inhibition and 99.84% biofilm inhibition were observed, with decay effects shown by CLSM and SEM at 48 h. The hla, IrgA, and SpA MRSA genes were downregulated in RT-PCR analysis. The HPAEtOAcE was non-hemolytic at 10 mg/mL and showed antioxidant potential in the DPPH assay (IC50 29.75 ± 0.38). The in vitro growth inhibition assays on MRSA were strongly supported by in silico molecular docking; Lipinski's rule on drug-likeness and ADMET toxicity prediction indicated the nontoxic nature of the compound.

    RARNet fusing image enhancement for real-world image rain removal


    Single image rain streak removal via layer similarity prior
