
    Low-rank Based Algorithms for Rectification, Repetition Detection and De-noising in Urban Images

    In this thesis, we aim to solve the problem of automatic image rectification and repeated pattern detection in 2D urban images using novel low-rank based techniques. Repeated patterns (such as windows, tiles, balconies and doors) are prominent and significant features in urban scenes. Detecting these periodic structures is useful in many applications such as photorealistic 3D reconstruction, 2D-to-3D alignment, facade parsing, city modeling, classification, navigation, visualization in 3D map environments, shape completion, cinematography and 3D games. However, both image rectification and repeated pattern detection are challenging due to scene occlusions, varying illumination, pose variation and sensor noise, which makes robust detection of these patterns all the more important for city scene analysis. Given a 2D image of an urban scene, we first automatically rectify the facade image and extract facade textures. Based on the rectified facade texture, we develop novel algorithms that extract repeated patterns using Kronecker product based modeling, which rests on a solid theoretical foundation. We have tested our algorithms on a large set of images, including building facades from Paris, Hong Kong and New York.
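    As a rough illustration of the Kronecker product idea (not the thesis's actual algorithm), the sketch below approximates a perfectly periodic facade texture as a Kronecker product of a layout matrix and a repeating tile, using the Van Loan and Pitsianis rearrangement; the function name and toy data are illustrative assumptions.

```python
import numpy as np

def nearest_kronecker_product(A, block_shape):
    """Approximate A ~= B kron C, where C has shape `block_shape`.

    Van Loan-Pitsianis rearrangement: stack each block of A as a row of a
    rearranged matrix R; the best Kronecker factors (in Frobenius norm)
    come from the leading singular pair of R.
    """
    p, q = block_shape                       # size of the repeating tile C
    m, n = A.shape
    assert m % p == 0 and n % q == 0
    mb, nb = m // p, n // q                  # layout grid of tiles (shape of B)

    # Stack each p-by-q block of A as one row of the rearranged matrix R.
    R = np.empty((mb * nb, p * q))
    for i in range(mb):
        for j in range(nb):
            block = A[i*p:(i+1)*p, j*q:(j+1)*q]
            R[i*nb + j] = block.ravel()

    # Rank-1 SVD of R gives the optimal Kronecker factors.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(mb, nb)
    C = np.sqrt(s[0]) * Vt[0].reshape(p, q)
    return B, C

# Toy example: a perfectly periodic "facade" is exactly rank-1 after rearrangement.
tile = np.random.rand(8, 8)
layout = np.ones((4, 5))
facade = np.kron(layout, tile)
B, C = nearest_kronecker_product(facade, tile.shape)
print(np.allclose(np.kron(B, C), facade))    # True
```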

    Summary of findings and research recommendations from the Gulf of Mexico Research Initiative

    © The Author(s), 2021. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Wilson, C. A., Feldman, M. G., Carron, M. J., Dannreuther, N. M., Farrington, J. W., Halanych, K. M., Petitt, J. L., Rullkotter, J., Sandifer, P. A., Shaw, J. K., Shepherd, J. G., Westerholm, D. G., Yanoff, C. J., & Zimmermann, L. A. Summary of findings and research recommendations from the Gulf of Mexico Research Initiative. Oceanography, 34(1), (2021): 228–239, https://doi.org/10.5670/oceanog.2021.128. Following the Deepwater Horizon explosion and oil spill in 2010, the Gulf of Mexico Research Initiative (GoMRI) was established to improve society’s ability to understand, respond to, and mitigate the impacts of petroleum pollution and related stressors of the marine and coastal ecosystems. This article provides a high-level overview of the major outcomes of the scientific work undertaken by GoMRI. This scientifically independent initiative, consisting of over 4,500 experts in academia, government, and industry, contributed to significant knowledge advances across the physical, chemical, geological, and biological oceanographic research fields, as well as in related technology, socioeconomics, human health, and oil spill response measures. For each of these fields, this paper outlines key advances and discoveries made by GoMRI-funded scientists (along with a few surprises), synthesizing their efforts in order to highlight lessons learned, future research needs, remaining gaps, and suggestions for the next generation of scientists.

    Seafloor characterization using airborne hyperspectral co-registration procedures independent from attitude and positioning sensors

    Remote-sensing technology and data-storage capabilities have advanced over the last decade to the point of commercial multi-sensor data collection. There is a constant need to characterize, quantify and monitor coastal areas for habitat research and coastal management. In this paper, we present work on seafloor characterization that uses hyperspectral imagery (HSI). The HSI data allow the operator to extend seafloor characterization from multibeam backscatter towards land, creating a seamless ocean-to-land characterization of the littoral zone.

    Multimodal Adversarial Learning

    Deep Convolutional Neural Networks (DCNNs) have proven to be an exceptional tool for object recognition, generative modelling, and multi-modal learning in various computer vision applications. However, recent findings have shown that such state-of-the-art models can be easily deceived by inserting slight, imperceptible perturbations at key pixels in the input. A good target detection system can accurately identify targets by localizing their coordinates on the input image of interest, ideally by labeling each pixel in an image as either background or a potential target pixel. However, prior research confirms that such state-of-the-art target models remain susceptible to adversarial attacks. In the case of generative models, facial sketches drawn by artists, mostly used by law enforcement agencies, depend on the ability of the artist to clearly replicate all the key facial features that aid in capturing the true identity of a subject. Recent works have attempted to synthesize these sketches into plausible visual images to improve visual recognition and identification. However, synthesizing photo-realistic images from sketches proves to be an even more challenging task, especially for sensitive applications such as suspect identification. The incorporation of hybrid discriminators, which perform attribute classification over multiple target attributes, together with a quality-guided encoder that minimizes the perceptual dissimilarity of the latent-space embeddings of the synthesized and real images at different layers in the network, has been shown to be a powerful tool for better multi-modal learning. In general, our overall approach was aimed at improving target detection systems and the visual appeal of synthesized images while incorporating multiple attribute assignment into the generator without compromising the identity of the synthesized image. We synthesized sketches using the XDOG filter for the CelebA, Multi-modal and CelebA-HQ datasets, and from an auxiliary generator trained on sketches from the CUHK, IIT-D and FERET datasets. Overall, our results across the different model applications are impressive compared to the current state of the art.
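    As a minimal sketch of the multi-layer perceptual (feature-matching) loss idea mentioned above, assuming PyTorch and a toy encoder, the snippet below compares synthesized and real images through intermediate activations at several layers; the module and function names are illustrative, not the dissertation's architecture, which would typically use a trained quality-guided encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Illustrative encoder; returns activations from several layers."""
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv2d(3, 16, 3, stride=2, padding=1),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),
        ])

    def forward(self, x):
        feats = []
        for conv in self.layers:
            x = F.relu(conv(x))
            feats.append(x)
        return feats

def perceptual_loss(encoder, real, fake, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of L1 distances between real/fake activations per layer."""
    real_feats = encoder(real)
    fake_feats = encoder(fake)
    return sum(w * F.l1_loss(f, r)
               for w, f, r in zip(weights, fake_feats, real_feats))

# Toy usage: random "real" photo batch and "synthesized" batch.
enc = TinyEncoder()
real = torch.rand(2, 3, 64, 64)
fake = torch.rand(2, 3, 64, 64, requires_grad=True)
loss = perceptual_loss(enc, real, fake)
loss.backward()                     # gradients flow back to the synthesized image
print(float(loss))
```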

    Automated shape analysis and visualization of the human back.

    Spinal and back deformities can lead to pain and discomfort, disrupting productivity, and may require prolonged treatment. The conventional method of assessing and monitoring the deformity using radiographs has known radiation hazards. An alternative approach for monitoring the deformity is to base the assessment on the shape of the back surface. Though three-dimensional data acquisition methods exist, techniques to extract relevant information for clinical use have not been widely developed. This thesis presents the content and progression of research into automated analysis and visualization of three-dimensional laser scans of the human back. Using mathematical shape analysis, methods have been developed to compute stable curvature of the back surface and to detect anatomic landmarks from the curvature maps. Compared with manual palpation, the landmarks have been detected to within an accuracy of 1.15 mm and a precision of 0.81 mm. Based on the detected spinous process landmarks, the back midline, which is the closest surface approximation of the spine, has been derived using constrained polynomial fitting and statistical techniques. Three-dimensional geometric measurements based on the midline were then computed to quantify the deformity. Visualization plays a crucial role in back shape analysis since it enables the exploration of back deformities without the need for physical manipulation of the subject. In the third phase, various visualization techniques have been developed, namely continuous and discrete colour maps, contour maps and three-dimensional views. In the last phase of the research, a software system has been developed to automate the tasks involved in analysing, visualizing and quantifying the back shape. The novel aspects of this research lie in the development of effective noise smoothing methods for stable curvature computation; improved shape analysis and landmark detection algorithms; effective techniques for visualizing the shape of the back; derivation of the back midline using constrained polynomials; and computation of three-dimensional surface measurements.
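    A minimal sketch of constrained polynomial fitting for a midline estimate, assuming NumPy and synthetic landmark data; the particular constraint (forcing the curve through the end landmarks) and the function name are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def constrained_polyfit(y, x, degree, y_fix, x_fix):
    """Least-squares polynomial x(y) of given degree, forced to pass
    exactly through the points (y_fix, x_fix).

    Solves the equality-constrained least-squares problem via its
    KKT system: minimize ||V c - x||^2 subject to V_fix c = x_fix.
    """
    V = np.vander(y, degree + 1)          # Vandermonde design matrix
    Vf = np.vander(y_fix, degree + 1)     # rows for the constrained points
    n, k = V.shape[1], Vf.shape[0]
    KKT = np.block([[2 * V.T @ V, Vf.T],
                    [Vf, np.zeros((k, k))]])
    rhs = np.concatenate([2 * V.T @ x, x_fix])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]                        # coefficients, highest power first

# Toy landmarks: noisy lateral offsets of spinous-process points along the back.
y = np.linspace(0.0, 1.0, 15)                      # normalized height
x = 0.05 * np.sin(2 * np.pi * y) + 0.005 * np.random.randn(15)
coeffs = constrained_polyfit(y, x, degree=4,
                             y_fix=np.array([0.0, 1.0]),
                             x_fix=np.array([x[0], x[-1]]))
midline = np.polyval(coeffs, y)                    # fitted midline estimate
print(abs(midline[0] - x[0]) < 1e-9, abs(midline[-1] - x[-1]) < 1e-9)
```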

    Block-level discrete cosine transform coefficients for autonomic face recognition

    This dissertation presents a novel method of autonomic face recognition based on the recently proposed, biologically plausible network of networks (NoN) model of information processing. The NoN model is based on locally parallel and globally coordinated transformations. In the NoN architecture, the neurons or computational units form distributed networks, which themselves link to form larger networks. In the general case, an n-level hierarchy of nested distributed networks is constructed. This models the structures in the cerebral cortex described by Mountcastle and the architecture proposed for information processing by Sutton. In the implementation proposed in this dissertation, the image is processed by a nested family of locally operating networks along with a hierarchically superior network that classifies the information from each of the local networks. This implementation helps obtain sensitivity to the contrast sensitivity function (CSF) in the middle of the spectrum, as is true of the human visual system. The input images are divided into blocks to define the local regions of processing. The two-dimensional Discrete Cosine Transform (DCT), a spatial frequency transform, is used to transform the data into the frequency domain. Thereafter, statistical operators that calculate various functions of spatial frequency within each block are used to produce a block-level DCT coefficient. The image is thus transformed into a variable-length vector that is trained with respect to the data set. Classification was performed using a backpropagation neural network. The proposed method yields excellent results on a benchmark database: the experiments achieved a maximum of 98.5% recognition accuracy and an average of 97.4% recognition accuracy. An advanced version of the method, in which the local processing is done on offset blocks, has also been developed. This has validated the NoN approach, and further research using local processing as well as more advanced global operators is likely to yield even better results.
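    A minimal sketch of block-level DCT feature extraction, assuming NumPy/SciPy; the particular per-block statistics below are illustrative stand-ins for the statistical operators described above, and the resulting vector would then feed a backpropagation classifier.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(img, block=8):
    """Per-block statistics of 2D DCT coefficients, concatenated
    into a single feature vector (one set of statistics per block)."""
    h, w = img.shape
    h, w = h - h % block, w - w % block         # crop to a whole number of blocks
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(img[i:i+block, j:j+block], norm='ortho')
            ac = coeffs.ravel()[1:]             # drop the DC term
            feats.extend([
                coeffs[0, 0],                   # DC: mean intensity of the block
                np.sum(ac ** 2),                # AC energy
                np.std(ac),                     # spread of spatial frequencies
            ])
    return np.asarray(feats)

# Toy usage: one 64x64 grayscale image -> fixed-length feature vector.
img = np.random.rand(64, 64)
x = block_dct_features(img)
print(x.shape)          # 64 blocks * 3 statistics per block = (192,)
```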

    On-line cascading event tracking and avoidance decision support tool

    Cascading outages in power systems are costly events that power system operators and planners actively seek to avoid. Such events can quickly result in power outages for millions of customers. Although it is unreasonable to claim that blackouts can be completely prevented, we can nonetheless reduce the frequency and impact of such high-consequence events. Power operators can take action if they have the right information, provided by tools for monitoring and managing the risk of cascading outages. Such tools are developed in this research project by identifying contingencies that could initiate cascading outages and by determining operator actions to avoid the start of a cascade. A key to cascading outage defense is the level of grid operator situational awareness. Severe disturbances and the complex unfolding of post-disturbance phenomena, including interdependent events, demand critical actions on the part of the operators, making operators dependent on decision support tools and automatic controls. In other industries (e.g., airline, nuclear, process control), control operators employ computational capabilities that help them predict system response and identify corrective actions. Power system operators should have a similar capability with online simulation tools. To create an online simulator that helps operators identify the potential for cascades and the actions to avoid them, we developed a systematic way to identify power system initiating contingencies for operational use. The work extends the conventional contingency list by including a subset of high-order contingencies identified through topology processing. The contingencies are assessed via an online, mid-term simulator designed to provide generalized, event-based corrective control and decision support for operators with very high computational efficiency. Speed enhancement is obtained algorithmically by employing a multifrontal linear solver within an implicit integration scheme. The contingency selection and simulation capabilities were illustrated on two systems: a test system with six generators and the IEEE RTS-96 with 33 generators. Comparisons with commercial-grade simulators indicate that the developed simulator is accurate and fast.
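    A minimal sketch of implicit integration with a reusable sparse factorization, in the spirit of the speed enhancement described above; it uses SciPy's SuperLU (not a multifrontal solver) on a toy linear system, so the solver, system, and step size are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def trapezoidal_simulate(A, x0, h, steps):
    """Implicit trapezoidal integration of dx/dt = A x.

    Each step solves (I - h/2 A) x_{n+1} = (I + h/2 A) x_n.  The left-hand
    matrix is factorized once and the factors reused for every step, which
    is where a fast sparse direct solver pays off.
    """
    n = A.shape[0]
    I = sp.identity(n, format='csc')
    lu = splu((I - (h / 2) * A).tocsc())         # sparse LU, reused every step
    rhs_mat = (I + (h / 2) * A).tocsc()
    x = x0.copy()
    traj = [x0]
    for _ in range(steps):
        x = lu.solve(rhs_mat @ x)
        traj.append(x)
    return np.array(traj)

# Toy usage: a small sparse linear system standing in for linearized dynamics.
n = 50
A = 0.5 * sp.random(n, n, density=0.05, format='csc') - 2 * sp.identity(n, format='csc')
traj = trapezoidal_simulate(A, x0=np.ones(n), h=0.01, steps=200)
print(traj.shape)        # (201, 50)
```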