26 research outputs found

    Modeling Bottom-Up Visual Attention Using Dihedral Group D4

    Published version. Source at http://dx.doi.org/10.3390/sym8080079. In this paper, we first briefly describe the dihedral group D4 that serves as the basis for calculating saliency in our proposed model. Second, our saliency model makes two major changes to a recent state-of-the-art model known as group-based asymmetry. First, based on the properties of the dihedral group D4, we simplify the asymmetry calculations associated with the measurement of saliency. This results in an algorithm that reduces the number of calculations by at least half, making it the fastest among the six best algorithms used in this research article. Second, in order to maximize the information across different chromatic and multi-resolution features, the color image space is de-correlated. We evaluate our algorithm against 10 state-of-the-art saliency models. Our results show that by using optimal parameters for a given dataset, our proposed model can outperform the best saliency algorithm in the literature. However, as the differences among the (few) best saliency models are small, we would like to suggest that our proposed model is among the best and the fastest among the best. Finally, as part of future work, we suggest that our proposed approach to saliency can be extended to three-dimensional image data.
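
    To make the idea concrete, the following Python sketch scores a square patch by how far it is from being invariant under the eight D4 transforms (identity, three rotations, four reflections). It illustrates only the underlying asymmetry measure, not the paper's optimized algorithm or its D4-based reduction in the number of calculations; the function names and the patch-level score are assumptions made for illustration.

        # Minimal sketch: D4 asymmetry of a square patch as a saliency-like score.
        import numpy as np

        def d4_transforms(patch):
            """Yield the eight D4 transforms of a square 2-D array."""
            for k in range(4):
                rot = np.rot90(patch, k)
                yield rot                 # rotation (identity for k == 0)
                yield np.fliplr(rot)      # rotation followed by a reflection

        def d4_asymmetry(patch):
            """Mean absolute difference between the patch and its D4 orbit."""
            patch = patch.astype(np.float64)
            diffs = [np.abs(patch - t).mean() for t in d4_transforms(patch)]
            return float(np.mean(diffs))

        # A perfectly symmetric patch scores 0; a random patch scores higher.
        symmetric = np.ones((8, 8))
        random_patch = np.random.default_rng(0).random((8, 8))
        print(d4_asymmetry(symmetric), d4_asymmetry(random_patch))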

    A Survey on Modelling of Automotive Radar Sensors for Virtual Test and Validation of Automated Driving

    Radar sensors were among the first perceptual sensors used for automated driving. Although several other technologies, such as lidar, camera, and ultrasonic sensors, are available, radar sensors have maintained and will continue to maintain their importance due to their reliability in adverse weather conditions. Virtual methods are being developed for the verification and validation of automated driving functions to reduce the time and cost of testing. Due to the complexity of modelling high-frequency wave propagation as well as signal processing and perception algorithms, sensor models that aim for a high degree of accuracy are challenging to simulate. Therefore, a variety of different modelling approaches have been presented in the last two decades. This paper comprehensively summarises the heterogeneous state of the art in radar sensor modelling. Instead of the technology-oriented classification introduced in previous review articles, we present a classification of how these models can be used in vehicle development, based on the V-model originating from software development. Sensor models are divided into operational, functional, technical, and individual models. The application and usability of these models along the development process are summarised in a comprehensive tabular overview, which is intended to support future research and development at the vehicle level and will be continuously updated.
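
    As a purely hypothetical illustration of how the survey's classification could be encoded in tooling around a simulation toolchain, the sketch below represents the four model classes as a small data structure; the class names follow the paper, while the fields and example entries are assumptions.

        # Hypothetical encoding of the survey's radar sensor model classes.
        from dataclasses import dataclass
        from enum import Enum, auto

        class RadarModelClass(Enum):
            OPERATIONAL = auto()
            FUNCTIONAL = auto()
            TECHNICAL = auto()
            INDIVIDUAL = auto()

        @dataclass
        class RadarSensorModelEntry:
            name: str                       # model or publication identifier (assumed field)
            model_class: RadarModelClass    # position in the survey's classification
            v_model_stage: str              # development stage the model supports (assumed field)

        catalog = [
            RadarSensorModelEntry("example-object-list-model", RadarModelClass.OPERATIONAL,
                                  "system-level validation"),
            RadarSensorModelEntry("example-raytracing-model", RadarModelClass.TECHNICAL,
                                  "component-level verification"),
        ]
        for entry in catalog:
            print(f"{entry.name}: {entry.model_class.name} ({entry.v_model_stage})")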

    Air Force Institute of Technology Research Report 2017

    This Research Report presents the FY18 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs)

    Point Normal Orientation and Surface Reconstruction by Incorporating Isovalue Constraints to Poisson Equation

    Oriented normals are a common prerequisite for many geometric algorithms based on point clouds, such as Poisson surface reconstruction. However, it is not trivial to obtain a consistent orientation. In this work, we bridge orientation and reconstruction in implicit space and propose a novel approach to orienting point clouds by incorporating isovalue constraints into the Poisson equation. When a well-oriented point cloud is fed into a reconstruction approach, the indicator function values of the sample points should be close to the isovalue. Based on this observation and the Poisson equation, we propose an optimization formulation that combines isovalue constraints with local consistency requirements for normals. We optimize normals and implicit functions simultaneously and solve for a globally consistent orientation. Owing to the sparsity of the linear system, an average laptop can run our method within a reasonable time. Experiments show that our method achieves high performance on non-uniform and noisy data and can handle varying sampling densities, artifacts, multiple connected components, and nested surfaces.
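
    For context, the sketch below implements the classic orientation-propagation baseline (local PCA normals made consistent along a traversal of the k-nearest-neighbour graph, in the spirit of Hoppe et al., 1992) that implicit-space approaches such as this one aim to improve on; it is not the method proposed in the paper, and all names and parameters are illustrative.

        # Baseline sketch: estimate unoriented normals by PCA, then flip them
        # for consistency while traversing the k-nearest-neighbour graph.
        import numpy as np

        def knn(points, k):
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            return np.argsort(d, axis=1)[:, 1:k + 1]        # skip the point itself

        def pca_normals(points, nbrs):
            normals = np.empty_like(points)
            for i, idx in enumerate(nbrs):
                nb = points[idx] - points[idx].mean(axis=0)
                # The normal is the direction of smallest local variance.
                _, _, vt = np.linalg.svd(nb, full_matrices=False)
                normals[i] = vt[-1]
            return normals

        def propagate_orientation(normals, nbrs):
            oriented = normals.copy()
            visited = np.zeros(len(normals), dtype=bool)
            stack, visited[0] = [0], True
            while stack:                                    # traverse the kNN graph
                i = stack.pop()
                for j in nbrs[i]:
                    if not visited[j]:
                        if np.dot(oriented[i], oriented[j]) < 0:
                            oriented[j] = -oriented[j]      # flip for consistency
                        visited[j] = True
                        stack.append(j)
            return oriented

        # Toy example: points on a unit sphere; a consistent orientation points
        # (roughly) all outward or all inward.
        rng = np.random.default_rng(0)
        pts = rng.normal(size=(200, 3))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        nbrs = knn(pts, 10)
        n = propagate_orientation(pca_normals(pts, nbrs), nbrs)
        print(np.mean(np.sign(np.sum(n * pts, axis=1))))    # close to +1 or -1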

    Air Force Institute of Technology Research Report 2020

    This Research Report presents the FY20 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs). Interested individuals may discuss ideas for new research collaborations, potential CRADAs, or research proposals with individual faculty using the contact information in this document

    Object Tracking in Video Sequences

    This Master's thesis compares the SIFT, SURF, and ORB algorithms used in computer vision for object tracking. The aim of the study is to examine the suitability of these algorithms for different types of video and different use cases, such as real-time systems, but also systems where real-time performance is not a requirement. In previous studies the algorithms have been compared using image pairs, but no studies on object tracking using the SIFT, SURF, and ORB algorithms were found. Comparing SIFT and SURF with the newer ORB algorithm also provides new information about ORB's performance, as studies of it are still scarce. The comparison is carried out on four test videos using both the algorithms' default parameters and optimized parameters, and it considers the accuracy and speed of the algorithms as well as their robustness to scale, rotation, and viewpoint changes. The test environment used the Python programming language and the OpenCV computer vision library. The results show that all three algorithms are suitable for object tracking, but the choice of algorithm depends on the application and the properties of the video. For ORB in particular, accuracy improved significantly with optimized parameters. Optimization did little to improve the accuracy of SIFT and SURF, but it reduced their computation time. Of the three algorithms, ORB was the fastest on every video and SIFT was on average the most accurate. In terms of computation time, SURF was the slowest, which may limit its use. Based on the results, ORB with optimized parameters can be recommended for real-time systems, and SIFT for more accurate tracking. SURF had the best accuracy in cases where the video frame had shaken, so it can be recommended in such situations.
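
    In the spirit of the thesis's test environment (Python with OpenCV), the sketch below locates an object template in a single video frame using ORB features, brute-force Hamming matching, and a RANSAC homography; the file names and parameter values are placeholders rather than the thesis's actual configuration.

        # Minimal ORB-based template localization sketch with OpenCV.
        import cv2
        import numpy as np

        template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)   # object to track
        frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)         # current video frame

        orb = cv2.ORB_create(nfeatures=1000)                  # a tunable parameter
        kp1, des1 = orb.detectAndCompute(template, None)
        kp2, des2 = orb.detectAndCompute(frame, None)

        # Hamming distance suits ORB's binary descriptors; cross-check filters matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

        if len(matches) >= 4:                                 # a homography needs 4 point pairs
            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            h, w = template.shape
            corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
            location = cv2.perspectiveTransform(corners, H)   # object outline in the frame
            print(location.reshape(-1, 2))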

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users' acceptance and compliance compared with other sensor technologies, such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper provides.

    Vehicle make and model recognition for intelligent transportation monitoring and surveillance.

    Vehicle Make and Model Recognition (VMMR) has evolved into a significant subject of study due to its importance in numerous Intelligent Transportation Systems (ITS), such as autonomous navigation, traffic analysis, traffic surveillance, and security systems. A highly accurate and real-time VMMR system significantly reduces the overhead cost of resources otherwise required. The VMMR problem is a multi-class classification task with a peculiar set of issues and challenges, such as multiplicity and inter- and intra-make ambiguity among various vehicle makes and models, which need to be solved in an efficient and reliable manner to achieve a highly robust VMMR system. In this dissertation, facing the growing importance of make and model recognition of vehicles, we present a VMMR system that provides very high accuracy rates and is robust to several challenges. We demonstrate that the VMMR problem can be addressed by locating discriminative parts where the most significant appearance variations occur in each category, and by learning expressive appearance descriptors. Given these insights, we consider two data-driven frameworks: a Multiple-Instance Learning (MIL) system using hand-crafted features and an extended application of deep neural networks using MIL. Our approach requires only image-level class labels, and the discriminative parts of each target class are selected in a fully unsupervised manner without any part annotations or segmentation masks, which may be costly to obtain. This advantage makes our system more intelligent, scalable, and applicable to other fine-grained recognition tasks. We constructed a dataset with 291,752 images representing 9,170 different vehicles to validate and evaluate our approach. Experimental results demonstrate that localizing parts and distinguishing their discriminative power for categorization improves the performance of fine-grained categorization. Extensive experiments conducted using our approaches yield superior results on images from our real-world VMMR dataset that were occluded, captured under low illumination, or taken from partial or even non-frontal camera views. The approaches presented herewith provide a highly accurate VMMR system for real-time applications in realistic environments. We also validate our system with a significant application of VMMR to ITS that involves automated vehicular surveillance. We show that our application can provide law enforcement agencies with efficient tools to search for a specific vehicle type, make, or model, and to track the path of a given vehicle using the positions of multiple cameras.
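
    The multiple-instance idea can be sketched as follows: treat each vehicle image as a bag of part patches, score every patch with a shared network, and pool the patch scores into an image-level make/model prediction so that only image-level labels are needed. The PyTorch model below is an assumed toy architecture for illustration, not the dissertation's actual system.

        # Toy MIL classifier: per-patch encoder + max pooling over the bag.
        import torch
        import torch.nn as nn

        class MILMakeModelNet(nn.Module):
            def __init__(self, num_classes, feat_dim=256):
                super().__init__()
                self.patch_encoder = nn.Sequential(           # shared per-patch CNN
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, feat_dim), nn.ReLU(),
                )
                self.classifier = nn.Linear(feat_dim, num_classes)

            def forward(self, bags):
                # bags: (batch, n_patches, 3, H, W) -- one bag of patches per image
                b, n, c, h, w = bags.shape
                feats = self.patch_encoder(bags.view(b * n, c, h, w)).view(b, n, -1)
                patch_logits = self.classifier(feats)          # (batch, n_patches, classes)
                return patch_logits.max(dim=1).values          # MIL max pooling over patches

        # Toy forward pass: 2 images, 8 patches each, 10 vehicle classes.
        model = MILMakeModelNet(num_classes=10)
        logits = model(torch.randn(2, 8, 3, 64, 64))
        print(logits.shape)                                    # torch.Size([2, 10])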

    Tuberculosis diagnosis from pulmonary chest x-ray using deep learning.

    Doctoral Degree. University of KwaZulu-Natal, Durban. Tuberculosis (TB) remains a life-threatening disease, and it is one of the leading causes of mortality in developing countries due to poverty and inadequate medical resources. While treatment for TB is possible, it requires an accurate diagnosis first. Several screening tools are available, and the most reliable is the Chest X-Ray (CXR), but the radiological expertise for accurately interpreting CXR images is often lacking. Over the years, CXRs have been examined manually; this process results in delayed diagnosis, is time-consuming and expensive, and is prone to misdiagnosis, which could further spread the disease among individuals. Consequently, an algorithm could increase diagnosis efficiency, improve performance, reduce the cost of manual screening, and ultimately result in early and timely diagnosis. Several algorithms have been implemented to diagnose TB automatically; however, these algorithms are characterized by low accuracy and sensitivity, leading to misdiagnosis. In recent years, Convolutional Neural Networks (CNNs), a class of deep learning, have demonstrated tremendous success in object detection and image classification tasks. Hence, this thesis proposes an efficient Computer-Aided Diagnosis (CAD) system with high accuracy and sensitivity for TB detection and classification. The proposed model is based firstly on a novel end-to-end CNN architecture, then on a pre-trained deep CNN model that is fine-tuned and employed as a feature extractor from CXR images. Finally, ensemble learning was explored to develop an ensemble model for TB classification. The ensemble model achieved a new state-of-the-art diagnosis accuracy of 97.44% with 99.18% sensitivity, 96.21% specificity, and an AUC of 0.96. These results are comparable with state-of-the-art techniques and outperform existing TB classification models. The author's publications are listed on page iii.
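
    The two building blocks described above can be sketched as follows: a pretrained CNN with its final layer replaced so it can be fine-tuned as a TB/normal classifier or feature extractor, and a simple averaging ensemble over several such networks. The backbone choice, sizes, and ensemble size below are assumptions, not the thesis's exact configuration.

        # Sketch: fine-tunable pretrained backbone + averaging ensemble.
        import torch
        import torch.nn as nn
        from torchvision import models

        def tb_classifier():
            """ResNet-18 backbone (ImageNet weights) with a new 2-class head."""
            net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            net.fc = nn.Linear(net.fc.in_features, 2)          # TB vs. normal
            return net

        class AveragingEnsemble(nn.Module):
            def __init__(self, members):
                super().__init__()
                self.members = nn.ModuleList(members)

            def forward(self, x):
                # Average the softmax probabilities of all ensemble members.
                probs = [torch.softmax(m(x), dim=1) for m in self.members]
                return torch.stack(probs).mean(dim=0)

        ensemble = AveragingEnsemble([tb_classifier() for _ in range(3)])
        cxr_batch = torch.randn(4, 3, 224, 224)                # placeholder CXR images
        print(ensemble(cxr_batch).shape)                        # torch.Size([4, 2])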