Iterative Methods for Visualization of Implicit Surfaces on GPU
The original publication is available at www.springerlink.com. The ray-casting of implicit surfaces on the GPU has been explored in recent years. However, until recently, it was restricted to second-degree surfaces (quadrics). We present an iterative solution to ray cast cubics and quartics on the GPU. Our solution targets efficient implementation, achieving interactive rendering of thousands of surfaces per frame. We give special attention to torus rendering, since the torus is a useful shape in many CAD models. We have tested four different iterative methods, including a novel one, and compare them with the classical tessellation solution.
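The abstract does not say which four iterative root-finders were tested, but a minimal sketch of the general idea, per-ray Newton iteration on the implicit quartic of a torus, can be written down directly. Everything below (function names, the choice of Newton's method, the numerical derivative) is an illustrative assumption, not the paper's implementation:

```python
# Sketch: Newton iteration for a ray/torus intersection, the kind of
# per-fragment iterative scheme a GPU quartic ray-caster could use.
# A torus with major radius R and minor radius r, centered at the origin
# with the z axis as symmetry axis, is the zero set of the quartic
#   f(p) = (|p|^2 + R^2 - r^2)^2 - 4 R^2 (p.x^2 + p.y^2).
def ray_torus_newton(o, d, R, r, t0, iters=20):
    """Refine a ray parameter t so that o + t*d lies on the torus."""
    def f(t):
        x, y, z = (o[i] + t * d[i] for i in range(3))
        s = x * x + y * y + z * z + R * R - r * r
        return s * s - 4.0 * R * R * (x * x + y * y)

    def fprime(t, h=1e-6):            # central-difference derivative suffices here
        return (f(t + h) - f(t - h)) / (2.0 * h)

    t = t0
    for _ in range(iters):
        t -= f(t) / fprime(t)         # Newton step: t_{k+1} = t_k - f(t_k)/f'(t_k)
    return t

# Ray along the x axis from (5,0,0) towards the origin; torus R=2, r=0.5.
# The nearest hit is at x = R + r = 2.5, i.e. t = 2.5.
t_hit = ray_torus_newton((5.0, 0.0, 0.0), (-1.0, 0.0, 0.0), 2.0, 0.5, t0=2.0)
```

In a real shader the same loop runs per fragment with a bounding-volume entry point as the initial guess t0; convergence behaviour near grazing rays is what separates the competing iterative methods.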
Detection of hemorrhage and exudates in retinal fundus image of diabetic patients
Diabetes is a disease that interferes with the body's ability to use and store sugar, which can cause many health problems. Over time, diabetes affects the circulatory system, including the retina. As diabetes progresses, a patient's vision may deteriorate, leading to Diabetic Retinopathy (DR), which can ultimately cause blindness. Early detection of the disease is therefore important. There are several ways to diagnose DR, and slit-lamp examination is one of the traditional methods used by ophthalmologists. This method requires the clinician to look directly into the patient's eye through an ophthalmoscope or slit-lamp machine to determine whether the eyes contain any abnormal features that indicate DR. However, this is not the most effective method: any human can become tired and drowsy, including doctors, and such natural human limitations can affect the diagnosis and cause false analysis results. Moreover, individual clinicians do not always reach the same opinion and judgment. Therefore, this project proposes to assist clinicians in identifying DR.
Two main abnormal features form in the retina of a diabetic retinopathy patient: hemorrhages and exudates. Hemorrhages form as a result of leakage from retinal blood vessels and have a red colour similar to the vessels, whereas exudates are yellow-white deposits on the retina formed by leakage of blood from abnormal vessels. This thesis focuses on developing a Fundus Image Analysis (FIA) system that extracts the anatomical features and both abnormal features of the retina in order to diagnose the disease. The research is carried out in three phases. In the first phase, an automated system is developed to distinguish the anatomical features of the retina from the abnormal features. This phase, called the Masking Phase, combines several image processing techniques, including specifying a polygonal region of interest (ROIPOLY), contrast-limited adaptive histogram equalization (CLAHE), morphological opening with structuring elements, median filtering, and thresholding. The second phase is the Hemorrhage Extraction phase, in which a saturation-adjustment method, morphological operations, and a regional-minima technique are proposed. The third and last phase is the Exudates Extraction phase, in which edge detection, gradient magnitude, and region-of-interest techniques are combined to form a complete working algorithm. The images used in this project are retinal fundus images taken from a public database (DIARETDB1, the Standard Diabetic Retinopathy Database) for benchmarking diabetic retinopathy detection from digital images. Using this database and its defined testing protocol, results from different methods can be compared. The results show that the applied method is able to detect exudate features and is capable of detecting and distinguishing hemorrhages from blood vessels. The final results show an accuracy of 48.3% for detecting images with hemorrhages and 68.5% for images with exudates.
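The bright-lesion side of such a pipeline can be reduced to "normalize, threshold, then morphologically open to drop speckle". The sketch below is not the thesis code; the function names, the 3x3 structuring element, and the threshold value are all illustrative assumptions:

```python
import numpy as np

def binary_erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is set."""
    padded = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

def binary_dilate(mask):
    """3x3 dilation: a pixel is set if any neighbour is set."""
    padded = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

def exudate_candidates(img, thresh=0.8):
    """Normalize to [0,1], keep bright pixels, then open (erode + dilate)."""
    norm = (img - img.min()) / (img.max() - img.min())
    mask = norm > thresh
    return binary_dilate(binary_erode(mask))  # opening removes isolated pixels

# Toy "fundus" image: a 4x4 bright patch (lesion) plus one bright speckle pixel.
img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0   # candidate lesion: survives the opening
img[12, 12] = 1.0     # isolated speckle: removed by the opening
mask = exudate_candidates(img)
```

A full system adds the masking phase first (optic disc and vessel suppression), since the optic disc is bright enough to pass any intensity threshold for exudates.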
Air Force Institute of Technology Research Report 2020
This Research Report presents the FY20 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs). Interested individuals may discuss ideas for new research collaborations, potential CRADAs, or research proposals with individual faculty using the contact information in this document.
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Explainability has been widely stated as a cornerstone of the responsible and
trustworthy use of machine learning models. With the ubiquitous use of Deep
Neural Network (DNN) models expanding to risk-sensitive and safety-critical
domains, many methods have been proposed to explain the decisions of these
models. Recent years have also seen concerted efforts that have shown how such
explanations can be distorted (attacked) by minor input perturbations. While
there have been many surveys that review explainability methods themselves,
there has been no effort hitherto to assimilate the different methods and
metrics proposed to study the robustness of explanations of DNN models. In this
work, we present a comprehensive survey of methods that study, understand,
attack, and defend explanations of DNN models. We also present a detailed
review of different metrics used to evaluate explanation methods, as well as
describe attributional attack and defense methods. We conclude with lessons and
take-aways for the community towards ensuring robust explanations of DNN model
predictions. (Under review: ACM Computing Surveys, "Special Issue on Trustworthy AI".)
Surface and Sub-Surface Analyses for Bridge Inspection
The development of bridge inspection solutions has been discussed in the recent past. In this dissertation, significant developments and improvements on the state of the art in bridge inspection using multiple sensors (e.g. ground penetrating radar (GPR) and visual sensors) are proposed. The first part of this research (discussed in chapter 3) focuses on developing effective and novel methods for rebar detection and localization for sub-surface inspection of steel rebars in bridge decks. The data has been collected with a Ground Penetrating Radar (GPR) sensor on real bridge decks. In this regard, a number of different approaches have been successively developed that continue to improve the state of the art in this particular research area. The second part (discussed in chapter 4) of this research deals with the development of an automated system for steel bridge defect detection using a Multi-Directional Bicycle Robot. The training data has been acquired from actual bridges in Vietnam, and validation is performed on data collected with the Bicycle Robot from an actual bridge located on Highway 80 in Lovelock, Nevada, USA. A number of different proposed methods are discussed in chapter 4. The final chapter of the dissertation concludes the findings from the different parts and discusses ways of improving on the existing work in the near future.
Video interaction using pen-based technology
Dissertation for the degree of Doctor in Informatics.
Video can be considered one of the most complete and complex media, and its manipulation
is still a difficult and tedious task. This research applies pen-based technology to
video manipulation, with the goal of improving this interaction. Despite the human
familiarity with pen-based devices, how they can be used for video interaction, in order
to improve it, making it more natural while at the same time fostering the user's creativity,
is still an open question.
Two types of interaction with video were considered in this work: video annotation
and video editing. Each interaction type allows the study of one of the interaction modes
of using pen-based technology: indirectly, through digital ink, or directly, through pen
gestures or pressure. This research contributes two approaches for pen-based video
interaction: pen-based video annotations and video as ink.
The first uses pen-based annotations combined with motion tracking algorithms, in
order to augment video content with sketches or handwritten notes. It aims to study how
pen-based technology can be used to annotate moving objects and how to maintain the
association between a pen-based annotation and the annotated moving object.
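The dissertation's actual tracking algorithms are not specified in this abstract; a minimal sketch of the association idea, store the annotation's offset from the tracked object once, then re-apply it in every frame, might look as follows, with a deliberately toy brightest-pixel "tracker" standing in for a real one:

```python
import numpy as np

def track_object(frame):
    """Toy tracker: position (row, col) of the brightest pixel."""
    return np.unravel_index(np.argmax(frame), frame.shape)

def annotate_sequence(frames, annotation_pos):
    """Anchor annotation_pos to the object in frames[0]; follow it afterwards."""
    obj0 = np.array(track_object(frames[0]))
    offset = np.array(annotation_pos) - obj0        # fixed object -> note offset
    return [tuple(np.array(track_object(f)) + offset) for f in frames]

# Object moves one pixel right per frame; the annotation was drawn 2 px above it.
frames = []
for t in range(3):
    f = np.zeros((10, 10))
    f[5, 3 + t] = 1.0       # the moving "object"
    frames.append(f)
positions = annotate_sequence(frames, annotation_pos=(3, 3))
```

The interesting failure modes, occlusion, tracker drift, and the object leaving the frame, are exactly what distinguishes a real motion-tracking pipeline from this sketch.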
The second concept replaces digital ink with video content, studying how pen gestures
and pressure can be used in video editing and what kinds of changes are needed in the
interface in order to provide a more familiar and creative interaction in this usage context.
This work was partially funded by the UTAustin-Portugal Digital Media Program
(Ph.D. grant SFRH/BD/42662/2007 - FCT/MCTES); by the HP Technology for Teaching
Grant Initiative 2006; by the project "TKB - A Transmedia Knowledge Base for contemporary
dance" (PTDC/EAT/AVP/098220/2008, funded by FCT/MCTES); and by CITI/DI/FCT/UNL (PEst-OE/EEI/UI0527/2011).
Air Force Institute of Technology Research Report 2019
This Research Report presents the FY19 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs). Interested individuals may discuss ideas for new research collaborations, potential CRADAs, or research proposals with individual faculty using the contact information in this document.
Pre-Trained Driving in Localized Surroundings with Semantic Radar Information and Machine Learning
Along the signal-processing chain from radar detections to vehicle control, this thesis discusses a semantic radar segmentation, a radar SLAM built on top of it, and an autonomous parking function realized from their combination. The radar segmentation of the (static) environment is achieved by a radar-specific neural network, RadarNet. This segmentation enables the development of the semantic radar graph-SLAM SERALOC. On the basis of the semantic radar SLAM map, an exemplary autonomous parking functionality is implemented in a real test vehicle.
Along a recorded reference path, the function parks exclusively on the basis of radar perception, with previously unattained positioning accuracy.
In a first step, a dataset of 8.2 · 10^6 point-wise semantically labeled radar point clouds over a distance of 2507.35 m is generated. No comparable datasets of this annotation level and radar specification are publicly available. Supervised training of the semantic segmentation RadarNet achieves 28.97% mIoU on six classes.
In addition, an automated radar labeling framework, SeRaLF, is presented, which supports radar labeling multimodally by means of reference cameras and LiDAR.
For coherent mapping, a radar-signal pre-filter based on an activation map is designed, which suppresses noise and other dynamic multipath reflections. A graph-SLAM front end specially adapted to radar, with radar-odometry edges between submaps and semantically separate NDT registration, assembles the pre-filtered semantic radar scans into a consistent metric map. Mapping accuracy and data association are thereby increased, and the first semantic radar graph-SLAM for arbitrary static environments is realized.
Integrated into a real test vehicle, the interplay of the live RadarNet segmentation and the semantic radar graph-SLAM is evaluated by means of a purely radar-based autonomous parking functionality. Averaged over 42 autonomous parking maneuvers (∅ 3.73 km/h) with an average maneuver length of ∅ 172.75 m, a median absolute pose error of 0.235 m and an end-pose error of 0.2443 m are achieved, outperforming comparable radar localization results by ≈ 50%. The map accuracy of changed, re-mapped places over a mapping distance of ∅ 165 m yields ≈ 56% map consistency with a deviation of ∅ 0.163 m. For autonomous parking, a given trajectory planner and control approach were used.
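The core of any graph-SLAM back end, including a radar one, is a least-squares problem over poses: odometry edges connect successive poses, and loop-closure edges pull accumulated drift back into shape. The 1-D toy problem below is an illustrative assumption, not the thesis implementation (which works on 2-D/3-D poses with NDT registration residuals), but the algebra is the same:

```python
import numpy as np

# 1-D pose graph: x0 fixed at 0, unknowns (x1, x2, x3).
# Three odometry measurements of 1.0 and one loop-closure
# measurement x3 - x0 = 3.3 (the odometry drifted by 0.3 in total).
def solve_pose_graph():
    # Each row of J is one edge's Jacobian w.r.t. (x1, x2, x3); z its measurement.
    J = np.array([
        [ 1.0,  0.0, 0.0],   # edge x1 - x0 = 1.0  (x0 is fixed)
        [-1.0,  1.0, 0.0],   # edge x2 - x1 = 1.0
        [ 0.0, -1.0, 1.0],   # edge x3 - x2 = 1.0
        [ 0.0,  0.0, 1.0],   # loop closure x3 - x0 = 3.3
    ])
    z = np.array([1.0, 1.0, 1.0, 3.3])
    # Linear-Gaussian case: a single normal-equations solve gives the optimum.
    x, *_ = np.linalg.lstsq(J, z, rcond=None)
    return x

poses = solve_pose_graph()
# The 0.3 of loop-closure error is spread evenly over the three odometry edges,
# giving poses (1.075, 2.15, 3.225) instead of the drifted (1, 2, 3).
```

Real back ends iterate this linearize-and-solve step, because pose composition on SE(2)/SE(3) is nonlinear, and weight each edge by the inverse covariance of its registration.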