
    Industrial X-ray Image Analysis with Deep Neural Networks Robust to Unexpected Input Data

    X-ray inspection is often an essential part of quality control within quality-critical manufacturing industries. In such industries, X-ray image interpretation is resource intensive and typically conducted by humans. A higher level of automation would be preferable, and recent advances in artificial intelligence (e.g., deep learning) have been proposed as solutions. However, such solutions are typically overconfident when subjected to new data far from the training data, so-called out-of-distribution (OOD) data; we claim that safe automatic interpretation of industrial X-ray images, as part of quality control of critical products, requires robust confidence estimation with respect to OOD data. We explored whether such a confidence estimate, an OOD detector, can be achieved by explicit modeling of the training data distribution, that is, the accepted images. For this, we derived an autoencoder model trained unsupervised on a public dataset of X-ray images of metal fusion welds and on synthetic data. We explicitly demonstrate the dangers of a conventional supervised-learning-based approach and compare it to the OOD detector. We achieve true positive rates of around 90% at false positive rates of around 0.1% on samples similar to the training data, and we correctly detect some example OOD data.
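    A rough sketch of the reconstruction-error idea behind such an OOD detector is given below. It is a generic PyTorch example; the architecture, image size, and decision threshold are placeholders for illustration, not the paper's actual model.

        import torch
        import torch.nn as nn

        class ConvAutoencoder(nn.Module):
            # Small convolutional autoencoder; trained only on "accepted" images.
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def ood_score(model, x):
            # Per-image mean squared reconstruction error; a high error suggests
            # the image lies far from the training (in-distribution) data.
            with torch.no_grad():
                recon = model(x)
            return ((x - recon) ** 2).flatten(1).mean(dim=1)

        model = ConvAutoencoder().eval()
        batch = torch.rand(4, 1, 64, 64)   # stand-in for grayscale X-ray patches
        threshold = 0.05                   # placeholder; calibrate on held-out in-distribution
                                           # data, e.g., its 99.9th percentile for ~0.1% FPR
        is_ood = ood_score(model, batch) > threshold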

    Developing algorithms for the analysis of retinal Optical Coherence Tomography images

    Vision loss, affecting more than 42 million people in the United States, is one of the major challenges facing today's health-care industry and medical science. Early detection of retinal diseases will dramatically reduce the risk of vision loss. Optical Coherence Tomography (OCT) is a relatively new imaging technique of great importance in the identification of ocular, and especially retinal, diseases; efficient analysis of OCT images therefore provides several advantages. In this thesis, we propose a series of image processing and machine learning techniques for the automated analysis of OCT images. The methodology proposed in chapter 2 localizes different retinal layers using a modified version of active contour models. In chapter 3, we propose a method that classifies OCT images according to different pathological conditions using methods such as transfer learning and new texture detection techniques. The proposed methods, along with the clinically meaningful extracted characteristics, offer a number of applications and benefits: they save ophthalmologists a considerable amount of time, provide more efficient and accurate indices for the diagnosis and treatment of different ocular diseases, and ultimately reduce the overall risk of vision loss.
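    As a minimal sketch of the transfer-learning step mentioned for chapter 3, the following Python example fine-tunes only the classification head of a pretrained torchvision backbone; the backbone choice, class labels, and hyperparameters are assumptions for illustration, not the thesis's actual setup.

        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 4   # hypothetical pathology classes

        # Load an ImageNet-pretrained backbone and freeze its feature extractor.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for param in model.parameters():
            param.requires_grad = False
        # Replace the final layer with a new, trainable classification head.
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        def train_step(images, labels):
            # One gradient step on the new head only.
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()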

    Evaluating retinal blood vessels' abnormal tortuosity in digital image fundus

    Abnormal tortuosity of retinal blood vessels is one of the early indicators of a number of vascular diseases, so early detection and evaluation of this phenomenon can provide a window for early diagnosis and treatment. Currently, clinicians rely on a qualitative gross scale to estimate the degree of vessel tortuosity. There have been many attempts to develop an accurate automated measure of tortuosity, yet none of these measures has gained universal acceptance. This can be attributed to the fact that descriptions and definitions of retinal vessel tortuosity are ambiguous and non-standard; in addition, unified public datasets for different diseases are not readily available. I propose a tortuosity evaluation framework to quantify the tortuosity of arteries and veins in two-dimensional colour fundus images. The quantification methods within the framework include retinal vessel morphology analysis based on the measurement of 66 features of blood vessels, grouped as follows: 1) structural properties, 2) distance-approach features, 3) curvature-approach features, 4) combined-approach features, and 5) signal-approach features. The features in groups 1 to 4 are derived from the literature; group 5 comprises new features that I have proposed and developed in this thesis. These features have been evaluated using a manually graded retinal tortuosity dataset as a control set. I have also built three tortuosity datasets, each of which contains two manual gradings: 1) a general tortuosity dataset, 2) a diabetic retinopathy dataset, and 3) a hypertensive retinopathy dataset. In addition, I have investigated the differences in tortuosity patterns between hypertensive and diabetic retinopathy, using the new pathology-based datasets. These are the major contributions of this thesis.
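    For context, one classic distance-approach measure is the ratio of a vessel's arc length to its chord length; the short Python sketch below computes it on a sampled centreline. This is a generic textbook measure shown for illustration and is not claimed to be one of the thesis's 66 features.

        import numpy as np

        def arc_chord_tortuosity(points):
            # points: (N, 2) array of (x, y) samples along one vessel centreline.
            points = np.asarray(points, dtype=float)
            segments = np.diff(points, axis=0)
            arc_length = np.sum(np.linalg.norm(segments, axis=1))
            chord_length = np.linalg.norm(points[-1] - points[0])
            return arc_length / chord_length   # 1.0 for a straight vessel, > 1 when tortuous

        # Example: a gently sine-shaped vessel gives a ratio of roughly 1.06.
        x = np.linspace(0, 100, 200)
        y = 5 * np.sin(x / 10)
        print(arc_chord_tortuosity(np.column_stack([x, y])))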

    Socio-Humanitarian Aspects of the Development of Modern Society (Соціально-гуманітарні аспекти розвитку сучасного суспільства)


    You Cannot be Serious: The Conceptual Innovator as Trickster

    In 1917, when the American Society of Independent Artists refused to exhibit a porcelain urinal that Marcel Duchamp had submitted to them as a sculpture, a friend of Duchamp's wrote: "There are those who anxiously ask: 'Is he serious or is he joking?' Perhaps he is both!" Duchamp's behavior - making a provocative and radically innovative artistic gesture, then declining to explain his motives in the face of accusations that it was a hoax - became a model that subsequently inspired a series of iconoclastic young conceptual innovators. These include many of the most important artists of the twentieth century, and their line of descent runs from Joseph Beuys, Andy Warhol, Yves Klein, and Piero Manzoni to Gilbert & George, Jeff Koons, Tracey Emin, and Damien Hirst. The ambiguity of these artists' actions has triggered heated and persistent debates over the sincerity of their work, which have increased the effectiveness of the work's attacks on existing artistic conventions while also advancing the artists' reputations and careers. The model of the conceptual artist as trickster is a novel feature of the innovative conceptual art of the past century, and it has produced a type of conceptual art that is more personal than nearly all other forms of art: we can never look at their work without thinking not only of their ideas (what is the artistic significance of a manufactured object purchased at a hardware store, or a silkscreen of a photograph taken from a magazine?) but also of their attitudes (was Fountain or Fat Chair really intended to be taken seriously?).

    Proceedings of the 2014 Berry Summer Thesis Institute

    Thanks to a gift from the Berry Family Foundation and the Berry family, the University Honors Program launched the Berry Summer Thesis Institute in 2012. The institute introduces students in the University Honors Program to intensive research, scholarship opportunities and professional development. Each student pursues a 12-week summer thesis research project under the guidance of a UD faculty mentor. This contains the product of the students' research.

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with terabytes of imaging data created within a single day. With gentler and higher-performance imaging now possible, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual-reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.
    Contents: Part I: Introduction (Fluorescence Microscopy; Introduction to Visual Processing; A Short Introduction to Cross Reality; Eye Tracking and Gaze-based Interaction). Part II: VR and AR for Systems Biology (scenery; Rendering; Input Handling and Integration of External Hardware; Distributed Rendering; Miscellaneous Subsystems; Future Development Directions). Part III: Case Studies (Bionic Tracking: Using Eye Tracking for Cell Tracking; Towards Interactive Virtual Reality Laser Ablation; Rendering the Adaptive Particle Representation; sciview: Integrating scenery into ImageJ2 & Fiji). Part IV: Conclusions and Outlook. Appendices: questionnaires and correlations for the VR ablation and Bionic Tracking user studies.

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range applications. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
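    To make the event-stream output concrete, the toy Python snippet below represents each event as a (timestamp, x, y, polarity) record and accumulates signed polarities into a 2D frame over a time window, one simple way to visualise a stream; the sensor resolution and array layout are assumptions for the example.

        import numpy as np

        WIDTH, HEIGHT = 240, 180   # e.g., a DAVIS240-class sensor

        # Each event: timestamp in microseconds, pixel coordinates, polarity in {-1, +1}.
        events = np.array(
            [(10, 5, 7, 1), (12, 5, 8, -1), (25, 6, 7, 1)],
            dtype=[("t", np.int64), ("x", np.int16), ("y", np.int16), ("p", np.int8)],
        )

        def accumulate(events, t_start, t_end):
            # Sum event polarities per pixel within [t_start, t_end) microseconds.
            frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
            window = events[(events["t"] >= t_start) & (events["t"] < t_end)]
            np.add.at(frame, (window["y"], window["x"]), window["p"])
            return frame

        frame = accumulate(events, 0, 50)   # a crude "event frame" for inspection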

    The flexibility of myosin 7a

    Myosin 7a is a molecular motor found in the hair cells of the ear and the photoreceptor cells of the eye. Myosin 7a comprises an actin-binding motor domain; a lever, composed of 5 IQ motifs that can potentially bind 5 light chains, followed by a single alpha-helical (SAH) domain; and a tail composed of 2 MyTH4-FERM domains. The lever is an essential mechanical element in myosin 7a function, but an understanding of its mechanical properties, and how these derive from its substructure, is lacking. It has been observed in vitro that myosin 7a is able to regulate its activity through a head-tail interaction, but how the flexibility of the sub-domains of the lever allows the molecule to fold up is not completely understood. To address this, the first aim of this study was to look for evidence of novel light chain binding partners of myosin 7a, which revealed calmodulin to be the preferred light chain. My second aim was to study the structure and flexibility of the lever of full-length myosin 7a using single-particle image processing of negative-stain electron microscopy (EM) images. Image averaging revealed the lever to be much shorter than expected. Additionally, there was evidence of thermally driven flexing at the motor-lever junction; a stiffness of 78 pN·nm·rad⁻² was inferred for this flexing, which represents a significant compliance in the head. An analysis of lever bending, by monitoring the decay of tangent-tangent correlations of the lever shapes, yielded a persistence length of 38 ± 3 nm. Finally, long-timescale molecular dynamics (MD) simulations were compared with a novel coarse-grained (CG) simulation technique called Fluctuating Finite Element Analysis (FFEA), which treats proteins as visco-elastic continua subject to thermal noise, to probe the flexibility of myosin 7a. FFEA allows simulations long enough, at a much lower computational cost than corresponding all-atom MD simulations, for myosin 7a to explore its full range of configurations. Flexibility data extracted from the all-atom MD simulations gave a bending stiffness of 60.5 pN·nm² for the SAH domain, with reasonable overlap of the major modes of motion between the all-atom and CG simulation types.
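    A sketch of the tangent-tangent correlation analysis behind such a persistence length estimate is given below; it assumes equally spaced points traced along the lever in 2D images and fits the planar decay form <cos θ(s)> = exp(-s / (2·Lp)). The helper names and fitting details are illustrative and may differ from the thesis's actual procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def tangent_correlation(contour, max_lag):
            # contour: (N, 2) equally spaced points traced along one lever.
            tangents = np.diff(contour, axis=0)
            tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
            corr = [np.mean(np.sum(tangents[:len(tangents) - lag] * tangents[lag:], axis=1))
                    for lag in range(max_lag)]
            return np.array(corr)   # corr[0] == 1 by construction

        def fit_persistence_length(separations_nm, mean_corr):
            # Fit <cos θ(s)> = exp(-s / (2·Lp)), the form for shapes confined to a plane.
            decay = lambda s, lp: np.exp(-s / (2.0 * lp))
            (lp,), _ = curve_fit(decay, separations_nm, mean_corr, p0=[30.0])
            return lp   # persistence length in nm

        # In practice, the correlation curve is averaged over many particle images before fitting.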