86 research outputs found

    Mobile-Based risk assessment of diabetic retinopathy by image processing

    Abry Christian, Abry-Deffayet Dominique. Pour une évaluation de la spécificité lexicale d'une région : la Savoie. In: Le Monde alpin et rhodanien. Revue régionale d'ethnologie, n°1/1981. Les régions de la France. Colloque de la Société d'Ethnologie Française. Grenoble, 7-8 December 1978. pp. 111-126

    Vision-Based Autonomous Robotic Floor Cleaning in Domestic Environments

    Fleer DR. Vision-Based Autonomous Robotic Floor Cleaning in Domestic Environments. Bielefeld: Universität Bielefeld; 2018

    Toward Global Localization of Unmanned Aircraft Systems using Overhead Image Registration with Deep Learning Convolutional Neural Networks

    Global localization, in which an unmanned aircraft system (UAS) estimates its unknown current location without access to its take-off location or other locational data from its flight path, is a challenging problem. This research brings together aspects from the remote sensing, geoinformatics, and machine learning disciplines by framing the global localization problem as a geospatial image registration problem in which overhead aerial and satellite imagery serve as a proxy for UAS imagery. A literature review is conducted covering the use of deep learning convolutional neural networks (DLCNNs) with global localization and other related geospatial imagery applications. Differences between geospatial imagery taken from the overhead perspective and terrestrial imagery are discussed, as well as difficulties in using geospatial overhead imagery for image registration due to a lack of suitable machine learning datasets. Geospatial analysis is conducted to identify suitable areas for future UAS imagery collection. One of these areas, Jerusalem northeast (JNE), is selected as the area of interest (AOI) for this research. Multi-modal, multi-temporal, and multi-resolution geospatial overhead imagery is aggregated from a variety of publicly available sources and processed to create a controlled image dataset called Jerusalem northeast rural controlled imagery (JNE RCI). JNE RCI is tested on coarse-grained image registration with the handcrafted feature-based methods SURF and SIFT and with a non-handcrafted feature-based pre-trained, fine-tuned VGG-16 DLCNN. Both the handcrafted and non-handcrafted feature-based methods had difficulty with the coarse-grained registration process. The format of JNE RCI is determined to be unsuitable for coarse-grained registration with DLCNNs, and the process to create a new supervised machine learning dataset, Jerusalem northeast machine learning (JNE ML), is covered in detail.
A multi-resolution grid-based approach is used, where each grid-cell ID is treated as the supervised training label for that respective resolution. Pre-trained, fine-tuned VGG-16 DLCNNs, two custom-architecture two-channel DLCNNs, and a custom chain DLCNN are trained on JNE ML for each spatial resolution of subimages in the dataset. All DLCNNs used could more accurately coarsely register the JNE ML subimages than the pre-trained, fine-tuned VGG-16 DLCNN on JNE RCI. This shows the process for creating JNE ML is valid and suitable for applying machine learning to the coarse-grained registration problem. All custom-architecture two-channel DLCNNs and the custom chain DLCNN were able to more accurately coarsely register the JNE ML subimages than the pre-trained, fine-tuned VGG-16 approach. Both the two-channel custom DLCNNs and the chain DLCNN generalized well to new imagery that these networks had not previously been trained on. Through the contributions of this research, a foundation is laid for future work on the UAS global localization problem within the rural forested JNE AOI.
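The multi-resolution grid labelling described above can be sketched as follows: each sub-image centre is mapped to a grid-cell ID at every resolution, and that ID becomes the supervised classification label for the corresponding network. The bounds, coordinates, and resolutions here are illustrative assumptions, not values from the JNE ML dataset.

```python
def grid_cell_id(x, y, bounds, cells_per_side):
    """Map a point inside `bounds` to a row-major grid-cell ID."""
    x_min, y_min, x_max, y_max = bounds
    # clamp to the last cell so points on the upper edge stay in range
    col = min(int((x - x_min) / (x_max - x_min) * cells_per_side), cells_per_side - 1)
    row = min(int((y - y_min) / (y_max - y_min) * cells_per_side), cells_per_side - 1)
    return row * cells_per_side + col

def multi_resolution_labels(x, y, bounds, resolutions=(4, 8, 16)):
    """One label per resolution: coarse-to-fine localisation targets."""
    return {n: grid_cell_id(x, y, bounds, n) for n in resolutions}

# a sub-image centred at (0.70, 0.25) in a normalised AOI gets one label per grid
labels = multi_resolution_labels(0.70, 0.25, (0.0, 0.0, 1.0, 1.0))
```

A classifier per resolution (or a chained network, as in the thesis) then predicts these cell IDs, refining the estimated location from coarse to fine.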

    Virtuaalse proovikabiini 3D kehakujude ja roboti juhtimisalgoritmide uurimine

    The electronic version of this thesis does not include the publications. Virtual fitting constitutes a fundamental element of the developments expected to raise the commercial prosperity of online garment retailers to a new level, as it is expected to reduce the manual labour and physical effort required in the fitting phase. Nevertheless, most previously proposed computer vision and graphics methods have failed to accurately and realistically model the human body, especially when it comes to 3D modelling of the whole body, which requires large amounts of data and computation.
The failure is largely related to the huge data and calculation requirements, which in turn stem mainly from an inability to properly account for simultaneous variations in the body surface. In addition, most of the foregoing techniques cannot render realistic movement representations in real time. This project intends to overcome the aforementioned shortcomings so as to satisfy the requirements of a virtual fitting room. The proposed methodology consists of scanning and performing specific analyses of both the user's body and the prospective garment to be virtually fitted; modelling, extracting measurements and assigning reference points on them; segmenting the 3D visual data imported from the mannequins; and, finally, superimposing, adapting and depicting the resulting garment model on the user's body. The project gathers sufficient visual data using a 3D laser scanner and the Kinect optical camera and manages it in the form of a usable database, in order to experimentally implement the algorithms devised. The latter provide a realistic visual representation of the garment on the body and enhance the size-advisor system in the context of the virtual fitting room under study
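One step of the measurement-extraction stage above can be illustrated with a minimal sketch: estimating a girth (e.g. a waist circumference) by slicing the scanned point cloud at a given height and summing the perimeter of the slice. The point data and slice tolerance are invented for the example; a real scan would come from the laser scanner or Kinect mentioned in the abstract, and the angular ordering assumes a roughly convex cross-section, which holds for torso slices.

```python
import math

def slice_circumference(points, height, tol=0.01):
    """Perimeter of the horizontal slice of (x, y, z) `points` near z = `height`."""
    ring = [(x, y) for x, y, z in points if abs(z - height) <= tol]
    if len(ring) < 3:
        return 0.0
    cx = sum(p[0] for p in ring) / len(ring)
    cy = sum(p[1] for p in ring) / len(ring)
    # order the slice points by angle around the centroid before summing edges
    ring.sort(key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    return sum(math.dist(ring[i], ring[(i + 1) % len(ring)]) for i in range(len(ring)))

# sanity check: a unit circle sampled at z = 1.0 has perimeter close to 2*pi
circle = [(math.cos(t / 100 * 2 * math.pi), math.sin(t / 100 * 2 * math.pi), 1.0)
          for t in range(100)]
c = slice_circumference(circle, 1.0)
```

Such per-slice measurements, taken at reference points assigned on the body model, are what a size-advisor system would compare against garment dimensions.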

    Elasticity mapping for breast cancer diagnosis using tactile imaging and auxiliary sensor fusion

    Tactile Imaging (TI) is a technology utilising capacitive pressure sensors to image elasticity distributions within soft tissues such as the breast for cancer screening. TI aims to solve critical problems in the cancer screening pathway, particularly: low sensitivity of manual palpation, patient discomfort during X-ray mammography, and the poor quality of breast cancer referral forms between primary and secondary care facilities. TI is effective in identifying ‘non-palpable’, early-stage tumours, with basic differential ability that reduced unnecessary biopsies by 21% in repeated clinical studies. TI has its limitations, particularly: the measured hardness of a lesion is relative to the background hardness, and lesion location estimates are subjective and prone to operator error. TI can achieve more than simple visualisation of lesions and can act as an accurate differentiator and material analysis tool with further metric development and acknowledgement of error sensitivities when transferring from phantom to clinical trials. This thesis explores and develops two methods, specifically inertial measurement and IR vein imaging, for determining the breast background elasticity, and registering tactile maps for lesion localisation, based on fusion of tactile and auxiliary sensors. These sensors enhance the capabilities of TI, with background tissue elasticity determined with MAE < 4% over tissues in the range 9 kPa – 90 kPa and probe trajectory across the breast measured with an error ratio < 0.3%, independent of applied load, validated on silicone phantoms. A basic TI error model is also proposed, maintaining tactile sensor stability and accuracy with 1% settling times < 1.5s over a range of realistic operating conditions. These developments are designed to be easily implemented into commercial systems, through appropriate design, to maximise impact, providing a stable platform for accurate tissue measurements. 
This will allow clinical TI to further reduce benign referral rates in a cost-effective manner, by elasticity differentiation and lesion classification in future works.
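A hedged sketch of how the reported background-elasticity error could be scored: mean absolute error (MAE) of estimated versus reference stiffness, expressed as a percentage of the reference, over phantoms spanning the 9 kPa – 90 kPa range quoted above. The sample readings are invented for illustration; the thesis validates against silicone phantoms.

```python
def elasticity_mae_percent(estimated_kpa, reference_kpa):
    """Mean absolute percentage error between paired stiffness readings."""
    errors = [abs(e - r) / r * 100.0 for e, r in zip(estimated_kpa, reference_kpa)]
    return sum(errors) / len(errors)

reference = [9.0, 30.0, 60.0, 90.0]   # phantom ground-truth stiffness, kPa
estimated = [9.3, 29.1, 61.5, 87.8]   # hypothetical probe estimates, kPa
mae = elasticity_mae_percent(estimated, reference)  # should sit below the 4% target
```

Scoring the error as a percentage of the reference keeps the metric comparable across the soft (9 kPa) and stiff (90 kPa) ends of the phantom range.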

    Personal Identification Based on Live Iris Image Analysis

    EThOS - Electronic Theses Online Service, GB, United Kingdom

    Human-Centric Machine Vision

    Recently, algorithms for processing visual information have evolved greatly, providing efficient and effective solutions that cope with the variability and complexity of real-world environments. These achievements have led to the development of Machine Vision systems that go beyond typical industrial applications, where environments are controlled and tasks are very specific, towards innovative solutions that address the everyday needs of people. Human-Centric Machine Vision can help solve the problems raised by the needs of our society, e.g. security and safety, health care, medical imaging, and human-machine interfaces. In such applications it is necessary to handle changing, unpredictable and complex situations, and to account for the presence of humans.

    Model based 3D vision synthesis and analysis for production audit of installations.

    One of the challenging problems in the aerospace industry is to design an automated 3D vision system that can sense the installation components in an assembly environment and check that certain safety constraints are duly respected. This PhD thesis describes a concept application to aid a safety engineer in performing an audit of a production aircraft against safety-driven installation requirements such as segregation, proximity, orientation and trajectory. The capability is achieved in the following steps. The first step is to capture images of a product and measure distances between datum points within the product, with or without reference to a planar surface. This gives the safety engineer a means to perform measurements on a set of captured images of the equipment of interest. The next step is to reconstruct a digital model of the fabricated product, using the multiple captured images to reposition parts according to the actual build. The safety-related installation constraints are then projected onto the 3D digital reconstruction, respecting the original intent of the constraints as defined in the digital mock-up. The differences between the 3D reconstruction of the actual product and the design-time digital mock-up of the product are identified. Finally, the differences and non-conformances relevant to the safety-driven installation requirements are identified with reference to the original safety-requirement intent. Together, these steps give the safety engineer the ability to overlay a digital reconstruction that is as true to the fabricated product as possible, so that they can see how the product conforms, or fails to conform, to the safety-driven installation requirements. The work has produced a concept demonstrator that will be further developed in future work to address accuracy, workflow and process efficiency.
    A new depth-based segmentation technique, GrabcutD, is proposed as an improvement to the existing GrabCut, a graph-cut-based segmentation method. Conventional GrabCut relies only on colour information to achieve segmentation. However, in stereo or multi-view analysis there is additional information that can also be used to improve segmentation. Depth-based approaches carry the discriminative power of ascertaining whether an object is nearer or farther away. We show the usefulness of the approach when stereo information is available and evaluate it on standard datasets against state-of-the-art results
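The intuition behind fusing depth with colour can be shown with a minimal per-pixel sketch (this is not the GrabcutD implementation, which builds the depth cue into a graph-cut energy rather than a per-pixel threshold). Two pixels that share the foreground colour but sit at different depths are separated only when the depth term is included; all weights and thresholds below are invented for the example.

```python
def fused_foreground(colors, depths, fg_color, fg_depth, w_color=0.5, w_depth=0.5):
    """Label a pixel foreground when the weighted colour+depth affinity is high."""
    mask = []
    for c, d in zip(colors, depths):
        # affinity to the foreground model: 1.0 = identical, 0.0 = far away
        color_score = 1.0 - min(abs(c - fg_color) / 255.0, 1.0)
        depth_score = 1.0 - min(abs(d - fg_depth) / 1.0, 1.0)  # depths in metres
        mask.append(w_color * color_score + w_depth * depth_score >= 0.75)
    return mask

# two pixels share the foreground colour (200) but only the first is at the
# foreground depth (0.5 m): the depth cue disambiguates them
mask = fused_foreground([200, 200, 40], [0.5, 1.4, 0.5], fg_color=200, fg_depth=0.5)
```

A colour-only classifier would label the first two pixels identically; adding the depth term rejects the second, which is the discriminative power the abstract attributes to depth.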