
    Optical blur disturbs – the influence of optically blurred images in photogrammetry

    Photogrammetric processes such as camera calibration, feature and target detection, and referencing are assumed to depend strongly on the quality of the images provided to the process. Consequently, motion-blurred and optically blurred images are usually excluded from photogrammetric processing to suppress their negative influence. To evaluate how much optical blur is acceptable and how large its influence on photogrammetric procedures is, a variety of test environments were established. These were based upon previous motion blur research and included test fields for the analysis of camera calibration. For the evaluation, a DSLR camera as well as a Lytro Illum light field camera were used. The results show that optical blur has a negative influence on photogrammetric procedures, most notably on automatic target detection. With the intervention of an experienced operator and the use of semi-automatic tools, acceptable results can be achieved.

    Motion blur in digital images - analysis, detection and correction of motion blur in photogrammetry

    Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This thesis proves the negative effect that blurred images have on photogrammetric processing. It shows that even small amounts of blur have serious impacts on target detection and slow down processing due to the requirement of human intervention. Larger blur can make an image completely unusable, and such images need to be excluded from processing. To exclude images from large image datasets, an algorithm was developed. The newly developed method makes it possible to detect blur caused by linear camera displacement. The method is based on human detection of blur: humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not.
The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), on its own does not provide an absolute number to judge if an image is blurred or not. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values of the same dataset. This algorithm enables the exclusion of blurred images and subsequently allows photogrammetric processing without them. However, it is also possible to use deblurring techniques to restore blurred images. Deblurring of images is a widely researched topic and often based on the Wiener or Richardson-Lucy deconvolution, which require precise knowledge of both the blur path and extent. Even with knowledge about the blur kernel, the correction causes errors such as ringing, and the deblurred image appears muddy and not completely sharp. In the study reported here, overlapping images are used to support the deblurring process. An algorithm based on the Fourier transformation is presented. This works well in flat areas, but the need for geometrically correct sharp images for deblurring may limit the application. Another method to enhance the image is the unsharp mask method, which improves images significantly and makes photogrammetric processing more successful. However, deblurring of images needs to focus on geometrically correct deblurring to assure geometrically correct measurements. Furthermore, a novel edge shifting approach was developed which aims to perform geometrically correct deblurring. The idea of edge shifting appears to be promising but requires more advanced programming.
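    The unsharp mask enhancement mentioned above can be sketched in a few lines. This is a generic, minimal NumPy version for a grayscale image in [0, 1]; the parameter names and values are illustrative, not those used in the thesis:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma=2.0):
    """Separable Gaussian blur of a 2-D grayscale image."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Sharpen by adding back the high-frequency residual (img - blurred)."""
    return np.clip(img + amount * (img - blur(img, sigma)), 0.0, 1.0)
```

    Note that, as the abstract stresses, this enhances apparent sharpness but does not guarantee geometrically correct edge positions, which is what the edge shifting approach targets.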

    Identification parade in immersive virtual reality: A technical setup

    Virtual Reality (VR) has sparked interest within the forensic community, where it is currently used for training purposes and in a variety of forensic scenarios. In combination with efficient and user-friendly full-body 3-dimensional (3D) documentation methods, VR visualisations present a viable tool for suspect identification by witnesses. The well-known procedure of placing several persons in a room with a one-way mirror, with the witness on the other side of the mirror, has practical disadvantages. The primary concerns are the witness(es) and person(s) of interest coming face-to-face prior to the line-up, combined with the difficulty of finding sufficient persons to include in the line-up. Although image identification using printed paper partially resolves this problem, features such as body stature remain an issue for the recognition and identification process. To test whether VR provides the technical capabilities to perform an identification parade, a total of 15 subjects were 3D documented using the multi-camera device “Photobox”. From this group, one of the documented persons then interrupted a lecture; the students were asked afterwards to identify the same person in VR and paper identification sets. It was found that the participating students were able to identify the “suspect” in both datasets. The results imply that VR technology allows users to identify persons. However, as this is a preliminary study, the similarity problem was not analysed in this paper and requires further investigation to demonstrate the robustness of this approach.

    Automated wound segmentation and classification of seven common injuries in forensic medicine

    In forensic medical investigations, physical injuries are documented with photographs accompanied by written reports. Automatic segmentation and classification of wounds on these photographs could provide forensic pathologists with a tool to improve the assessment of injuries and accelerate the reporting process. In this pilot study, we trained and compared several preexisting deep learning architectures for image segmentation and wound classification on forensically relevant photographs in our database. The best scores were a mean pixel accuracy of 69.4% and a mean intersection over union (IoU) of 48.6% when evaluating the trained models on our test set. The models had difficulty distinguishing the background from wounded areas. As an example, image pixels showing subcutaneous hematomas or skin abrasions were assigned to the background class in 31% of cases. Stab wounds, on the other hand, were reliably classified with a pixel accuracy of 93%. These results can be partially attributed to undefined wound boundaries for some types of injuries, such as subcutaneous hematoma. However, despite the large class imbalance, we demonstrate that the best trained models could reliably distinguish among seven of the most common wounds encountered in forensic medical investigations.
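    The two metrics quoted above, mean pixel accuracy and mean IoU, are standard segmentation scores computed from a pixel-level confusion matrix. A minimal NumPy sketch (not the authors' evaluation code) is:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Pixel-level confusion matrix: rows = true class, cols = predicted."""
    idx = y_true.ravel() * num_classes + y_pred.ravel()
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def mean_pixel_accuracy(cm):
    """Per-class accuracy (diagonal / row sum), averaged over classes present."""
    present = cm.sum(axis=1) > 0
    return (np.diag(cm)[present] / cm.sum(axis=1)[present]).mean()

def mean_iou(cm):
    """Per-class IoU = TP / (TP + FP + FN), averaged over classes present."""
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    present = union > 0
    return (tp[present] / union[present]).mean()
```

    Averaging per class rather than per pixel is what makes these metrics meaningful under the large class imbalance the abstract mentions.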

    Automatic isolation of blurred images from UAV image sequences

    Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated filtering process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. A “shaking table” was used to create images with known blur during a series of laboratory tests. This platform can be moved in one direction by a mathematical function controlled by a defined frequency and amplitude. The shaking table was used to displace a Nikon D80 digital SLR camera with a user-defined frequency and amplitude. The actual camera displacement was measured accurately and exposures were synchronized, which provided the opportunity to acquire images with a known blur effect. Acquired images were processed digitally to determine a quantifiable measure of the image blur created by the actual shaking table motion.
Once determined for a sequence of images, a user-defined threshold can be used to differentiate between “blurred” and “acceptable” images. A further step is to establish the effect that blurred images have upon the accuracy of subsequent measurements. Both of these aspects will be discussed in this paper and future work identified.
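    A sketch of how such a quantified blur measure with a dataset-relative threshold could work is shown below. It uses the common variance-of-Laplacian sharpness proxy as a stand-in; the paper's actual metric and threshold are not reproduced here:

```python
import numpy as np

def blur_metric(img):
    """Simple sharpness proxy: variance of the discrete Laplacian.
    Lower values indicate stronger blur (fewer high frequencies)."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def flag_blurred(images, k=1.0):
    """Flag images whose sharpness falls more than k standard deviations
    below the sequence mean - a relative, per-dataset threshold rather
    than an absolute one."""
    scores = np.array([blur_metric(im) for im in images])
    return scores < scores.mean() - k * scores.std()
```

    The relative threshold matters: absolute sharpness values depend on scene texture, so an image is judged only against the other images of the same flight.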

    Influence of blur on feature matching and a geometric approach for photogrammetric deblurring

    Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost efficient and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The aim of this research is to develop a blur correction method to deblur UAV images. Deblurring of images is a widely researched topic and often based on the Wiener or Richardson-Lucy deconvolution, which require precise knowledge of both the blur path and extent. Even with knowledge about the blur kernel, the correction causes errors such as ringing, and the deblurred image appears "muddy" and not completely sharp. In the study reported in this paper, overlapping images are used to support the deblurring process, which is advantageous. An algorithm based on the Fourier transformation is presented. This works well in flat areas, but the need for geometrically correct sharp images may limit the application. Deblurring images needs to focus on geometrically correct deblurring to assure geometrically correct measurements.
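    As a point of reference for the Wiener deconvolution the paper contrasts itself with, a minimal frequency-domain version with a known, synthetic motion kernel might look like the following. This is illustrative only; the paper's Fourier-based method uses overlapping images rather than a known kernel:

```python
import numpy as np

def wiener_deblur(blurred, kernel, nsr=1e-3):
    """Frequency-domain Wiener deconvolution with a known blur kernel.
    nsr is an assumed noise-to-signal power ratio; it damps the division
    at frequencies where the kernel response is near zero, which is the
    source of the ringing artefacts seen in naive inverse filtering."""
    H = np.fft.fft2(kernel, s=blurred.shape)   # kernel transfer function
    G = np.fft.fft2(blurred)                   # spectrum of the blurred image
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

    With a horizontal motion kernel such as `np.ones((1, 5)) / 5`, the filter largely restores a synthetically blurred image, but residual error remains at the frequencies the kernel suppressed - the "muddy" appearance described above.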

    UAV image blur – its influence and ways to correct it

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on image sequences acquired by UAVs, which have a high ground resolution and good spectral resolution due to low flight altitudes combined with a high-resolution camera. One of the main problems preventing full automation of data processing of UAV imagery is the unknown degradation effect of blur caused by camera movement during image acquisition. The purpose of this paper is to analyse the influence of blur on photogrammetric image processing, the correction of blur and, finally, the use of corrected images for coordinate measurements. It was found that blur influences image processing significantly and even prevents automatic photogrammetric analysis, hence the desire to exclude blurred images from the sequence using a novel filtering technique. If necessary, essential blurred images can be restored using information from overlapping images of the sequence, or a blur kernel with the developed edge shifting technique. The corrected images can then be used for target identification, measurements and automated photogrammetric processing.

    Automatic detection of blurred images in UAV image sets

    Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing.
Creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), on its own does not provide an absolute number to judge if an image is blurred or not. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets. Two datasets are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values are optically correct, making the algorithm applicable for UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends the field of application for the algorithm.
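    The papers do not publish the SIEDS formula itself, but the idea described - blur the image internally, compare edge responses on the saturation channel, and take the standard deviation of the difference - can be sketched as follows. Every detail here (blur method, edge operator, parameters) is a hypothetical reconstruction:

```python
import numpy as np

def saturation(rgb):
    """HSV-style saturation channel of an RGB image with values in [0, 1]."""
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    return np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1), 0.0)

def edge_response(img):
    """Gradient-magnitude edge image from simple finite differences."""
    gx = np.abs(np.diff(img, axis=1))[:-1, :]
    gy = np.abs(np.diff(img, axis=0))[:, :-1]
    return gx + gy

def sieds_like(rgb, blur_passes=3):
    """Relative sharpness score in the spirit of SIEDS: std-dev of the
    difference between the edge response of the saturation channel and
    that of an internally blurred copy. Sharp images score high; values
    are only meaningful relative to other images of the same dataset."""
    s = saturation(rgb)
    b = s.copy()
    for _ in range(blur_passes):  # cheap neighbour-average blur
        b = (np.roll(b, 1, 0) + np.roll(b, -1, 0)
             + np.roll(b, 1, 1) + np.roll(b, -1, 1) + b) / 5
    return np.std(edge_response(s) - edge_response(b))
```

    The key property, as the abstract notes, is that the comparison image is created internally, so no second photograph of the scene is required.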

    Cost-effective 3D documentation device in forensic medicine

    3D documentation in forensics and forensic medicine is being introduced more frequently in various institutes around the world. However, several institutes lack the finances as well as the staff to perform 3D documentation regularly. This technical paper presents a 3D documentation device that is low-cost, easy to use, and a viable entry-level solution for forensic medical departments. For this, the small single-board computer Raspberry Pi 4 was used in conjunction with its high-quality (HQ) camera module to create the 3DLamp, a flexible, low-cost and easy-to-use documentation device. Besides a detailed description of the device, this paper also presents four case examples where 3D documentation was performed, and analyses the acquired data and the created 3D models. It was found that the device returns feasible 3D models that appear usable for forensic 3D reconstructions.

    Forensic examination of living persons in 3D models.

    Physical injuries caused by interpersonal violence or accidents are usually documented with photographs. In addition to standard injury photography using 2D photographs, the Institute *INSTITUT NAME BLINDED FOR REVIEW* uses a Botspot Botscan® multi-camera device (Photobox; Aniwaa Ltd, Berlin, Germany) that allows for 3D documentation of a subject. The Photobox contains 70 cameras positioned at different heights, looking at a central platform. Within a fraction of a second, all cameras are activated and acquire the images necessary for 3D documentation. In previous studies by Michienzi et al. (2018), the geometric correctness of 3D-documented injuries was analyzed. While that work concentrated solely on artificial injuries and their dimensions, the work presented in this study analyzes whether the Photobox allows for accurate medical interpretation of injuries by forensic pathologists. To perform this analysis, 40 datasets from a variety of real cases were processed into 3D models. The created 3D models were then examined by forensic pathologists on 2D computer screens, and the findings were compared with the original reports. The aim of this work was to assess whether examinations based on a 3D model allow results comparable to immediate examinations of the subject; the results showed that examinations based on a 3D model are 85% accurate when compared with physical examinations. This indicates that 3D models allow for reasonably accurate interpretation, and it is possible that accuracy might increase with improved equipment and better-trained personnel.