
    A review on automatic mammographic density and parenchymal segmentation

    Breast cancer is the most frequently diagnosed cancer in women, yet its exact cause(s) remain unknown. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to tackle breast cancer. More than 70 common genetic susceptibility factors are included in current non-image-based risk prediction models (e.g., the Gail and Tyrer-Cuzick models). Image-based risk factors, such as mammographic density and parenchymal patterns, have been established as biomarkers but have not been fully incorporated into the risk prediction models used for risk stratification in screening and/or for measuring responsiveness to preventive approaches. Within computer-aided mammography, automatic mammographic tissue segmentation methods have been developed to estimate breast tissue composition and facilitate mammographic risk assessment. This paper presents a comprehensive review of automatic mammographic tissue segmentation methodologies developed over the past two decades, together with the evidence for risk assessment/density classification based on segmentation. The aim of this review is to analyse how engineering advances have progressed, what impact automatic mammographic tissue segmentation has had in clinical environments, and where the research gaps lie with respect to incorporating image-based risk factors into non-image-based risk prediction models.
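    To make the segmentation-to-density step concrete, the sketch below estimates percent mammographic density by global intensity thresholding, the simplest class of methods covered by reviews of this kind. It is a minimal illustration under assumed inputs (a preprocessed image and a breast mask), not any specific method from the review.

```python
import numpy as np
from skimage.filters import threshold_otsu

def percent_density(mammogram, breast_mask):
    """Percent mammographic density: dense-tissue area over breast area.

    mammogram   : 2D float array of pixel intensities (assumed preprocessed)
    breast_mask : boolean array marking breast tissue (background excluded)
    """
    t = threshold_otsu(mammogram[breast_mask])   # global intensity threshold
    dense = (mammogram > t) & breast_mask        # candidate dense tissue
    return 100.0 * dense.sum() / breast_mask.sum()
```

    Methods in the review replace the global threshold with trained classifiers or model-based segmentation, but the output, a tissue-composition estimate feeding risk assessment, has the same form.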

    Diagnostic Reference Levels for digital mammography in Australia

    Aims: In three phases, this thesis explores radiation doses delivered to women during mammography, methods to estimate mean glandular dose (MGD), and the use of mammographic breast density (MBD) in MGD calculations. Firstly, it examines diagnostic reference levels (DRLs) for digital mammography in Australia, with a novel focus on the use of compressed breast thickness (CBT) and detector technology as guides when determining patient-derived DRLs. Secondly, it analyses the agreement between the organ dose estimated by different digital mammography units and the MGD calculated from clinical data. Thirdly, it explores the novel use of MBD in MGD calculations, proposing a new dose estimate called the actual glandular dose (AGD), and compares MGD with AGD. Methods: DICOM headers were extracted from 52405 anonymised mammograms using third-party software. Exposure and QA information was used to calculate MGD by three methods. LIBRA software was used to estimate MBD for 31097 mammograms. Median, 75th and 95th percentiles were calculated across the MGDs obtained for all included data, for 9 CBT ranges, for the average population CBT, and for 3 detector technologies. The significance of the differences, correlations, and agreement between MGDs across CBT ranges, calculation methods, and density estimation methods was analysed. Conclusions: This thesis recommends DRLs for mammography in Australia; it shows that MGD depends on CBT and detector technology, and DRLs are therefore presented as a table for different CBTs and detectors. The work also shows that the organ doses reported by vendors differ from those calculated using established methodologies. The data further show that MGD calculated using standardised glandularities underestimates dose at lower CBTs by up to 10% compared with AGD, thereby underestimating radiation risk. Finally, AGD is proposed as a measure that accounts for differences in breast composition in individualised radiation-induced risk assessment.
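    The DRL tabulation described in the Methods reduces to a grouped percentile computation. The sketch below illustrates it with synthetic data standing in for the extracted DICOM records; the column names, band edges and dose values are assumptions, not the thesis's actual pipeline.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-in for the extracted data: compressed breast thickness
# (mm) and a calculated mean glandular dose (mGy) per exposure.
exposures = pd.DataFrame({"cbt_mm": rng.uniform(20, 110, n)})
exposures["mgd_mgy"] = 0.02 * exposures["cbt_mm"] + rng.gamma(2.0, 0.2, n)

# Nine CBT bands, mirroring the grouping of doses by thickness.
bands = pd.cut(exposures["cbt_mm"], bins=9)

# Patient-derived DRL candidates per band: median, 75th and 95th percentiles.
drl_table = (exposures.groupby(bands, observed=True)["mgd_mgy"]
             .quantile([0.50, 0.75, 0.95])
             .unstack())
print(drl_table.round(2))
```

    In the thesis the same tabulation is repeated per detector technology, since MGD at a given CBT differs between detector types.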

    Imaging of the Breast

    Early detection of breast cancer combined with targeted therapy offers the best outcome for breast cancer patients. This volume deals with a wide range of new technical innovations for improving breast cancer detection, diagnosis and therapy, with a special focus on improvements in mammographic image quality, image analysis, magnetic resonance imaging of the breast, and molecular imaging. A chapter on targeted therapy explores the option of less radical postoperative therapy for women with early, screen-detected breast cancers.

    Perceived Quality as Assessment Tool for the Test Case Amore e Psiche Domus in Ostia Antica

    Recent years have seen the development of many new approaches to cultural heritage visualisation; with the growing use of Information and Communications Technology (ICT), many 3D reconstructions, virtual tours and Augmented Reality/Virtual Reality (AR/VR) applications have been developed to enrich the content of museums, archaeological sites and historical places. Today, however, only a few cultural assets have an accurate 3D model with detailed informative content: the costs of creating virtual content are still high and can be justified only for the most iconic or important monuments. Within this frame, the RECIPE project (REsilience in art CIties: Planning for Emergencies), funded by ESA/ESTEC, uses a crowdsourcing approach, involving tourists and interested people, to acquire cheaply the photos needed to create photogrammetric models. For such models to be used correctly in different recording and monitoring tasks, procedures must be developed to evaluate their quality. With reference to a case study, this work discusses how to validate models, proposing a methodology based on dimensional and colour error calculation together with structural indices such as SSIM and PIQE. Moreover, to avoid the influence of differing cameras, focus settings and positioning in photos taken by tourists, the photo database used here was produced with a professional device following state-of-the-art Structure-from-Motion (SfM) practice. Finally, the possibility of implementing the 3D models in a virtual reality environment to increase their diffusion on new multimedia and interactive platforms is also discussed.
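    Of the two structural indices named, SSIM is a full-reference measure, so a quality check can compare a rendered view of the photogrammetric model against a reference photograph. The sketch below is an illustrative assumption about how such a check might be wired up, not the paper's actual validation code (PIQE, the no-reference counterpart, is omitted here).

```python
from skimage.metrics import structural_similarity

def view_similarity(render, reference):
    """SSIM between a rendered model view and a reference photograph.

    Both inputs are assumed to be grayscale 2D arrays of the same size,
    registered to the same viewpoint.
    """
    score, ssim_map = structural_similarity(
        render, reference,
        data_range=float(reference.max() - reference.min()),
        full=True,                     # also return the per-pixel map
    )
    return score, ssim_map             # global index and local quality map
```

    The per-pixel map is useful here because reconstruction defects (holes, texture smearing) are local; a low global score alone does not say where the model fails.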

    An objective measure to quantify discomfort in long duration driving

    In recent years, increased emphasis has been placed on improving seat comfort in automobiles. This is partly due to research showing that prolonged driving is associated with an increased risk of musculoskeletal disorders, but largely because driver comfort is now viewed as an increasingly important aspect of the competitive marketing of vehicles. Driving is firmly cemented as a major part of most people's daily lives across the world, and people are now spending more time in their vehicles than ever before. As urban congestion continues to rise, commuting distances and durations will progressively increase, subjecting drivers to the risks of long duration driving more often. Consequently, the automotive industry has invested in designing seats that perform better under increased usage durations, and ergonomics has played a vital role in the design of new seats. However, the ability to design a successful seat relies heavily on the capacity to accurately evaluate the comfort of a vehicle seat, and one major issue highlighted in current automotive ergonomics research is the standardisation of comfort evaluation techniques. This research aimed to tackle these issues by investigating the effects of long duration driving on discomfort and the range of factors associated with driver discomfort. The ultimate goal was to develop and evaluate a novel objective measure of driver discomfort based on driver seat fidgets and movements (SFMs), with the aim of standardising discomfort evaluation within the automotive industry. Three laboratory studies and one field observation were conducted to address these aims, in which subjective and objective evaluations of discomfort were made during long term driving (ranging from 60 to 140 minutes). The results determined that a measure of driver SFMs can be effectively implemented in long duration driving trials to evaluate the effects of long term driving and vibration exposure on driver discomfort, and subsequently used to make accurate predictions of overall discomfort. Large positive correlations were found between measures of SFMs and subjective ratings of overall discomfort (r² > 0.9, p < 0.05), and the SFM method has been successfully repeated under a range of driving conditions. Driver seat fidget and movement (SFM) frequency is shown to increase significantly, in step with subjective ratings, over the duration of a long term drive as drivers seek to cope with increased discomfort. It is proposed that drivers move in the vehicle seat when discomfort reaches a threshold that is consciously or unconsciously perceived, and that as driving time accrues, drivers reach this threshold with increasing frequency. Measures of both SFM frequency and total accumulated SFMs have been shown to accurately predict discomfort ratings, providing the basis for discomfort evaluations to be made via remote monitoring and removing the need for subjective assessment. During a long term drive, there comes a point at which improvements in seat design become ineffective, as extended duration driving will result in discomfort regardless of how well the seat has been designed. It was shown that drivers move in the vehicle seat to cope with increased discomfort; in addition, another method of combating the negative effects of long term driving was investigated.
    Subjective and objective evaluation determined that breaks from driving reduce discomfort both immediately and upon completion of a long term drive. Furthermore, these benefits were greater when drivers left the vehicle seat: discomfort was reset when drivers took a 10-minute walk. Walking during a break from driving can be considered the ultimate SFM. Drivers are recommended to plan breaks when conducting a long duration journey in order to minimise discomfort and, when taking a break, to take a walk rather than remain seated in the vehicle.
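    The SFM measure is, at its core, an event count. The sketch below is a hypothetical illustration of such a count, detecting movements as large excursions in a summed seat-pressure signal and merging nearby excursions; the thesis's actual instrumentation and detection rules are not specified here, so the signal source, threshold and merge window are all assumptions.

```python
import numpy as np

def count_sfms(pressure, fs, threshold=5.0, min_gap_s=2.0):
    """Count seat fidgets and movements (SFMs) in a seat-pressure signal.

    pressure : 1D array, e.g. summed seat-pan pressure samples
    fs       : sampling rate in Hz
    An SFM is flagged when the deviation from the median exceeds
    `threshold` standard deviations; onsets closer together than
    `min_gap_s` seconds are merged into a single event.
    """
    z = np.abs(pressure - np.median(pressure)) / (pressure.std() + 1e-9)
    active = (z > threshold).astype(int)
    onsets = np.flatnonzero(np.diff(active) == 1)      # rising edges
    if onsets.size == 0:
        return 0
    min_gap = int(min_gap_s * fs)
    keep = np.diff(onsets, prepend=-min_gap - 1) > min_gap
    return int(keep.sum())

# Illustrative use with a synthetic signal containing one injected fidget:
rng = np.random.default_rng(1)
fs = 50                                    # Hz
signal = rng.normal(0.0, 1.0, 60 * fs)     # one minute of sensor noise
signal[1500:1520] += 8.0                   # a brief, large movement
print(count_sfms(signal, fs))              # -> 1
```

    Per-interval counts from a detector of this kind are what would be correlated against subjective discomfort ratings, and a rising count over the drive mirrors the threshold-crossing behaviour the thesis proposes.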

    Cost-effective non-metric photogrammetry from consumer-grade sUAS: implications for direct georeferencing of structure from motion photogrammetry

    The declining costs of small Unmanned Aerial Systems (sUAS), in combination with Structure-from-Motion (SfM) photogrammetry, have triggered renewed interest in image-based topography reconstruction. However, the potential uptake of sUAS-based topography is limited by the need for ground control acquired with expensive survey equipment. Direct georeferencing (DG) is a workflow that obviates ground control and uses only the camera positions to georeference the SfM results. The absence of ground control nonetheless poses significant challenges for the data quality of the final geospatial outputs: it is generally accepted that ground control is required to georeference, to refine the camera calibration parameters, and to remove artefacts of optical distortion from the topographic model. Here, we present an examination of DG carried out with low-cost consumer-grade sUAS. We begin with a study of surface deformations resulting from systematic perturbations of the radial lens distortion parameters. We then test a number of flight patterns and develop a novel error quantification method to assess the outcomes. Our perturbation analysis shows that there exist families of predictable equifinal K1-K2 solutions which minimize doming in the output model. These equifinal solutions can be expressed as K2 = f(K1) and have been observed for both the DJI Inspire 1 and Phantom 3 sUAS platforms. This equifinality relationship can be used as an external reliability check on the self-calibration, allowing a DG workflow to produce topography free of non-affine deformations, with random errors of 0.1% of the flying height, linear offsets below 10 m, and off-vertical tilts below 1°. Whilst not yet of survey-grade quality, these results demonstrate that low-cost sUAS can produce reliable topography products without recourse to expensive survey equipment, and we argue that direct georeferencing and low-cost sUAS could transform survey practices in both academic and commercial disciplines.
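    A toy calculation shows why equifinal K1-K2 pairs are to be expected. In the Brown distortion model the radial correction is Δr(r) = K1·r³ + K2·r⁵, and doming is driven by the net residual correction across the frame; for any K1 there is then a K2 that minimises the integrated squared correction, giving a linear K2 = f(K1). The sketch below derives that compensating K2 in closed form; it is a simplified proxy for doming, not the paper's empirical analysis.

```python
import numpy as np

def compensating_k2(k1, r_max=1.0):
    """K2 minimising the integral of (K1 r^3 + K2 r^5)^2 over [0, r_max].

    Setting the derivative with respect to K2 to zero gives
    K2 = -K1 * (R^9/9) / (R^11/11) = -(11/9) * K1 / R^2.
    """
    return -(11.0 / 9.0) * k1 / r_max**2

# One equifinal family: each K1 is paired with the K2 that cancels
# most of its net radial effect, so the model "domes" equally little.
for k1 in np.linspace(-0.10, 0.10, 5):
    print(f"K1 = {k1:+.3f}  ->  K2 = {compensating_k2(k1):+.4f}")
```

    The relationship reported in the paper is recovered empirically from perturbed self-calibrations, but the same cancellation logic explains why it is predictable and approximately one-dimensional.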

    Assessing 3D metric data of digital surface models for extracting archaeological data from archive stereo-aerial photographs.

    Archaeological remains are under increasing threat of attrition from natural processes and the continued mechanisation of anthropogenic activities. This research analyses the ability of digital photogrammetry software to reconstruct extant, damaged, and destroyed archaeological earthworks from archive stereo-aerial photographs. Case studies of Flower's Barrow and Eggardon hillforts, both situated in Dorset, UK, are examined using a range of imagery dating from the 1940s to 2010. The specialist photogrammetric software SocetGXP® is used to extract digital surface models, and the results are compared with airborne and terrestrial laser scanning data to assess their accuracy. Global summary statistics and spatial autocorrelation techniques are used to examine error scales and distributions. Extracted earthwork profiles are compared with both current and historical surveys of each study site. The results demonstrate that metric information relating to earthwork form can be successfully obtained from archival photography; in some instances, these data outperform airborne laser scanning in the provision of digital surface models with minimal error. The role of archival photography in recovering metric data from upstanding archaeology, and the consequent potential for this approach to inform heritage management strategies, is demonstrated.
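    The error analysis pairs global summary statistics with spatial autocorrelation. The sketch below illustrates both on a DSM-of-difference grid: summary statistics of the elevation errors, and Moran's I under rook (4-neighbour) contiguity to test whether errors cluster spatially. It is an illustration of the general technique; the study's exact implementation is not given in the abstract.

```python
import numpy as np

def dod_stats(dsm, reference):
    """Summary statistics for a DSM of difference (test minus reference)."""
    d = (dsm - reference).ravel()
    return {"mean": d.mean(), "std": d.std(),
            "rmse": float(np.sqrt((d**2).mean()))}

def morans_i(grid):
    """Moran's I for a 2D error grid with binary rook-contiguity weights."""
    z = grid - grid.mean()
    # each adjacent pair contributes twice (the weight matrix is symmetric)
    num = 2 * (z[:, :-1] * z[:, 1:]).sum() + 2 * (z[:-1, :] * z[1:, :]).sum()
    w_sum = 2 * z[:, :-1].size + 2 * z[:-1, :].size
    return (grid.size / w_sum) * (num / (z**2).sum())

# Random errors give I near 0; spatially structured errors give I >> 0.
rng = np.random.default_rng(2)
noise = rng.normal(0.0, 0.2, (100, 100))
structured = np.cumsum(noise, axis=0) * 0.05
print(round(morans_i(noise), 3), round(morans_i(structured), 3))
```

    A strongly positive Moran's I on the difference grid is the signature of systematic photogrammetric error (e.g. doming or tilt) rather than random noise, which is why the distinction matters when assessing archive-derived DSMs.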