
    False positive reduction in CADe using diffusing scale space

    Segmentation is typically the first step in computer-aided detection (CADe). The second step is false positive reduction, which usually involves computing a large number of features, with thresholds set by training over an extensive data set. The number of false positives can, in principle, be reduced by extensive noise removal and other forms of image enhancement prior to segmentation. However, this can drastically affect the true positive results and their boundaries. We present a post-segmentation method to reduce the number of false positives by using a diffusion scale space. The method is illustrated using the Integral Invariant scale space, though this is not a requirement. It is quite general, does not require any prior information, is fast and easy to compute, and gives very encouraging results. Experiments are performed both on intensity mammograms and on Volpara® density maps.
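
The post-segmentation idea can be sketched in a few lines: candidates that survive diffusion to coarse scales are kept, while those that diffuse away are flagged as false positives. The sketch below is a minimal illustration using Gaussian diffusion in place of the paper's Integral Invariant scale space; `persistence_score`, the scale ladder, and the toy image are all assumptions for demonstration, not the published method.

```python
# Sketch: score segmentation candidates by their persistence across a
# Gaussian diffusion scale space. Extended structures (lesion-like) keep
# most of their response at coarse scales; isolated speckle does not.
import numpy as np
from scipy.ndimage import gaussian_filter

def persistence_score(image, mask, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Mean intensity of the candidate region at each diffusion scale,
    normalised by the unsmoothed mean. Values near 1 persist across
    scales; values near 0 diffuse away."""
    base = image[mask].mean()
    responses = [gaussian_filter(image, s)[mask].mean() for s in sigmas]
    return np.array(responses) / base

# Toy example: a broad bright disc (lesion-like) vs. a single hot pixel (noise).
img = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
disc = (yy - 20) ** 2 + (xx - 20) ** 2 <= 8 ** 2
img[disc] = 1.0                       # extended structure
img[50, 50] = 1.0                     # isolated speckle
speck = np.zeros_like(disc)
speck[50, 50] = True

lesion_scores = persistence_score(img, disc)
noise_scores = persistence_score(img, speck)
print(lesion_scores[-1], noise_scores[-1])  # lesion persists far longer
```

Thresholding such a persistence score is one simple way to discard candidates after segmentation without pre-smoothing the image and degrading true positive boundaries.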

    Breast Cancer: Modelling and Detection

    This paper reviews a number of the mathematical models used in cancer modelling and then chooses a specific cancer, breast carcinoma, to illustrate how the modelling can be used in aiding detection. We then discuss mathematical models that underpin mammographic image analysis, which complements models of tumour growth and facilitates diagnosis and treatment of cancer. Mammographic images are notoriously difficult to interpret, and we give an overview of the primary image enhancement technologies that have been introduced, before focusing on a more detailed description of some of our own recent work on the use of physics-based modelling in mammography. This theoretical approach to image analysis yields a wealth of information that could be incorporated into the mathematical models, and we conclude by describing how current mathematical models might be enhanced by use of this information, and how these models in turn will help to meet some of the major challenges in cancer detection.

    An Information Processing Approach to Group Performance of Poetry: Implications for Adaptation and Audience Research

    This investigation explores the relationship between a neurological model of information processing and a group performance of poetry. It focuses upon three considerations: construction of a neurological model of information processing, application of this model to group performance of poetry, and implications for adaptation of poetry and audience research. A neurological model of information processing is constructed which explains perception in terms of the specialized functions of the left and right hemispheres. According to this model, certain aspects of the environment (stimuli) are more efficiently processed by each hemisphere: the left dealing largely with verbal material and the right with nonverbal, visual/spatial material. Furthermore, each hemisphere prefers to operate upon, or process, elements of the environment in a particular manner. The left hemisphere processes material in a sequential, linear and analytic fashion, while the right processes material in a simultaneous, holistic, and intuitive fashion. The model offers a summary of neurological research relevant to a discussion of probable hemisphere involvement for an audience experiencing a group performance of poetry. This model is then applied to the components of group performance of poetry: the poem and its performance. The discussion indicates that although poetry is a linguistic system which contains syntactic order (a left hemisphere element), the language of poetry is such that its richest and most effective processing is realized in the right hemisphere. It is characteristically high in concrete words linked to a perceptual context; rich in imagery, metaphor, and appositional language; and is an evocative, subjective and multifaceted gestalt. Furthermore, performance enhances the right hemisphere elements inherent in poetic expression by giving the audience acoustic manifestations of tone and mood, rhythm and word texture as well as visual/spatial manifestations of implicit movements.
Because the right hemisphere is sensitive to auditory and visual stimuli which express affect (i.e., tone of voice and facial expression), it is involved fully in processing both the poem and its expression through the medium of performance. Implications for adaptation of a text and for audience research are explored, describing the audience experience of a group performance of poetry as a cognitive process in which the synthesizing characteristics of the right hemisphere are crucial in the act of processing the poem at its most resonant and experiential level. Such a description offers the adapter-director a guide for making decisions in adaptation and staging of a poem. It offers the audience researcher a means of describing audience experience in quantifiable terms, of isolating variables which affect the experience, and a source of research methods and relevant data. The application of the neurological model to the field of oral interpretation in this thesis contributes another dimension to the study and appreciation of performance and the aesthetics of literary experience.

    Real-time detection of dictionary DGA network traffic using deep learning

    Botnets and malware continue to avoid detection by static rules engines when using domain generation algorithms (DGAs) for callouts to unique, dynamically generated web addresses. Common DGA detection techniques fail to reliably detect DGA variants that combine random dictionary words to create domain names that closely mirror legitimate domains. To combat this, we created a novel hybrid neural network, Bilbo the "bagging" model, that analyses domains and scores the likelihood that they were generated by such algorithms and are therefore potentially malicious. Bilbo is the first parallel usage of a convolutional neural network (CNN) and a long short-term memory (LSTM) network for DGA detection. Our unique architecture is found to be the most consistent in performance in terms of AUC, F1 score, and accuracy when generalising across different dictionary DGA classification tasks, compared to current state-of-the-art deep learning architectures. We validate using reverse-engineered dictionary DGA domains and detail our real-time implementation strategy for scoring real-world network logs within a large financial enterprise. In four hours of actual network traffic, the model discovered at least five potential command-and-control networks that commercial vendor tools did not flag.
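
The parallel CNN/LSTM arrangement described above can be sketched as two branches over a shared character embedding, concatenated before a logistic output. This is a minimal PyTorch sketch of that general shape only; the layer sizes, vocabulary, and head are illustrative assumptions, not the published Bilbo configuration.

```python
# Sketch: parallel CNN + LSTM scorer for character-encoded domain names.
# The CNN branch captures character n-gram style features; the LSTM branch
# captures sequential dependencies; both feed one logistic output.
import torch
import torch.nn as nn

class ParallelCnnLstm(nn.Module):
    def __init__(self, vocab_size=40, embed_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=4)   # CNN branch
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.lstm = nn.LSTM(embed_dim, 64, batch_first=True)  # LSTM branch
        self.head = nn.Sequential(nn.Linear(64 + 64, 32), nn.ReLU(),
                                  nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):                 # x: (batch, seq_len) int tokens
        e = self.embed(x)                 # (batch, seq_len, embed_dim)
        c = self.pool(torch.relu(self.conv(e.transpose(1, 2)))).squeeze(-1)
        _, (h, _) = self.lstm(e)          # h: (1, batch, 64)
        return self.head(torch.cat([c, h.squeeze(0)], dim=1)).squeeze(-1)

model = ParallelCnnLstm()
domains = torch.randint(0, 40, (8, 63))   # 8 dummy encoded domains
scores = model(domains)                    # DGA likelihood per domain, in [0, 1]
print(scores.shape)                        # torch.Size([8])
```

In deployment, each score would be compared against a threshold tuned on labelled dictionary-DGA and benign domains before alerting.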

    Creativity and Thinking Skills Integrated into a Science Enrichment Unit on Flooding

    Floods that used to happen every hundred years are now occurring more frequently. Human influences on the damage inflicted by flooding need to be well-understood by future voters and property-owners. Therefore, the timely topic of flooding was used as the focus of a special multi-grade enrichment short course taught by two university education professors for 26 preK-8th grade high-achieving and creative students. During the course, students listened to guest speakers (city council member, meteorologist, and environmentalist), watched two flood-related videos, read books on floods, viewed electronic presentations related to dams and recent floods, discussed causes, effects, and mitigations of flooding, and devised creative games from recycled materials to teach peers about flood concepts. The de Bono CoRT Breadth thinking skill system was used to organize many of the course activities. The flood lesson activities were relevant to these students, who had experienced a flood of the city’s river the previous year, and challenged students more than their typical classroom activities, an important finding considering that many gifted students drop out of school because of irrelevant and non-demanding class work. The course broadened students’ knowledge of floods and assisted them in thinking beyond the immediate situation.

    Breast cancer risk factors and a novel measure of volumetric breast density: cross-sectional study

    We conducted a cross-sectional study nested within a prospective cohort of breast cancer risk factors and two novel measures of breast density volume among 590 women who had attended Glasgow University (1948–1968), replied to a postal questionnaire (2001) and attended breast screening in Scotland (1989–2002). Volumetric breast density was estimated using a fully automated computer programme applied to digitised film-screen mammograms, from medio-lateral oblique mammograms at the first-screening visit. This measured the proportion of the breast volume composed of dense (non-fatty) tissue (Standard Mammogram Form (SMF)%) and the absolute volume of this tissue (SMF volume, cm³). Median age at first screening was 54.1 years (range: 40.0–71.5), median SMF volume 70.25 cm³ (interquartile range: 51.0–103.0) and mean SMF% 26.3%, s.d.=8.0% (range: 12.7–58.8%). Age-adjusted logistic regression models showed a positive relationship between age at last menstrual period and SMF%, odds ratio (OR) per year later: 1.05 (95% confidence interval: 1.01–1.08, P=0.004). Number of pregnancies was inversely related to SMF volume, OR per extra pregnancy: 0.78 (0.70–0.86, P<0.001). There was a suggestion of a quadratic relationship between birthweight and SMF%, with lowest risks in women born under 2.5 and over 4 kg. Body mass index (BMI) at university (median age 19) and in 2001 (median age 62) were positively related to SMF volume, OR per extra kg m⁻²: 1.21 (1.15–1.28) and 1.17 (1.09–1.26), respectively, and inversely related to SMF%, OR per extra kg m⁻²: 0.83 (0.79–0.88) and 0.82 (0.76–0.88), respectively, P<0.001. Standard Mammogram Form% and absolute SMF volume are related to several, but not all, breast cancer risk factors. In particular, the positive relationship between BMI and SMF volume suggests that volume of dense breast tissue will be a useful marker in breast cancer studies.
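
The per-unit odds ratios reported above map directly onto logistic regression coefficients, which is worth making explicit: an OR per unit increase of r corresponds to a coefficient of ln(r), and a k-unit change multiplies the odds by r**k. The snippet below is a small arithmetic illustration reusing the BMI/SMF-volume OR from the abstract; it is not the study's fitted model.

```python
# Converting a reported per-unit odds ratio to a logistic coefficient,
# and scaling it to a multi-unit change in the exposure.
import math

or_per_unit = 1.21               # OR per extra kg/m^2 of BMI (from the study)
beta = math.log(or_per_unit)     # the underlying logistic regression coefficient

# Odds multiplier for a 5 kg/m^2 higher BMI: exp(5 * beta) == 1.21 ** 5
or_5_units = math.exp(5 * beta)
print(round(beta, 4), round(or_5_units, 2))  # 0.1906 2.59
```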

    The landscape of human STR variation

    Short tandem repeats are among the most polymorphic loci in the human genome. These loci play a role in the etiology of a range of genetic diseases and have been frequently utilized in forensics, population genetics, and genetic genealogy. Despite this plethora of applications, little is known about the variation of most STRs in the human population. Here, we report the largest-scale analysis of human STR variation to date. We collected information for nearly 700,000 STR loci across more than 1000 individuals in Phase 1 of the 1000 Genomes Project. Extensive quality controls show that reliable allelic spectra can be obtained for close to 90% of the STR loci in the genome. We utilize this call set to analyze determinants of STR variation, assess the human reference genome’s representation of STR alleles, find STR loci with common loss-of-function alleles, and obtain initial estimates of the linkage disequilibrium between STRs and common SNPs. Overall, these analyses further elucidate the scale of genetic variation beyond classical point mutations. (American Society for Engineering Education, National Defense Science and Engineering Graduate Fellowship.)
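
The "allelic spectrum" mentioned above is simply the frequency of each repeat-length allele observed at a locus across sampled chromosomes. A minimal sketch, with invented genotype calls purely for illustration:

```python
# Sketch: build an allelic spectrum for one STR locus from repeat-count calls
# (two alleles per diploid individual). Calls below are invented.
from collections import Counter

calls = [12, 12, 13, 12, 14, 13, 12, 12, 13, 12]   # repeat counts at one locus
n = len(calls)
spectrum = {allele: count / n for allele, count in Counter(calls).items()}
print(spectrum)   # e.g. {12: 0.6, 13: 0.3, 14: 0.1}
```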

    Development of an automated detection algorithm for patient motion blur in digital mammograms

    The purpose is to develop and validate an automated method for detecting image unsharpness caused by patient motion blur in digital mammograms. The goal is that such a tool would facilitate immediate re-taking of blurred images, which has the potential to reduce the number of recalled examinations and to ensure that sharp, high-quality mammograms are presented for reading. To meet this goal, an automated method was developed based on interpretation of the normalized image Wiener spectrum. A preliminary algorithm was developed using 25 cases acquired on a single vendor system, read by two expert readers identifying the presence, location, and severity of blur. A predictive blur severity score was established using multivariate modelling, which had an adjusted coefficient of determination, R² = 0.63 ± 0.02, for linear regression against the average reader-scored blur severity. A heatmap of the relative blur magnitude showed good correspondence with reader sketches of blur location, with a Spearman rank correlation of 0.70 between the algorithm-estimated area fraction with blur and the maximum of the blur area fraction categories of the two readers. Given these promising results, the algorithm-estimated blur severity score and heatmap are proposed as aids to observer interpretation. The use of this automated blur analysis approach, ideally with feedback during an exam, could lead to a reduction in repeat appointments for technical reasons, saving time, cost and potential anxiety, and improving image quality for accurate diagnosis.
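
The spectral intuition behind this kind of detector is that blur suppresses high spatial frequencies, so the normalized Wiener (power) spectrum of a blurred image carries relatively less high-frequency energy. The sketch below illustrates only that intuition; the radial cutoff, the white-noise stand-in for breast texture, and the Gaussian stand-in for motion blur are all assumptions, not the paper's calibrated model.

```python
# Sketch: compare the high-frequency fraction of the image power spectrum
# for a sharp texture vs. a blurred copy. Blur shifts power to low frequencies.
import numpy as np
from scipy.ndimage import gaussian_filter

def high_freq_fraction(img, cutoff=0.25):
    """Fraction of spectral power above `cutoff` cycles/pixel (radial)."""
    ps = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fy, fx = np.meshgrid(f, f, indexing="ij")
    return ps[np.hypot(fy, fx) > cutoff].sum() / ps.sum()

rng = np.random.default_rng(0)
sharp = rng.normal(size=(128, 128))          # white-noise stand-in for texture
blurred = gaussian_filter(sharp, sigma=2.0)  # stand-in for motion/defocus blur

print(high_freq_fraction(sharp), high_freq_fraction(blurred))
```

A per-region version of this measure, computed over tiles of the mammogram, is one way such a method could produce a blur heatmap rather than a single whole-image score.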

    Impact of errors in recorded compressed breast thickness measurements on volumetric density classification using volpara v1.5.0 software

    Purpose: Mammographic density has been demonstrated to predict breast cancer risk. It has been proposed that it could be used for stratifying screening pathways and recommending additional imaging. Volumetric density tools use the recorded compressed breast thickness (CBT) of the breast, measured at the x-ray unit, in their calculations; however, the accuracy of the recorded thickness can vary. The aim of this study was to investigate whether inaccuracies in recorded CBT impact upon volumetric density classification and to examine whether the current quality control (QC) standard is sufficient for assessing mammographic density. Methods: Raw data from 52 digital screening mammograms were included in the study. For each image, the clinically recorded CBT was artificially increased and decreased to simulate measurement error, in increments of 1 mm, until an error of ±15% of the recorded CBT was reached. New images were created for each 1 mm step in thickness, resulting in a total of 974 images, which then had a Volpara Density Grade (VDG) and volumetric density percentage assigned. Results: A change in VDG was recorded in 38.5% (n = 20) of mammograms when applying a ±15% error to the recorded CBT, and in 11.5% (n = 6) the change occurred within the QC standard's prescribed error of ±5 mm. Conclusion: The current QC standard of ±5 mm error in recorded CBT creates the potential for error in mammographic density measurement. This may lead to inaccurate classification of mammographic density. The current QC standard for assessing mammographic density should be reconsidered.
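
The thickness-perturbation scheme in the Methods can be sketched as a short helper: step the recorded CBT up and down in 1 mm increments until ±15% of the recorded value is reached. The function name and the example thickness are illustrative assumptions.

```python
# Sketch: generate the perturbed compressed breast thickness (CBT) values
# used to simulate measurement error, in whole-mm steps within +/-15%.
def cbt_error_steps(recorded_mm, pct=0.15, step_mm=1):
    max_err = int(recorded_mm * pct)          # largest whole-mm error within ±15%
    return [recorded_mm + d for d in range(-max_err, max_err + 1, step_mm)]

steps = cbt_error_steps(55)                   # e.g. a 55 mm recorded CBT
print(len(steps), steps[0], steps[-1])        # 17 values, from 47 mm to 63 mm
```

Each perturbed thickness would then be fed back into the volumetric density calculation to see whether the assigned VDG changes.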
