
    Predicting interval and screen-detected breast cancers from mammographic density defined by different brightness thresholds.

    BACKGROUND: Case-control studies show that mammographic density is a better risk factor when defined at higher than conventional pixel-brightness thresholds. We asked if this applied to interval and/or screen-detected cancers.
    METHOD: We conducted a nested case-control study within the prospective Melbourne Collaborative Cohort Study including 168 women with interval and 422 with screen-detected breast cancers, and 498 and 1197 matched controls, respectively. We measured absolute and percent mammographic density using the Cumulus software at the conventional threshold (Cumulus) and two increasingly higher thresholds (Altocumulus and Cirrocumulus, respectively). Measures were transformed and adjusted for age and body mass index (BMI). Using conditional logistic regression and adjusting for BMI by age at mammogram, we estimated risk discrimination by the odds ratio per adjusted standard deviation (OPERA), calculated the area under the receiver operating characteristic curve (AUC), and compared nested models using the likelihood ratio criterion and models with the same number of parameters using the difference in Bayesian information criterion (ΔBIC).
    RESULTS: For interval cancer, there was very strong evidence that the association was best predicted by Cumulus as a percentage (OPERA = 2.33 (95% confidence interval (CI) 1.85-2.92); all ΔBIC > 14), and the association with BMI was independent of age at mammogram. After adjusting for percent Cumulus, no other measure was associated with risk (all P > 0.1). For screen-detected cancer, however, the associations were strongest for the absolute and percent Cirrocumulus measures (all ΔBIC > 6), and after adjusting for Cirrocumulus, no other measure was associated with risk (all P > 0.07).
    CONCLUSION: The amount of brighter areas is the best mammogram-based measure of screen-detected breast cancer risk, while the percentage of the breast covered by white or bright areas is the best mammogram-based measure of interval breast cancer risk, irrespective of BMI. Therefore, different features of mammographic images give clinically important information about different outcomes.
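    As an illustration of how an OPERA estimate maps to risk discrimination: under a binormal equal-variance assumption, the AUC implied by an odds ratio per adjusted standard deviation is Φ(ln(OPERA)/√2), where Φ is the standard normal CDF. A minimal sketch of that conversion (the numbers below come from the reported OPERA; the code itself is illustrative, not part of the study):

```python
from math import erf, log, sqrt

def norm_cdf(x):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def auc_from_opera(opera):
    # Binormal equal-variance approximation: AUC = Phi(ln(OPERA) / sqrt(2)).
    return norm_cdf(log(opera) / sqrt(2.0))

# OPERA = 2.33 was reported for percent Cumulus and interval cancer.
print(round(auc_from_opera(2.33), 3))  # -> 0.725
```

    So an OPERA of 2.33 corresponds to an AUC of roughly 0.73, while an OPERA of 1 (no association) gives the chance-level AUC of 0.5.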

    Measurement challenge: protocol for international case–control comparison of mammographic measures that predict breast cancer risk

    Introduction: For women of the same age and body mass index, increased mammographic density is one of the strongest predictors of breast cancer risk. There are multiple methods of measuring mammographic density and other mammographic features that could potentially be used in a screening setting to identify and target women at high risk of developing breast cancer. However, it is unclear which measurement method provides the strongest predictor of breast cancer risk.
    Methods and analysis: The measurement challenge has been established as an international resource offering a common set of anonymised mammogram images for measurement and analysis. To date, full-field digital mammogram images and core data from 1650 cases and 1929 controls from five countries have been collated. The measurement challenge is an ongoing collaboration, and we are continuing to expand the resource to include additional image sets across different populations (from contributors) and to compare additional measurement methods (by challengers). The intended use of the measurement challenge resource is the refinement and validation of new and existing mammographic measurement methods. The resource provides a standardised dataset of mammographic images and core data that enables investigators to directly compare methods of measuring mammographic density or other mammographic features in case/control sets of both raw and processed images, for the purpose of comparing their predictions of breast cancer risk.
    Ethics and dissemination: Challengers and contributors are required to enter a Research Collaboration Agreement with the University of Melbourne prior to participation in the measurement challenge. The challenge database of collated data and images is stored in a secure data repository at the University of Melbourne. Ethics approval for the measurement challenge is held at the University of Melbourne (HREC ID 0931343.3).

    National culture, public health spending and life insurance consumption: an international comparison

    This study offers insight into how national cultural differences, public health expenditure, and economic freedom shaped life insurance expenditure across 28 advanced economies and 21 emerging and developing economies from 2002 to 2017. Our system GMM analysis reveals that cultural factors, public health spending, economic freedom, financial development, human development, life expectancy, the dependency ratio, and the Muslim religion are the major determinants of life insurance consumption at the aggregate level (i.e., for all sample economies). These results, however, differ dramatically between the group of advanced economies and the group of emerging and developing economies. Notably, cultural factors such as masculinity and uncertainty avoidance do not account for life insurance spending in advanced economies but have a statistically significant impact on life insurance consumption in emerging and developing economies. Our findings also demonstrate that consumers in both advanced and emerging and developing economies with higher public health spending and economic freedom tend to spend more on life insurance products. Both international life insurance businesses and governments around the world can benefit from these findings.

    Hidden Markov Model for recognition of skeletal data-based hand movement gestures

    The development of computing technology provides ever more methods for human–computer interaction. Hand gestures and motions are among the most basic forms of communication between people and computers. Recently, the release of 3D cameras such as the Microsoft Kinect and Leap Motion has provided powerful tools for exploring computer vision and virtual reality based on RGB-depth images. This paper focuses on an improved approach for automatically detecting, training, and recognising the state sequences of hand motions. The hand movements of three people were recorded as the input to the recognition system. These movements correspond to five actions: sweeping right to left, sweeping top to bottom, circle motion, square motion, and triangle motion. The skeletal data of the hand joints were collected to build an observation database. Features of each hand action were extracted from skeleton video frames using the Principal Component Analysis (PCA) algorithm for training and recognition. A hidden Markov model (HMM) was applied to train on the feature data and recognise the various states of hand movements. Experimental results showed that the proposed method achieved average accuracies of nearly 95.66% and 91.00% for offline and online recognition, respectively.
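    The recognition step described above amounts to scoring an observation sequence against each trained gesture HMM and choosing the most likely model. A minimal sketch of that scoring using the forward algorithm for a discrete HMM (the toy parameters and the quantised observation sequence below are illustrative stand-ins, not the paper's trained models):

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Likelihood of an observation sequence under a discrete HMM
    (pi: initial state probs, A: state transitions, B: emission probs)."""
    alpha = pi * B[:, obs[0]]          # initialise with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()

# Two toy 2-state models standing in for two trained gesture HMMs
# (all numbers are made up for illustration).
pi = np.array([0.5, 0.5])
A1 = np.array([[0.7, 0.3], [0.4, 0.6]])
B1 = np.array([[0.9, 0.1], [0.2, 0.8]])
A2 = np.array([[0.5, 0.5], [0.5, 0.5]])
B2 = np.array([[0.1, 0.9], [0.8, 0.2]])

obs = [0, 0, 1]  # quantised feature indices for one recorded motion
scores = {"gesture_1": forward_likelihood(pi, A1, B1, obs),
          "gesture_2": forward_likelihood(pi, A2, B2, obs)}
print(max(scores, key=scores.get))  # -> gesture_1
```

    In the paper's setting the observations would be PCA-reduced skeletal features rather than raw indices, and there would be one trained HMM per gesture class, but the classification rule is the same: pick the model with the highest sequence likelihood.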