7 research outputs found

    A Bottom-Up Review of Image Analysis Methods for Suspicious Region Detection in Mammograms.

    Get PDF
    Breast cancer is one of the most common causes of death among women worldwide. Early detection of breast cancer plays a critical role in increasing the survival rate. Various imaging modalities, such as mammography, breast MRI, ultrasound and thermography, are used to detect breast cancer. Although mammography has achieved considerable success in biomedical imaging, detecting suspicious areas remains a challenge: examination is manual, masses vary in shape, size and other morphological features, and mammography accuracy changes with breast density. Furthermore, analysing many mammograms per day can be a tedious task for radiologists and practitioners. One of the main objectives of biomedical imaging is to provide radiologists and practitioners with tools that help them identify all suspicious regions in a given image. Computer-aided mass detection in mammograms can serve as a second-opinion tool that helps radiologists avoid oversight errors. The scientific community has made much progress on this topic, and several approaches have been proposed along the way. Following a bottom-up narrative, this paper surveys scientific methodologies and techniques for detecting suspicious regions in mammograms, spanning from methods based on low-level image features to the most recent AI-based approaches. Both theoretical and practical grounds are provided across the paper's sections to highlight the pros and cons of the different methodologies. The paper's main scope is to let readers embark on a journey through a fully comprehensive description of techniques, strategies and datasets on the topic.

    Evaluation of segmentation and detection in computer aided diagnostic methods on selected mammogram images

    Full text link
    Introduction: In recent years mammography has become the primary diagnostic examination for detecting breast disease, above all as the method of choice in screening programmes. Computer-aided diagnosis tools are a great help in interpreting mammograms, detecting carcinomas in time and distinguishing benign from malignant masses. They are divided into computer-aided detection (CADe) and computer-aided diagnosis (CADx), both of which employ machine learning methods. One of the prerequisites for efficient detection of tumour masses is adequate segmentation of the presented breast tissue; partitioning an image into homogeneous texture regions is one of the first steps in understanding and analysing it. This work focuses on threshold-based segmentation, region-based segmentation and learning-based segmentation. Purpose: The aim of the thesis was to establish which of the selected segmentation approaches, within computer-aided detection methods, most appropriately segments images from the chosen mammogram database, and what positive detection rates modern CADe processes achieve. Methods: A descriptive method was used to explain the basic concepts of segmentation and detection of cancerous tissue in CADe methods, based on an extensive study of the available literature on current research in the field; the results were presented qualitatively, with commentary on the efficiency and viability of the methods used. Results: Studies that tested their segmentation and CADe methods on the publicly available Digital Database for Screening Mammography (DDSM) were reviewed. We compared selected studies from the field of computer-aided detection, assessed their efficiency in breast tissue segmentation and their positive detection rates for cancerous masses, tabulated the results and evaluated the findings. Discussion and conclusion: It was concluded that the reviewed CADe methods adequately segment and detect cancerous tissue in mammograms but do not yet reach the consistency of trained radiologists. Methods employing machine learning algorithms and clustering-based segmentation tend to produce better overall results than the other reviewed methods. Achieving reproducible and comparable results across studies is hindered by the use of different mammogram images for analysis and by researcher-specific adaptations of neural networks; the studied sources suggest a lack of uniform, publicly accessible mammogram databases that could be used to further the research with practically comparable results. Before CADe methods can be adopted as complementary techniques for mammogram diagnostics in clinical practice, further development of the field and consistently satisfactory results are needed, above all high sensitivity, accuracy and AUC values. As such, CADe methods and the segmentation processes involved show promise for the future of automatic interpretation of mammography screening.
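To make the threshold-based branch of segmentation concrete, below is a minimal sketch of Otsu thresholding followed by connected-component labelling on a grayscale mammogram; it assumes scikit-image and a hypothetical file name ("mammogram.png"), and is an illustration rather than any of the pipelines evaluated in the reviewed studies.

```python
# Minimal sketch of threshold-based segmentation, assuming scikit-image and a
# hypothetical grayscale mammogram file ("mammogram.png").
from skimage import io, filters, measure, morphology

image = io.imread("mammogram.png", as_gray=True)   # intensities scaled to [0, 1]

# A global Otsu threshold separates dense (bright) tissue from the background.
threshold = filters.threshold_otsu(image)
mask = image > threshold

# Drop small speckles, then label the remaining connected regions.
mask = morphology.remove_small_objects(mask, min_size=256)
labels = measure.label(mask)

# Report candidate regions by area and mean intensity as crude cues for review.
for region in measure.regionprops(labels, intensity_image=image):
    print(region.label, region.area, round(float(region.mean_intensity), 3))
```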

    A New Feature Ensemble with a Multistage Classification Scheme for Breast Cancer Diagnosis

    Get PDF

    Image Area Reduction for Efficient Medical Image Retrieval

    Get PDF
    Content-based image retrieval (CBIR) has been one of the most active areas in medical image analysis over the last two decades because of the steady increase in the number of digital images in use. Retrieval systems can support efficient diagnosis and treatment planning and thus help provide high-quality healthcare. Extensive research has attempted to improve image retrieval efficiency; the critical factors when searching large databases are time and storage requirements. In general, although many methods have been suggested to increase accuracy, fast retrieval has been investigated only sporadically. In this thesis, two approaches are proposed to reduce both the time and the space requirements of medical image retrieval. The IRMA data set is used to validate the proposed methods. Both methods use Local Binary Pattern (LBP) histogram features extracted from 14,410 X-ray images of the IRMA dataset. The first method is image folding, which operates on salient regions of an image. Saliency is determined by a context-aware saliency algorithm, which guides the folding of the image. After the folding process, the reduced image area is used to extract multi-block and multi-scale LBP features and to classify these features with a multi-class support vector machine (SVM). The other method combines classification with distance-based feature similarity. Images are first classified into general classes using LBP features; the retrieval is then performed within the class to locate the most similar images. Between the classification and retrieval steps, LBP features are eliminated by employing the error histogram of a shallow (n/p/n) autoencoder to quantify the retrieval relevance of image blocks: if a region is relevant, the autoencoder yields a large error when decoding it, so by examining the autoencoder error of image blocks, irrelevant regions can be detected and eliminated. To calculate similarity within general classes, the distance between the LBP features of the relevant regions is computed. The results show that the retrieval time can be reduced and the storage requirements lowered without a significant decrease in accuracy.
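As an illustration of the LBP-based retrieval idea described above, the sketch below extracts block-wise uniform LBP histograms and ranks images by Euclidean distance between feature vectors; the block size, LBP parameters and random placeholder images are assumptions, and the multi-scale features, SVM classification and autoencoder-based block elimination of the thesis are not reproduced here.

```python
# Minimal sketch of block-wise LBP histogram features and a distance-based
# similarity score, assuming scikit-image; block size and images are hypothetical.
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1          # neighbours and radius for the LBP operator (assumed)
BLOCK = 64           # side length of each non-overlapping image block (assumed)

def block_lbp_histogram(image, block=BLOCK):
    """Concatenate uniform-LBP histograms computed per non-overlapping block."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2   # uniform LBP yields values 0 .. P+1
    feats = []
    h, w = lbp.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist, _ = np.histogram(lbp[y:y + block, x:x + block],
                                   bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def euclidean_distance(f1, f2):
    """Retrieval reduces to ranking database images by feature distance."""
    return float(np.linalg.norm(f1 - f2))

# Placeholder arrays stand in for X-ray images from a database.
query = np.random.rand(256, 256)
candidate = np.random.rand(256, 256)
print(euclidean_distance(block_lbp_histogram(query), block_lbp_histogram(candidate)))
```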

    WEATHER LORE VALIDATION TOOL USING FUZZY COGNITIVE MAPS BASED ON COMPUTER VISION

    Get PDF
    Published thesis
    The creation of scientific weather forecasts is troubled by many technological challenges (Stern & Easterling, 1999), while their utilization is generally dismal. Consequently, the majority of small-scale farmers in Africa continue to consult some form of weather lore to reach various cropping decisions (Baliscan, 2001). Weather lore is a body of informal folklore (Enock, 2013) associated with the prediction of the weather and based on indigenous knowledge and human observation of the environment. As such, it tends to be more holistic and more localized to the farmers' context. However, weather lore has limitations; for instance, it cannot offer forecasts beyond a season. Different types of weather lore exist, utilizing almost all available human senses (feel, smell, sight and hearing). Of all the types of weather lore in existence, visual or observed weather lore is the one most used by indigenous societies to come up with weather predictions. On the other hand, meteorologists continue to treat this knowledge as superstition, partly because there is no means to scientifically evaluate and validate it. The visualization and characterization of visual sky objects (such as the moon, clouds, stars, and rainbows) in forecasting weather are significant subjects of research. To integrate visual weather lore into modern weather forecasting systems, there is a need to represent and scientifically substantiate this form of knowledge. This research was aimed at developing a method for verifying the visual weather lore that traditional communities use to predict weather conditions. To realize this verification, fuzzy cognitive mapping was used to model and represent causal relationships between selected visual weather lore concepts and weather conditions. The traditional knowledge used to produce these maps was obtained through case studies of two communities (in Kenya and South Africa). These case studies were aimed at understanding the weather lore domain as well as the causal effects between meteorological conditions and visual weather lore. In this study, common astronomical weather lore factors related to cloud physics were identified as: bright stars, dispersed clouds, dry weather, dull stars, feathery clouds, gathering clouds, grey clouds, high clouds, layered clouds, low clouds, stars, medium clouds, and rounded clouds. Relationships between the concepts were also identified and formally represented using fuzzy cognitive maps.
    In implementing the verification tool, machine vision was used to recognize sky objects captured by a sky camera, while pattern recognition was employed to benchmark and score the objects. A wireless weather station was used to capture real-time weather parameters. The tool was then designed and realized in the form of a software artefact that integrated both computer vision and fuzzy cognitive mapping for experimenting with visual weather lore, and verification using various statistical forecast skills and metrics. The tool consists of four main sub-components: (1) machine vision, which recognizes sky objects using support vector machine classifiers on shape-based feature descriptors; (2) pattern recognition, which benchmarks and scores objects using pixel orientations, Euclidean distance, Canny edge detection and the grey-level co-occurrence matrix; (3) fuzzy cognitive mapping, used to represent knowledge (an active Hebbian learning algorithm was used to learn until convergence); and (4) a statistical computing component used for verification and forecast skills, including the Brier score and contingency tables for deterministic forecasts. Rigorous evaluation of the verification tool was carried out using independent real-time images (not used in the training and testing phases) from Bloemfontein, South Africa, and Voi, Kenya. The real-time images were captured using a sky camera with GPS location services. The results of the implementation were tested for the selected weather conditions (for example, rain, heat, cold, and dry conditions) and found to be acceptable (the verified prediction accuracies were over 80%). The recommendation of this study is to apply the implemented method to further processing tasks, towards verifying all other types of visual weather lore. In addition, using the method developed also requires implementing modules for processing and verifying other types of weather lore, such as sounds and symbols of nature.
    Since time immemorial, from Australia to Asia and Africa to Latin America, local communities have relied on weather lore observations to predict seasonal weather and its effects on their livelihoods (Alcock, 2014). This is mainly based on many years of personal experience in observing weather conditions. However, when it comes to predictions for longer lead times (i.e. over a season), weather lore is uncertain (Hornidge & Antweiler, 2012). This uncertainty has partly contributed to the current status where meteorologists and other scientists continue to treat weather lore as superstition (United Nations, 2004) that is not capable of predicting weather. One of the problems in testing the confidence of weather lore in predicting weather is the wide variety of weather lore found in the details of indigenous sayings, which are tightly coupled to locality and pattern variations (Oviedo et al., 2008). This traditional knowledge is entrenched in the day-to-day socio-economic activities of the communities using it and is not globally available for comparison and validation (Huntington, Callaghan, Fox, & Krupnik, 2004). Further, this knowledge is based on local experience and lacks benchmarking techniques, so harmonizing and integrating it with science-based weather forecasting systems is a daunting task (Hornidge & Antweiler, 2012). It is partly for this reason that the question of validating weather lore has not yet been substantially investigated. Sufficiently expanded processes of gathering weather observations, combined with comparison and validation, can produce useful information. Since forecasting weather accurately is a challenge even with the latest supercomputers (BBC News Magazine, 2013), validated weather lore can be useful if it is incorporated into modern weather prediction systems. Validation of traditional knowledge is a necessary step in building integrated knowledge-based systems, and traditional knowledge incorporated into such systems has to be verified to enhance their reliability. Weather lore knowledge exists in different forms as identified by traditional communities; hence it needs to be tied together for comparison and validation.
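As a minimal sketch of how a fuzzy cognitive map can propagate concept activations towards a weather outcome, the example below iterates a sigmoid-squashed FCM update to convergence; the concept names, weight values and squashing parameter are illustrative assumptions, not the maps elicited from the case-study communities.

```python
# Minimal sketch of fuzzy cognitive map (FCM) inference with a sigmoid squashing
# function; concept names and causal weights are illustrative assumptions only.
import numpy as np

concepts = ["gathering_clouds", "low_clouds", "grey_clouds", "rain"]

# w[j, i] is the assumed causal influence of concept j on concept i.
w = np.array([
    [0.0, 0.0, 0.0, 0.6],   # gathering clouds -> rain
    [0.0, 0.0, 0.0, 0.5],   # low clouds -> rain
    [0.0, 0.0, 0.0, 0.4],   # grey clouds -> rain
    [0.0, 0.0, 0.0, 0.0],   # rain (outcome) influences nothing here
])

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# Initial activations, e.g. as approximated by a machine-vision component.
a = np.array([0.8, 0.6, 0.7, 0.0])

for _ in range(20):                      # iterate until (approximate) convergence
    a_new = sigmoid(a + a @ w)           # A_i <- f(A_i + sum_j A_j * w[j, i])
    if np.max(np.abs(a_new - a)) < 1e-4:
        break
    a = a_new

print(dict(zip(concepts, np.round(a, 3))))
```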
    The development of a weather lore validation tool that integrates a framework for acquiring weather data with methods of representing weather lore in verifiable forms can be a significant step towards validating weather lore against actual weather records from conventional weather-observing instruments. Successful validation of weather lore could open the opportunity to integrate acceptable weather lore with modern weather prediction systems and so improve actionable information for decision making that relies on seasonal weather prediction. In this study, a hybrid method combining computer vision and fuzzy cognitive mapping techniques was developed for verifying visual weather lore. The verification tool was designed to forecast by mimicking visual perception and by fuzzy thinking based on human cognitive knowledge. The method gives meaning to humanly perceivable sky objects so that computers can understand, interpret, and approximate visual weather outcomes. Questionnaires were administered in two case study locations (KwaZulu-Natal province in South Africa and Taita-Taveta County in Kenya) between March and July 2015. In both case studies respondents were interviewed on how visual astronomical and meteorological concepts cause weather outcomes, and the interviews were used to identify the causal effects of visual astronomical and meteorological objects on weather conditions. This was followed by comparing the visual weather lore knowledge of the two case studies and identifying variations between them. The results from the two case studies were aggregated in terms of seasonal knowledge: the causal links between visual weather concepts were investigated in both studies, and the results were compared and aggregated to build up common knowledge, with the joint averages of the majority of responses determined for each set of interacting concepts.
    The weather lore verification tool is modelled with input, processing and output components. The inputs to the system are sky image scenes and actual weather observations from wireless weather sensors. The image recognition component performs three sub-tasks: detecting objects (concepts) in image scenes, extracting the detected objects, and approximating the presence of the concepts by comparing extracted objects to ideal objects. The prediction process uses the approximated concepts generated in the recognition component to simulate scenarios with the knowledge represented in the fuzzy cognitive maps. The verification component evaluates the variation between the predictions and the actual weather observations to determine prediction errors and accuracy. To evaluate the tool, daily system simulations were run to predict and record probabilities of weather outcomes (i.e. rain, heat index, dryness, cold index), while weather observations were captured periodically using a wireless weather station; this process was repeated until there was sufficient data for the verification process. To match the range of the predicted weather outcomes, the actual weather observations (measurements) were transformed and normalized to the range [0, 1]. In the verification process, the actual observations were compared with the predicted weather outcome values by computing residuals (error values) from the observations. The error values and squared errors were used to compute the mean squared error (MSE) and the root mean squared error (RMSE) for each predicted weather outcome. Finally, the validity of the visual weather lore verification model was assessed using data from a different geographical location: actual data in the form of daily sky scenes and weather parameters were acquired from Voi, Kenya, from December 2015 to January 2016. The results on the use of hybrid techniques for verifying weather lore are expected to provide an incentive for integrating indigenous weather knowledge with modern numerical weather prediction systems for accurate and downscaled weather forecasts.
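A minimal sketch of the verification metrics named above (MSE, RMSE and the Brier score) on predictions and observations normalized to [0, 1]; all sample values below are hypothetical.

```python
# Minimal sketch of the verification metrics (MSE, RMSE, Brier score) on
# normalized outcomes in [0, 1]; the sample values are hypothetical.
import numpy as np

predicted = np.array([0.82, 0.10, 0.55, 0.30])   # predicted outcome probabilities
observed_raw = np.array([12.4, 0.0, 6.1, 1.8])   # e.g. rainfall in mm (assumed)

# Normalize observations to the [0, 1] range of the predictions.
observed = (observed_raw - observed_raw.min()) / (observed_raw.max() - observed_raw.min())

residuals = observed - predicted                  # error values
mse = float(np.mean(residuals ** 2))              # mean squared error
rmse = float(np.sqrt(mse))                        # root mean squared error

# Brier score for the dichotomous event "rain occurred" (observed rainfall > 0).
event = (observed_raw > 0).astype(float)
brier = float(np.mean((predicted - event) ** 2))

print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  Brier={brier:.3f}")
```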