
    Early Detection of Breast Cancer Using Machine Learning Techniques

    Cancer is the second leading cause of death in the world; 8.8 million patients died of cancer in 2015. Breast cancer is the leading cause of death among women. Several studies have addressed the early detection of breast cancer, so that treatment can start earlier and the chance of survival increases. Most of these studies concentrated on mammogram images. However, mammogram images sometimes carry a risk of false detection that may endanger the patient’s health. It is therefore vital to find alternative methods that are easier to implement, work with different datasets, and are cheaper and safer, while producing more reliable predictions. This paper proposes a hybrid model combining several machine learning (ML) algorithms, including Support Vector Machine (SVM), Artificial Neural Network (ANN), K-Nearest Neighbor (KNN), and Decision Tree (DT), for effective breast cancer detection. This study also discusses the datasets used for breast cancer detection and diagnosis. The proposed model can be used with different data types, such as images and blood samples.
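    The paper's exact hybrid architecture is not specified in the abstract; a minimal illustrative sketch of one plausible reading, a soft-voting ensemble over the four named classifiers on scikit-learn's built-in Wisconsin diagnostic dataset, might look as follows (the dataset choice, hyperparameters, and voting strategy are all assumptions, not the paper's configuration):

```python
# Hypothetical sketch of a hybrid SVM/ANN/KNN/DT breast cancer classifier.
# The soft-voting combination and all hyperparameters are illustrative
# assumptions, not the configuration from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Average the predicted class probabilities of the four base learners.
hybrid = make_pipeline(
    StandardScaler(),
    VotingClassifier(
        estimators=[
            ("svm", SVC(probability=True)),
            ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
            ("dt", DecisionTreeClassifier(max_depth=5)),
        ],
        voting="soft",
    ),
)
hybrid.fit(X_train, y_train)
print(f"test accuracy: {hybrid.score(X_test, y_test):.3f}")
```

    Soft voting lets the stronger learners dominate when they are confident, which is one common way such hybrids are combined; the paper may well use a different fusion rule.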

    The Magic of Vision: Understanding What Happens in the Process

    How important is human vision? Simply speaking, it is central for domain-related users to understand a design, a framework, a process, or an application in terms of human-centered cognition. This thesis focuses on facilitating visual comprehension for users working with specific industrial processes characterized by tomography. The thesis illustrates work done during the past two years within three application areas: real-time condition monitoring, tomographic image segmentation, and affective colormap design, featuring four research papers of which three are published and one is under review.
    The first paper provides effective deep learning algorithms, accompanied by comparative studies, to support real-time condition monitoring for a specialized microwave drying process for porous foams taking place in a confined chamber. The tools provided give users the capability to gain visually based insights into and understanding of specific processes. We verify that our state-of-the-art deep learning techniques based on infrared (IR) images significantly benefit condition monitoring, providing an increase in fault-finding accuracy over conventional methods. Nevertheless, we note that transfer learning and deep residual network techniques do not yield increased performance over plain convolutional neural networks in our case.
    After a drying process, output images are reconstructed from sensor data, such as that of a microwave tomography (MWT) sensor. Hence, enabling users to visually judge the success of the process by referring to the output MWT images becomes the core task. The second paper proposes an automatic segmentation algorithm named MWTS-KM to visualize the desired low-moisture areas of the foam on the MWT images throughout the process, effectively enhancing users' understanding of tomographic image data. We also show that its performance is superior to two other preeminent methods through a comparative study.
    To further boost human comprehension of the reconstructed MWT images, colormap design research based on the same segmentation task as in the second paper is fully elaborated in the third and fourth papers. A quantitative evaluation in the third paper shows that different colormaps can influence task accuracy in MWT-related analytics, and that the schemes autumn, viridis, and parula provide the best performance. As a full extension of the third paper, the fourth paper introduces a systematic crowdsourced study, verifying our prior hypothesis that colormaps triggering affect in the positive-exciting quadrant of the valence-arousal model facilitate more precise visual comprehension in the context of MWT than those in the other three quadrants. Interestingly, we also discover the counter-finding that colormaps resulting in affect in the negative-calm quadrant are undesirable. A synthetic colormap design guideline is proposed to benefit domain-related users.
    In the end, we re-emphasize the importance of benefiting humans in every context. We also begin walking down the future path of focusing on human-centered machine learning (HCML), an emerging subfield of computer science that combines the expertise of data-driven ML with the domain knowledge of HCI. This novel interdisciplinary research field is being explored to support the development of real-time industrial decision-support systems.
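    The thesis's MWTS-KM algorithm is not reproduced in the abstract; as a rough analogy, the general idea of intensity-based clustering to isolate a low-moisture (here, brighter) region in a reconstructed image can be sketched with a plain K-means on a synthetic stand-in image (the two-cluster setup and the synthetic data are illustrative assumptions):

```python
# Illustrative K-means segmentation of a grayscale image into low- and
# high-intensity regions, loosely analogous to isolating "dry" areas in a
# reconstructed MWT image. The synthetic image is a stand-in, not MWT data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.normal(0.3, 0.05, (64, 64))                 # dark background
image[20:40, 20:40] = rng.normal(0.8, 0.05, (20, 20))   # bright 20x20 region

# Cluster pixel intensities into two groups and reshape back into a mask.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    image.reshape(-1, 1)).reshape(image.shape)

# Relabel so that label 1 always denotes the brighter cluster.
if image[labels == 0].mean() > image[labels == 1].mean():
    labels = 1 - labels
print(f"bright-region pixels: {int(labels.sum())}")  # → 400 (the 20x20 patch)
```

    The actual MWTS-KM method presumably adds domain-specific pre- and post-processing on top of such a clustering core; this sketch only conveys the basic mechanism.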

    Image processing and machine learning techniques used in computer-aided detection system for mammogram screening - a review

    This paper reviews previously developed computer-aided detection (CAD) systems for mammogram screening, because the increasing death rate among women due to breast cancer is a global medical issue that can be controlled only by early detection through regular screening. To date, mammography is the most widely used breast imaging modality. CAD systems have been adopted by radiologists to increase the accuracy of breast cancer diagnosis by avoiding human errors and experience-related issues. This study reveals that, in spite of the high accuracy obtained by earlier proposed CAD systems for breast cancer diagnosis, they are not fully automated. Moreover, false-positive mammogram screening cases are high in number, and over-diagnosis of breast cancer exposes patients to harmful overtreatment on which a huge amount of money is wasted. In addition, it is reported that mammogram screening results with and without CAD systems do not differ noticeably, whereas the number of cancer cases undetected by CAD systems is increasing. Thus, future research is required to improve the performance of CAD systems for mammogram screening and make them completely automated.

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence) together with clinicians’ unique knowledge can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations—such as segmentation, co-registration, classification, and dimensionality reduction—and multi-omics data integration.

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    An Explainable Artificial Intelligence Model for the Classification of Breast Cancer

    Breast cancer is the most common cancer among women and affects both genders globally. The disease arises from abnormal growth of tissue formed of malignant cells. Early detection of breast cancer is crucial for enhancing the survival rate; therefore, artificial intelligence, which has revolutionized healthcare, can serve as a promising tool for early diagnosis. The present study aims to develop a machine-learning model to classify breast cancer and to provide explanations for the model's results. This could improve the understanding of breast cancer diagnosis and treatment by identifying the most important features of breast cancer tumors and the way they affect the classification task. The best-performing machine-learning models achieved an accuracy of 97.7% with 98.2% precision using k-nearest neighbors on the Wisconsin breast cancer dataset, and an accuracy of 98.6% with 94.4% precision using an artificial neural network on the Wisconsin diagnostic breast cancer dataset, asserting the importance and effectiveness of the proposed approach. The present research explains the model behavior using model-agnostic methods, demonstrating that the "bare nuclei" feature in the Wisconsin breast cancer dataset and the "worst area" feature in the Wisconsin diagnostic breast cancer dataset are the most important factors in determining breast cancer malignancy. The work provides extensive insights into the particular characteristics of breast cancer diagnosis and suggests possible directions for future investigation into the fundamental biological mechanisms underlying the disease's onset. The findings underline the potential of machine learning to enhance breast cancer diagnosis and therapy planning, while emphasizing the importance of interpretability and transparency in artificial-intelligence-based healthcare systems.
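    The study's own pipeline is not detailed in the abstract; a minimal sketch of the general pattern it describes, a KNN classifier on the Wisconsin diagnostic dataset explained by a model-agnostic method (here permutation importance, as a stand-in for whatever explanation method the paper uses), could look like this (split, hyperparameters, and the explanation method are assumptions):

```python
# Hypothetical sketch: KNN on the Wisconsin diagnostic breast cancer dataset,
# explained with permutation importance, a model-agnostic technique that
# ranks features by how much shuffling each one degrades test accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0,
    stratify=data.target)

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)

# Shuffle each feature column 20 times and measure the accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25s} {result.importances_mean[i]:.3f}")
```

    Because permutation importance perturbs inputs rather than inspecting model internals, it works unchanged for KNN, an ANN, or any other classifier, which is exactly the appeal of model-agnostic explanation.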

    Deformable models for adaptive radiotherapy planning

    Radiotherapy is the most widely used treatment for cancer, with 4 out of 10 cancer patients receiving radiotherapy as part of their treatment. The delineation of the gross tumour volume (GTV) is crucial in radiotherapy treatment. An automatic contouring system would be beneficial in radiotherapy planning, generating objective, accurate and reproducible GTV contours. Image-guided radiotherapy (IGRT) acquires patient images just before treatment delivery to allow any necessary positional correction. Consequently, a real-time contouring system provides an opportunity to adapt radiotherapy on the treatment day. In this thesis, freely deformable models (FDMs) and shape-constrained deformable models (SCDMs) were used to automatically delineate the GTV for brain cancer and prostate cancer. The level set method (LSM) is a typical FDM, and was used to contour gliomas on brain MRI. A series of low-level image segmentation methodologies are cascaded to form a case-wise fully automatic initialisation pipeline for the level set function. Dice similarity coefficients (DSCs) were used to evaluate the contours. Results showed good agreement between clinical contours and LSM contours; in 93% of cases the DSC was found to be between 60% and 80%. The second significant contribution is a novel development of the active shape model (ASM): instead of conventional image intensity, a profile feature was selected from pre-computed texture features by minimising the Mahalanobis distance (MD), to obtain the most distinct feature for each landmark. A new group-wise registration scheme was applied to solve the correspondence definition within the training data. This ASM model was used to delineate the prostate GTV on CT. DSCs for this case were found to be between 0.75 and 0.91, with a mean DSC of 0.81. The last contribution is a fully automatic active appearance model (AAM) which captures image appearance near the GTV boundary. The image appearance of the inner GTV was discarded to avoid the potential disruption caused by brachytherapy seeds or gold markers. This model outperforms the conventional AAM at the prostate base and apex by involving surrounding organs. The overall mean DSC for this case is 0.85.
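    The Dice similarity coefficient used throughout the evaluations above is defined as DSC = 2|A ∩ B| / (|A| + |B|) for two binary contour masks A and B, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal sketch with illustrative masks (the example masks are not from the thesis):

```python
# Dice similarity coefficient between two binary segmentation masks:
# DSC = 2|A ∩ B| / (|A| + |B|); 0 means no overlap, 1 means identical masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two equal-shape binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True  # 36-pixel contour
clin = np.zeros((10, 10), dtype=bool); clin[3:9, 3:9] = True  # shifted by (1, 1)
print(f"DSC = {dice(auto, clin):.3f}")  # overlap 5x5=25 → 50/72 ≈ 0.694
```

    In 3D the same formula applies voxel-wise, which is how an automatic GTV contour is scored against the clinical one.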

    Recent Advances in Social Data and Artificial Intelligence 2019

    The importance and usefulness of subjects and topics involving social data and artificial intelligence are becoming widely recognized. This book contains invited review, expository, and original research articles dealing with, and presenting state-of-the-art accounts of, the recent advances in the subjects of social data and artificial intelligence, and potentially their links to Cyberspace.

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools which measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal to be considered.

    State of the art of audio- and video based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications.
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced information and communication technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives.
    In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, due to the richness of the information they convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to meet high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with an outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects, and highlights the open challenges. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.