
    Intelligent doctor-patient matching: how José de Mello Saúde experiments towards data-driven and patient-centric decision making

    While data-driven decision-making is generally accepted as a fundamental capability of a competitive firm, many firms face difficulties in developing this capability. This case demonstrates how a private healthcare organization, José de Mello Saúde, collaborates with a global university-led program to build this capability through a pilot project on intelligent doctor-patient matching. The case walks the reader through the entire data science pipeline, from project scoping through data curation, modelling, and prototype testing to implementation. It enables discussions on how to overcome managerial challenges and build the capabilities needed to successfully integrate advanced analytics into the organization's operations.
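    As a rough illustration of what the modelling step of such a pipeline could look like, the sketch below scores hypothetical (patient, doctor) pairs and ranks candidate doctors for an incoming patient. The pair features, the synthetic labels, and the scikit-learn model are illustrative assumptions; the case does not disclose its actual matching approach.

```python
# Illustrative sketch only: the case does not disclose its model or features.
# Hypothetical pair features (specialty match, proximity, availability, past rating)
# and a scikit-learn classifier stand in for the actual matching approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: one row per historical (patient, doctor) pair,
# label = 1 if the appointment was kept and rated positively.
X_train = rng.random((500, 4))          # [specialty_match, proximity, availability, rating]
y_train = (X_train @ [2.0, 1.0, 0.5, 1.5] + rng.normal(0, 0.5, 500) > 2.5).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def rank_doctors(patient_doctor_pairs):
    """Return candidate indices ordered by predicted match probability."""
    scores = model.predict_proba(patient_doctor_pairs)[:, 1]
    return np.argsort(scores)[::-1]

# Score 10 candidate doctors for one incoming patient and show the top 3.
candidates = rng.random((10, 4))
print(rank_doctors(candidates)[:3])
```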

    Deep neuro‐fuzzy approach for risk and severity prediction using recommendation systems in connected health care

    The Internet of Things (IoT) and data science have revolutionized the technological landscape across the globe. As a result, health care ecosystems are adopting cutting-edge technologies to provide assistive and personalized care to patients. However, this vision is incomplete without data-focused mechanisms (such as machine learning and big data analytics) that enable early detection and treatment of patients even without hospital admission. Recently, there has been an increasing trend of providing assistive recommendations and timely alerts to patients regarding the severity of their disease. Remote monitoring of a patient's current health status is also possible through doctors' analysis of the data generated by IoT devices. Motivated by these facts, we design a health care recommendation system that provides multilevel decision-making related to the risk and severity of patient diseases. The proposed system uses an all-disease classification mechanism based on convolutional neural networks to segregate different diseases on the basis of a patient's vital parameters. After classification, a fuzzy inference system computes the risk levels for the patients. In the last step, based on the information provided by the risk analysis, patients receive a recommendation about the severity staging of the associated disease for timely and suitable treatment. The proposed work has been evaluated using different disease-related datasets, and the outcomes are promising.
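    The sketch below illustrates the two-stage idea described above: a small convolutional network classifies the disease from a patient's vital parameters, and a hand-rolled fuzzy step then combines the model's confidence with a vital-sign deviation score into a risk level. The layer sizes, inputs, membership functions, and rules are assumptions for illustration, not the paper's exact design.

```python
# Sketch of the two-stage pipeline: CNN disease classification followed by a
# simple fuzzy risk computation. All sizes, inputs, and rules are illustrative.
import numpy as np
import tensorflow as tf

N_VITALS, N_DISEASES = 8, 5

# Stage 1: 1-D CNN over a patient's vital-parameter vector (training omitted).
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_VITALS, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(N_DISEASES, activation="softmax"),
])

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_risk(confidence, deviation):
    """Stage 2: combine CNN confidence and vital-sign deviation into a risk score."""
    low  = min(triangular(confidence, -0.5, 0.0, 0.5), triangular(deviation, -0.5, 0.0, 0.5))
    high = min(triangular(confidence,  0.5, 1.0, 1.5), triangular(deviation,  0.5, 1.0, 1.5))
    # Defuzzify with a weighted average of the two rule outputs (low -> 0.2, high -> 0.9).
    return (0.2 * low + 0.9 * high) / (low + high + 1e-9)

vitals = np.random.rand(1, N_VITALS, 1).astype("float32")
probs = cnn(vitals).numpy()[0]
deviation = float(np.mean(np.abs(vitals - 0.5)) * 2)     # crude abnormality proxy
print(f"predicted disease {probs.argmax()}, risk score {fuzzy_risk(float(probs.max()), deviation):.2f}")
```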

    Design of an E-learning system using semantic information and cloud computing technologies

    Humanity is currently suffering from many difficult problems that threaten the life and survival of the human race. All of mankind can easily be affected, directly or indirectly, by these problems, and education is a key solution for most of them. In this thesis we make use of current technologies to enhance and ease the learning process. We have designed an e-learning system based on semantic information and cloud computing, together with several other technologies that contribute to improving the educational process and raising the level of students. The design was built after extensive research on useful technologies, their types, and examples of actual systems previously discussed by other researchers. In addition to the proposed design, an algorithm was implemented to identify topics found in large textual educational resources; it was tested and proved to be efficient compared with other methods. The algorithm can extract the main topics from textual learning resources, link related resources, and generate interactive dynamic knowledge graphs, and it accomplishes these tasks accurately and efficiently even for larger books. We used Wikipedia Miner, TextRank, and Gensim within our algorithm, and its accuracy was evaluated against Gensim, which it largely improves upon. Augmenting the system design with the implemented algorithm produces many useful services for improving the learning process, such as: automatically identifying the main topics of large textual learning resources and connecting them to well-defined concepts from Wikipedia, enriching current learning resources with semantic information from external sources, providing students with browsable dynamic interactive knowledge graphs, and making use of learning groups to encourage students to share their learning experiences and feedback with other learners.
    Doctoral Programme in Telematics Engineering, Universidad Carlos III de Madrid. Committee: President: Luis Sánchez Fernández; Secretary: Luis de la Fuente Valentín; Member: Norberto Fernández García.
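    The thesis's algorithm combines Wikipedia Miner, TextRank, and Gensim; the minimal sketch below shows only the general idea of extracting topics from a textual learning resource with Gensim's LDA implementation, using made-up "chapters" as input, and is not the thesis's actual pipeline.

```python
# Minimal sketch of topic extraction from a textual learning resource with Gensim.
# The thesis's algorithm also uses Wikipedia Miner and TextRank and links topics
# into a knowledge graph; none of that is reproduced here.
from gensim import corpora, models
from gensim.utils import simple_preprocess

# Stand-in "chapters"; a real run would use the full text of a learning resource.
chapters = [
    "Neural networks learn weights by gradient descent and backpropagation.",
    "Relational databases store tables and answer queries written in SQL.",
    "Gradient descent minimises a loss function over training data.",
]

texts = [simple_preprocess(doc) for doc in chapters]          # tokenise and lowercase
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(tokens) for tokens in texts]     # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)                                    # top terms per topic
```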

    Machine Learning Approaches for Heart Disease Detection: A Comprehensive Review

    This paper presents a comprehensive review of the application of machine learning algorithms to the early detection of heart disease. Heart disease remains a leading global health concern, necessitating efficient and accurate diagnostic methods. Machine learning has emerged as a promising approach, offering the potential to enhance diagnostic accuracy and reduce the time required for assessments. This review begins by elucidating the fundamentals of machine learning and provides concise explanations of the most prevalent algorithms employed in heart disease detection. It then examines noteworthy research efforts that have harnessed machine learning techniques for heart disease diagnosis. A detailed tabular comparison of these studies is also presented, highlighting the strengths and weaknesses of various algorithms and methodologies. This survey underscores the significant strides made in leveraging machine learning for early heart disease detection and emphasizes the ongoing need for further research to enhance its clinical applicability and efficacy.
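    As a minimal example of the kind of classifier the reviewed studies evaluate, the sketch below trains and scores a random forest on a synthetic stand-in for a heart disease dataset; the feature count and model choice are illustrative assumptions only.

```python
# Minimal sketch of a heart disease classifier. A synthetic dataset stands in for
# a real one such as the UCI Cleveland data; features and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# 13 features roughly mirroring the usual clinical attributes (age, cholesterol, ...).
X, y = make_classification(n_samples=1000, n_features=13, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
proba = clf.predict_proba(X_test)[:, 1]

print(f"accuracy: {accuracy_score(y_test, pred):.3f}")
print(f"ROC AUC:  {roc_auc_score(y_test, proba):.3f}")
```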

    Reporting serendipity in biomedical research literature: a mixed-methods analysis

    As serendipity is an unexpected, anomalous, or inconsistent observation that culminates in a valuable, positive outcome (McCay-Peet & Toms, 2018, pp. 4–6), it can be inferred that effectively supporting serendipity will result in a greater incidence of the desired positive outcomes (McCay-Peet & Toms, 2018, p. 22). To support serendipity effectively, however, we must first understand the overall process or experience of serendipity and the factors influencing its attainment. Currently, our understanding and models of the serendipitous experience are based almost exclusively on example collections: compilations of examples of serendipity that authors and researchers have gathered as they encounter them (Gries, 2009, p. 9). Unfortunately, reliance on such collections can lead to an over-representation of more vivid and dramatic examples and a possible under-representation of more common, but less noticeable, exemplars. By applying the principles of corpus research, which involves the electronic compilation of examples in existing documents, we can alleviate this problem and obtain a more balanced and representative understanding of serendipitous experiences (Gries, 2009). This three-article dissertation describes the phenomenon of serendipity, as recorded in biomedical research articles indexed in the PubMed Central database, in a way that might inform the development of machine compilation systems for the support of serendipity. Within this study, serendipity is generally defined as a process or experience that begins with encountering some type of information. That information is subsequently analyzed and further pursued by an individual with related knowledge, skills, and understanding, and finally allows them to realize a valuable outcome. The information encounter that initiates the serendipity experience exhibits qualities of unexpectedness as well as value for the user. In this mixed-methods study, qualitative content analysis, supported by natural language processing and concurrent with statistical analysis, is applied to gain a robust understanding of the phenomenon of serendipity that may reveal features of serendipitous experience useful to the development of recommender system algorithms.
    Includes bibliographical references.
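    A small sketch of the corpus-research step might look like the following: scanning article text for cue phrases that could signal a serendipitous observation. The cue list, the regex sentence splitting, and the plain-text input are illustrative assumptions, not the dissertation's actual coding scheme.

```python
# Sketch of a corpus-research step: flag sentences in article full text that
# contain cue phrases possibly signalling serendipity. Cue list and sentence
# splitting are illustrative assumptions only.
import re

CUE_PHRASES = ["serendipit", "unexpected finding", "surprisingly", "by chance", "fortuitous"]

def candidate_sentences(article_text):
    """Return sentences that contain at least one cue phrase."""
    sentences = re.split(r"(?<=[.!?])\s+", article_text)
    return [s for s in sentences if any(cue in s.lower() for cue in CUE_PHRASES)]

sample = ("We set out to measure enzyme kinetics. Surprisingly, the assay also "
          "revealed a second binding site. This unexpected finding was pursued further.")
for sentence in candidate_sentences(sample):
    print(sentence)
```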

    Computational Intelligence for Micro Learning

    Developments in Web technology and mobile devices have blurred the time and space boundaries of people's daily activities, enabling people to work, entertain themselves, and learn through mobile devices at almost any time and anywhere. Together with the requirement for life-long learning, these technological developments have given birth to a new learning style: micro learning. Micro learning aims to effectively utilise learners' fragmented spare time and carry out personalised learning activities. However, the massive volume of users and online learning resources forces a micro learning system to operate in a context of enormous and ubiquitous data, so manually managing online resources or user information with traditional methods is no longer feasible. How to utilise computational intelligence based solutions to automatically manage and process different types of massive information is the biggest research challenge in realising the micro learning service. As a result, to facilitate the micro learning service efficiently in the big data era, we need an intelligent system to manage the online learning resources and carry out different analysis tasks. To this end, an intelligent micro learning system is designed in this thesis, based on the service logic of the micro learning service. The system consists of three intelligent modules: a learning material pre-processing module, a learning resource delivery module, and an intelligent assistant module. The pre-processing module interprets the content of raw online learning resources and extracts key information from each resource, making the online resources ready to be used by the other intelligent components of the system. The learning resource delivery module recommends personalised learning resources to the target user based on his/her implicit and explicit user profiles. The goal of the intelligent assistant module is to provide evaluation or assessment services (such as student dropout rate prediction and final grade prediction) to educational resource providers or instructors, who can then further refine or modify the learning materials based on these assessment results.
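    As a toy illustration of the resource-delivery module's idea, the sketch below ranks learning resources by the cosine similarity between a TF-IDF vector of the learner's profile and each resource description; the representation and profile format are assumptions, not the thesis design.

```python
# Toy sketch of the resource-delivery module: rank learning resources by cosine
# similarity between a TF-IDF vector of the learner's profile and each resource
# description. Representation and profile format are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resources = [
    "Introduction to Python loops and functions",
    "Advanced SQL joins and query optimisation",
    "Getting started with neural networks in Python",
]
user_profile = "python beginner interested in neural networks"   # implicit + explicit interests

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(resources + [user_profile])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()       # profile vs. each resource

for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {resources[idx]}")
```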