
    Beyond the IT Magic Bullet: HIV Prevention Education and Public Policy

    Analytic applications are vital to public health assessment and surveillance because they can drive resource allocation, community assessment, and public policy. Using a dataset of nearly 90,000 patient hospital encounters, the number of instances with an ICD code of HIV and co-morbidities was identified. Blacks accounted for 75 percent of HIV hospital encounters in the dataset. While business analytic applications informed this study's cross-tabulations and interaction effects among race, age, and gender, there appears to be a significant relationship between HIV diagnoses and substance abuse. Payer data are informed by the Healthcare Cost and Utilization Project (HCUP), and these findings indicate significant service utilization among those insured by Medicare. More importantly, these issues raise salient implications for current health and public policy surrounding HIV care delivery in general, and within the Black community in particular. Attention to health and public policy warrants further investigation, given that this discourse has shifted toward curative medicine and away from prevention and education.
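    As a minimal sketch of the cross-tabulations described above, assuming a hypothetical encounters table with columns such as icd_codes, race, sex, age_group, and a substance_abuse flag (none of these names come from the study):

```python
import pandas as pd

# Hypothetical encounter-level data; file and column names are illustrative.
df = pd.read_csv("encounters.csv")  # ~90,000 rows in the study's dataset

# Flag encounters carrying an HIV ICD code (the code prefix is an assumption).
df["hiv"] = df["icd_codes"].str.contains("B20", na=False)

# Share of HIV encounters by race (the study reports 75 percent for Blacks).
print(df.loc[df["hiv"], "race"].value_counts(normalize=True) * 100)

# Interaction-style view: HIV and substance-abuse co-morbidity by sex and age group.
print(pd.crosstab([df["sex"], df["age_group"]],
                  [df["hiv"], df["substance_abuse"]]))
```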

    Data Mining Applications in Higher Education and Academic Intelligence Management

    Higher education institutions are nuclei of research and future development operating in a competitive environment, with the prerequisite mission to generate, accumulate, and share knowledge. The chain of generating knowledge inside and among external organizations (such as companies, other universities, partners, and the community) is considered essential for reducing the limitations of internal resources, and it could be plainly improved with the use of data mining technologies. Data mining has proven in recent years to be a pioneering field of research and investigation, encompassing a large variety of techniques applied in a multitude of areas, both in business and in higher education, relating interdisciplinary studies and development and covering a large variety of practice. Universities require a significant amount of knowledge mined from their past and current data sets using special methods and processes. The ways in which information and knowledge are represented and delivered to university managers are in continuous transformation due to the involvement of information and communication technologies in all academic processes.

    Higher education institutions have long been interested in predicting the paths of students and alumni (Luan, 2004), identifying which students will join particular course programs (Kalathur, 2006) and which students will require assistance in order to graduate. Another important preoccupation is academic failure among students, which has long fuelled a large number of debates. Researchers (Vandamme et al., 2007) attempted to classify students into clusters with dissimilar risks of exam failure, and also to detect with realistic accuracy what and how much students know, in order to deduce specific learning gaps (Pimentel & Omar, 2005). Distance and online education, together with intelligent tutoring systems and their capability to register their exchanges with students (Mostow et al., 2005), present various feasible information sources for data mining processes. Studies based on collecting and interpreting information from several courses could assist teachers and students in web-based learning settings (Myller et al., 2002). Scientists (Anjewierden et al., 2007) derived models for classifying chat messages using data mining techniques, in order to offer learners real-time adaptive feedback that could improve learning environments. The scientific literature also includes studies that classify students in order to predict their final grade based on features extracted from logged data in educational web-based systems (Minaei-Bidgoli & Punch, 2003); there, a combination of multiple classifiers with weighted feature vectors led to a significant improvement in classification performance.

    The author's research directions in data mining practice consist in finding feasible ways to offer higher education managers ample knowledge for preparing new hypotheses in a short period of time, something formerly rigid or unachievable given large datasets and earlier methods. The aim is therefore to put forward a way to understand students' opinions, satisfaction, and discontent with each element of the educational process, to predict their preferences for certain fields of study, their choices in continuing education, and academic failure, and to offer accurate correlations between their knowledge and the requirements of the labor market.
    Some of the most interesting data mining processes in the educational field are illustrated in the present chapter, to which the author adds his own ideas and applications in educational issues using specific data mining techniques. The organization of this chapter is as follows. Section 2 offers an insight into how data mining processes are being applied across the broad spectrum of education, presenting recent applications and studies published in the scientific literature that are significant to the development of this emerging science. In Section 3 the author introduces his work through a number of newly proposed directions and applications conducted on data collected from students of the Babes-Bolyai University, using specific data mining classification, learning, and clustering methods. Section 4 presents the integration of data mining processes and their particular role in higher education issues and management, toward the conception of an Academic Intelligence Management. Interrelated future research and plans are discussed as a conclusion in Section 5.

    Keywords: data mining, data clustering, higher education, decision trees, C4.5 algorithm, k-means, decision support, academic intelligence management
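    A minimal sketch of the two methods the chapter's keywords name, k-means clustering and a C4.5-style decision tree, applied to a hypothetical table of student records; the feature names are illustrative, and scikit-learn's CART tree with an entropy criterion stands in for C4.5, which it approximates but does not replicate:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical student data; columns are illustrative, not from the chapter.
students = pd.read_csv("student_survey.csv")
X = students[["avg_grade", "attendance_rate", "satisfaction_score"]]
y = students["continued_studies"]  # e.g. 1 = enrolled in a master's program

# Decision-tree classification (CART with entropy here; the chapter uses C4.5).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4).fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=list(X.columns)))

# k-means clustering to group students with similar profiles.
students["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(students.groupby("cluster")[X.columns].mean())
```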

    Know abnormal, find evil: frequent pattern mining for ransomware threat hunting and intelligence

    The emergence of crypto-ransomware has significantly changed the cyber threat landscape. Crypto-ransomware removes a data custodian's access by encrypting valuable data on victims' computers and requests a ransom payment to reinstate that access by decrypting the data. Timely detection of ransomware depends very much on how quickly and accurately system logs can be mined to hunt abnormalities and stop the evil. In this paper we first set up an environment to collect activity logs of 517 Locky ransomware samples, 535 Cerber ransomware samples, and 572 samples of TeslaCrypt ransomware. We utilize Sequential Pattern Mining to find Maximal Frequent Patterns (MFPs) of activities within different ransomware families as candidate features for classification using J48, Random Forest, Bagging, and MLP algorithms. We achieved 99% accuracy in distinguishing ransomware instances from goodware samples and 96.5% accuracy in detecting the family of a given ransomware sample. Our results indicate the usefulness and practicality of applying pattern mining techniques to derive good features for ransomware hunting. Moreover, we showed the existence of distinctive frequent patterns within different ransomware families, which can be used to identify a ransomware sample's family and to build intelligence about threat actors and the threat profile of a given target.
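    A minimal sketch of the paper's feature-extraction idea on toy data: maximal frequent itemset mining (mlxtend's fpmax, an itemset-based simplification of the sequential pattern mining the authors actually use) over hypothetical per-sample activity logs, with the mined patterns becoming binary features for a Random Forest family classifier:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpmax
from sklearn.ensemble import RandomForestClassifier

# Hypothetical activity logs: one list of observed events per sample (toy data).
logs = [["CreateFile", "CryptEncrypt", "DeleteShadowCopies"],
        ["CreateFile", "CryptEncrypt", "HttpPost"],
        ["CreateFile", "ReadFile"]]          # last one is goodware
labels = ["locky", "cerber", "goodware"]     # family labels

# One-hot encode the event sets.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(logs).transform(logs), columns=te.columns_)

# Maximal frequent patterns serve as candidate features.
mfps = fpmax(onehot, min_support=0.5, use_colnames=True)

# Feature vector: does a sample's log contain each mined pattern?
X = pd.DataFrame({str(p): onehot[list(p)].all(axis=1) for p in mfps["itemsets"]})

# Family classification (the paper also evaluates J48, Bagging, and MLP).
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(X))
```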

    Alter ego, state of the art on user profiling: an overview of the most relevant organisational and behavioural aspects regarding User Profiling.

    This report gives an overview of the most relevant organisational and behavioural aspects of user profiling. It discusses not only the most important aims of user profiling from both an organisation's and a user's perspective, but also organisational motives and barriers for user profiling and the most important conditions for its success. Finally, recommendations are made and suggestions for further research are given.

    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods for real-time, unobtrusive monitoring of human ambulation, activity, and wellness, and for analysing the resulting data with various algorithmic techniques, have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have produced a large body of literature. This paper presents a holistic articulation of the research and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device frameworks and sensor types; data collection, processing, and analysis; and applications, limitations, and challenges. The aim is to present a systematic and comprehensive survey of the literature in the area in order to identify research gaps and prioritize future research directions.

    Framework to predict the metabolic syndrome without doing a blood test: based on machine learning for a clinical decision support system

    Metabolic Syndrome (MetS) is a cluster of risk factors that increase the likelihood of heart disease and diabetes mellitus, and researchers have recently linked it to worse outcomes in the novel Covid-19 disease. It is crucial to be diagnosed in time to take preventive measures, especially for patients in locations without proper laboratories and medical consultations. This work presents a new model to diagnose metabolic syndrome using machine learning and non-biochemical variables that healthcare professionals can obtain in initial consultations. For evaluating and comparing the model, this work also proposes a new methodology for performing research on data mining, called RAMAD. The methodology standardizes the novel model's comparison with similar classification models, using their reported variables and previously obtained data from a study in Colombia, and using the holdout and random subsampling validation methods to obtain performance indicators across models. The resulting ANN model used three hidden layers and only hip circumference, dichotomous waist circumference, and dichotomous blood pressure variables. It gave an Area under the Receiver Operating Characteristic curve (AROC) of 87.75% under the International Diabetes Federation (IDF) criteria and 85.12% under the Harmonized Diagnosis or Joint Interim Statement (HMS) criteria, higher than previous models. Thanks to the new methodology, diagnosis models can be thoroughly documented for appropriate future comparisons, thus benefiting diagnosis of the studied diseases. Medical personnel need to know the factors involved in the syndrome to start a treatment, so this work also presents the segmentation of metabolic syndrome into types related to each biochemical variable. It uses the RAMAD methodology together with several machine learning techniques to design a framework to predict MetS and its several types, without using a blood test and with only anthropometric and clinical information. The results showed an excellent system for predicting six MetS types that combine the factors mentioned above, with AROCs ranging from 71% to 96% and an overall AROC of 82.86%. This thesis finishes by proposing a SCRUM Thinking framework for creating mobile health applications that implement the new models and serve as decision tools for healthcare professionals; its standard and fundamental characteristics were analysed, and the quality attributes were verified in the framework's early stages.

    Keywords: Metabolic Syndrome, Segmentation, Quine–McCluskey, Random Subsampling validation, RAMAD, Machine learning, Framework, International Diabetes Federation (IDF), Harmonized Diagnosis or Joint Interim Statement (HMS)
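    A minimal sketch of the kind of model described, assuming a hypothetical labelled dataset; scikit-learn's MLPClassifier stands in for the thesis's ANN, the three hidden layers and three input variables follow the abstract, while the layer sizes, column names, and preprocessing are assumptions:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

# Hypothetical data: hip circumference (cm) plus two dichotomous indicators.
df = pd.read_csv("mets_consultations.csv")
X = df[["hip_circumference", "waist_high", "bp_high"]]  # illustrative names
y = df["mets_idf"]  # 1 if MetS under the IDF criteria

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Three hidden layers per the abstract; the sizes here are assumed.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8, 4), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)

# AROC, the evaluation metric reported in the thesis.
print("AROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```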

    The Impact of Information and Communication Technology on Internal Control’s Prevention and Detection of Fraud

    This study explores the impact of Information and Communication Technology (ICT) on internal control effectiveness in preventing and detecting fraud within the financial sector of a developing economy, Nigeria. Using a triangulation of questionnaire and interview techniques to investigate the internal control activities of Nigerian Internal Auditors in relation to their use of ICT in fraud prevention and detection, the study made use of cross-tabulations, correlation coefficients, and one-way ANOVAs for the analysis of quantitative data, while thematic analysis was adopted for the qualitative aspects. The Technology Acceptance Model (TAM) and Omoteso et al.'s Three-Layered Model (TLM) were used to underpin the study and provide theoretical grounding for the issues involved. The study's findings show that Nigerian Internal Auditors are increasingly adopting IT-based tools and techniques in their internal control activities, and that the use of ICT-based tools and techniques in internal control positively impacts Internal Auditors' independence and objectivity. The findings also indicate that Internal Auditors' use of ICT-based tools and techniques has the potential to prevent electronic fraud, and that such tools and techniques are effective in detecting electronic fraud. However, continuous online auditing was found to be effective in preventing fraud, but not suited to fraud detection in financial businesses. This exploratory study sheds light on the impact of ICT usage on internal control's effectiveness and on internal auditors' independence. The study contributes to the debate on the significance of ICT adoption in accounting disciplines by identifying perceived benefits, organisational readiness, trust, and external pressure as variables that could affect Internal Auditors' use of ICT. Above all, this research produced a new model, the Technology Effectiveness Planning and Evaluation Model (TEPEM), for the study of ICT adoption in internal control effectiveness for the prevention and detection of fraud. As a result of its planning capability for external contingencies, the model is useful for explaining studies involving ICT in the unique macro environment of developing economies such as Nigeria, where electricity generation is in short supply and regulatory activities are unpredictable. The model proposes that technology effectiveness (in the prevention and the detection of fraud) is a function of TAM variables (such as perceived benefits, organisational readiness, trust, and external pressures), contingent factors (size of organisation, set-up and maintenance cost, staff training, and infrastructural readiness), and an optimal mix of human and technological capabilities.
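    A minimal sketch of the quantitative analyses the study names (cross-tabulations, correlation coefficients, one-way ANOVA), assuming a hypothetical coded questionnaire dataset whose column names are entirely illustrative:

```python
import pandas as pd
from scipy import stats

# Hypothetical questionnaire responses; file and column names are illustrative.
survey = pd.read_csv("auditor_survey.csv")

# Cross-tabulation: ICT-tool usage by auditor seniority.
print(pd.crosstab(survey["seniority"], survey["uses_ict_tools"]))

# Correlation: ICT usage intensity vs. perceived fraud-detection effectiveness.
r, p = stats.pearsonr(survey["ict_usage_score"], survey["detection_score"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# One-way ANOVA: detection scores across organisation-size groups.
groups = [g["detection_score"].values
          for _, g in survey.groupby("org_size")]
print(stats.f_oneway(*groups))
```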