11,422 research outputs found

    Using Data Mining to Investigate the Behavior of Video Rental Customers


    Gamification Analytics: Support for Monitoring and Adapting Gamification Designs

    Inspired by the engaging effects of video games, gamification aims to motivate people to show desired behaviors in a variety of contexts. In recent years, gamification has influenced the design of many software applications in both the consumer and the enterprise domain. In some cases, entire businesses, such as Foursquare, owe their success to well-designed gamification mechanisms in their product. Gamification has also attracted the interest of academics from fields such as human-computer interaction, marketing, psychology, and software engineering. Scientific contributions comprise psychological theories and models to better understand the mechanisms behind successful gamification, case studies that measure the psychological and behavioral outcomes of gamification, methodologies for gamification projects, and technical concepts for platforms that support implementing gamification efficiently. Given a new project, gamification experts can leverage the existing body of knowledge to reuse previous gamification ideas or derive new ones. However, there is no one-size-fits-all approach to creating engaging gamification designs. Gamification success always depends on a wide variety of factors defined by the characteristics of the audience, the gamified application, and the chosen gamification design. In contrast to researchers, gamification experts in industry rarely have the necessary skills and resources to assess the success of their gamification designs systematically. Therefore, it is essential to provide them with suitable support mechanisms that help them assess and improve gamification designs continuously. Providing suitable and efficient gamification analytics support is the ultimate goal of this thesis. This work presents a study with gamification experts that identifies relevant requirements in the context of gamification analytics. Given the identified requirements and earlier work in the analytics domain, the thesis then derives a set of gamification analytics-related activities and uses them to extend an existing process model for gamification projects. The resulting model can be used by experts to plan and execute their gamification projects with analytics in mind. Next, this work identifies existing tools and assesses their applicability in gamification projects. The results can help experts make objective technology decisions; however, they also show that most tools have significant gaps relative to the identified user requirements. Consequently, a technical concept for a suitable realization of gamification analytics is derived. It describes a loosely coupled analytics service that helps gamification experts seamlessly collect and analyze gamification-related data while minimizing dependencies on IT experts. The concept is evaluated successfully via the implementation of a prototype and its application in two real-world gamification projects. The results show that the presented gamification analytics concept is technically feasible, applicable to actual projects, and valuable for the systematic monitoring of gamification success.
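    The thesis's actual service design is not reproduced in this abstract. Purely as an illustration of what "loosely coupled" collection of gamification-related data can mean, the following minimal in-memory sketch (all class, field, and event names hypothetical) separates event tracking from analysis behind a narrow interface:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class GamificationEvent:
    """A single gamification-related data point emitted by the host app."""
    user_id: str
    event_type: str  # e.g. "points_awarded", "badge_earned"
    value: int = 0


class AnalyticsService:
    """Collects events through a narrow interface, so the gamified
    application needs no knowledge of how analyses are performed."""

    def __init__(self):
        self._events = []

    def track(self, event: GamificationEvent) -> None:
        self._events.append(event)

    def points_per_user(self) -> dict:
        """One example analysis: total points awarded per user."""
        totals = defaultdict(int)
        for e in self._events:
            if e.event_type == "points_awarded":
                totals[e.user_id] += e.value
        return dict(totals)


svc = AnalyticsService()
svc.track(GamificationEvent("alice", "points_awarded", 10))
svc.track(GamificationEvent("alice", "points_awarded", 5))
svc.track(GamificationEvent("bob", "badge_earned"))
print(svc.points_per_user())  # {'alice': 15}
```

    A real deployment would, of course, replace the in-memory list with a remote service, which is precisely the kind of decoupling from IT experts the abstract describes.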

    Is Algorithmic Affirmative Action Legal?

    This Article is the first to comprehensively explore whether algorithmic affirmative action is lawful. It concludes that both statutory and constitutional antidiscrimination law leave room for race-aware affirmative action in the design of fair algorithms. Along the way, the Article recommends some clarifications of current doctrine and proposes the pursuit of formally race-neutral methods to achieve the admittedly race-conscious goals of algorithmic affirmative action. The Article proceeds as follows. Part I introduces algorithmic affirmative action. It begins with a brief review of the bias problem in machine learning and then identifies multiple design options for algorithmic fairness. These designs are presented at a theoretical level, rather than in formal mathematical detail. Part I also highlights some difficult truths that stakeholders, jurists, and legal scholars must understand about the accuracy and fairness trade-offs inherent in fairness solutions. Part II turns to the legality of algorithmic affirmative action, beginning with the statutory challenge under Title VII of the Civil Rights Act. Part II argues that voluntary algorithmic affirmative action ought to survive a disparate treatment challenge under Ricci and under the anti-race-norming provision of Title VII. Finally, Part III considers the constitutional challenge to algorithmic affirmative action by state actors. It concludes that at least some forms of algorithmic affirmative action, to the extent they are racial classifications at all, ought to survive strict scrutiny as narrowly tailored solutions designed to mitigate the effects of past discrimination.

    A comparison of the utility of data mining algorithms in an open distance learning context

    The use of data mining within the higher education context has increasingly been gaining traction. A parallel examination of the accuracy, robustness, and utility of the algorithms applied within data mining is argued to be a necessary step toward entrenching the use of educational data mining (EDM). This paper provides a comparative analysis of various classification algorithms within an Open Distance Learning institution in South Africa. The study compares the performance of the ZeroR, OneR, Naïve Bayes, IBk, Simple Logistic Regression, and J48 algorithms in classifying students within a cohort over an eight-year timespan. The initial results appear to show that, given the data quality and structure of the institution under study, the J48 algorithm most consistently performed with the highest levels of accuracy.
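    The study itself compares these classifiers as implemented in standard tooling; its code is not shown here. As an illustration only, the two simplest baselines it names, ZeroR (always predict the majority class) and OneR (a rule on the single most predictive attribute), can be sketched in plain Python on hypothetical toy data:

```python
from collections import Counter, defaultdict


def zero_r(labels):
    """ZeroR baseline: ignore attributes, always predict the majority class."""
    majority = Counter(labels).most_common(1)[0][0]
    return lambda row: majority


def one_r(rows, labels):
    """OneR: for each attribute, map each of its values to that value's
    majority class; keep the attribute whose rule makes the fewest
    training errors."""
    best_attr, best_rule, best_errors = None, None, float("inf")
    for a in range(len(rows[0])):
        buckets = defaultdict(Counter)  # attribute value -> class counts
        for row, y in zip(rows, labels):
            buckets[row[a]][y] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in buckets.items()}
        errors = sum(1 for row, y in zip(rows, labels) if rule[row[a]] != y)
        if errors < best_errors:
            best_attr, best_rule, best_errors = a, rule, errors
    default = Counter(labels).most_common(1)[0][0]  # fallback for unseen values
    return lambda row: best_rule.get(row[best_attr], default)


# Hypothetical student records: (prior-grade band, residence), label pass/fail.
rows = [("high", "urban"), ("high", "rural"), ("low", "urban"),
        ("low", "rural"), ("high", "urban")]
labels = ["pass", "pass", "fail", "fail", "pass"]

print(zero_r(labels)(("low", "urban")))      # pass (majority class)
print(one_r(rows, labels)(("low", "rural")))  # fail (rule on prior-grade band)
```

    Baselines like these matter in such comparisons precisely because a more complex learner (e.g. a decision tree such as J48) only demonstrates value when it beats them.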

    Program and Abstracts Celebration of Student Scholarship, 2014

    Program and Abstracts from the Celebration of Student Scholarship on April 23, 2014

    Artificial neural networks in academic performance prediction: Systematic implementation and predictor evaluation

    The applications of artificial intelligence in education have increased in recent years. However, further conceptual and methodological understanding is needed to advance the systematic implementation of these approaches. The first objective of this study is to test a systematic procedure for implementing artificial neural networks to predict academic performance in higher education. The second objective is to analyze the importance of several well-known predictors of academic performance in higher education. The sample included 162,030 students of both genders from private and public universities in Colombia. The findings suggest that it is possible to systematically implement artificial neural networks to classify students’ academic performance as either high (accuracy of 82%) or low (accuracy of 71%). Artificial neural networks outperform other machine-learning algorithms in evaluation metrics such as the recall and the F1 score. Furthermore, it is found that prior academic achievement, socioeconomic conditions, and high school characteristics are important predictors of students’ academic performance in higher education. Finally, this study discusses recommendations for implementing artificial neural networks and several considerations for the analysis of academic performance in higher education.
    Affiliations: Rodríguez Hernández, Carlos Felipe (Katholieke Universiteit Leuven, Belgium); Musso, Mariel Fernanda (CONICET, Centro Interdisciplinario de Investigaciones en Psicología Matemática y Experimental Dr. Horacio J. A. Rimoldi, Argentina); Kyndt, Eva (Swinburne University of Technology, Australia; Universiteit Antwerpen, Belgium); Cascallar, Eduardo (Katholieke Universiteit Leuven, Belgium).
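    The reported comparison rests on recall and the F1 score alongside per-class accuracy. As a reminder of how these metrics relate for a binary high/low classification, a minimal stdlib computation (toy labels, not the study's data) is:

```python
def binary_metrics(y_true, y_pred, positive="high"):
    """Accuracy, precision, recall, and F1 for one positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}


y_true = ["high", "high", "high", "low", "low"]
y_pred = ["high", "high", "low", "low", "high"]
print(binary_metrics(y_true, y_pred))
# accuracy 0.6; precision, recall, and F1 all 2/3 on this toy example
```

    Recall and F1 are the natural headline metrics here because, with imbalanced pass/fail classes, plain accuracy can look strong while one class is badly misclassified.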

    TLAD 2011 Proceedings: 9th international workshop on teaching, learning and assessment of databases (TLAD)

    This is the ninth in the series of highly successful international workshops on the Teaching, Learning and Assessment of Databases (TLAD 2011), which once again is held as a workshop of BNCOD 2011 - the 28th British National Conference on Databases. TLAD 2011 is held on the 11th July at Manchester University, just before BNCOD, and hopes to be just as successful as its predecessors. The teaching of databases is central to all Computing Science, Software Engineering, Information Systems and Information Technology courses, and this year, the workshop aims to continue the tradition of bringing together both database teachers and researchers, in order to share good learning, teaching and assessment practice and experience, and further the growing community amongst database academics. As well as attracting academics from the UK community, the workshop has also been successful in attracting academics from the wider international community, through serving on the programme committee, and attending and presenting papers. Due to the healthy number of high-quality submissions this year, the workshop will present eight peer-reviewed papers. Of these, six will be presented as full papers and two as short papers. These papers cover a number of themes, including: the teaching of data mining and data warehousing, databases and the cloud, and novel uses of technology in teaching and assessment. It is expected that these papers will stimulate discussion at the workshop itself and beyond. This year, the focus on providing a forum for discussion is enhanced through a panel discussion on assessment in database modules, with David Nelson (of the University of Sunderland), Al Monger (of Southampton Solent University) and Charles Boisvert (of Sheffield Hallam University) as the expert panel.
