REDIR: Automated Static Detection of Obfuscated Anti-Debugging Techniques
Reverse Code Engineering (RCE) to detect anti-debugging techniques in software is a very difficult task. Code obfuscation is an anti-debugging technique that makes detection even more challenging. The Rule Engine Detection by Intermediate Representation (REDIR) system for automated static detection of obfuscated anti-debugging techniques is a prototype designed to help the RCE analyst work through this tedious task more efficiently. Three tenets form the REDIR foundation. First, an Intermediate Representation (IR) improves the analyzability of binary programs by reducing a large instruction set to a handful of semantically equivalent statements. Next, an Expert System (ES) rule engine searches the IR and initiates a sensemaking process for anti-debugging technique detection. Finally, an IR analysis process confirms the presence of an anti-debugging technique. The REDIR system is implemented as a debugger plug-in. Within the debugger, REDIR interacts with a program in the disassembly view. Debugger users can instantly highlight anti-debugging techniques and determine whether the presence of a debugger will cause a program to take a conditional jump or fall through to the next instruction.
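The core idea of rule-based detection over lifted IR can be illustrated with a minimal sketch. The IR opcodes, rule names, and pattern format below are assumptions for illustration only, not REDIR's actual rule language:

```python
# Toy rule engine: each rule is an ordered sequence of IR fragments that,
# if all found in order in the lifted statements, signals an anti-debug check.
# Patterns here (IsDebuggerPresent call, PEB BeingDebugged read) are
# well-known anti-debug idioms, but the IR syntax is hypothetical.

ANTI_DEBUG_RULES = {
    "isdebuggerpresent_call": ["call IsDebuggerPresent", "cjump"],
    "peb_flag_read": ["load fs:[0x30]", "load +0x2", "cjump"],
}

def detect(ir_statements):
    """Return names of rules whose steps appear, in order, in the IR."""
    hits = []
    for name, pattern in ANTI_DEBUG_RULES.items():
        remaining = iter(ir_statements)  # fresh cursor per rule
        # each step must match some later statement (ordered subsequence)
        if all(any(step in stmt for stmt in remaining) for step in pattern):
            hits.append(name)
    return hits
```

A hit on `peb_flag_read`, for example, tells the analyst that the conditional jump closing the sequence depends on the debugger flag, which is exactly the take-the-jump-or-fall-through decision the abstract describes.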
Computational model of negotiation skills in virtual artificial agents
Negotiation skills represent crucial abilities for engaging in effective social interactions in formal and informal settings. Serious games, intelligent systems and virtual agents can provide solid tools upon which one-to-one training and assessment can be reliably made available. The aim of the present work is to fill the gap between the recent growing interest in soft skills and the lack of a robust and modern methodology for supporting their investigation. A computational model for the development of Enact, a 3D virtual intelligent platform for training and testing negotiation skills, will be presented. The serious game allows users to interact with simulated peers in scenarios depicting daily life situations and receive a psychological assessment and adaptive training reflecting their negotiation abilities. To pursue this goal, this work has gone through different research stages, each with a unique methodology, results and discussion described in its specific section. In the first phase, the platform was designed to operationalize the examined negotiation theory, developed and assessed. The negotiation styles considered, consistent with previous findings, have been found not to correlate with personality traits, coping strategies and perceived self-efficacy. The serious game has been widely tested for its usability and underwent two development and release stages aimed at improving its accuracy, usability and likeability. The variables measured by the platform have been found to predict in all cases at least two of the negotiation styles considered. Concerning the user feedback, the game has been judged as useful and more pleasant than the traditional test, and the perceived time spent on the game was significantly lower than the real time spent. In the second stage of this research, the game scenarios were used to collect a dataset of documents containing natural language negotiations between users and the virtual agents.
The dataset was used to assess the correlations between personal pronoun use and the negotiation styles. Results showed that more engaged styles generally used pronouns with a significantly higher frequency than less engaged styles. Styles with a high concern for self showed a higher frequency of singular personal pronouns, while styles with a high concern for others used significantly more relational pronouns. The corpus of documents was also used to perform multiclass classification of the negotiation styles using machine learning. Both linear (SVM) and non-linear (MNB, CNN) models performed reliably, with state-of-the-art accuracy.
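The pronoun-frequency signal described above can be sketched as a small feature extractor. The pronoun sets and feature names are illustrative assumptions, not the study's exact operationalization; features like these could then feed a linear classifier of the kind (SVM) the abstract mentions:

```python
# Illustrative sketch (not the authors' pipeline): relative frequency of
# self-oriented vs. relational pronouns in a negotiation transcript.
import re
from collections import Counter

# Hypothetical pronoun groupings for "concern for self" vs "concern for others"
SELF_PRONOUNS = {"i", "me", "my", "mine", "myself"}
RELATIONAL_PRONOUNS = {"we", "us", "our", "ours", "you", "your", "yours"}

def pronoun_features(text):
    """Return relative frequencies of each pronoun group in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)  # avoid division by zero on empty input
    return {
        "self": sum(counts[p] for p in SELF_PRONOUNS) / total,
        "relational": sum(counts[p] for p in RELATIONAL_PRONOUNS) / total,
    }
```

A self-focused utterance such as "I want my share" scores high on the `self` feature, while "we could split it between us" scores high on `relational`, mirroring the reported correlation.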
Developing sustainable business models for institutions’ provision of open educational resources: Learning from OpenLearn users’ motivations and experiences
Universities across the globe have, for some time, been exploring the possibilities for achieving public benefit and generating business and visibility through releasing and sharing open educational resources (OER). Many have written about the need to develop sustainable and profitable business models around the production and release of OER. Downes (2006), for example, has questioned the financial sustainability of OER production at scale. Many of the proposed business models focus on OER’s value in generating revenue and detractors of OER have questioned whether they are in competition with formal education.
This paper reports on a study intended to broaden the conversation about OER business models to consider the motivations and experiences of OER users as the basis for making a better-informed decision about whether OER and formal learning are competitive or complementary with each other. The study focused on OpenLearn - the Open University’s (OU) web-based platform for OER, which hosts hundreds of online courses and videos and is accessed by over 3,000,000 users a year. A large-scale survey and follow-up interviews with OpenLearn users worldwide revealed that university-provided OER can offer learners a bridge to formal education, allowing them to try out a subject before registering on a formal course and to build confidence in their abilities as learners. In addition, it was found that using OER during formal paid-for study can improve learners’ performance and self-reliance, leading to increased retention and satisfaction with the learning experience.
Open educational resources for all? Comparing user motivations and characteristics across The Open University’s iTunes U channel and OpenLearn platform.
With the rise in access to mobile multimedia devices, educational institutions have exploited the iTunes U platform as an additional channel to provide free educational resources with the aim of profile-raising and breaking down barriers to education. For those prepared to invest in content preparation, it is possible to produce interactive, portable material that can be made available globally. Commentators have questioned both the financial implications for platform-specific content production, and the availability of devices for learners to access it (Osborne, 2012).
The Open University (OU) makes its free educational resources available on iTunes U and via its web-based open educational resources (OER) platform, OpenLearn. The OU’s OER on iTunes U reached the 60 million download mark in 2013; its OpenLearn platform boasts 27 million unique visitors since 2006. This paper reports the results of a large-scale study of users of the OU’s iTunes U channel and OpenLearn platform. A survey of several thousand users revealed key differences in demographics between those accessing OER via the web and via iTunes U. In addition, the data allowed comparison between three groups: formal learners, informal learners and educators.
The study raises questions about whether university-provided OER meet the needs of users and makes recommendations for how content can be modified to suit their needs. As the publishing of OER becomes core to business, we reflect on reasons why understanding users’ motivations and demographics is vital, allowing for needs-led resource provision and content that is adapted to best achieve learner satisfaction, and to deliver institutions’ social mission.
Beyond Traditional Teaching: The Potential of Large Language Models and Chatbots in Graduate Engineering Education
In the rapidly evolving landscape of education, digital technologies have
repeatedly disrupted traditional pedagogical methods. This paper explores the
latest of these disruptions: the potential integration of large language models
(LLMs) and chatbots into graduate engineering education. We begin by tracing
historical and technological disruptions to provide context and then introduce
key terms such as machine learning and deep learning and the underlying
mechanisms of recent advancements, namely attention/transformer models and
graphics processing units. The heart of our investigation lies in the
application of an LLM-based chatbot in a graduate fluid mechanics course. We
developed a question bank from the course material and assessed the chatbot's
ability to provide accurate, insightful responses. The results are encouraging,
demonstrating not only the bot's ability to effectively answer complex
questions but also the potential advantages of chatbot usage in the classroom,
such as the promotion of self-paced learning, the provision of instantaneous
feedback, and the reduction of instructors' workload. The study also examines
the transformative effect of intelligent prompting on enhancing the chatbot's
performance. Furthermore, we demonstrate how powerful plugins like Wolfram
Alpha for mathematical problem-solving and code interpretation can
significantly extend the chatbot's capabilities, transforming it into a
comprehensive educational tool. While acknowledging the challenges and ethical
implications surrounding the use of such AI models in education, we advocate
for a balanced approach. The use of LLMs and chatbots in graduate education can
be greatly beneficial but requires ongoing evaluation and adaptation to ensure
ethical and efficient use. (44 pages, 16 figures; preprint for PLOS ONE)
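The "intelligent prompting" step the paper evaluates can be sketched as prompt construction around a question-bank item. The template wording, function names, and the `llm` callable below are assumptions for illustration, not the paper's exact prompts or tooling:

```python
# Minimal sketch: wrap a question-bank item in a role and course context
# before sending it to a model. The model itself is abstracted as any
# callable that maps a prompt string to a response string.

def build_prompt(course_context, question):
    """Assemble a context-rich prompt for a graduate fluid mechanics question."""
    return (
        "You are a teaching assistant for a graduate fluid mechanics course.\n"
        f"Course context:\n{course_context}\n\n"
        "Answer step by step, stating assumptions and governing equations.\n"
        f"Question: {question}"
    )

def ask_chatbot(llm, course_context, question):
    # `llm` could wrap any hosted LLM API; injecting it keeps the sketch testable.
    return llm(build_prompt(course_context, question))
```

Grading the responses against the course's question bank, as the paper does, then reduces to looping `ask_chatbot` over the bank and scoring each answer.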
Exploring and Evaluating the Scalability and Efficiency of Apache Spark using Educational Datasets
Research into the combination of data mining and machine learning technology with web-based education systems (known as educational data mining, or EDM) is becoming imperative in order to enhance the quality of education by moving beyond traditional methods. With the worldwide growth of Information and Communication Technology (ICT), data are becoming available at a significantly large volume, with high velocity and extensive variety. In this thesis, four popular data mining methods are applied to Apache Spark, using large volumes of datasets from Online Cognitive Learning Systems to explore the scalability and efficiency of Spark. Various volumes of datasets are tested on Spark MLlib with different running configurations and parameter tunings. The thesis presents useful strategies for allocating computing resources and tuning to take full advantage of the in-memory system of Apache Spark to conduct the tasks of data mining and machine learning. Moreover, it offers insights that education experts and data scientists can use to manage and improve the quality of education, as well as to analyze and discover hidden knowledge in the era of big data.
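The resource-allocation tuning the thesis explores is typically expressed through `spark-submit` flags. The specific values and the job script name below are illustrative assumptions, not the thesis's measured optimum; the flags themselves (`--num-executors`, `--executor-cores`, `--executor-memory`, `spark.sql.shuffle.partitions`, `spark.memory.fraction`) are standard Spark configuration:

```shell
# Illustrative tuning for an MLlib job on a small YARN cluster:
# size executors to fit node memory, and match shuffle partitions to the
# total core count (8 executors x 4 cores) so stages stay in memory.
spark-submit \
  --master yarn \
  --num-executors 8 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.sql.shuffle.partitions=32 \
  --conf spark.memory.fraction=0.6 \
  edm_mllib_job.py hdfs:///data/cognitive_learning/events.csv
```

Raising `spark.memory.fraction` trades user-data headroom for a larger cache/execution region, which is the kind of in-memory trade-off the thesis's tuning experiments probe.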
Deeper Understanding of Tutorial Dialogues and Student Assessment
Bloom (1984) reported a two-standard-deviation improvement with human tutoring, which inspired many researchers to develop Intelligent Tutoring Systems (ITSs) that are as effective as human tutoring. However, recent studies suggest that the 2-sigma result was misleading and that current ITSs are as good as human tutors. Nevertheless, we can think of 2 standard deviations as the benchmark for the tutoring effectiveness of ideal expert tutors. In the case of ITSs, there is still the possibility that ITSs could be better than humans. One way to improve ITSs would be identifying, understanding, and then successfully implementing effective tutorial strategies that lead to learning gains. Another step towards improving the effectiveness of ITSs is an accurate assessment of student responses. However, evaluating student answers in tutorial dialogues is challenging. The student answers often refer to entities in the previous dialogue turns and the problem description. Therefore, student answers should be evaluated by taking dialogue context into account. Moreover, the system should explain which parts of the student answer are correct and which are incorrect. Such explanation capability allows ITSs to provide targeted feedback to help students reflect upon and correct their knowledge deficits. Furthermore, targeted feedback increases learners' engagement, enabling them to persist in solving the instructional task at hand on their own. In this dissertation, we describe our approach to discover and understand effective tutorial strategies employed by effective human tutors while interacting with learners. We also present various approaches to automatically assess students' contributions using general methods that we developed for semantic analysis of short texts.
We explain our work using generic semantic similarity approaches to evaluate the semantic similarity between individual learner contributions and ideal answers provided by experts for target instructional tasks. We also describe our method to assess student performance based on tutorial dialogue context, accounting for linguistic phenomena such as ellipsis and pronouns. We then propose an approach to provide an explanatory capability for assessing student responses. Finally, we recommend a novel method based on concept maps for jointly evaluating and interpreting the correctness of student responses.
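The generic short-text similarity idea can be illustrated with the simplest member of that family, bag-of-words cosine similarity. This is a baseline sketch only; the dissertation's actual methods (dialogue-context handling, ellipsis and pronoun resolution, concept maps) go well beyond it:

```python
# Baseline short-text similarity: cosine of word-count vectors.
# A student answer scoring near the ideal answer would be judged correct.
import math
import re
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two short texts."""
    a = Counter(re.findall(r"\w+", text_a.lower()))
    b = Counter(re.findall(r"\w+", text_b.lower()))
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Comparing a learner contribution against the expert's ideal answer then yields a graded score rather than a binary judgment, which is the starting point for the targeted, explanatory feedback discussed above.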
The e-revolution and post-compulsory education: using e-business models to deliver quality education
The best practices of e-business are revolutionising not just technology itself but the whole process through which services are provided; and from which important lessons can be learnt by post-compulsory educational institutions. This book aims to move debates about ICT and higher education beyond a simple focus on e-learning by considering the provision of post-compulsory education as a whole. It considers what we mean by e-business, why e-business approaches are relevant to universities and colleges and the key issues this raises for post-secondary education.