
    InGAME International Pathway to Collaboration: Collaboration in Games UK-China

    In 2019 the Arts & Humanities Research Council (AHRC) funded a series of projects as part of its UK-China Creative Partnerships Programme. Led by Abertay University in partnership with academic and industry partners across the UK and China, InGAME International was funded through this AHRC programme with the aim of studying the potential for UK-China cooperation and collaboration in the computer games sector. The project is linked to the AHRC Creative Industries Cluster, InGAME: Innovation for Games and Media Enterprise, which is also led by Abertay University in partnership with the University of Dundee and the University of St Andrews. The games industry is one of the largest and fastest-growing sectors in both the UK and Chinese creative economies. In 2023, China was the largest gaming market globally, with revenue forecast at $82.064 billion compared with $7.94 billion in the UK (Statista, 2023). The growth of China's market has long been a source of appeal for UK game developers and publishers seeking new routes to market. However, the divergence between the UK and China in terms of market profile, consumption patterns, leading companies, technologies, regulation, licensing, management, and business culture has presented ongoing difficulties for any UK-based developer interested in engagement in or with China. On this basis, the current study sought to consolidate industry, legal, and regulatory know-how with a view to providing a valuable resource to games professionals and researchers with an interest in UK-China collaboration. This Pathway to Collaboration report curates the cumulative knowledge and insight generated during the InGAME International programme, with an intended audience of games industry professionals and researchers interested in UK-China collaboration. At the heart of the research is an unprecedented qualitative study involving in-depth interviews with 47 leading experts from the UK, China, and other territories, with knowledge of games development, business, publishing, marketing, localisation, IP, copyright, regulation, markets, and sales. This report is the first comprehensive qualitative study to investigate the intersection between the UK and China games industries and markets at this scale and depth, providing readers with an invaluable, interactive resource that will support professionals and researchers in initiating new collaborations between the two nations.

    Digitalization and Development

    This book examines the diffusion of digitalization and Industry 4.0 technologies in Malaysia by focusing on the ecosystem critical to their expansion. The chapters examine digital proliferation in the major sectors of agriculture, manufacturing, e-commerce and services, as well as the intermediary organizations essential for the orderly performance of socioeconomic agents. The book incisively reviews the policy instruments critical for the effective and orderly development of the embedding organizations, and the regulatory framework needed to quicken the appropriation of socioeconomic synergies from digitalization and Industry 4.0 technologies. It highlights the importance of collaboration between government, academic and industry partners, and makes key recommendations on how to encourage the adoption of IR4.0 technologies in the short and long term. This book bridges the concepts and applications of digitalization and Industry 4.0 and will be a must-read for policy makers seeking to quicken the adoption of these technologies.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    A Critical Review Of Post-Secondary Education Writing During A 21st Century Education Revolution

    Educational materials are effective instruments for conveying information and reporting new discoveries uncovered by researchers in specific areas of academia. Higher education, like other educational institutions, relies on instructional materials to inform its practice of educating adult learners. In post-secondary education, developmental English programs are tasked with meeting the needs of dynamic populations; thus there is a continuous need for research in this area to support its changing landscape. However, the majority of scholarly thought in this area centers on K-12 reading and writing. This paucity presents a challenge to the post-secondary community. This study uses qualitative content analysis to examine peer-reviewed journals from 2003-2017, developmental education websites, and a government-issued document directed toward reforming post-secondary developmental education programs. These highly relevant sources help educators find the informational support needed to apply best practices for student success. Developmental education serves the purpose of addressing literacy gaps for students transitioning to college-level work. The findings illuminate the dearth of material offered to developmental educators. This study suggests the field of literacy research is fragmented and highlights an apparent blind spot in the scholarly literature with regard to English writing instruction. This poses a quandary for post-secondary literacy researchers in the 21st century and establishes the necessity for the literacy research community to commit future scholarship toward equipping college educators who teach writing to underprepared adult learners.

    Digital Innovations for a Circular Plastic Economy in Africa

    Plastic pollution is one of the biggest challenges of the twenty-first century and requires innovative and varied solutions. Focusing on sub-Saharan Africa, this book brings together interdisciplinary, multi-sectoral and multi-stakeholder perspectives exploring challenges and opportunities for utilising digital innovations to manage and accelerate the transition to a circular plastic economy (CPE). The book is organised into three sections bringing together discussion of environmental conditions, operational dimensions and country case studies of digital transformation towards the circular plastic economy. It explores the environment for digitisation in the circular economy, bringing together perspectives from practitioners in academia, innovation, policy, civil society and government agencies. The book also highlights specific country case studies on the development and implementation of innovative ideas to drive the circular plastic economy across the three sub-Saharan African regions. Finally, the book interrogates the policy dimensions and practitioner perspectives towards a digitally enabled circular plastic economy. Written for a wide range of readers across academia, policy and practice, including researchers, students, small and medium enterprises (SMEs), digital entrepreneurs, non-governmental organisations (NGOs), multilateral agencies, policymakers and public officials, this book offers unique insights into complex, multilayered issues relating to the production and management of plastic waste and highlights how digital innovations can drive the transition to the circular plastic economy in Africa. The Open Access version of this book, available at https://www.taylorfrancis.com, has been made available under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) 4.0 license.

    Software Design Change Artifacts Generation through Software Architectural Change Detection and Categorisation

    Unlike other engineering projects, which are mostly implemented by (non-expert) workers after engineers complete the design, software is designed, implemented, tested, and inspected solely by experts. Researchers and practitioners have linked software bugs, security holes, problematic integration of changes, complex-to-understand codebases, and unwarranted mental pressure in software development and maintenance to inconsistent and complex design, and to the lack of an easy way to understand what is going on in a software system and what to plan. The unavailability of the information and insights development teams need to make good decisions makes these challenges worse. Software design documents and the extraction of other insightful information are therefore essential to reduce the above-mentioned anomalies. Moreover, extracting architectural design artifacts is required to create developer profiles for the market in many crucial scenarios. To that end, architectural change detection, categorization, and change description generation are crucial because they are the primary artifacts from which other software artifacts are traced. However, it is not feasible for humans to analyze all the changes in a single release to detect change and impact, because doing so is time-consuming, laborious, costly, and inconsistent. In this thesis, we conduct six studies addressing these challenges to automate architectural change information extraction and document generation in ways that could assist development and maintenance teams. In particular, (1) we detect architectural changes using lightweight techniques leveraging textual and codebase properties, (2) categorize them from intelligent perspectives, and (3) generate design change documents by exploiting precise contexts of components' relations and change purposes, which were previously unexplored. Our experiments using 4,000+ architectural change samples and 200+ design change documents suggest that the proposed approaches are promising in accuracy and scalable enough to deploy frequently. Our change detection approach can detect up to 100% of architectural change instances and is very scalable. Our change classifier's F1 score is 70%, which is promising given the challenges, and our system can produce descriptive design change artifacts with 75% significance. Since most of our studies are foundational, our approaches and prepared datasets can serve as baselines for advancing research in design change information extraction and documentation.
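
    To make the flavour of such lightweight, text-based change detection concrete, here is a minimal sketch that flags components whose token profile shifts between two releases. The Jaccard measure, the 0.8 threshold, and all names are illustrative assumptions, not the thesis's actual techniques.

```python
# Minimal sketch of lightweight, text-based architectural change detection.
# Everything here (tokenisation, Jaccard, threshold) is an illustrative
# assumption, not the thesis's method.
import re

def tokens(source: str) -> set[str]:
    """Extract identifier-like tokens from a component's source text."""
    return set(re.findall(r"[A-Za-z_]\w+", source))

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity of two token sets (1.0 means identical)."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def detect_changes(old: dict[str, str], new: dict[str, str],
                   threshold: float = 0.8) -> list[str]:
    """Flag components whose textual profile shifted between releases."""
    changed = [name for name in old.keys() & new.keys()
               if jaccard(tokens(old[name]), tokens(new[name])) < threshold]
    # Components added or removed count as architectural changes outright.
    changed += sorted(old.keys() ^ new.keys())
    return changed

v1 = {"auth": "def login(user): check_password(user)"}
v2 = {"auth": "def login(user): oauth_flow(user, token_service)",
      "billing": "def charge(order): create_invoice(order)"}
print(detect_changes(v1, v2))  # ['auth', 'billing']
```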

    Posthuman Creative Styling: can a creative writer's style of writing be described as procedural?

    This thesis is about creative styling: the styling a creative writer might use to make their writing unique. It addresses the question of whether such styling can be described as procedural. Creative styling is part of the technique a creative writer uses when writing. It is how they make the text more 'lively' through tips and tricks they have either learned or discovered. In essence these are rules, ones the writer accrues over time through practice. The thesis argues that the use and invention of these rules can be set out as procedures, and so describes creative styling as procedural. The thesis follows from questioning why machines and algorithms have so far been incapable of producing creative writing of value. Machine-written novels do not abound on the bookshelves, and writing styled by computers is, on the whole, dull in comparison to human-crafted literature. The work came about by considering how it might be possible to reach a point where writing by people and procedural writing are considered to have equal value. For this reason the thesis is set in a posthuman context, where the differences between machines and people are erased. The thesis uses practice to inform an original conceptual space model, based on quality dimensions and the dynamic inter-operation of spaces. This model gives an example of the procedures a posthuman creative writer uses when engaged in creative styling. It suggests an original formulation for the conceptual blending of conceptual spaces, based on the casting of qualities from one space to another. Supporting and informing its arguments are ninety-nine examples of creative writing practice which show the procedures by which style has been applied, created and assessed. The thesis provides a route forward for further joint research into both computational and human-coded creative writing.
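
    As a toy illustration only, the abstract's conceptual space model, with quality dimensions and the casting of qualities from one space to another, might be sketched in code along these lines; the dimension names, values, and casting rule are hypothetical, not taken from the thesis.

```python
# Toy sketch of a conceptual space with quality dimensions, and a blend
# that "casts" chosen qualities from one space into another. All names
# and values are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ConceptualSpace:
    """A named space whose content is a set of quality dimensions."""
    name: str
    qualities: dict[str, float] = field(default_factory=dict)

def blend(target: ConceptualSpace, source: ConceptualSpace,
          cast: list[str]) -> ConceptualSpace:
    """Blend two spaces by casting selected qualities from source into target."""
    merged = dict(target.qualities)
    for dim in cast:
        if dim in source.qualities:
            merged[dim] = source.qualities[dim]
    return ConceptualSpace(f"{target.name}+{source.name}", merged)

sea = ConceptualSpace("sea", {"rhythm": 0.9, "vastness": 0.8})
city = ConceptualSpace("city", {"noise": 0.7, "rhythm": 0.3})
print(blend(city, sea, cast=["rhythm"]).qualities)
# {'noise': 0.7, 'rhythm': 0.9}
```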

    A systematic literature review on source code similarity measurement and clone detection: techniques, applications, and challenges

    Measuring and evaluating source code similarity is a fundamental software engineering activity that supports a broad range of applications, including but not limited to code recommendation and the detection of duplicate code, plagiarism, malware, and code smells. This paper presents a systematic literature review and meta-analysis of code similarity measurement and evaluation techniques to shed light on the existing approaches and their characteristics across different applications. We initially found over 10,000 articles by querying four digital libraries and ended up with 136 primary studies in the field. The studies were classified according to their methodology, programming languages, datasets, tools, and applications. A deep investigation reveals 80 software tools, working with eight different techniques in five application domains. Nearly 49% of the tools work on Java programs and 37% support C and C++, while many programming languages have no support at all. A noteworthy finding was the existence of 12 datasets related to source code similarity measurement and duplicate code, of which only eight are publicly accessible. The lack of reliable datasets, empirical evaluations, hybrid methods, and support for multi-paradigm languages are the main challenges in the field. Emerging applications of code similarity measurement concentrate on the development phase in addition to the maintenance phase.
    Comment: 49 pages, 10 figures, 6 tables
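
    As a concrete illustration of one classic technique family such reviews cover, token-based similarity, here is a minimal sketch; the tokenisation and the use of difflib's matching ratio are simplifying assumptions, not any specific surveyed tool.

```python
# Minimal sketch of token-based code similarity. The regex tokeniser and
# difflib ratio are illustrative choices, not a surveyed tool's design.
import difflib
import re

def tokenize(code: str) -> list[str]:
    """Split code into identifier and punctuation tokens, dropping whitespace."""
    return re.findall(r"\w+|[^\w\s]", code)

def similarity(a: str, b: str) -> float:
    """Token-sequence similarity in [0, 1] via difflib's matching ratio."""
    return difflib.SequenceMatcher(None, tokenize(a), tokenize(b)).ratio()

clone_a = "for i in range(n): total += values[i]"
clone_b = "for j in range(n): acc += values[j]"  # identifiers renamed
print(f"{similarity(clone_a, clone_b):.2f}")     # stays high despite renaming
```

    Renaming identifiers (a so-called type-2 clone) barely lowers the token-sequence ratio, which is one reason purely textual metrics remain popular baselines in this field.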

    Evaluation Methodologies in Software Protection Research

    Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs, and try to break the confidentiality or integrity of assets embedded in the software. Both companies and malware authors want to prevent such attacks. This has driven an arms race between attackers and defenders, resulting in a plethora of different protection and analysis methods. However, it remains difficult to measure the strength of protections, because MATE attackers can reach their goals in many different ways and no universally accepted evaluation methodology exists. This survey systematically reviews the evaluation methodologies of papers on obfuscation, a major class of protections against MATE attacks. For 572 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, through sample treatment, to the measurements performed. We provide detailed insights into how the academic state of the art evaluates both the protections and the analyses applied to them. In summary, there is a clear need for better evaluation methodologies. We identify nine challenges for software protection evaluations, which represent threats to the validity, reproducibility, and interpretation of research results in the context of MATE attacks.
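
    To ground what a "protection" means here, below is a deliberately weak illustration of one obfuscation class such surveys cover, string encoding, which hides literals from static inspection by a MATE attacker; the XOR scheme and key are illustrative toys, and real protections are far stronger.

```python
# Toy illustration of string encoding as an obfuscation. The single-byte
# XOR key is an illustrative assumption; real schemes are far stronger.
KEY = 0x5A

def encode(s: str) -> bytes:
    """XOR-mask a literal so it never appears verbatim in the binary."""
    return bytes(b ^ KEY for b in s.encode())

def decode(blob: bytes) -> str:
    """Recover the literal at run time, just before it is needed."""
    return bytes(b ^ KEY for b in blob).decode()

LICENCE_URL = encode("https://example.com/licence-check")  # stored masked
print(decode(LICENCE_URL))  # decoded only when the check actually runs
```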

    A Machine Learning-Based Empirical Evaluation of Cyber Threat Actors' High-Level Attack Patterns over Low-Level Attack Patterns in Attributing Attacks

    Cyber threat attribution is the process of identifying the actor behind an attack incident in cyberspace. Accurate and timely threat attribution plays an important role in deterring future attacks by enabling appropriate and timely defense mechanisms. Manual analysis of attack patterns gathered from honeypot deployments, intrusion detection systems, firewalls, and trace-back procedures is still security analysts' preferred method of cyber threat attribution. Such attack patterns are low-level Indicators of Compromise (IOC); they represent the Tactics, Techniques, and Procedures (TTPs) and software tools used by adversaries in their campaigns. Adversaries rarely re-use them, and they can be manipulated, resulting in false and misleading attribution. To empirically evaluate and compare the effectiveness of both kinds of IOC, two problems need to be addressed. The first is that recent research has discussed the ineffectiveness of low-level IOC for cyber threat attribution only intuitively; an empirical evaluation of their effectiveness based on a real-world dataset is missing. The second is that the available dataset for high-level IOC has a single instance for each predictive class label, so it cannot be used directly for training machine learning models. To address these problems, we empirically evaluate the effectiveness of low-level IOC on a real-world dataset built specifically for comparative analysis with high-level IOC. The experimental results show that models trained on high-level IOC attribute cyberattacks with an accuracy of 95%, compared with 40% for models trained on low-level IOC.
    Comment: 20 pages
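
    As a hedged sketch of the general setup the abstract describes, the following encodes the indicators observed in each incident as binary features and trains a classifier to predict the threat actor. The technique IDs, actor labels, toy data, and choice of a random forest are illustrative assumptions, not the paper's dataset or model.

```python
# Hedged sketch: attribute incidents to actors from indicator features.
# All data, labels, and the model choice are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier

# Vocabulary of indicators (MITRE ATT&CK technique IDs, used here purely
# as illustrative feature names).
TTPS = ["T1566", "T1059", "T1486", "T1071"]

def featurize(observed: set[str]) -> list[int]:
    """One binary feature per known technique: seen in the incident or not."""
    return [1 if t in observed else 0 for t in TTPS]

# Toy labelled incidents: (techniques observed, attributed actor).
incidents = [({"T1566", "T1059"}, "ActorA"),
             ({"T1566", "T1071"}, "ActorA"),
             ({"T1486", "T1059"}, "ActorB"),
             ({"T1486", "T1071"}, "ActorB")]

X = [featurize(ttps) for ttps, _ in incidents]
y = [actor for _, actor in incidents]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([featurize({"T1566", "T1059"})]))  # ['ActorA']
```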