    Mobile heritage practices. Implications for scholarly research, user experience design, and evaluation methods using mobile apps.

    Mobile heritage apps have become one of the most popular means for audience engagement and curation of museum collections and heritage contexts. This raises practical and ethical questions for both researchers and practitioners: What kind of audience engagement can be built using mobile apps? What are the current approaches? How can audience engagement with these experiences be evaluated? How can those experiences be made more resilient and, in turn, sustainable? In this thesis I draw on experience design scholarship together with personal professional insights to analyse digital heritage practices, with a view to accelerating thinking about, and critique of, mobile apps in particular. The chapters that follow examine the evolution of digital heritage practices, situating mobile heritage apps in the cultural, societal, and technological contexts in which they are developed by the creative media industry and academic institutions, and showing how these forces shape user experience design methods. Drawing on studies in (critical) digital heritage, Human-Computer Interaction (HCI), and design thinking, this thesis provides a critical analysis of the development and use of mobile practices for heritage. Furthermore, through an empirical and embedded approach to research, the thesis presents auto-ethnographic case studies to show that mobile experiences conceptualised through more organic design approaches can result in more resilient and sustainable heritage practices. In doing so, this thesis encourages a renewed understanding of the pivotal role of these practices in broader sociocultural, political, and environmental changes. AHRC REAC

    Distributed Ledger Technology (DLT) Applications in Payment, Clearing, and Settlement Systems: A Study of Blockchain-Based Payment Barriers and Potential Solutions, and DLT Application in Central Bank Payment System Functions

    Payment, clearing, and settlement systems are essential components of the financial markets and exert considerable influence on the overall economy. While there have been considerable technological advancements in payment systems, conventional systems still depend on a centralized architecture, with inherent limitations and risks. Distributed ledger technology (DLT) is regarded as a potential means to transform payment and settlement processes and to address certain challenges posed by the centralized architecture of traditional payment systems (Bank for International Settlements, 2017). While proof-of-concept projects have demonstrated the technical feasibility of DLT, significant barriers still hinder its adoption and implementation. The overarching objective of this thesis is to contribute to the developing area of DLT application in payment, clearing, and settlement systems, which is still in the initial stages of application development and lacks a substantial body of scholarly literature and empirical research. This is achieved by identifying the socio-technical barriers to the adoption and diffusion of blockchain-based payment systems and the solutions proposed to address them. Furthermore, the thesis examines and classifies various applications of DLT in central bank payment system functions, offering insights into the motivations, DLT platforms, and consensus algorithms for applicable use cases. To achieve these objectives, the methodology employed a systematic literature review (SLR) of academic literature on blockchain-based payment systems, together with a thematic analysis of data collected from various sources on DLT applications in central bank payment system functions, such as central bank white papers, industry reports, and policy documents. The study's findings on the barriers to blockchain-based payment systems and the proposed solutions challenge the prevailing emphasis on technological and regulatory barriers in the literature and industry discourse. They highlight the importance of considering the broader socio-technical context and identifying barriers across all five dimensions of the socio-technical framework: technological, infrastructural, user practices/market, regulatory, and cultural. Furthermore, the research identified seven DLT applications in central bank payment system functions, grouped into three overarching themes: central banks' operational responsibilities in payment and settlement systems, issuance of central bank digital money, and regulatory oversight/supervisory functions, along with other ancillary functions. Each of these applications has a unique motivation or value proposition, which is the underlying reason for utilizing DLT in that particular use case.
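
    As a hedged illustration of the five-dimension socio-technical framework the findings draw on, the minimal Python sketch below shows how coded adoption barriers might be tallied by dimension; the barrier labels and counts are invented for illustration and are not the study's data.

        # Hypothetical sketch: tallying coded adoption barriers across the five
        # socio-technical dimensions named in the abstract. Labels are invented.
        from collections import Counter

        DIMENSIONS = ["technological", "infrastructural",
                      "user practices/market", "regulatory", "cultural"]

        # (barrier, dimension) pairs as a reviewer might code them during an SLR
        coded_barriers = [
            ("scalability limits", "technological"),
            ("legacy system integration", "infrastructural"),
            ("low merchant acceptance", "user practices/market"),
            ("unclear legal status of settlement finality", "regulatory"),
            ("distrust of non-bank intermediaries", "cultural"),
            ("throughput bottlenecks", "technological"),
        ]

        counts = Counter(dim for _, dim in coded_barriers)
        for dim in DIMENSIONS:
            print(f"{dim:>25}: {counts.get(dim, 0)} barrier(s)")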

    Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study

    This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy, initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and to maintain ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies, which alters our perception of the machine from an instrument of human engineering into a thinking peer and elevates cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used, with the frequency of malicious software attacks as the dependent variable and the diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques the independent variable. Self-Determination Theory serves as the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine's ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and entity influenced by public policy theories, philosophical constructs, and societal initiatives.
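
    Since the design named above is a bivariate correlation, a minimal worked sketch may help; the numbers below are synthetic stand-ins for the archival data, and the variable names mirror RQ1.

        # Minimal sketch of the correlational design for RQ1: diversity of
        # intrusion techniques (IV) vs. frequency of malicious software
        # attacks (DV). All values are synthetic, not the study's data.
        from scipy import stats

        diversity_of_techniques = [3, 5, 2, 8, 6, 4, 9, 7]          # IV
        frequency_of_attacks    = [12, 20, 9, 35, 24, 15, 40, 28]   # DV

        r, p_value = stats.pearsonr(diversity_of_techniques, frequency_of_attacks)
        print(f"Pearson r = {r:.3f}, two-tailed p = {p_value:.4f}")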

    PolyFlowBuilder: An Intuitive Tool for Academic Planning at Cal Poly San Luis Obispo

    PolyFlowBuilder is a web application that lets users create visually intuitive flowcharts to aid in academic planning at Cal Poly. These flowcharts can be customized in a variety of ways to accurately represent complex academic plans, such as double majors, minors, and taking courses out-of-order. The original version of PolyFlowBuilder, released in Summer 2020, was not written with continued expansion and growth in mind, so a complete rewrite was determined to be necessary for the project to grow in the future. This report details the process of completely rewriting the existing version of PolyFlowBuilder over the course of six months, using NodeJS, SvelteKit, TypeScript, MySQL, Prisma, and TailwindCSS + DaisyUI as the primary tech stack. The project was determined to be largely successful by a variety of holistic evaluation criteria, with time constraints being the main factor limiting complete success. The rewritten version of PolyFlowBuilder will ensure the project's continued success.
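
    To make the flowchart idea concrete, here is a hypothetical sketch of an academic plan as a prerequisite graph; PolyFlowBuilder's actual data model (in Prisma/MySQL) is not described in this abstract, so all names below are invented, and the sketch is in Python rather than the project's TypeScript stack.

        # Hypothetical model of a degree flowchart as a prerequisite graph.
        from dataclasses import dataclass, field

        @dataclass
        class Course:
            code: str
            units: int
            prereqs: list[str] = field(default_factory=list)

        flowchart = {
            "CSC 101": Course("CSC 101", 4),
            "CSC 202": Course("CSC 202", 4, prereqs=["CSC 101"]),
            "CSC 203": Course("CSC 203", 4, prereqs=["CSC 202"]),
        }

        def can_take(code: str, completed: set[str]) -> bool:
            """A course is eligible once all of its prerequisites are done."""
            return all(p in completed for p in flowchart[code].prereqs)

        print(can_take("CSC 203", {"CSC 101"}))             # False: CSC 202 missing
        print(can_take("CSC 203", {"CSC 101", "CSC 202"}))  # True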

    Workshop Proceedings of the 12th edition of the KONVENS conference

    The 2014 edition of KONVENS is even more a forum for exchange than previous editions: its main topic is the interaction between Computational Linguistics and Information Science, and the synergies that such interaction, cooperation, and integrated views can produce. This topic, at the crossroads of research traditions that deal with natural language as a container of knowledge and with methods to extract and manage linguistically represented knowledge, is close to the heart of many researchers at the Institut für Informationswissenschaft und Sprachtechnologie of Universität Hildesheim: it has long been one of the institute's research topics, and it has received even more attention over the last few years.

    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern-day hardware is surrounded by threats manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitectural-level attacks. These threats, if realized, can push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. It is a fact, however, that our knowledge of the potential consequences of real-life threats to hardware trust is lacking, given the limited number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face the issue of live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender; it is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks; it is implemented on a DDR3 DRAM after showing the DRAM's vulnerability to obscured latency-extension attacks. The third framework studies the possibility of untrustworthy hardware elements being deployed in the analog front end, and the consequent integrity issues that might arise at the analog-digital boundary of systems on chip; it uses machine learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
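
    As a rough illustration of the second framework's idea, the sketch below characterises a nominal latency distribution and flags run-time outliers. The thesis uses machine-learning models on DDR3 DRAM; this stand-in substitutes a simple Gaussian threshold, and all latency values are synthetic.

        # Sketch: learn the nominal latency profile, then flag run-time
        # latencies that suggest a latency-extension attack. Synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        training_latencies = rng.normal(loc=50.0, scale=2.0, size=1000)  # ns

        mu, sigma = training_latencies.mean(), training_latencies.std()
        threshold = mu + 4 * sigma  # beyond this, suspect an attack

        def check(latency_ns: float) -> str:
            return "suspect" if latency_ns > threshold else "nominal"

        print(check(52.0))  # nominal
        print(check(75.0))  # suspect: possible obscured latency extension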

    Intelligent interface agents for biometric applications

    This thesis investigates the benefits of applying the intelligent agent paradigm to biometric identity verification systems. Multimodal biometric systems, despite their additional complexity, hold the promise of providing a higher degree of accuracy and robustness. Multimodal biometric systems are examined in this work, leading to the design and implementation of a novel distributed multimodal identity verification system based on an intelligent agent framework. User interface design issues are also important in the domain of biometric systems and present an exceptional opportunity for employing adaptive interface agents. Through the use of such interface agents, system performance may be improved, leading to an increase in recognition rates over a non-adaptive system while producing a more robust and agreeable user experience. The investigation of such adaptive systems has been a focus of the work reported in this thesis. The research presented here is divided into two main parts. The first presents the design, development, and testing of a novel distributed multimodal authentication system employing intelligent agents. The second details the design and implementation of an adaptive interface layer based on interface agent technology and demonstrates its integration with a commercial fingerprint recognition system. The performance of these systems is then evaluated using databases of biometric samples gathered during the research. The results obtained from the experimental evaluation of the multimodal system demonstrated a clear improvement in accuracy compared to a unimodal biometric approach. The adoption of the intelligent agent architecture at the interface level resulted in a system whose false-reject rates were reduced compared to a system that did not employ an intelligent interface. The results obtained from both systems clearly demonstrate the benefits of combining an intelligent agent framework with a biometric system to provide a more robust and flexible application.
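
    The abstract does not specify how the multimodal system combines its modalities, so the sketch below shows one common approach, weighted score-level fusion, purely as an assumed illustration; the weights, scores, and threshold are invented.

        # Assumed illustration: weighted score-level fusion of two matchers.
        def fuse_scores(fingerprint_score: float, face_score: float,
                        w_fp: float = 0.6, w_face: float = 0.4) -> float:
            """Combine per-modality match scores (each in [0, 1])."""
            return w_fp * fingerprint_score + w_face * face_score

        THRESHOLD = 0.5  # accept/reject decision boundary (invented)

        fused = fuse_scores(0.72, 0.41)
        print(f"fused = {fused:.2f} ->",
              "accept" if fused >= THRESHOLD else "reject")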

    Translating Islamic Law: the postcolonial quest for minority representation

    This research sets out to investigate how culture-specific or signature concepts are rendered in English-language discourse on Islamic, or ‘shariʿa’, law, which has Arabic roots. A large body of literature has investigated Islamic law from a technical perspective, but from the perspective of linguistics and translation studies little attention has been paid to the lexicon that makes up this specialised discourse. Much of the commentary has so far been prescriptive, with limited empirical evidence. This thesis aims to bridge this gap by exploring how ‘culturalese’ (i.e., ostensive cultural discourse) travels through language, as evidenced in the self-built Islamic Law Corpus (ILC), a 9-million-word monolingual English corpus covering diverse genres on Islamic finance and family law. Using a mixed-methods design, the study first quantifies the different linguistic strategies used to render shariʿa-based concepts in English, in order to explore ‘translation’ norms based on linguistic frequency in the corpus. This quantitative analysis employs two models: profile-based correspondence analysis, which considers the probability of lexical variation in expressing a conceptual category, and logistic regression (implemented in MATLAB), which measures the influence of the explanatory variables ‘genre’, ‘legal function’, and ‘subject field’ on the choice between an Arabic loanword and an endogenous English lexeme, i.e., a close English equivalent. The findings are then interpreted qualitatively in the light of postcolonial translation agendas, which aim to preserve intangible cultural heritage and promote the representation of minoritised groups. The research finds that the English-language discourse on Islamic law is characterised by linguistic borrowing and glossing, implying an ideologically driven variety of English that can usefully be labelled a kind of ‘Islamgish’ (blending ‘Islamic’ and ‘English’) aimed at retaining symbols of linguistic hybridity. The regression analysis confirms the influence of the above-mentioned contextual factors on the use of an Arabic loanword versus English alternatives.
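
    A minimal sketch of the regression step described above may be useful: predicting the choice between an Arabic loanword (1) and an endogenous English lexeme (0) from ‘genre’, ‘legal function’, and ‘subject field’. The thesis implements this in MATLAB; the re-expression below uses Python with invented toy rows, not the ILC data.

        # Toy logistic regression over three categorical explanatory variables.
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import OneHotEncoder
        from sklearn.pipeline import make_pipeline

        # columns: genre, legal function, subject field (all invented)
        X = [["academic", "contract", "finance"],
             ["journalistic", "marriage", "family"],
             ["legislative", "contract", "finance"],
             ["academic", "divorce", "family"],
             ["journalistic", "contract", "finance"],
             ["legislative", "marriage", "family"]]
        y = [1, 0, 1, 1, 0, 0]  # 1 = Arabic loanword, 0 = English equivalent

        model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                              LogisticRegression())
        model.fit(X, y)
        print(model.predict([["academic", "contract", "finance"]]))  # e.g. [1]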

    Impact of Personalized Interactive Storytelling on Suspension of Disbelief in Clinical Simulation

    The literature review found that work on suspension of disbelief (SOD) in clinical simulation is heavily weighted toward educators alone within high-fidelity environments. The project examined a co-created narrative background story applied to a simulated patient's clinical profile to determine whether it achieves an improved connectedness toward the simulated patient, leading to enhanced SOD and enhanced levels of learning and reaction. The studied population was third-semester associate degree nursing students over 18 years of age with prior clinical simulation experience who were not repeating the semester. The research methodology used a quantitative experimental design with cluster sampling, randomization, and post-intervention Likert-scored questionnaires. The intervention group co-created personalized storytelling narratives for the simulated patient's clinical profile. After the clinical simulation activity, both the intervention and control groups completed questionnaires examining their ability to achieve SOD during the activity and their levels of reaction and learning. Results using two-tailed t tests indicated that the intervention produced an enhanced level of presence during participation. The improved presence reflected a positive, engaging experience applicable to future nursing roles, along with enhanced knowledge, skills, and confidence. The conclusion drawn is that applying co-created storytelling to a simulated patient's clinical profile improves presence, suggesting an enhanced ability to achieve SOD during the activity. Recommendations for future research include studying storytelling in clinical simulation with a larger sample size, having participants create an entire clinical profile, analyzing the influence of emotional disposition toward simulation on SOD, and maintaining use of the intervention once learned.
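
    For concreteness, a two-tailed independent-samples t test of the kind reported above can be sketched as follows, with synthetic Likert scores standing in for the study's data.

        # Sketch: two-tailed t test comparing presence ratings between groups.
        from scipy import stats

        intervention = [4.5, 4.2, 4.8, 4.6, 4.1, 4.7, 4.4]  # ratings, 1-5
        control      = [3.8, 3.5, 4.0, 3.6, 3.9, 3.4, 3.7]

        t_stat, p_value = stats.ttest_ind(intervention, control)  # two-tailed
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")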