7,010 research outputs found

    Machine learning and mixed reality for smart aviation: applications and challenges

    The aviation industry is a dynamic and ever-evolving sector. As technology advances and becomes more sophisticated, the aviation industry must keep up with the changing trends. While some airlines have made investments in machine learning and mixed reality technologies, the vast majority of regional airlines continue to rely on inefficient strategies and lack digital applications. This paper investigates the state-of-the-art applications that integrate machine learning and mixed reality into the aviation industry. Smart aerospace engineering design, manufacturing, testing, and services are being explored to increase operator productivity. Autonomous systems, self-service systems, and data visualization systems are being researched to enhance passenger experience. This paper also investigates the safety, environmental, technological, cost, security, capacity, and regulatory challenges of smart aviation, as well as potential solutions to ensure future quality, reliability, and efficiency.

    Cyberbullying in educational context

    Kustenmacher and Seiwert (2004) explain humans' inclination to resort to technology in their interaction with the environment and society. Thus, in a technologically dominated society, the solution to the negative consequences of cyberbullying is itself technological, as part of the technological paradox (Tugui, 2009) in which humans play a dual role, both slave and master, in their interaction with technology. In this respect, it is notable that, particularly after 2010, there have been many attempts to involve artificial intelligence (AI) in recognizing, identifying, limiting or avoiding the manifestation of aggressive behaviours of the cyberbullying (CBB) type. For an overview of the use of artificial intelligence in solving various problems related to CBB, we extracted works from the Scopus database that satisfy the criterion of containing the words “cyberbullying” and “artificial intelligence” in the Title, Keywords and Abstract. The titles of these articles were then subjected to content analysis, and only those identified as offering a solution for recognizing, identifying, limiting or avoiding the manifestation of CBB were retained in the following table, where these data are synthesized and organized by year.
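
    To illustrate the screening step described above, the following minimal Python sketch filters bibliographic records on the co-occurrence of the two search terms and groups the retained records by year. The field names and sample records are hypothetical; this is only an illustration of the selection criterion, not the authors' actual extraction pipeline.

        # Minimal sketch of the screening criterion: keep only records whose
        # Title, Keywords or Abstract contain both search terms, then group by year.
        # Field names and sample records are hypothetical.
        def matches_query(record, terms=("cyberbullying", "artificial intelligence")):
            searchable = " ".join([
                record.get("title", ""),
                record.get("keywords", ""),
                record.get("abstract", ""),
            ]).lower()
            return all(term in searchable for term in terms)

        records = [
            {"title": "Detecting cyberbullying with artificial intelligence",
             "keywords": "cyberbullying; deep learning", "abstract": "...", "year": 2021},
            {"title": "Social media usage among teenagers",
             "keywords": "social media", "abstract": "...", "year": 2019},
        ]

        by_year = {}
        for record in (r for r in records if matches_query(r)):
            by_year.setdefault(record["year"], []).append(record["title"])
        print(by_year)  # retained records organised by year, as in the table described above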

    Human-Centered Approach to Technology to Combat Human Trafficking

    Human trafficking is a serious crime that continues to plague the United States. With the rise of computing technologies, the internet has become one of the main mediums through which this crime is facilitated. Fortunately, these online activities leave traces which are invaluable to law enforcement agencies trying to stop human trafficking. However, identifying and intervening in these cases is still a challenging task. The sheer volume of online activity makes it difficult for law enforcement to efficiently identify any potential leads. To compound this issue, traffickers are constantly changing their techniques online to evade detection. Thus, there is a need for tools to efficiently sift through all this online data and narrow down the number of potential leads that a law enforcement agency can deal with. While some tools and prior research do exist for this purpose, none of these tools adequately address law enforcement user needs for information visualizations and spatiotemporal analysis. Thus, to address these gaps, this thesis contributes an empirical study of technology and human trafficking. Through in-depth qualitative interviews, systematic literature analysis, and a user-centered design study, this research outlines the challenges and design considerations for developing sociotechnical tools for anti-trafficking efforts. This work further contributes to the greater understanding of prosecution efforts within the anti-trafficking domain and concludes with the development of a visual analytics prototype that incorporates these design considerations.

    Blockchain Technology: Disruptor or Enhancer to the Accounting and Auditing Profession

    The unique features of blockchain technology (BCT) - peer-to-peer networking, a distributed ledger, consensus decision-making, transparency, immutability, auditability, and cryptographic security - coupled with the success enjoyed by Bitcoin and other cryptocurrencies have encouraged many to assume that the technology would revolutionise virtually all aspects of business. A growing body of scholarship suggests that BCT would disrupt the accounting and auditing fields by changing accounting practices, disintermediating auditors, and eliminating financial fraud. BCT is said to disrupt audits (Lombardi et al., 2021), reduce the role of audit firms (Yermack, 2017), undermine accountants' roles with software developers and miners (Fortin & Pimentel, 2022), eliminate many management functions and transform businesses (Tapscott & Tapscott, 2017), facilitate a triple-entry accounting system (Cai, 2021), and prevent fraudulent transactions (Dai et al., 2017; Rakshit et al., 2022). Despite these speculations, scholars have acknowledged that the application of BCT in the accounting and assurance industry is underexplored and that many existing studies lack engagement with practitioners (Dai & Vasarhelyi, 2017; Lombardi et al., 2021; Schmitz & Leoni, 2019). This study empirically explored whether BCT disrupts or enhances the accounting and auditing fields. It also explored the relevance of audit in a BCT environment and the effectiveness of the BCT mechanism for fraud prevention and detection. The study further examined which technical skillsets accountants and auditors require in a BCT environment, and explored the incentives, barriers, and unintended consequences of the adoption of BCT in the accounting and auditing professions. The COVID-19 environment was also investigated to determine whether the pandemic has improved BCT adoption. This qualitative exploratory study used semi-structured interviews to engage practitioners from blockchain start-ups, IT experts, financial analysts, accountants, auditors, academics, organisational leaders, consultants, and editors who understood the technology. With the aid of NVivo qualitative analysis software, the views of 44 participants from 13 countries (New Zealand, Australia, the United States, the United Kingdom, Canada, Germany, Italy, Ireland, Hong Kong, India, Pakistan, the United Arab Emirates, and South Africa) were analysed. The Technological, Organisational, and Environmental (TOE) framework, extended with a consequences-of-innovation context, was adopted for this study. This expanded TOE framework was used as the theoretical lens to understand the disruption of BCT and its adoption in the accounting and auditing fields. Four clear patterns emerged. First, BCT is an emerging tool that accountants and auditors use mainly to analyse financial records, because the technology cannot disintermediate auditors from the financial system. Second, the technology can detect anomalies but cannot prevent financial fraud. Third, BCT has not been adopted by any organisation for financial reporting and accounting purposes, and accountants and auditors do not require new skillsets or an understanding of the BCT programming language to be able to operate in a BCT domain. Fourth, the advent of COVID-19 has not substantially enhanced the adoption of BCT. Additionally, this study highlights the incentives, barriers, and unintended consequences of adopting BCT as financial technology (FinTech).
These findings shed light on important questions about BCT disrupting and disintermediating auditors, the extent of its adoption in the accounting industry, and its ability to prevent fraud and anomalies, and they underscore the notion that blockchain, as an emerging technology, does not currently appear to be substantially disrupting the accounting and auditing profession. This study makes methodological, theoretical, and practical contributions. At the methodological level, the study adopted the social constructivist-interpretivist paradigm with an exploratory qualitative method to engage with and understand BCT as a disruptive innovation in the accounting industry. The engagement with practitioners from diverse fields, professions, and countries provides a distinctive and innovative contribution to methodological and practical knowledge. At the theoretical level, the findings contribute to the literature by offering an integrated conceptual TOE framework. The framework offers a reference for practitioners, academics and policymakers seeking to appraise the comprehensive factors influencing BCT adoption and its likely unintended consequences. The findings suggest that, at present, no organisations are using BCT for financial reporting and accounting systems. This study contributes to practice by highlighting the differences between initial expectations and practical applications of what BCT can do in the accounting and auditing fields. The study could not find any empirical evidence that BCT will disrupt audits, eliminate the roles of auditors in a financial system, or prevent and detect financial fraud. Also, there was no significant evidence that accountants and auditors require higher-level skillsets or an understanding of the BCT programming language to be able to use the technology. Future research should consider the implications for internal audit functions of an external audit firm acting as a node in a BCT network. It is equally important to critically examine the relevance of including programming languages or code in the curriculum of undergraduate accounting students. Future research could also empirically evaluate whether a BCT-enabled triple-entry system could prevent financial statement and management fraud.
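
    For readers unfamiliar with the ledger properties listed at the start of the abstract (immutability and auditability), the following toy Python sketch shows how hash-chaining makes retroactive edits detectable. It is an illustrative assumption about how such a ledger can be modelled, not part of the study or of any production blockchain.

        # Toy hash-chained, append-only ledger illustrating the immutability and
        # auditability properties discussed above. Illustrative only.
        import hashlib, json

        def block_hash(contents):
            """Deterministic hash of a block's contents."""
            return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

        def append_block(chain, transaction):
            prev = chain[-1]["hash"] if chain else "0" * 64
            block = {"prev": prev, "tx": transaction}
            block["hash"] = block_hash({"prev": prev, "tx": transaction})
            chain.append(block)

        def audit(chain):
            """Re-derive every hash; any tampered block breaks the chain."""
            prev = "0" * 64
            for block in chain:
                expected = block_hash({"prev": block["prev"], "tx": block["tx"]})
                if block["prev"] != prev or block["hash"] != expected:
                    return False
                prev = block["hash"]
            return True

        ledger = []
        append_block(ledger, {"debit": "Cash", "credit": "Revenue", "amount": 100})
        append_block(ledger, {"debit": "Inventory", "credit": "Cash", "amount": 40})
        print(audit(ledger))                # True: the chain is internally consistent
        ledger[0]["tx"]["amount"] = 999     # retroactive edit
        print(audit(ledger))                # False: tampering is detectable on audit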

    Investigating self-perception of emotion in individuals with non-epileptic seizures (NES)

    Emotional processing difficulties are hypothesised to be involved in the aetiology and maintenance of non-epileptic seizures (NES). This thesis aimed to explore the relationship between aspects of emotional processing - interoception, alexithymia and executive functioning - in people with NES in comparison with healthy controls, and to understand how people with NES experience their symptoms, live with their condition, and perceive the role of life events in relation to their seizures. Study 1 reviewed the evidence for a relationship between interoception and other key emotional factors in studies which employed heartbeat perception tasks to measure interoception. Study quality was found to be generally poor, with no consistent evidence of significant relationships between interoception and emotional factors, including alexithymia, depression, and anxiety. Study 2 was a cross-sectional online study investigating an interactional model of emotion processing, exploring relationships between interoceptive sensibility, alexithymia, and executive functioning (attentional bias) in NES participants and healthy controls. Measures included the Body Perception Questionnaire (BPQ-VSF), the Toronto Alexithymia Scale-20 (TAS-20) and the emotional Stroop task (eStroop). The NES group, compared to controls, reported higher BPQ-VSF and TAS-20 scores. There were no significant correlations between any of the measures of interest in either the NES or the control group, and no evidence to support the proposed model. Study 3 was a qualitative study using Interpretative Phenomenological Analysis to explore how individuals with NES respond emotionally to recent life events and how these events impact on seizures. Six themes were developed from the analysis, describing how NES affected many aspects of people’s lives. Four models captured the different ways in which people perceived the relationship between life stressors, their emotional responses, and their seizures: event -> emotional response -> seizure; event -> emotional response -x-> no seizure; no event -> emotional reaction/experience -> seizure; and no event -x-> no emotional response -> seizure.

    Regulatory responses to addressing and preventing sexual assault and harassment in Australian university settings

    Over the past decade, the Australian university sector and regulatory bodies have implemented a range of actions to improve the management and prevention of sexual assault and sexual harassment in Australian university settings. Despite these concerted efforts, little progress has been made in reducing campus sexual violence or in achieving institutional accountability. To date, research on campus sexual violence in Australia has focused on the experiences of students and staff (such as prevalence surveys and the impact of sexual violence on educational outcomes) or institutional responses (such as policy frameworks, reporting mechanisms and support services). This dissertation offers a new perspective by taking a system-wide structural approach to consider the entire regulatory community. Through the lens of theories of responsive and smart regulation, this thesis critically examines the regulatory initiatives adopted by various actors during the period 2011-2021. Addressing a gap in the literature, I offer an analysis of how regulatory theory does not adequately explain the vital role of civil society activists in creating momentum and initiating reform in this area. Drawing on legislative reviews, analysis of primary documents and 24 interviews with representatives drawn from across the regulatory community, the dissertation reveals how a lack of political will and the absence of even a latent threat of genuine enforceable institutional accountability – a ‘benign big gun’ in responsive regulatory theory – has undermined regulatory efforts across the whole sector. This dissertation also identifies the role that regulatory ritualism has played in stymying systemic change to respond to and prevent sexual violence in the Australian university sector, extending the existing literature by proposing two new applications of regulatory ritualism, language ritualism and announcement ritualism, and providing examples of where this has occurred. This dissertation argues that substantive progress in tackling sexual assault and sexual harassment in Australian university settings has stalled due to an over-reliance on the self-regulating university sector to lead the reform effort, the failure of enforced self-regulation models led by regulatory agencies, the indifference of governments and sector-wide regulatory ritualism which has seen institutions adopt tokenistic rather than substantive responses. To address these factors and improve institutional accountability, I argue that genuine systemic reform will require political leadership, more robust application of existing legislative and regulatory tools towards effective enforcement, and innovative exploration of other legal and regulatory approaches

    A fault-tolerant, peer-to-peer based scheduler for home grids

    This thesis presents a fault-tolerant, Peer-to-Peer (P2P) based grid scheduling system for highly dynamic and highly heterogeneous environments, such as home networks, where a variety of devices (laptops, PCs, game consoles, etc.) and networks can be found. The number of devices found in a house that are capable of processing data has been increasing in the last few years. However, being able to process data does not mean that these devices are powerful, and, in a home environment, there will be demand for applications that need significant computing resources, beyond the capabilities of a single domestic device such as a set-top box (examples of such applications are TV recommender systems, image processing and photo indexing systems). A computational grid is a possible solution for this problem, but the constrained environment in the home makes it difficult to use conventional grid scheduling technologies, which demand a powerful infrastructure. Our solution is based on distributing the matchmaking task among the providers, leaving the final allocation decision to a central scheduler that can run on a limited device without a significant loss in performance. We evaluate our solution by simulating different scenarios and configurations against the Opportunistic Load Balance (OLB) scheduling heuristic, which we found to be the best option for home grids among the existing solutions that we analysed. The results show that our solution performs similarly to, or better than, OLB. Furthermore, our solution also provides fault tolerance, which is not achieved with OLB, and we have formally verified the behaviour of our solution against two cases of network partition failure.
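
    As a rough illustration of the architecture described above - each provider performs its own matchmaking, and a lightweight central scheduler only picks the best offer - the following Python sketch uses hypothetical device and task attributes; it is not the thesis' scheduler, just a sketch of the division of work it describes.

        # Sketch of the scheduling idea: providers score tasks locally (matchmaking),
        # the central scheduler only compares offers, so it can run on a weak device.
        # Device and task attributes are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            required_memory_mb: int
            estimated_work: float          # abstract work units

        @dataclass
        class Provider:
            name: str
            free_memory_mb: int
            speed: float                   # work units per second

            def make_offer(self, task):
                """Local matchmaking: estimated completion time, or None if unable."""
                if task.required_memory_mb > self.free_memory_mb:
                    return None
                return task.estimated_work / self.speed

        def central_schedule(task, providers):
            """Central step: pick the provider with the best offer (cheap to compute)."""
            offers = [(p.make_offer(task), p) for p in providers]
            offers = [(t, p) for t, p in offers if t is not None]
            return min(offers, key=lambda o: o[0])[1] if offers else None

        providers = [
            Provider("set-top-box", free_memory_mb=256, speed=1.0),
            Provider("laptop", free_memory_mb=4096, speed=8.0),
            Provider("game-console", free_memory_mb=2048, speed=6.0),
        ]
        task = Task("photo-indexing", required_memory_mb=1024, estimated_work=120.0)
        print(central_schedule(task, providers).name)   # laptop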

    TARGETED, REALISTIC AND NATURAL FAULT INJECTION (USING BUG REPORTS AND GENERATIVE LANGUAGE MODELS)

    Artificial faults have been proven useful for ensuring software quality, enabling the simulation of a program's behaviour in erroneous situations and thereby the evaluation of its robustness and its impact on the surrounding components in the presence of faults. Similarly, when introduced in the testing phase, artificial faults can serve as a proxy to measure the fault-revelation ability and thoroughness of current test suites, and they provide developers with testing objectives, since writing tests to detect them helps reveal and prevent similar real faults. This approach - mutation testing - has gained increasing attention and interest among researchers and practitioners since its appearance in the 1970s. It typically operates by introducing small syntactic transformations (using mutation operators) into the target program, aiming to produce multiple faulty versions of it (mutants). These operators are generally created based on the grammar rules of the target programming language and then tuned through empirical studies in order to reduce the redundancy and noise among the induced mutants. Having limited knowledge of the program context or the relevant locations to mutate, these patterns are applied in a brute-force manner across the full code base of the program, producing numerous mutants and overwhelming developers with a costly overhead of test executions and mutant-analysis effort. For this reason, although proven useful in multiple software engineering applications, the adoption of mutation testing remains limited in practice. Another key challenge of mutation testing is the misrepresentation of real bugs by the induced artificial faults. Indeed, this can make the results of any application that relies on them questionable or inaccurate. To tackle this challenge, researchers have proposed new fault-seeding techniques that aim at mimicking real faults. To achieve this, they suggest leveraging the knowledge base of previous faults to inject new ones. Although these techniques produce promising results, they do not solve the high-cost issue, and may even exacerbate it by generating more mutants with their extended pattern sets. Along the same lines of research, we start addressing the aforementioned challenges - the cost of the injection campaign and the representativeness of the artificial faults - by proposing IBIR, a targeted fault-injection approach which aims at mimicking real faulty behaviours. To do so, IBIR uses information retrieved from bug reports (to select relevant code locations to mutate) and fault patterns created by inverting fix patterns, which have been introduced and tuned based on real bug fixes mined from different repositories. We implemented this approach and showed that it outperforms the fault injection performed by traditional mutation testing in terms of semantic similarity with the originally targeted fault (described in the bug report), when applied at either the project or the class level of granularity, and that it provides better, statistically significant estimations of test effectiveness (fault detection). Additionally, when injecting only 10 faults, IBIR couples with more real bugs than mutation testing does when injecting 1,000 faults. Although effective in emulating real faults, IBIR's approach depends strongly on the existence and quality of bug reports; when these are absent, its performance can drop to that of traditional mutation testing approaches.
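
    To make the notion of a mutation operator concrete, here is a minimal Python sketch of one classic grammar-based transformation (relational-operator replacement) applied via the standard ast module. It illustrates the kind of small syntactic change described above, not the IBIR tool itself; the sample function is hypothetical and ast.unparse requires Python 3.9+.

        # Minimal mutation-operator sketch: replace '>=' with '>' in a target function,
        # producing one mutant. Illustrative only.
        import ast

        SOURCE = """
        def is_adult(age):
            return age >= 18
        """

        class RelationalOperatorReplacement(ast.NodeTransformer):
            """One classic mutation operator: >= becomes >."""
            def visit_Compare(self, node):
                self.generic_visit(node)
                node.ops = [ast.Gt() if isinstance(op, ast.GtE) else op for op in node.ops]
                return node

        tree = ast.parse(ast.parse(SOURCE).body and SOURCE.replace("        ", ""))
        mutant = RelationalOperatorReplacement().visit(tree)
        ast.fix_missing_locations(mutant)
        print(ast.unparse(mutant))   # the mutant: 'return age > 18'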
In the absence of such prior knowledge, and with the same objective of injecting few but relevant faults, we suggest accounting for the project's context and the actual distribution of developers' code to generate more “natural” mutants, in the sense that they are understandable and more likely to occur. To this end, we propose using code from real programs as the knowledge base for injecting faults, instead of the language grammar or knowledge of previous bugs such as bug reports and bug fixes. In particular, we leverage the code knowledge and the ability of pre-trained generative language models (i.e. CodeBERT) to capture code context and predict developer-like code alternatives, in order to produce a small number of faults in diverse locations of the input program. This way, developing and maintaining the approach does not require any major effort, such as creating or inferring fault patterns or training a model to learn how to inject faults. In fact, to inject relevant faults in a given program, our approach masks tokens (one at a time) from its code base and uses the model to predict them, then considers the inaccurate predictions as probable developer-like mistakes, forming the output mutant set. Our results show that these mutants induce test suites with higher fault-detection capability, in terms of both effectiveness and cost-efficiency, than conventional mutation testing. Next, we turn our interest to the code comprehension of pre-trained language models, particularly their capability to capture the naturalness of code. This measure has proven very useful for distinguishing unusual code, which can be a symptom of code smells, low readability, bugginess, bug-proneness, etc., thereby indicating relevant locations that require prior attention from developers. Code naturalness is typically predicted using statistical language models such as n-grams, to approximate how surprising a piece of code is, based on the fact that code, in small snippets, is repetitive. Although powerful, training such models on a large code corpus can be tedious, time-consuming and sensitive to the code patterns (and practices) encountered during training. Consequently, these models are often trained on a small corpus and thus only estimate naturalness relative to a specific style of programming or type of project. To overcome these issues, we propose the use of pre-trained generative language models to infer code naturalness. Specifically, we suggest inferring naturalness by masking (omitting) the tokens of code sequences, one at a time, and checking the model's ability to predict them. We implement this workflow, named CodeBERT-NT, and evaluate its capability to prioritise buggy lines over non-buggy ones when ranking code based on its naturalness. Our results show that our approach outperforms both random-uniform and complexity-based ranking techniques, and yields results comparable to the n-gram models even though those are trained in an intra-project fashion. Finally, we provide implementations of tools and libraries enabling code-naturalness measurement and fault injection with the different approaches, together with the resources required to compare their effectiveness in emulating real faults and in guiding testing towards higher fault detection. This includes the source code of our proposed approaches and the replication packages of our conducted studies.
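
    The masked-token workflow described above can be sketched in a few lines with the Hugging Face transformers fill-mask pipeline. The checkpoint name is an assumption (any RoBERTa-style masked language model over code, such as microsoft/codebert-base-mlm, should work), the whitespace tokenisation of the sample line is a simplification, and this is not the authors' released implementation: it only shows the mask-and-predict loop in which disagreeing predictions are treated as developer-like alternatives and the model's agreement with the original token serves as a rough naturalness signal.

        # Sketch of the mask-one-token-at-a-time workflow: positions where the model
        # would have written something else are flagged (candidate mutants / low
        # naturalness). Checkpoint name and sample line are assumptions.
        from transformers import pipeline

        fill_mask = pipeline("fill-mask", model="microsoft/codebert-base-mlm")

        line = "if ( index >= 0 ) return items [ index ] ;"
        tokens = line.split()

        for i, original in enumerate(tokens):
            masked = " ".join(tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:])
            predictions = fill_mask(masked, top_k=3)
            alternatives = [p["token_str"].strip() for p in predictions]
            if original not in alternatives:
                # The model disagrees with the developer here: a candidate
                # developer-like mutation site / "unnatural" position.
                print(f"position {i}: '{original}' vs predicted {alternatives}")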