
    Dataflow Programming and Acceleration of Computationally-Intensive Algorithms

    The volume of unstructured textual information continues to grow due to recent technological advancements. This has resulted in exponential growth of information generated in various formats, including blogs, posts, social networking, and enterprise documents. Numerous Enterprise Architecture (EA) documents are also created daily, such as reports, contracts, agreements, frameworks, architecture requirements, designs, and operational guides. Processing and computing this massive amount of unstructured information requires substantial computing capabilities and new techniques. It is critical to manage this unstructured information through a centralized knowledge management platform. Knowledge management is the process of managing information within an organization; it involves creating, collecting, organizing, and storing information in a way that makes it easily accessible and usable. The research involved the development of a textual knowledge management system, and two use cases were considered for extracting textual knowledge from documents. The first case study focused on the safety-critical documents of a railway enterprise. Safety is of paramount importance in the railway industry, and several EA documents, including manuals, operational procedures, and technical guidelines, contain critical information. Digitalization of these documents is essential for analysing the vast amount of textual knowledge they contain to improve the safety and security of railway operations. A case study was conducted between the University of Huddersfield and the Railway Safety Standard Board (RSSB) to analyse EA safety documents using natural language processing (NLP). A graphical user interface was developed that includes various document processing features such as semantic search, document mapping, text summarization, and visualization of key trends. For the second case study, open-source data was utilized and textual knowledge was extracted. Several features were also developed, including kernel distribution, analysis of key trends, and sentiment analysis of words (such as unique, positive, and negative words) within the documents. Additionally, a heterogeneous framework was designed using CPUs/GPUs and FPGAs to analyse the computational performance of document mapping.
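    As an illustration of the semantic-search feature described above, the following is a minimal sketch, assuming an off-the-shelf sentence-transformers embedding model; the model name and sample documents are placeholders, and the thesis does not specify its actual implementation.

```python
# Minimal semantic-search sketch. Assumes a sentence-transformers model;
# the model choice and documents below are placeholders, not the thesis's.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

documents = [
    "Track maintenance must be logged before each inspection cycle.",
    "Signal failures are escalated to the operations control centre.",
]
doc_vecs = model.encode(documents, normalize_embeddings=True)

def semantic_search(query: str, top_k: int = 1):
    """Rank documents by cosine similarity to the query embedding."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity: vectors are unit-normalized
    order = np.argsort(scores)[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in order]

print(semantic_search("Who handles signalling faults?"))
```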

    The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments

    In industrial production environments, the maintenance of equipment has a decisive influence on costs and on the ability to plan production capacities. In particular, unplanned failures during production times cause high costs, unplanned downtimes and possibly additional collateral damage. Predictive Maintenance addresses this by trying to predict a possible failure and its cause early enough that preventive action can be prepared and carried out in time. In order to predict malfunctions and failures, the industrial plant, with its characteristics as well as its wear and ageing processes, must be modelled. Such modelling can be done by replicating its physical properties. However, this is very complex and requires enormous expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models using data and offer an alternative, especially when very complex and non-linear behaviour is evident. In order for models to make predictions, as much data as possible is needed about the condition of a plant, its environment, and production planning. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing. Intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which data is transmitted, place high demands on data processing systems. If a participating system wants to perform live analyses on the incoming data streams, it must be able to process the incoming data at least as fast as the continuous data stream delivers it. If this is not the case, the system falls further and further behind in its processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially if they use complex and computationally intensive machine learning models. If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if this is not the case, or if the processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used can become an important criterion. This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where the runtime behaviour and the resource requirements are relevant. The question is whether it is possible to achieve better runtimes with similar result quality using a new type of neural network. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, two hypotheses emerged and are presented in this thesis: a) the distribution of complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and b) if a neural cell has a deeper internal structure, this leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout was developed, named Sliced Long Short-Term Memory Neural Network (SlicedLSTM). The SlicedLSTM implements the assumptions made in the aforementioned hypotheses in its inner model architecture. Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in laboratory tests. The study uses synthetically generated data from a NASA project to predict failures of modules of aircraft gas turbines. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data. For the specific application and the data used, the SlicedLSTM was shown to deliver faster processing times with similar result accuracy, clearly outperforming the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in this thesis.
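    Hypothesis (a) above, distributing complexity into simpler subnetworks, can be sketched roughly as follows. This is an illustrative interpretation only, not the thesis's actual SlicedLSTM architecture: input features are split into slices, each slice is processed by a small independent LSTM, and the slice summaries are merged by a final linear layer.

```python
# Illustrative sketch of hypothesis (a): distribute work across simpler
# LSTM subnetworks. Not the thesis's actual SlicedLSTM design.
import torch
import torch.nn as nn

class SlicedLSTMSketch(nn.Module):
    def __init__(self, n_features: int, n_slices: int, hidden: int, n_out: int):
        super().__init__()
        assert n_features % n_slices == 0
        self.n_slices = n_slices
        # One small LSTM per feature slice; the slices can run in parallel.
        self.subnets = nn.ModuleList(
            nn.LSTM(n_features // n_slices, hidden, batch_first=True)
            for _ in range(n_slices)
        )
        self.head = nn.Linear(hidden * n_slices, n_out)

    def forward(self, x):  # x: (batch, time, n_features)
        slices = torch.chunk(x, self.n_slices, dim=-1)
        # Use each subnet's final hidden state as its slice summary.
        summaries = [lstm(s)[1][0][-1] for lstm, s in zip(self.subnets, slices)]
        return self.head(torch.cat(summaries, dim=-1))

model = SlicedLSTMSketch(n_features=24, n_slices=4, hidden=16, n_out=1)
out = model(torch.randn(8, 30, 24))  # 8 sequences of 30 time steps each
```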

    Digital Innovations for a Circular Plastic Economy in Africa

    Plastic pollution is one of the biggest challenges of the twenty-first century and requires innovative and varied solutions. Focusing on sub-Saharan Africa, this book brings together interdisciplinary, multi-sectoral and multi-stakeholder perspectives exploring challenges and opportunities for utilising digital innovations to manage and accelerate the transition to a circular plastic economy (CPE). This book is organised into three sections bringing together discussion of environmental conditions, operational dimensions and country case studies of digital transformation towards the circular plastic economy. It explores the environment for digitisation in the circular economy, bringing together perspectives from practitioners in academia, innovation, policy, civil society and government agencies. The book also highlights specific country case studies in relation to the development and implementation of different innovative ideas to drive the circular plastic economy across the three sub-Saharan African regions. Finally, the book interrogates the policy dimensions and practitioner perspectives towards a digitally enabled circular plastic economy. Written for a wide range of readers across academia, policy and practice, including researchers, students, small and medium enterprises (SMEs), digital entrepreneurs, non-governmental organisations (NGOs) and multilateral agencies, policymakers and public officials, this book offers unique insights into complex, multilayered issues relating to the production and management of plastic waste and highlights how digital innovations can drive the transition to the circular plastic economy in Africa. The Open Access version of this book, available at https://www.taylorfrancis.com, has been made available under a Creative Commons Attribution-Non Commercial-No Derivatives (CC-BY-NC-ND) 4.0 license.

    Applying machine learning: a multi-role perspective

    Machine (and deep) learning technologies are increasingly present in many fields. It is undeniable that many aspects of our society are empowered by such technologies: web searches, content filtering on social networks, recommendations on e-commerce websites, mobile applications, etc., in addition to academic research. Moreover, mobile devices and internet sites, e.g., social networks, support the collection and sharing of information in real time. The pervasive deployment of the aforementioned technological instruments, both hardware and software, has led to the production of huge amounts of data. Such data has become more and more unmanageable, posing challenges to conventional computing platforms and paving the way for the development and widespread use of machine and deep learning. Nevertheless, machine learning is not only a technology. Given a task, machine learning is a way of proceeding (a way of thinking), and as such can be approached from different perspectives (points of view). This, in particular, is the focus of this research. The entire work concentrates on machine learning, starting from different sources of data, e.g., signals and images, applied to different domains, e.g., Sport Science and Social History, and analyzed from different perspectives: from a non-data-scientist point of view through tools and platforms; setting a problem stage from scratch; implementing an effective application for classification tasks; and improving user interface experience through Data Visualization and eXtended Reality. In essence, not only in a quantitative task, not only in a scientific environment, and not only from a data-scientist perspective, machine (and deep) learning can make a difference.

    Next Generation Business Ecosystems: Engineering Decentralized Markets, Self-Sovereign Identities and Tokenization

    Digital transformation research increasingly shifts from studying information systems within organizations towards adopting an ecosystem perspective, where multiple actors co-create value. While digital platforms have become a ubiquitous phenomenon in consumer-facing industries, organizations remain cautious about fully embracing the ecosystem concept and sharing data with external partners. Concerns about the market power of platform orchestrators and ongoing discussions on privacy, individual empowerment, and digital sovereignty further complicate the widespread adoption of business ecosystems, particularly in the European Union. In this context, technological innovations in Web3, including blockchain and other distributed ledger technologies, have emerged as potential catalysts for disrupting centralized gatekeepers and enabling a strategic shift towards user-centric, privacy-oriented next-generation business ecosystems. However, existing research efforts focusing on decentralizing interactions through distributed network topologies and open protocols lack theoretical convergence, resulting in a fragmented and complex landscape that inadequately addresses the challenges organizations face when transitioning to an ecosystem strategy that harnesses the potential of disintermediation. To address these gaps and successfully engineer next-generation business ecosystems, a comprehensive approach is needed that encompasses the technical design, economic models, and socio-technical dynamics. This dissertation aims to contribute to this endeavor by exploring the implications of Web3 technologies on digital innovation and transformation paths. Drawing on a combination of qualitative and quantitative research, it makes three overarching contributions: First, a conceptual perspective on 'tokenization' in markets clarifies its ambiguity and provides a unified understanding of its role in ecosystems. This perspective includes frameworks on: (a) technological; (b) economic; and (c) governance aspects of tokenization. Second, a design perspective on 'decentralized marketplaces' highlights the need for an integrated understanding of micro-structures, business structures, and IT infrastructures in blockchain-enabled marketplaces. This perspective includes: (a) an explorative literature review on design factors; (b) case studies and insights from practitioners to develop requirements and design principles; and (c) a design science project with an interface design prototype of blockchain-enabled marketplaces. Third, an economic perspective examines 'self-sovereign identities' (SSI) as micro-structural elements of decentralized markets. This perspective includes: (a) value creation mechanisms and business aspects of strategic alliances governing SSI ecosystems; (b) business model characteristics adopted by organizations leveraging SSI; and (c) business model archetypes and a framework for SSI ecosystem engineering efforts. The dissertation concludes by discussing limitations as well as outlining potential avenues for future research. These include, amongst others, exploring the challenges of ecosystem bootstrapping in the absence of intermediaries, examining the make-or-join decision in ecosystem emergence, addressing the multidimensional complexity of Web3-enabled ecosystems, investigating incentive mechanisms for inter-organizational collaboration, understanding the role of trust in decentralized environments, and exploring varying degrees of decentralization with potential transition pathways.

    Measuring the impact of COVID-19 on hospital care pathways

    Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways showed measurable reductions in the mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean length of stay or conformance throughout the phases of the installation of the hospital’s new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
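    The conformance measurement used in the study can be sketched with an open-source process-mining toolkit. The paper does not state its tooling; the sketch below assumes pm4py's simplified interface, and the event-log file name is a placeholder.

```python
# Conformance-checking sketch. Assumes pm4py's simplified API; the study
# does not state its tooling, and "pathways.xes" is a placeholder log.
import pm4py

log = pm4py.read_xes("pathways.xes")

# Discover a normative model (e.g. from pre-pandemic traces), then measure
# how well traces replay on it via token-based replay fitness.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
fitness = pm4py.fitness_token_based_replay(log, net, initial_marking, final_marking)
print(fitness)  # includes average trace fitness and the share of fitting traces
```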

    Luck of the Draw III: Using AI to Examine Decision‐Making in Federal Court Stays of Removal

    This article examines decision‐making in Federal Court of Canada immigration law applications for stays of removal, focusing on how the rates at which stays are granted depend on which judge decides the case. The article deploys computational natural language processing, using a large language model (GPT‐3) to extract data from online Federal Court dockets. The article reviews patterns in outcomes in thousands of stay of removal applications identified through this process and reveals a wide range in stay grant rates across many judges. The article argues that the Federal Court should take measures to encourage more consistency in stay decision‐making and cautions against relying heavily on stays of removal to ensure that deportation complies with constitutional procedural justice protections. The article is also a demonstration of how machine learning can be used to pursue empirical legal research projects that would have been cost‐prohibitive or technically challenging only a few years ago, and shows how technology that is increasingly used to enhance the power of the state at the expense of marginalized migrants can instead be used to scrutinize legal decision‐making in the immigration law field, hopefully in ways that enhance the rights of migrants. The article also contributes to the broader field of computational legal research in Canada by making available to other non‐commercial researchers the code used for the project, as well as a large dataset of Federal Court dockets.
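    The extraction step can be illustrated with a short sketch. The article used GPT-3; the sketch below uses the current OpenAI chat-completions API instead, and the prompt and output fields (judge, outcome, date) are hypothetical, not the authors' schema or published code.

```python
# Sketch of LLM-based extraction from a docket entry. Hypothetical prompt
# and fields; the article's own GPT-3 pipeline and schema differ.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_stay_decision(docket_text: str) -> dict:
    prompt = (
        "From this Federal Court docket entry, return JSON with keys "
        '"judge", "outcome" ("granted" or "dismissed"), and "date":\n\n'
        + docket_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)
```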

    Strengthening Privacy and Cybersecurity through Anonymization and Big Data

    The abstract is in the attachment.

    LARD -- Landing Approach Runway Detection -- Dataset for Vision Based Landing

    As the interest in autonomous systems continues to grow, one of the major challenges is collecting sufficient and representative real-world data. Despite the strong practical and commercial interest in autonomous landing systems in the aerospace field, there is a lack of open-source datasets of aerial images. To address this issue, we present LARD, a dataset of high-quality aerial images for the task of runway detection during approach and landing phases. Most of the dataset is composed of synthetic images, but we also provide manually labelled images from real landing footage to extend the detection task to a more realistic setting. In addition, we offer the generator, which can produce such synthetic front-view images and enables automatic annotation of the runway corners through geometric transformations. This dataset paves the way for further research such as the analysis of dataset quality or the development of models to cope with the detection task. Find data, code and more up-to-date information at https://github.com/deel-ai/LARD
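    The automatic corner annotation through geometric transformations can be illustrated with a standard pinhole-camera projection. This is a generic sketch, not the LARD generator's code; the intrinsics, pose, and runway dimensions below are made-up values.

```python
# Generic pinhole projection of runway corners into the image plane.
# Illustrative only; K, R, t and the runway size are made-up values.
import numpy as np

K = np.array([[1000.0,    0.0, 960.0],   # fx, skew, cx
              [   0.0, 1000.0, 540.0],   # fy, cy
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                         # camera aligned with world axes
t = np.array([0.0, -100.0, 500.0])    # 100 m altitude, 500 m from threshold

# Four runway corners in world coordinates (metres), e.g. 45 m x 3000 m.
corners_world = np.array([
    [-22.5, 0.0,    0.0], [22.5, 0.0,    0.0],
    [ 22.5, 0.0, 3000.0], [-22.5, 0.0, 3000.0],
])

cam = (R @ corners_world.T).T + t     # world frame -> camera frame
uvw = (K @ cam.T).T                   # camera frame -> homogeneous pixels
pixels = uvw[:, :2] / uvw[:, 2:3]     # perspective divide
print(pixels)                         # corner labels for the synthetic image
```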

    Towards a tricorder: clinical, health economic, and ethical investigation of point-of-care artificial intelligence electrocardiogram for heart failure

    Heart failure (HF) is an international public health priority and a focus of the NHS Long Term Plan. There is a particular need in primary care for screening and early detection of heart failure with reduced ejection fraction (HFrEF) – the most common and serious HF subtype, and the only one with an abundant evidence base for effective therapies. Digital health technologies (DHTs) integrating artificial intelligence (AI) could improve diagnosis of HFrEF. Specifically, through a convergence of DHTs and AI, a single-lead electrocardiogram (ECG) can be recorded by a smart stethoscope and interrogated by AI (AI-ECG) to potentially serve as a point-of-care HFrEF test. However, there are concerning evidence gaps for such DHTs applying AI; across intersecting clinical, health economic, and ethical considerations. My thesis therefore investigates hypotheses that AI-ECG is 1.) Reliable, accurate, unbiased, and can be patient self-administered, 2.) Of justifiable health economic impact for primary care deployment, and 3.) Appropriate across ethical domains for deployment as a tool for patient self-administered screening. The theoretical basis for this work is presented in the Introduction (Chapter 1). Chapter 2 describes the first large-scale, multi-centre independent external validation study of AI-ECG, prospectively recruiting 1,050 patients and highlighting impressive performance: area under the curve, sensitivity, and specificity up to 0·91 (95% confidence interval: 0·88–0·95), 91·9% (78·1–98·3), and 80·2% (75·5–84·3) respectively; and absence of bias by age, sex, and ethnicity. Performance was independent of operator, and usability of the tool extended to patients being able to self-examine. Chapter 3 presents a clinical and health economic outcomes analysis using a contemporary digital repository of 2.5 million NHS patient records. A propensity-matched cohort was derived using all patients diagnosed with HF from 2015-2020 (n = 34,208). Novel findings included the unacceptable reality that 70% of index HF diagnoses are made through hospitalisation; where index diagnosis through primary care conferred a medium-term survival advantage and long-term cost saving (£2,500 per patient). This underpins a health economic model for the deployment of AI-ECG across primary care. Chapter 4 approaches a normative ethical analysis focusing on equity, agency, data rights, and responsibility for safe, effective, and trustworthy implementation of an unprecedented at-home patient self-administered AI-ECG screening programme. I propose approaches to mitigating any potential harms, towards preserving and promoting trust, patient engagement, and public health. Collectively, this thesis marks novel work highlighting AI-ECG as a tool with the potential to address major cardiovascular public health priorities. Scrutiny through complementary clinical, health economic, and ethical considerations can directly serve patients and health systems by blueprinting best-practice for the evaluation and implementation of DHTs integrating AI – building the conviction needed to realise the full potential of such technologies.
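    The headline metrics in Chapter 2 follow standard definitions, which the short sketch below reproduces; it is a generic illustration, not the thesis code, with placeholder arrays and a Wilson score interval as one common choice for the binomial confidence bounds.

```python
# Generic AUC, sensitivity, and specificity with Wilson 95% intervals.
# A sketch of the standard formulae with placeholder data, not thesis code.
import numpy as np
from sklearn.metrics import roc_auc_score

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # placeholder labels
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])  # placeholder scores
y_pred = (y_score >= 0.5).astype(int)

tp = int(np.sum((y_pred == 1) & (y_true == 1)))
tn = int(np.sum((y_pred == 0) & (y_true == 0)))
pos, neg = int(y_true.sum()), int(len(y_true) - y_true.sum())

print("AUC:", roc_auc_score(y_true, y_score))
print("Sensitivity:", tp / pos, wilson_ci(tp, pos))
print("Specificity:", tn / neg, wilson_ci(tn, neg))
```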