
    Dataflow Programming and Acceleration of Computationally-Intensive Algorithms

    The volume of unstructured textual information continues to grow due to recent technological advancements. This has resulted in exponential growth of information generated in various formats, including blogs, posts, social networking, and enterprise documents. Numerous Enterprise Architecture (EA) documents are also created daily, such as reports, contracts, agreements, frameworks, architecture requirements, designs, and operational guides. The processing and computation of this massive amount of unstructured information necessitate substantial computing capabilities and the implementation of new techniques. It is critical to manage this unstructured information through a centralized knowledge management platform. Knowledge management is the process of managing information within an organization; it involves creating, collecting, organizing, and storing information in a way that makes it easily accessible and usable. The research involved the development of a textual knowledge management system, and two use cases were considered for extracting textual knowledge from documents. The first case study focused on the safety-critical documents of a railway enterprise. Safety is of paramount importance in the railway industry, and several EA documents, including manuals, operational procedures, and technical guidelines, contain critical information. Digitalization of these documents is essential for analysing the vast amount of textual knowledge they contain, in order to improve the safety and security of railway operations. A case study was conducted between the University of Huddersfield and the Rail Safety and Standards Board (RSSB) to analyse EA safety documents using natural language processing (NLP). A graphical user interface was developed that includes various document processing features such as semantic search, document mapping, text summarization, and visualization of key trends. For the second case study, open-source data was utilized, and textual knowledge was extracted. Several features were also developed, including kernel distribution, analysis of key trends, and word-level sentiment analysis (identifying unique, positive, and negative words) within the documents. Additionally, a heterogeneous framework was designed using CPUs/GPUs and FPGAs to analyse the computational performance of document mapping.
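    The thesis's code is not included here; as a rough illustration of the document-search feature described above, the sketch below ranks documents against a query using TF-IDF cosine similarity. This is a minimal stand-in for the semantic search the system provides, and the corpus, function name, and library choice (scikit-learn) are assumptions for illustration only.

```python
# Minimal document-search sketch: rank documents against a query by
# TF-IDF cosine similarity (a simple stand-in for semantic search).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # placeholder corpus; the thesis used railway safety documents
    "Operational procedure for track maintenance and inspection.",
    "Technical guideline covering signalling system requirements.",
    "Safety manual describing incident reporting and escalation.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

def search(query: str, top_k: int = 2):
    """Return the top_k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in ranked]

print(search("incident reporting safety"))
```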

    A Comprehensive Review on Audio based Musical Instrument Recognition: Human-Machine Interaction towards Industry 4.0

    Over the last two decades, the application of machine technology has shifted from industrial to residential use. Further, advances in the hardware and software sectors have pushed machine technology to its utmost application: human-machine interaction, a form of multimodal communication. Multimodal communication refers to the integration of various modalities of information such as speech, image, music, gesture, and facial expressions. Music is the non-verbal type of communication that humans often use to express their minds. Thus, Music Information Retrieval (MIR) has become a booming field of research and has gained a lot of interest from the academic community, the music industry, and the broad base of multimedia users. The central problem in MIR is accessing and retrieving a specific type of music, on demand, from extensive music data. The most inherent problem in MIR is music classification. The essential MIR tasks are artist identification, genre classification, mood classification, music annotation, and instrument recognition. Among these, instrument recognition is a vital sub-task in MIR for various reasons, including retrieval of music information, sound source separation, and automatic music transcription. In recent years, many researchers have reported different machine learning techniques for musical instrument recognition and demonstrated that several of them perform well. This article provides a systematic, comprehensive review of the advanced machine learning techniques used for musical instrument recognition. We focus on the different audio feature descriptors and the common choices of classifier used for musical instrument recognition. This review article emphasizes recent developments in music classification techniques and discusses a few associated future research problems.
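    As a concrete illustration of the feature-descriptor/classifier pairing that such reviews survey, the sketch below extracts MFCC descriptors with librosa and trains a support vector machine. It is a minimal, hedged example: the file names, labels, and hyperparameters are placeholders and are not taken from any reviewed paper.

```python
# Sketch of a common instrument-recognition pipeline: MFCC features + SVM.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load an audio clip and summarize it as mean MFCCs over time."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one fixed-length vector per clip

# Placeholder file lists; a real study would use a labelled corpus
# such as IRMAS or NSynth.
train_files = ["violin_01.wav", "flute_01.wav"]
train_labels = ["violin", "flute"]

X = np.stack([mfcc_features(f) for f in train_files])
clf = SVC(kernel="rbf").fit(X, train_labels)

print(clf.predict([mfcc_features("unknown_clip.wav")]))
```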

    The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments

    In industrial production environments, the maintenance of equipment has a decisive influence on costs and on the plannability of production capacities. In particular, unplanned failures during production times cause high costs, unplanned downtimes and possibly additional collateral damage. Predictive Maintenance starts here and tries to predict a possible failure and its cause so early that its prevention can be prepared and carried out in time. In order to be able to predict malfunctions and failures, the industrial plant with its characteristics, as well as wear and ageing processes, must be modelled. Such modelling can be done by replicating its physical properties. However, this is very complex and requires enormous expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models using data and offer an alternative, especially when very complex and non-linear behaviour is evident. In order for models to make predictions, as much data as possible about the condition of a plant and its environment, as well as production planning data, is needed. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing. Intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which data is transmitted, place high demands on the data processing systems. If a participating system wants to perform live analyses on the incoming data streams, it must be able to process the incoming data at least as fast as the continuous data stream delivers it. If this is not the case, the system falls further and further behind in processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially if they use complex and computationally intensive machine learning models. If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if this is not the case, or if the processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used can become an important criterion. This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where the runtime behaviour and the resource requirements are relevant. The question is whether it is possible to achieve better runtimes with similar result quality using a new type of neural network. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, the two hypotheses presented in this thesis emerged: a) the distribution of complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and b) if a neural cell has a deeper internal structure, this leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout, named Sliced Long Short-Term Memory Neural Network (SlicedLSTM), was developed. The SlicedLSTM implements the assumptions made in the aforementioned hypotheses in its inner model architecture.
Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in laboratory tests. The study uses synthetically generated data from a NASA project to predict failures of modules of aircraft gas turbines. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data. As a result, it was demonstrated for the specific application and the data used that the SlicedLSTM delivers faster processing times with similar result accuracy and thus clearly outperforms the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in the context of this thesis.
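    The thesis's SlicedLSTM implementation is not reproduced in this abstract; the following PyTorch sketch only illustrates the general idea behind hypothesis (a), splitting the input features across smaller parallel LSTM subnetworks whose outputs are merged by a shared head. All layer sizes are arbitrary assumptions, and the real SlicedLSTM architecture may differ substantially.

```python
# Illustrative sketch of hypothesis (a): distribute complexity across
# smaller LSTM subnetworks that can run in parallel, then merge their
# outputs. This is NOT the thesis's SlicedLSTM code, only the idea.
import torch
import torch.nn as nn

class SlicedLSTMSketch(nn.Module):
    def __init__(self, input_size: int, n_slices: int,
                 hidden_per_slice: int, n_classes: int):
        super().__init__()
        assert input_size % n_slices == 0
        self.slice_size = input_size // n_slices
        # Each slice is a small LSTM over a subset of the input features.
        self.slices = nn.ModuleList([
            nn.LSTM(self.slice_size, hidden_per_slice, batch_first=True)
            for _ in range(n_slices)
        ])
        self.head = nn.Linear(n_slices * hidden_per_slice, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, input_size); split the features across slices.
        parts = torch.split(x, self.slice_size, dim=2)
        last_hidden = []
        for lstm, part in zip(self.slices, parts):
            _, (h_n, _) = lstm(part)       # h_n: (1, batch, hidden)
            last_hidden.append(h_n[-1])
        return self.head(torch.cat(last_hidden, dim=1))

model = SlicedLSTMSketch(input_size=24, n_slices=4,
                         hidden_per_slice=16, n_classes=2)
out = model(torch.randn(8, 50, 24))        # 8 sequences, 50 time steps
print(out.shape)                           # torch.Size([8, 2])
```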

    Rethink Digital Health Innovation: Understanding Socio-Technical Interoperability as Guiding Concept

    This dissertation searches for a theoretical scaffold for developing complex digital health innovations in such a way that they have a better chance of actually arriving in everyday care practice. For although there is no shortage of either demand for or ideas about digital health innovations, the hoped-for flood of solutions successfully established in practice has failed to materialize. This insufficient diffusion success of a developed solution - often pathologized as 'pilotitis' - reveals itself especially when the planned innovation comes with greater ambition and complexity. The practiced critic will immediately raise heretical counter-questions: for example, what exactly should be understood by complex digital health innovations, and whether it is even possible to find a universal formula that can guarantee the successful diffusion of digital health innovations. Both questions are not only legitimate, they ultimately lead to the two research strands to which I explicitly devote myself in this dissertation. In a first block, I work out a delineation of those digital health innovations that currently receive particular attention in the literature and in practice because of their high potential to improve care and their resulting complexity. More precisely, I examine their dominant objectives and the challenges that accompany them. Within the work of this research strand, four objectives crystallize: 1. supporting continuous, collaborative care processes across diverse care providers (also known as inter-organizational care pathways); 2. actively involving patients in their care processes (also known as patient empowerment or patient engagement); 3. strengthening cross-sector collaboration between research and care practice, up to and including learning health systems; and 4. establishing data-centered value creation for healthcare on the basis of the increasing availability of valid data, new processing methods (in particular artificial intelligence), and the numerous possibilities for their use. The focus of this dissertation is therefore less on self-contained, clearly delimitable innovations (for example, a symptom-diary app for documenting complaints). Rather, this doctoral thesis addresses those innovation endeavors that pursue one or more of the above objectives, add a further technological piece to the puzzle of complex information system landscapes, and thus, in interplay with diverse other IT systems, contribute to improving healthcare and/or its organization. In engaging with these objectives and the associated challenges of system development, the problem of fragmented healthcare IT landscapes moved to the center of attention. This refers to the unfortunate state in which different information and application systems cannot interact with one another as desired. The result is interruptions of information flows and care processes, which must be compensated for elsewhere through error-prone additional effort (for example, duplicate documentation). To counter these limitations of effectiveness and efficiency, precisely these IT system silos must be dismantled. All of the above objectives serve this defragmenting effect, in that they seek to bring together 1. different care providers, 2. care teams and patients, 3. research and care, or 4. diverse data sources and modern analysis technologies. But here a complex circularity emerges: on the one hand, the digital health innovations addressed in this work seek ways to defragment information system landscapes; on the other hand, their limited success rate is grounded, among other things, in precisely the existing fragmentation they seek to dissolve. With this insight, the second research strand of this work opens up, which engages intensively with the property of 'interoperability' and examines how this property should take on a central role for innovation endeavors in the digital health domain. Interoperability describes, put simply, the ability of two or more systems to carry out shared tasks together. It thus represents the core concern of the identified objectives and is pivotal whenever a developed solution is to be integrated into a concrete target environment. From a technically dominated point of view, this is about guaranteeing valid, performant, and secure communication scenarios so that the above-mentioned breaks in information flow between technical subsystems are eliminated. A purely technical understanding of interoperability, however, is not enough to cover the variety of diffusion barriers facing digital health innovations, for problems such as the lack of adequate reimbursement options within the statutory framework, or a poor fit with the specific care process, are not purely technical. Rather, a basic stance of information systems research comes into play here, which regards information systems - including those of healthcare - as socio-technical systems and always considers technology in connection with the people who use it, are influenced by it, or organize it. If a digital health innovation that promises added value in line with the above objectives is to be integrated into an existing healthcare information system landscape, it must be 'interoperable' from technical as well as non-technical points of view. The necessity of interoperability is certainly recognized in research, policy, and practice, and positive movements of the domain toward more interoperability can be felt. However, a technical understanding still dominates, and the potential of this property as a guiding motif for innovation management has so far remained largely untapped. This is exactly where the main contribution of this doctoral thesis comes in: it proposes a socio-technical conceptualization and contextualization of interoperability for future digital health innovations. Based on the literature and expert input, a framework is developed - the Digital Health Innovation Interoperability Framework - which is intended above all to support innovators and innovation sponsors in increasing the probability of diffusion into practice. Many insights and messages are connected with this framework, which I would like to summarize for this prologue as follows: 1. To align the development of digital health innovations as well as possible toward successful integration into a specific target environment, the realization of a novel value proposition and the assurance of socio-technical interoperability are the two interrelated main tasks of an innovation process. 2. Ensuring interoperability is a management task that must be actively owned, and it is influenced by project-specific conditions as well as by external and internal dynamics. 3. Socio-technical interoperability in the context of digital health innovations can be defined across seven interdependent levels: political and regulatory conditions; contractual conditions; care and business processes; use; information; applications; IT infrastructure. 4. To ensure interoperability at each of these levels, strategies must be defined in a differentiated manner; they can be located on a continuum between compatibility requirements on the side of the innovation and the motivation of adaptations on the side of the target environment. 5. Striving for more interoperability promotes both the sustainable success of the individual digital health innovation and the defragmentation of existing information system landscapes, and thus contributes to the improvement of healthcare. Admittedly, the last of these five messages is colored more by conviction than it is the result of scientific proof. Nevertheless, I regard this insight, personal as it may be, as a maxim of the domain to which I feel I belong: the IT system development of healthcare.

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate the latest developments and applications of deep learning in these disciplines. However, the literature has not fully explored the applications of deep learning across all potential sectors. This paper thus extensively investigates the potential applications of deep learning across all major fields of study, as well as the associated benefits and challenges. As evidenced in the literature, DL's accuracy in prediction and analysis makes it a powerful computational tool, and its ability to adapt and optimize itself makes it effective in processing data with little task-specific preparation. At the same time, deep learning necessitates massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures like LSTMs and GRUs can be utilized. For multimodal learning, shared neurons in the neural network for all activities and specialized neurons for particular tasks are necessary. Comment: 64 pages, 3 figures, 3 tables
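    The closing remark about shared and specialized neurons describes what is commonly called hard parameter sharing in multi-task learning. A minimal sketch of that idea, with all layer sizes chosen arbitrarily for illustration, might look like this:

```python
# Hard parameter sharing for multi-task learning: a shared trunk
# ("shared neurons" used by every task) feeding task-specific heads
# ("specialized neurons"). Sizes are arbitrary illustration values.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim: int = 64, shared_dim: int = 32):
        super().__init__()
        self.shared = nn.Sequential(           # trunk shared by all tasks
            nn.Linear(in_dim, shared_dim), nn.ReLU()
        )
        self.head_a = nn.Linear(shared_dim, 10)  # e.g. classification task
        self.head_b = nn.Linear(shared_dim, 1)   # e.g. regression task

    def forward(self, x):
        z = self.shared(x)
        return self.head_a(z), self.head_b(z)

net = MultiTaskNet()
logits, value = net(torch.randn(4, 64))
print(logits.shape, value.shape)  # torch.Size([4, 10]) torch.Size([4, 1])
```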

    20th SC@RUG 2023 proceedings 2022-2023


    Amplifying the Music Listening Experience through Song Comments on Music Streaming Platforms

    Music streaming services are increasingly popular among younger generations, who seek social experiences through personal expression and the sharing of subjective feelings in comments. However, such emotional aspects are often ignored by current platforms, which affects the listeners' ability to find music that triggers specific personal feelings. To address this gap, this study proposes a novel approach that leverages deep learning methods to capture contextual keywords, sentiments, and induced mechanisms from song comments. The study augments a current music app with two features: the presentation of tags that best represent song comments, and a novel map metaphor that reorganizes song comments by chronological order, content, and sentiment. The effectiveness of the proposed approach is validated through a usage scenario and a user study, which demonstrate its capability to improve the user experience of exploring songs and browsing comments of interest. This study contributes to the advancement of music streaming services by providing a more personalized and emotionally rich music experience for younger generations. Comment: In the Proceedings of ChinaVis 202
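    As a rough sketch of the comment-analysis step (the paper's own models are deep-learning based and are not reproduced here), comments can be scored and bucketed by sentiment and time using an off-the-shelf lexicon such as NLTK's VADER; the comments and thresholds below are illustrative placeholders.

```python
# Toy sketch: score song comments with VADER and bucket them by
# sentiment in chronological order, approximating the kind of input
# the paper's map metaphor organizes. VADER is a stand-in here.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

comments = [  # placeholder comments with timestamps
    ("2023-01-02", "This chorus gives me chills every time."),
    ("2023-01-05", "The mixing on this track is muddy and flat."),
]

for date, text in sorted(comments):          # chronological order
    score = sia.polarity_scores(text)["compound"]
    label = ("positive" if score > 0.05
             else "negative" if score < -0.05 else "neutral")
    print(date, label, round(score, 2), text)
```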

    Computational Methods in the Study of Political Behavior

    In this thesis, I explore how individual-level actions contribute to aggregate political outcomes. In each chapter, I aim to understand an observed political behavior using data or methodologies previously unused in their contexts. The subject matter ranges from protest activity and vote choice to theoretical opinion models and a re-examination of how socioeconomic class is understood in quantitative work. In the first two chapters I employ novel datasets to understand phenomena where popular theories differ from empirical observations. In Chapter 1 I examine protest behavior, which is not the equilibrium prediction of models of collective action. I investigate what aspects of published language can predict protest participation and how these change leading up to and following protests. Specifically, I collect and, using natural language processing methods, analyze 4 million tweets of individuals who participated in the Black Lives Matter protests during the summer of 2020. Using geographical and temporal variation to isolate results, I find evidence that interest in the subject, measured as the percentage of online time spent discussing the matter, is correlated with protest behavior. However, I also find that collective identity, measured through pronoun use, does not have a strong relationship with protest behavior. Next, in Chapter 2, I use a survey, which I helped to develop and field, to understand the surprising results of the 2022 midterm elections. While most accepted models of midterm elections predicted massive Democratic losses (averaging around 40 seats in the House), these losses did not materialize. In fact, the Democratic Party did well: it did not lose a single state legislature, expanded some majorities, and lost only 9 seats in the House of Representatives. Testing various models of midterm elections, I show that the 2022 midterms were issue-based elections, in which views on abortion had a large impact on vote choice. In the second half of the thesis I focus on methodologies. Specifically, in Chapter 3, I expand on mathematical models of consensus building to better mimic reality. Bounded confidence models have historically been used to explain the convergence of opinions. In this chapter I add a repulsive element, modeling the inclination to differentiate oneself from someone who otherwise has similar beliefs. With this added component, convergence is no longer guaranteed. I explore both analytical and simulated numerical results to understand the dynamics of opinions in this new context. Finally, in Chapter 4, I introduce a method for operationalizing socioeconomic class as a latent variable in regression models. While a plethora of research shows that class affects opinions, views, and actions, the definition of class remains nebulous. I argue that this is a result of the nature of class, which is context dependent. Therefore, rather than determining class explicitly, I propose treating class within a mixture-model framework. This allows the exact definition of class to change with the context being analyzed and enables researchers to use class within their work. Following the theoretical arguments, I demonstrate the efficacy of the approach using the 2020 American National Election Studies survey to show how class differs when related to views of the U.S. Immigration and Customs Enforcement agency and the Black Lives Matter movement.
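    To make the Chapter 4 idea concrete: treating class as a latent variable turns the regression into a finite mixture, roughly of the form sketched below. The notation is mine, not the thesis's, and the component density f is deliberately left abstract.

```latex
% Finite-mixture regression with latent class k (illustrative notation):
% \pi_k(z_i) is the probability that respondent i belongs to class k given
% socioeconomic indicators z_i; each class carries its own coefficients \beta_k.
p(y_i \mid x_i, z_i) \;=\; \sum_{k=1}^{K} \pi_k(z_i)\, f\!\left(y_i \mid x_i^{\top} \beta_k\right),
\qquad \sum_{k=1}^{K} \pi_k(z_i) = 1 .
```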

    Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions

    Sixth-generation (6G) networks are expected to intelligently support a wide range of smart services and innovative applications. Such a context urges heavy usage of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent network functions/operations, which are able to fulfill the various requirements of the envisioned 6G services. Specifically, collaborative ML/DL consists of deploying a set of distributed agents that collaboratively train learning models without sharing their data, thus improving data privacy and reducing the time/communication overhead. This work provides a comprehensive study on how collaborative learning can be effectively deployed over 6G wireless networks. In particular, our study focuses on Split Federated Learning (SFL), a recently emerged technique that promises better performance than existing collaborative learning approaches. We first provide an overview of three emerging collaborative learning paradigms, namely federated learning, split learning, and split federated learning, as well as of 6G networks, including their main vision and the timeline of key developments. We then highlight the need for split federated learning in the upcoming 6G networks across every aspect, including 6G technologies (e.g., intelligent physical layer, intelligent edge computing, zero-touch network management, intelligent resource management) and 6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous systems). Furthermore, we review existing datasets along with frameworks that can help in implementing SFL for 6G networks. We finally identify key technical challenges, open issues, and future research directions related to SFL-enabled 6G networks.
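    For readers new to SFL, the single-process simulation below conveys its core mechanic: each client computes only up to a cut layer, a shared server model completes the forward and backward pass, and the client-side weights are federated-averaged after each round. It is a conceptual sketch in plain PyTorch under simplifying assumptions (random data, no communication layer), not a real SFL framework.

```python
# Single-process sketch of Split Federated Learning's core mechanic:
# clients own the layers up to a cut point, a server owns the rest,
# and client parts are federated-averaged after each round.
import copy
import torch
import torch.nn as nn

client_part = nn.Linear(16, 8)                        # layers before the cut
server_part = nn.Sequential(nn.ReLU(), nn.Linear(8, 2))
clients = [copy.deepcopy(client_part) for _ in range(3)]
opt_server = torch.optim.SGD(server_part.parameters(), lr=0.1)

for rnd in range(2):                                  # two training rounds
    for c in clients:
        opt_c = torch.optim.SGD(c.parameters(), lr=0.1)
        x, y = torch.randn(4, 16), torch.randint(0, 2, (4,))
        smashed = c(x)                                # client forward to cut layer
        loss = nn.functional.cross_entropy(server_part(smashed), y)
        opt_c.zero_grad()
        opt_server.zero_grad()
        loss.backward()                               # gradients flow back through the cut
        opt_c.step()
        opt_server.step()
    with torch.no_grad():                             # FedAvg over the client parts
        avg = {k: torch.stack([c.state_dict()[k] for c in clients]).mean(0)
               for k in clients[0].state_dict()}
        for c in clients:
            c.load_state_dict(avg)

print("final batch loss:", loss.item())
```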
