982 research outputs found

    Monitoring tools for DevOps and microservices: A systematic grey literature review

    Microservice-based systems are usually developed according to agile practices like DevOps, which enable rapid and frequent releases to promptly react and adapt to changes. Monitoring is a key enabler for these systems, as it allows teams to continuously obtain feedback from the field and supports timely, tailored decisions for quality-driven evolution. Given the range of monitoring tools available for microservices in DevOps-driven development, each with different features, assumptions, and performance, selecting a suitable tool is a task as difficult as it is impactful. This article presents the results of a systematic study of the grey literature that we performed to identify, classify and analyze the available monitoring tools for DevOps and microservices. We selected and examined 71 monitoring tools, drawing a map of their characteristics, limitations, assumptions, and open challenges, meant to be useful to both researchers and practitioners working in this area. Results are publicly available and replicable.

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to develop largely without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Thematic Working Group 3 – Inclusion of Excluded Populations: Access and Learning Optimization via IT in the Post-Pandemic Era

    Thematic Working Group (TWG) 3’s theme is “Inclusion of excluded populations: access and learning optimization via IT in the post-pandemic era”. A focal concern is established by the presence of the first word – ‘inclusion’ – and how this relates to ‘excluded populations’. Much of the research in this field has focused on inclusion for individuals; however, the evidence shows that educational exclusion has multiple dimensions (Passey, 2014). To accommodate this within the current focus, identifying key dimensions of ‘excluded populations’ will therefore be a key concern of this document. ‘Access’ will be considered beyond physical technology access, involving aspects of accessibility, agency and empowerment. These aspects relate to a definition of access that concerns the need for individuals to develop digital capabilities and the ability to select applications appropriate to purpose, as discussed, for example, by Helsper (2021) and Passey et al. (2018). Taking this wider concern for access, ‘learning optimization’ will be explored as a term that highlights the need to focus on technological access and provision enabling successful outcomes. Given that the intention of the work of TWG3 is to explore findings in the ‘post-pandemic’ context, communication technologies, as well as information technology (‘IT’), are clearly important and need to be considered. Additionally, exclusion factors to be addressed need to be clearly identified so that inclusion can be accommodated and ensured in the context of specific excluded populations. However, inclusion should not be imposed in the context of digital technologies, as some populations do not wish to use digital technologies (Wetmore, 2007); in this respect, the need to acknowledge diversity is important.

    Automatic Generation of Personalized Recommendations in eCoaching

    This thesis concerns eCoaching for real-time personal lifestyle support using information and communication technology. The challenge is to design, develop and technically evaluate a prototype of an intelligent eCoach that automatically generates personalized, evidence-based recommendations for a better lifestyle. The developed solution focuses on improving physical activity. The prototype uses wearable medical activity sensors. The collected data are represented semantically, and artificial-intelligence algorithms automatically generate meaningful, personalized and context-aware recommendations for reducing sedentary time. The thesis uses the well-established design science research methodology to develop theoretical foundations and practical implementations. Overall, this research focuses on technological verification rather than clinical evaluation.

    Interactive visualisation of electricity usage in smart environments

    Saving electricity is a trending topic due to the electricity challenges being faced globally. Smart environments are environments equipped with physical objects, including computers, sensors, actuators, smartphones, and wearable devices, interconnected through the Internet of Things. The Internet of Things provides a network for communication and computation, offering individuals smart services anytime and anywhere. Rapid developments in information technology have increased the number of smart appliances in use, leading to increased electricity usage. Devices and appliances in smart environments continue to consume electricity even when not in use because of the standby function, and this standby consumption accumulates to large amounts. Effective communication of electricity consumption in a smart environment through visualisation provides a viable way to reduce consumption. This research aimed to design and develop a visualisation system that successfully communicates electricity consumption to the user using a variety of visualisation techniques. The Design Science Research Methodology was used to address the research questions and to iteratively design and develop an energy usage visualisation system. The visualisation system was created for the Smart Lab at the Nelson Mandela University's Department of Computing Sciences. A usability study was conducted to assess the usability and efficacy of the system. The system was found to be usable and effective in communicating power usage to potential customers, since the participants were able to complete the tasks in a short amount of time. The positive results show that visualisation can aid in communicating electricity usage to customers, resulting in a possible reduction in electricity consumption and improved decision-making. Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 202

    Quality and coordination in home care: a national cross-sectional multicenter study – SPOTnat

    Homecare services include a wide range of medical treatments and therapies, basic care (e.g., personal hygiene), domestic services (e.g., household support) and social services. However, homecare has been neglected in most countries compared to hospitals and nursing homes, especially regarding healthcare research. As a result, while many countries see high-quality, sustainable care at home as a high-value goal, there are many knowledge gaps in the homecare setting. For agencies, challenges include increasing demand combined with a workforce shortage, constant cost pressure, and issues with both care coordination and care quality. Problematically, owing to a long-standing shortage of research, knowledge of these elements is scant. In this sector, large-scale studies that consider macro-, meso-, and micro-level factors and incorporate multiple perspectives and measurements to capture coordination and quality of care are extremely rare. When the SPOTnat study (Spitex Koordination und Qualität – eine nationale Studie (homecare coordination and quality – a national study)) began, no published study had examined how homecare agencies perform regarding care coordination. More importantly, none had determined which factors are associated with care coordination in the homecare setting. Moreover, across the entire health sector, no clear, accepted concept was available of what exactly constitutes coordination or of what it entails. This dissertation is embedded in the SPOTnat study. Its overall goal was to deepen our understanding of the homecare sector regarding care coordination and quality. A preliminary goal was therefore to clarify the concept of care coordination. Later goals included describing the various financial and regulatory mechanisms operating in the Swiss homecare setting.
    That information made it possible to explore how those factors relate to homecare agencies’ structures, processes, and working environments, how system and agency factors are related to care coordination, and ultimately how care coordination is related to quality of care. CHAPTER 1 presents the background, the target research gap and the rationale behind this dissertation. We look closely at the unique challenges of the homecare setting, particularly regarding coordination and care quality. In CHAPTER 2 we establish a theoretical basis for care coordination and explain how the concept of coordination can be understood and measured. Our newly constructed COORA (care coordination) framework differentiates clearly between coordination as a process (i.e., the tasks people perform to coordinate) and coordination as a state (i.e., the desired outcome of the coordination process). Applying this distinction to both measurement and interpretation of results helps avoid misleading conclusions. The COORA theoretical framework is based on the full range of influential coordination literature. Iteratively developed in consultation with healthcare professionals, patients and their relatives, it considers the complex relationships between the many factors influencing coordination (as an outcome), and is applicable not only to homecare but across healthcare settings. However, measurement of both care coordination and quality of care remains a challenge. Further research will be necessary to develop and validate a questionnaire that reliably measures care coordination as an outcome. CHAPTER 3 presents the research protocol for the SPOTnat study, a national multi-center cross-sectional survey in Swiss homecare settings. That study included 88 homecare agencies.
    Using public records and data from questionnaires sent to those agencies’ 3323 employees (including managers and homecare staff), 1508 clients and 1105 relatives of those clients, the SPOTnat research team gathered data on homecare financing mechanisms, agency characteristics and homecare employees' working environments and coordination activities, as well as staff- and patient-level perceptions of coordination and quality of care. CHAPTER 4 discusses our analyses of how regulatory and financial mechanisms explain differences in agency structures, processes and work environments. Based on the mechanisms acting on the participating agencies, we divided them into four groups. Our analyses showed considerable inter-group differences, especially in the range and volume of services provided, but also regarding their employment conditions and cost structures. The most prominent inter-group differences related to the conditions of their cantonal and municipal service agreements. Alongside such details, financial incentives must harmonize the care goals, i.e., achieving and maintaining accessible, high-quality homecare, with the regulatory goals, i.e., assuring the quality and financial sustainability of that care. CHAPTER 5 includes an analysis of how selected explicit and implicit agency-level coordination (process) mechanisms are linked to successful coordination (as an outcome). The results revealed that several implicit mechanisms, i.e., communication/information exchange, role clarity, mutual respect/trust, accountability/predictability/common perspectives, and knowledge of the health system, all correlate with employee-perceived coordination ratings. We also found that certain coordination mechanisms mediated the effects both of agency characteristics (i.e., staffing/workload and overtime) and of external factors (i.e., regulations).
    In CHAPTER 6, the final included study gives insights into how both homecare employees’ and clients’ coordination-relevant perceptions relate to one another’s quality-of-care ratings. Our analyses indicate that employee-perceived care coordination ratings correlated positively with their own quality-of-care ratings, while client-perceived care coordination problems correlated inversely with client-reported quality of care. Client-perceived coordination problems also correlated positively with hospitalizations and unscheduled urgent medical visits, but not significantly with emergency department visits. No associations were found between employee-perceived coordination and either healthcare service utilization or client quality-of-care ratings. Alongside these relationships, various coordination deficiencies, for example poor information flow, also became apparent. To conclude, CHAPTER 7 provides a synthesis of the main findings and discusses the results in relation to practical, political and research implications. While contributing further to the understanding of care coordination via the COORA framework, this dissertation also raises various methodological issues. From a practical perspective, measuring and operationalizing both coordinating processes and quality-of-care outcomes remain challenging issues. While our qualitative results suggest that improving coordination will lead to higher-quality care, testing and ultimately exploiting any such relationship will require not only improved financial and technical structures, but also the abandonment of outmoded siloed attitudes across the entire homecare sector.

    Open Problems in DAOs

    Decentralized autonomous organizations (DAOs) are a new, rapidly growing class of organizations governed by smart contracts. Here we describe how researchers can contribute to the emerging science of DAOs and other digitally constituted organizations. From granular privacy primitives to mechanism designs to model laws, we identify high-impact problems in the DAO ecosystem where existing gaps might be tackled through a new data set or by applying tools and ideas from existing research fields such as political science, computer science, economics, law, and organizational science. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the wider research community to join the global effort to invent the next generation of organizations.

    A conceptual framework for uncertainty in software systems and its application to software architectures

    The development and operation of a software system involve many aspects, including processes, artefacts, infrastructure and environments. Most of these aspects are vulnerable to uncertainty. Thus, the identification, representation and management of uncertainty in software systems are important and will be of interest to many stakeholders in software systems. The hypothesis of this work is that such consideration would benefit from an underlying conceptual framework that allows stakeholders to characterise, analyse and mitigate uncertainties. This PhD thesis proposes a framework to provide a generic foundation for the systematic and explicit consideration of uncertainty in software systems by consolidating and extending existing approaches to dealing with uncertainty, which are typically tailored to specific domains or artefacts. The thesis applies the framework to software architectures, which are fundamental in determining the structure, behaviour and qualities of software systems and are thus suited to serve as an exemplar artefact. The framework is evaluated using the software architectures of case studies from three different domains. The contributions of the research to the study of uncertainty in software systems include: a literature review of approaches to managing uncertainty in software architecture; a review of existing work on uncertainty frameworks related to software systems; a conceptual framework for uncertainty in software systems; a conceptualisation of the workbench infrastructure as a basis for building an uncertainty-consideration workbench of tools for representing uncertainty as part of software architecture descriptions; and an evaluation of the uncertainty framework using three software architecture case studies.

    Evaluating Architectural Safeguards for Uncertain AI Black-Box Components

    KĂŒnstliche Intelligenz (KI) hat in den vergangenen Jahren große Erfolge erzielt und ist immer stĂ€rker in den Fokus geraten. Insbesondere Methoden des Deep Learning (ein Teilgebiet der KI), in dem Tiefe Neuronale Netze (TNN) zum Einsatz kommen, haben beeindruckende Ergebnisse erzielt, z.B. im autonomen Fahren oder der Mensch-Roboter-Interaktion. Die immense DatenabhĂ€ngigkeit und KomplexitĂ€t von TNN haben jedoch gravierende Schwachstellen offenbart. So reagieren TNN sensitiv auf bestimmte Einflussfaktoren der Umwelt (z.B. Helligkeits- oder KontrastĂ€nderungen in Bildern) und fĂŒhren zu falschen Vorhersagen. Da KI (und insbesondere TNN) in sicherheitskritischen Systemen eingesetzt werden, kann solch ein Verhalten zu lebensbedrohlichen Situationen fĂŒhren. Folglich haben sich neue Forschungspotenziale entwickelt, die sich explizit der Absicherung von KI-Verfahren widmen. Ein wesentliches Problem bei vielen KI-Verfahren besteht darin, dass ihr Verhalten oder Vorhersagen auf Grund ihrer hohen KomplexitĂ€t nicht erklĂ€rt bzw. nachvollzogen werden können. Solche KI-Modelle werden auch als Black-Box bezeichnet. Bestehende Arbeiten adressieren dieses Problem, in dem zur Laufzeit “bösartige” Eingabedaten identifiziert oder auf Basis von Ein- und Ausgaben potenziell falsche Vorhersagen erkannt werden. Arbeiten in diesem Bereich erlauben es zwar potenziell unsichere ZustĂ€nde zu erkennen, machen allerdings keine Aussagen, inwiefern mit solchen Situationen umzugehen ist. Somit haben sich eine Reihe von AnsĂ€tzen auf Architektur- bzw. Systemebene etabliert, um mit KI-induzierten Unsicherheiten umzugehen (z.B. N-Version-Programming-Muster oder Simplex Architekturen). DarĂŒber hinaus wĂ€chst die Anforderung an KI-basierte Systeme sich zur Laufzeit anzupassen, um mit sich verĂ€ndernden Bedingungen der Umwelt umgehen zu können. Systeme mit solchen FĂ€higkeiten sind bekannt als Selbst-Adaptive Systeme. 
Software-Ingenieure stehen nun vor der Herausforderung, aus einer Menge von Architekturellen Sicherheitsmechanismen, den Ansatz zu identifizieren, der die nicht-funktionalen Anforderungen bestmöglich erfĂŒllt. Jeder Ansatz hat jedoch unterschiedliche Auswirkungen auf die QualitĂ€tsattribute des Systems. Architekturelle Entwurfsentscheidungen gilt es so frĂŒh wie möglich (d.h. zur Entwurfszeit) aufzulösen, um nach der Implementierung des Systems Änderungen zu vermeiden, die mit hohen Kosten verbunden sind. DarĂŒber hinaus mĂŒssen insbesondere sicherheitskritische Systeme den strengen (QualitĂ€ts-) Anforderungen gerecht werden, die bereits auf Architektur-Ebene des Software-Systems adressiert werden mĂŒssen. Diese Arbeit befasst sich mit einem modellbasierten Ansatz, der Software-Ingenieure bei der Entwicklung von KI-basierten System unterstĂŒtzt, um architekturelle Entwurfsentscheidungen (bzw. architekturellen Sicherheitsmechanismen) zum Umgang mit KI-induzierten Unsicherheiten zu bewerten. Insbesondere wird eine Methode zur ZuverlĂ€ssigkeitsvorhersage von KI-basierten Systemen auf Basis von etablierten modellbasierten Techniken erforscht. In einem weiteren Schritt wird die Erweiterbarkeit/Verallgemeinerbarkeit der ZuverlĂ€ssigkeitsvorhersage fĂŒr Selbst-Adaptive Systeme betrachtet. Der Kern beider AnsĂ€tze ist ein Umweltmodell zur Modellierung () von KI-spezifischen Unsicherheiten und () der operativen Umwelt des Selbst-Adaptiven Systems. Zuletzt wird eine Klassifikationsstruktur bzw. Taxonomie vorgestellt, welche, auf Basis von verschiedenen Dimensionen, KI-basierte Systeme in unterschiedliche Klassen einteilt. Jede Klasse ist mit einem bestimmten Grad an VerlĂ€sslichkeitszusicherungen assoziiert, die fĂŒr das gegebene System gemacht werden können. Die Dissertation umfasst vier zentrale BeitrĂ€ge. 1. 
DomĂ€nenunabhĂ€ngige Modellierung von KI-spezifischen Umwelten: In diesem Beitrag wurde ein Metamodell zur Modellierung von KI-spezifischen Unsicherheiten und ihrer zeitlichen Ausdehnung entwickelt, welche die operative Umgebung eines selbstadaptiven Systems bilden. 2. ZuverlĂ€ssigkeitsvorhersage von KI-basierten Systemen: Der vorgestellte Ansatz erweitert eine existierende Architekturbeschreibungssprache (genauer: Palladio Component Model) zur Modellierung von Komponenten-basierten Software-Architekturen sowie einem dazugehörigenWerkzeug zur ZuverlĂ€ssigkeitsvorhersage (fĂŒr klassische Software-Systeme). Das Problem der Black-Box-Eigenschaft einer KI-Komponente wird durch ein SensitivitĂ€tsmodell adressiert, das, in AbhĂ€ngigkeit zu verschiedenen Unsicherheitsfaktoren, die PrĂ€dektive Unsicherheit einer KI-Komponente modelliert. 3. Evaluation von Selbst-Adaptiven Systemen: Dieser Beitrag befasst sich mit einem Rahmenwerk fĂŒr die Evaluation von Selbst-Adaptiven Systemen, welche fĂŒr die Absicherung von KI-Komponenten vorgesehen sind. Die Arbeiten zu diesem Beitrag verallgemeinern/erweitern die Konzepte von Beitrag 2 fĂŒr Selbst-Adaptive Systeme. 4. Klassen der VerlĂ€sslichkeitszusicherungen: Der Beitrag beschreibt eine Klassifikationsstruktur, die den Grad der Zusicherung (in Bezug auf bestimmte Systemeigenschaften) eines KI-basierten Systems bewertet. Der zweite Beitrag wurde im Rahmen einer Fallstudie aus dem Bereich des Autonomen Fahrens validiert. Es wurde geprĂŒft, ob PlausibilitĂ€tseigenschaften bei der ZuverlĂ€ssigkeitsvorhersage erhalten bleiben. Hierbei konnte nicht nur die PlausibilitĂ€t des Ansatzes nachgewiesen werden, sondern auch die generelle Möglichkeit Entwurfsentscheidungen zur Entwurfszeit zu bewerten. FĂŒr die Validierung des dritten Beitrags wurden ebenfalls PlausibilitĂ€tseigenschaften geprĂŒft (im Rahmen der eben genannten Fallstudie und einer Fallstudie aus dem Bereich der Mensch-Roboter-Interaktion). 
DarĂŒber hinaus wurden zwei weitere Community-Fallstudien betrachtet, bei denen (auf Basis von Simulatoren) Selbst-Adaptive Systeme bewertet und mit den Ergebnissen unseres Ansatzes verglichen wurden. In beiden FĂ€llen konnte gezeigt werden, dass zum einen alle PlausibilitĂ€tseigenschaft erhalten werden und zum anderen, der Ansatz dieselben Ergebnisse erzeugt, wie die DomĂ€nen-spezifischen Simulatoren. DarĂŒber hinaus konnten wir zeigen, dass unser Ansatz Software-Ingenieure bzgl. der Bewertung von Entwurfsentscheidungen, die fĂŒr die Entwicklung von Selbst-Adaptiven Systemen relevant sind, unterstĂŒtzt. Der erste Beitrag wurde implizit mit Beitrag 2 und mit 3 validiert. FĂŒr den vierten Beitrag wurde die Klassifikationsstruktur auf bekannte und reprĂ€sentative KI-Systeme angewandt und diskutiert. Es konnte jedes KI-System in eine der Klassen eingeordnet werden, so dass die generelle Anwendbarkeit der Klassifikationsstruktur gezeigt wurde

    Deploying OWL ontologies for semantic mediation of mixed-reality interactions for human–robot collaborative assembly

    For effective human–robot collaborative assembly, it is paramount to view both robots and humans as autonomous entities that can communicate, undertake different roles, and not be bound to pre-planned routines and task sequences. However, with very few exceptions, most recent research assumes static, pre-defined roles during collaboration, with centralised architectures devoid of the runtime communication that can influence task responsibility and execution. Furthermore, from an information system standpoint, such approaches lack the self-organisation needed to cope with today’s manufacturing landscape, which is characterised by product variants. Therefore, this study presents the collaborative agents for manufacturing ontology (CAMO), an information model based on description logic that maintains a self-organising team network within a collaborating human–robot multi-agent system (MAS). CAMO is implemented using the Web Ontology Language (OWL). It models popular notions of net systems and represents the agent, manufacturing, and interaction contexts, accommodating generalisability to different assemblies and agent capabilities. As a novel element, a dynamic consensus-driven collaboration based on parametric validation of semantic representations of agent capabilities via runtime dynamic communication is presented. CAMO is instantiated as agent beliefs in a framework that benefits from real-time dynamic communication with the assembly design environment and incorporates a mixed-reality environment for use by the operator. The employment of web technologies to project scalable notions of intentions via mixed reality is discussed for its novelty from a technology standpoint and as an intention projection mechanism. A case study with a real diesel engine assembly provides appreciable results and demonstrates the feasibility of CAMO and the framework. Peer reviewed
