505 research outputs found

    A Survey on Service Quality Description

    Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service that is available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description proposed in the literature. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches for defining quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey inspects the characteristics of the available approaches to reveal which are consolidated, which address only specific aspects, and where the need for further research and investigation lies. The approaches illustrated here have been selected through a systematic review of conference proceedings and journals spanning various research areas in compute
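
    To make concrete the kind of information such quality metamodels capture, the following is a minimal, hedged sketch of an SLA holding quality metrics and a check of observed measurements against it. The class names, fields, and thresholds are illustrative assumptions, not taken from any of the surveyed metamodels.

```python
# Illustrative sketch only: a minimal, hypothetical SLA/QoS structure, not any
# specific metamodel from the survey. Names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class QualityMetric:
    name: str          # e.g. "response_time"
    unit: str          # e.g. "ms"
    threshold: float   # agreed bound
    higher_is_better: bool = False

@dataclass
class SLA:
    provider: str
    consumer: str
    metrics: list[QualityMetric]

    def violations(self, observed: dict[str, float]) -> list[str]:
        """Return the names of metrics whose observed value breaks the agreement."""
        broken = []
        for m in self.metrics:
            value = observed.get(m.name)
            if value is None:
                continue
            ok = value >= m.threshold if m.higher_is_better else value <= m.threshold
            if not ok:
                broken.append(m.name)
        return broken

sla = SLA("ProviderX", "ConsumerY", [
    QualityMetric("response_time", "ms", 200.0),
    QualityMetric("availability", "%", 99.9, higher_is_better=True),
])
print(sla.violations({"response_time": 350.0, "availability": 99.95}))  # ['response_time']
```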

    Model driven validation approach for enterprise architecture and motivation extensions

    As the adoption of Enterprise Architecture (EA) modelling continues to grow in diversity and complexity, management of its schema, artefacts, semantics, and relationships has become an important business concern. To maintain agility and flexibility within competitive markets, organizations have also been compelled to explore ways of adjusting proactively to innovations, changes, and complex events, including by using EA concepts to model business processes and strategies. The need to ensure appropriate validation of EA taxonomies has therefore been repeatedly identified as an essential requirement for these processes, in order to capture business motivation and relate information systems to technological infrastructure. However, since many taxonomies deployed today use widespread and disparate modelling methodologies, adopting a generic validation approach remains a challenge. The proliferation of EA methodologies and perspectives has also led to intricacies in the formalization and validation of EA constructs, as models often have divergent schematic interpretations. Thus, disparate implementations and inconsistent simulation of alignment between business architectures and heterogeneous application systems are common within the EA domain (Jonkers et al., 2003). In this research, the Model Driven Validation Approach (MDVA) is introduced. MDVA allows modelling of EA with validation attributes, formalization of the validation concepts, and transformation of model artefacts to ontologies. The transformation simplifies querying based on motivation and constraints. As the extended methodology is grounded in the semiotics of existing tools, validation is executed using a ubiquitous query language. The major contributions of this work are the extension of the Business Layer metamodel of an Enterprise Architecture Framework (EAF) with a Validation Element and the development of an EAF model-to-ontology transformation approach. With this innovation, domain-driven design and object-oriented analysis concepts are applied to validate EAF models using an ontology-querying methodology. Additionally, MDVA facilitates the traceability of EA artefacts using ontology graph patterns.
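
    To make the model-to-ontology validation idea more tangible, the hedged sketch below builds a tiny RDF graph for hypothetical EA artefacts and checks a motivation constraint with a SPARQL query via rdflib. The vocabulary (ex:Goal, ex:realises, the constraint itself) is invented for illustration and does not reproduce MDVA's actual metamodel or transformation rules.

```python
# Hedged illustration of validating EA artefacts through ontology querying.
# The namespace, classes, and constraint below are hypothetical examples,
# not the actual MDVA transformation.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/ea#")
g = Graph()
g.bind("ex", EX)

# A business process that realises a goal, and one with no motivation link.
g.add((EX.ApproveLoan, RDF.type, EX.BusinessProcess))
g.add((EX.ApproveLoan, EX.realises, EX.ReduceRisk))
g.add((EX.ReduceRisk, RDF.type, EX.Goal))
g.add((EX.ArchiveFiles, RDF.type, EX.BusinessProcess))

# Validation rule: every BusinessProcess must realise at least one Goal.
query = """
SELECT ?p WHERE {
  ?p a ex:BusinessProcess .
  FILTER NOT EXISTS { ?p ex:realises ?goal . ?goal a ex:Goal . }
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(f"Unmotivated process: {row.p}")  # flags ex:ArchiveFiles
```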

    An investigation of the utility and value of process patterns in the management of software development projects.

    Pattern theory has engendered much controversy in the field of architecture; yet it has brought new insights to the field of software engineering. Patterns continue to play an important role in software engineering in general, and in software development in particular. In this study, two preliminary surveys, focusing on the two fields of architecture and software engineering, were carried out to investigate the role and effect of patterns. The surveys indicate that, while patterns are unpopular within the architecture community and are criticised for stifling creativity, software patterns are popular within the software community, and a high proportion of software development companies use them in their development practice. The results, however, show that in the vast majority of cases pattern usage is limited to design-based problems involving a single type of pattern (i.e. design patterns). The results further show that process-based patterns are seldom used in the software development industry, which prompted the main investigation of this research: an evaluation of the effect and utility of process patterns. A controlled experimental research method was designed and used to evaluate the utility and value of process patterns in the management of software development projects. In this '2x2 factorial design' experiment, the subjects were divided into experimental and control groups, where the experimental groups were given a set of process patterns to use in their software development projects. Overall, over 750 subjects were involved in this experiment, and a total of 260 software development projects (individual and group projects) were investigated. Measurements of a number of appropriate software attributes were taken during the life of the projects through a devised goal-based measurement process. Further attributes were measured after the projects were completed. Using metrics, a number of software attributes across the four major phases of the development lifecycle (i.e. Requirements Analysis, Design, Implementation, and Delivery) were measured and statistically analysed. In addition to these specific measurement data, official marks awarded to the projects by the tutors were also used in the analysis. The objective was to determine whether the experimental groups produced software projects that were of higher quality, in terms of the measured software attributes, than the control groups. The experiment results show that, for thirteen measured attributes, the treated groups scored significantly higher than the control groups. The improvements span all four major development phases, with at least two attributes in each phase showing significant improvement. The experiment therefore confirms that the application of process patterns in software development projects improves the quality of the projects in terms of a number of specific attributes, such as productivity and defect density. The results further show that the treated subjects in the group projects performed significantly better than those in the individual projects. This confirms that, while the application of process patterns significantly improves the quality of both group and individual projects, the improvement is more prominent in team projects. Process patterns are thus shown to be more effective at improving the quality of software development projects when applied to team projects.
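
    As a hedged sketch of how such a 2x2 factorial comparison can be analysed, the snippet below runs a two-way ANOVA on simulated attribute scores with statsmodels. The factor levels, sample sizes, and simulated effect sizes are fabricated purely for demonstration and are not the study's actual measurements or results.

```python
# Hedged sketch: two-way ANOVA for a 2x2 factorial design (pattern use x team type).
# The data are randomly simulated stand-ins, not the experiment's measurements.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for pattern in ("treated", "control"):
    for team in ("group", "individual"):
        # Assumed shifts only; real attributes included productivity and defect density.
        shift = (0.8 if pattern == "treated" else 0.0) + (0.4 if team == "group" else 0.0)
        for score in rng.normal(70 + 10 * shift, 5, size=30):
            rows.append({"pattern": pattern, "team": team, "score": score})

df = pd.DataFrame(rows)
model = ols("score ~ C(pattern) * C(team)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction term
```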

    From Data to Decision: An Implementation Model for the Use of Evidence-based Medicine, Data Analytics, and Education in Transfusion Medicine Practice

    Healthcare in the United States is underperforming despite record increases in spending. The causes are as myriad and complex as the suggested solutions. It is increasingly important to carefully assess the appropriateness and cost-effectiveness of treatments, especially the most resource-consuming clinical interventions. Healthcare reimbursement models are evolving from fee-for-service to outcome-based payment. The Patient Protection and Affordable Care Act has added new incentives to address some of the cost, quality, and access issues related to healthcare, making the use of healthcare data and evidence-based decision-making essential strategies. However, despite the great promise of these strategies, the transition to data-driven, evidence-based medical practice is complex and faces many challenges. This study aims to bridge the gaps that exist between data, knowledge, and practice in a healthcare setting through a comprehensive framework that addresses the administrative, cultural, clinical, and technical issues that make the implementation and sustainability of an evidence-based program and the utilization of healthcare data so challenging. The study focuses on promoting evidence-based medical practice by leveraging a performance management system, targeted education, and data analytics to improve outcomes and control costs. The framework was implemented and validated in transfusion medicine practice. Transfusion is one of the top ten coded hospital procedures in the United States; unfortunately, its costs are underestimated and its benefits to patients are overestimated. The particular aim of this study was to reduce practice inconsistencies in red blood cell transfusion among hospitalists in a large urban hospital using evidence-based guidelines, a performance management system, recurrent reporting of practice-specific information, focused education, and data analytics in a continuous feedback mechanism to drive appropriate decision-making prior to the decision to transfuse and prior to issuing the blood component. The research in this dissertation provides the foundation for the implementation of an integrated framework that proved effective in encouraging evidence-based best practices among hospitalists to improve quality and lower costs of care. What follows is a discussion of the essential components of the framework, the results that were achieved, and observations on the next steps a learning healthcare organization would consider.
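    As a hedged illustration of the kind of recurrent, practice-specific reporting described above, the sketch below flags red blood cell transfusion orders whose pre-transfusion hemoglobin exceeds a restrictive threshold and summarizes the share per clinician. The 7 g/dL cut-off, column names, and figures are assumptions for demonstration; actual guideline thresholds depend on clinical context.

```python
# Hedged sketch of guideline-based transfusion reporting; all data are invented.
import pandas as pd

RESTRICTIVE_THRESHOLD_G_DL = 7.0  # assumed restrictive trigger for illustration

orders = pd.DataFrame({
    "hospitalist": ["A", "A", "B", "C"],
    "pre_transfusion_hgb_g_dl": [6.4, 8.1, 7.9, 6.9],
})

orders["outside_guideline"] = orders["pre_transfusion_hgb_g_dl"] > RESTRICTIVE_THRESHOLD_G_DL
report = (orders.groupby("hospitalist")["outside_guideline"]
                .mean()
                .rename("share_outside_guideline"))
print(report)  # per-clinician feedback, the raw material for recurrent reporting
```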

    A Process Model for the Integrated Reasoning about Quantitative IT Infrastructure Attributes

    IT infrastructures can be quantitatively described by attributes such as performance or energy efficiency. Ever-changing user demands and economic ambitions require varying short-term and long-term decisions regarding the alignment of an IT infrastructure, and particularly of its attributes, to this dynamic environment. Potentially conflicting attribute goals and the central role of IT infrastructures call for decision making based upon reasoning, the process of forming inferences from facts or premises. Existing reasoning approaches are unsuitable for this purpose because they focus on specific IT infrastructure parts or on a fixed (small) set of attributes: they neither cover the (complex) interplay of all IT infrastructure components simultaneously, nor do they address inter- and intra-attribute correlations sufficiently. This thesis presents a process model for integrated reasoning about quantitative IT infrastructure attributes. The process model's main idea is to formalize the compilation of an individual reasoning function, a mathematical mapping of parametric influencing factors and modifications onto an attribute vector. Compilation is based upon model integration, in order to benefit from the multitude of existing specialized, elaborated, and well-established attribute models. The resulting reasoning function consumes an individual tuple of IT infrastructure components, attributes, and external influencing factors, which gives it broad applicability. The process model formalizes a reasoning intent in three phases. First, reasoning goals and parameters are collected in a reasoning suite and formalized in a reasoning function skeleton. Second, the skeleton is iteratively refined, guided by the reasoning suite. Third, the resulting reasoning function is employed for What-if analyses, optimization, or descriptive statistics to conduct the concrete reasoning. The process model provides five template classes that collectively formalize all phases, in order to foster reproducibility and to reduce error-proneness. Validation of the process model is threefold. A controlled experiment reasons about a Raspberry Pi cluster's performance and energy efficiency to illustrate feasibility. In addition, a requirements analysis on a world-class supercomputer and on the Europe-wide execution of hydro-meteorology simulations, together with an examination of related work, discloses the process model's level of innovation. Potential future work employs the prepared automation capabilities, integrates human factors, and uses reasoning results for the automatic generation of modification recommendations.
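
    To make the idea of a compiled reasoning function more tangible, the hedged sketch below composes two toy attribute models (performance and power) into a single mapping from a configuration and an external influencing factor to an attribute vector, then uses it for a simple What-if comparison. The formulas, parameters, and numbers are invented placeholders, not the thesis's template classes or models.

```python
# Hedged sketch: composing simple attribute models into a reasoning function
# f(components, influencing factors) -> attribute vector. Constants are illustrative.
from dataclasses import dataclass

@dataclass
class ClusterConfig:
    nodes: int
    cpu_freq_ghz: float

def performance_model(cfg: ClusterConfig, load_factor: float) -> float:
    """Toy throughput model (requests/s): scales with node count and frequency."""
    return cfg.nodes * cfg.cpu_freq_ghz * 120.0 * load_factor

def power_model(cfg: ClusterConfig) -> float:
    """Toy power model (W): base draw plus a frequency-dependent share per node."""
    return cfg.nodes * (2.0 + 1.5 * cfg.cpu_freq_ghz)

def reasoning_function(cfg: ClusterConfig, load_factor: float) -> dict[str, float]:
    """Map a configuration and an external influencing factor to an attribute vector."""
    perf = performance_model(cfg, load_factor)
    power = power_model(cfg)
    return {"performance_rps": perf, "energy_efficiency_rps_per_w": perf / power}

# What-if analysis: add nodes versus raising the clock frequency.
for label, cfg in [("baseline", ClusterConfig(nodes=8, cpu_freq_ghz=1.2)),
                   ("more nodes", ClusterConfig(nodes=12, cpu_freq_ghz=1.2)),
                   ("higher freq", ClusterConfig(nodes=8, cpu_freq_ghz=1.5))]:
    print(label, reasoning_function(cfg, load_factor=0.8))
```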

    Performance assessment of urban precinct design: a scoping study

    Executive Summary: Significant advances have been made over the past decade in the development of scientifically and industry-accepted tools for the performance assessment of buildings in terms of energy, carbon, water, indoor environment quality, etc. Realising resilient, sustainable, low-carbon urban development in the 21st century, however, will require several radical transitions in design performance beyond the scale of individual buildings. One of these involves the creation and application of leading-edge tools (not widely available to built environment professions and practitioners) capable of assessing performance across all stages of development at a precinct scale (neighbourhood, community, and district) in greenfield, brownfield, or greyfield settings. A core aspect here is the development of a new way of modelling precincts, referred to as Precinct Information Modelling (PIM), which provides for transparent sharing and linking of precinct object information across the development life cycle, together with consistent, accurate, and reliable access to reference data, including that associated with the urban context of the precinct. Neighbourhoods are the ‘building blocks’ of our cities and represent the scale at which urban design needs to make its contribution to city performance: as productive, liveable, environmentally sustainable, and socially inclusive places (COAG 2009). Neighbourhood design constitutes a major area for innovation as part of an urban design protocol established by the federal government (Department of Infrastructure and Transport 2011, see Figure 1). The ability to efficiently and effectively assess urban design performance at a neighbourhood level is in its infancy. This study was undertaken by Swinburne University of Technology, University of New South Wales, CSIRO, and buildingSMART Australasia on behalf of the CRC for Low Carbon Living.
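
    As a hedged sketch of precinct-scale performance assessment, the snippet below defines a minimal, hypothetical precinct information structure that rolls building-level attributes up to neighbourhood totals. The object names, attributes, and figures are illustrative assumptions, not the PIM schema proposed in the study.

```python
# Hedged sketch of aggregating building attributes at precinct scale; data invented.
from dataclasses import dataclass, field

@dataclass
class Building:
    name: str
    floor_area_m2: float
    annual_energy_kwh: float
    annual_carbon_kg: float

@dataclass
class Precinct:
    name: str
    buildings: list[Building] = field(default_factory=list)

    def energy_intensity_kwh_per_m2(self) -> float:
        area = sum(b.floor_area_m2 for b in self.buildings)
        energy = sum(b.annual_energy_kwh for b in self.buildings)
        return energy / area if area else 0.0

    def total_carbon_t(self) -> float:
        return sum(b.annual_carbon_kg for b in self.buildings) / 1000.0

p = Precinct("Example neighbourhood", [
    Building("Apartments", 12000, 1_080_000, 540_000),
    Building("Retail", 3000, 450_000, 225_000),
])
print(round(p.energy_intensity_kwh_per_m2(), 1), "kWh/m2,",
      round(p.total_carbon_t(), 1), "t CO2e per year")
```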

    Into the Black Box: Designing for Transparency in Artificial Intelligence

    Indiana University-Purdue University Indianapolis (IUPUI)
    The rapid infusion of artificial intelligence into everyday technologies means that, in the very near future, consumers are likely to interact daily with intelligent systems that provide suggestions and recommendations. While these technologies promise much, current issues of low transparency create a high potential to confuse end-users, limiting the market viability of these technologies. While efforts are underway to make machine learning models more transparent, HCI currently lacks an understanding of how these model-generated explanations should best translate into the practicalities of system design. To address this gap, my research took a pragmatic approach to improving system transparency for end-users. Through a series of three studies, I investigated the need for and value of transparency to end-users, and explored methods to improve system designs so as to accomplish greater transparency in intelligent systems offering recommendations. My research resulted in a summarized taxonomy that outlines a variety of motivations for why users ask questions of intelligent systems, useful for considering the type and category of information users might appreciate when interacting with AI-based recommendations. I also developed a categorization of explanation types, known as explanation vectors, which is organized into groups that correspond to user knowledge goals. Explanation vectors give system designers options for delivering explanations of system processes beyond basic explainability. I developed a detailed user typology, a four-factor categorization of the predominant attitudes and opinion schemes of everyday users interacting with AI-based recommendations, useful for understanding the range of user sentiment towards AI-based recommender features and possibly for tailoring interface design by user type. Lastly, I developed and tested an evaluation method known as the System Transparency Evaluation Method (STEv), which allows real-world systems and prototypes to be evaluated and improved through a low-cost query method. Results from this dissertation offer concrete direction to interaction designers as to how these findings might manifest in the design of interfaces that are more transparent to end users. These studies provide a framework and methodology that are complementary to existing HCI evaluation methods, and lay the groundwork upon which other research into improving system transparency might build.
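
    As a hedged sketch of how explanation types might be paired with user knowledge goals in a recommender interface, the snippet below maps hypothetical question categories to explanation generators. The categories and wording are invented examples and do not reproduce the dissertation's actual taxonomy or explanation vectors.

```python
# Hedged sketch: selecting an explanation for an AI recommendation by knowledge goal.
# Categories, fields, and phrasing are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    top_features: list[str]
    confidence: float

EXPLANATION_VECTORS = {
    "why_this_item": lambda r: f"Recommended because of {', '.join(r.top_features)}.",
    "how_confident": lambda r: f"The system is {r.confidence:.0%} confident in this suggestion.",
    "how_it_works": lambda r: "Suggestions are ranked by similarity to items you rated highly.",
}

def explain(rec: Recommendation, knowledge_goal: str) -> str:
    """Select the explanation matching what the user wants to know."""
    return EXPLANATION_VECTORS.get(knowledge_goal, lambda r: "No explanation available.")(rec)

rec = Recommendation("Trail running shoes", ["your recent searches", "similar buyers"], 0.82)
print(explain(rec, "why_this_item"))
print(explain(rec, "how_confident"))
```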