
    Quality-constrained routing in publish/subscribe systems

    Routing in publish/subscribe (pub/sub) is based on a communication model in which messages carry no explicit destination addresses; destinations are instead determined by matching the subscriptions declared by subscribers. For a dynamic computing environment with applications that have quality demands, this is not sufficient. In such environments, routing decisions should depend not only on the subscription predicate but should also take the quality constraints of applications and the characteristics of network paths into account. We identify three abstraction levels for these quality constraints: functional, middleware and network. The main contribution of the paper is the concept of integrating these constraints into pub/sub routing. This is done by extending the syntax of the pub/sub system and applying four generic guidelines that we propose. The added values of the quality-constrained routing concept are: message delivery that satisfies the quality demands of applications, improved system scalability and more efficient use of network resources. We discuss a use case that shows the practical value of our concept.
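
    To make the idea concrete, here is a minimal sketch (not the paper's actual syntax) of a subscription extended with quality constraints at the three levels and a matching function that also checks path characteristics; all class and field names are illustrative assumptions.

```python
# Illustrative sketch only: the paper extends pub/sub subscription syntax with
# quality constraints; the class and field names below are assumptions, not the
# authors' actual notation.
from dataclasses import dataclass, field

@dataclass
class QualityConstraints:
    max_latency_ms: float = float("inf")   # network-level constraint
    min_reliability: float = 0.0           # middleware-level constraint (delivery guarantee)
    max_staleness_s: float = float("inf")  # functional/application-level constraint

@dataclass
class Subscription:
    predicate: dict                        # content filter, e.g. {"type": "temperature"}
    quality: QualityConstraints = field(default_factory=QualityConstraints)

@dataclass
class PathCharacteristics:
    latency_ms: float
    reliability: float

def matches(publication: dict, sub: Subscription, path: PathCharacteristics,
            staleness_s: float) -> bool:
    """Route only if both the content predicate and the quality constraints hold."""
    content_ok = all(publication.get(k) == v for k, v in sub.predicate.items())
    quality_ok = (path.latency_ms <= sub.quality.max_latency_ms
                  and path.reliability >= sub.quality.min_reliability
                  and staleness_s <= sub.quality.max_staleness_s)
    return content_ok and quality_ok

# Example: a subscriber that tolerates at most 50 ms latency on the chosen path.
sub = Subscription({"type": "temperature"}, QualityConstraints(max_latency_ms=50))
print(matches({"type": "temperature", "value": 21.5}, sub,
              PathCharacteristics(latency_ms=30, reliability=0.99), staleness_s=1.0))
```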

    Geographic Information System Inputs to the Model Implementation Program for Water Quality Improvement

    The purpose of the study is to demonstrate and evaluate the effectiveness of geographic information system inputs to water quality improvement programs. The particular program in this case is the federally sponsored Model Implementation Program (MIP). The water quality problems addressed in the study stem from non-point sources of pollution; the major non-point source of pollution is soil erosion from agricultural land. It must be noted that the purpose of the study is not to evaluate the effectiveness of water quality improvement programs; that evaluation is the goal of MIP and beyond the scope of this study. The information needs of a water quality improvement program are usually very complex. Management decisions addressing water quality problems associated with non-point source pollution require the integration of land cover, soil, and land management information. The use of this complex information consumes time and manpower that could be utilized more efficiently to implement the management program. It is anticipated that the use of a computerized geographic information system will enable more efficient use of time and resources.

    A decision model for selecting of areas for improvement in EFQM model

    To monitor their progress towards business excellence, thousands of organizations across the world use self-assessment on a regular basis. A few popular business excellence models provide standard criteria against which an organization can measure its performance; the European Foundation for Quality Management (EFQM) Excellence Model is among the most popular. The EFQM Excellence Model was introduced at the beginning of 1992 as the framework for assessing organizations for the European Quality Award. It is now the most widely used organizational framework in Europe and across the world, and it has become the basis for the majority of international, national and regional quality awards. It is a practical tool that can be used as a guide to identify areas for improvement. However, the current EFQM model has a drawback: it is not able to identify priorities among Areas for Improvement (AFIs). Organizations with limited time, budget and resources cannot implement all of the Areas for Improvement, so standards or indexes should be defined for prioritizing and choosing among them. Using the TOPSIS method, a multi-attribute decision-making model, the Areas for Improvement can be prioritized. This work therefore develops a method of evaluating, assessing and determining Areas for Improvement in the EFQM model. The results showed that the developed model is valid and acceptable, and experts verified the model for selecting Areas for Improvement in the EFQM model in practice. The proposed model was employed in a case study, and the results drawn from it were evaluated from different points of view.
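
    The abstract does not give the authors' exact formulation, but a standard TOPSIS ranking of candidate Areas for Improvement could look like the following sketch; the criteria, weights and scores are invented for illustration.

```python
# Standard TOPSIS ranking of candidate Areas for Improvement (AFIs).
# The criteria, weights and scores below are illustrative, not from the paper.
import numpy as np

afis = ["Leadership review", "Process redesign", "Customer feedback loop"]
# Rows: AFIs; columns: criteria (impact, cost, time) scored 1-9 by experts.
scores = np.array([[7.0, 4.0, 5.0],
                   [9.0, 8.0, 7.0],
                   [6.0, 3.0, 2.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])  # impact is a benefit; cost and time are costs

# 1. Vector-normalise and weight the decision matrix.
norm = scores / np.linalg.norm(scores, axis=0)
weighted = norm * weights

# 2. Ideal and anti-ideal solutions per criterion.
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

# 3. Closeness coefficient: distance to anti-ideal over total distance.
d_ideal = np.linalg.norm(weighted - ideal, axis=1)
d_anti = np.linalg.norm(weighted - anti, axis=1)
closeness = d_anti / (d_ideal + d_anti)

# Higher closeness means higher priority for implementation.
for afi, c in sorted(zip(afis, closeness), key=lambda x: -x[1]):
    print(f"{afi}: {c:.3f}")
```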

    A case study framework for design and evaluation of a national project to improve prehospital care of myocardial infarction and stroke

    Background: Cardiovascular disease (CVD) affects 1.8% of the population annually, 0.9% with stroke and 0.8% with coronary heart disease. People suffering from CVD often present acutely to ambulance services with symptoms of acute myocardial infarction or stroke. Early and effective treatment prevents death, improves long-term health and reduces future disability. Objective: Our aim is to develop a rational approach for informing the design and evaluation of a national project for improving prehospital care of myocardial infarction and stroke: the Ambulance Services Cardiovascular Quality Initiative (ASCQI), the first national improvement project for prehospital care. Methods: We will use a case study methodology, initially utilising an evaluation logic model to define inputs (in terms of resources for planning, implementation and evaluation), outputs (in terms of intended changes in healthcare processes) and longer-term outcomes (in terms of health and wider benefits or harms), whether intended or incidental and in the short, medium or long term. Results: We will present an evaluation logic model for the project. This will be expanded to show the analytical techniques which we will use to explain how and why the project achieves its outcomes. These include time series analyses, pattern matching, cross-case syntheses and explanation building to inform an explanatory logic model. We will discuss how this model will be useful in determining the data that will need to be collected during the course of the project to inform the detailed explanation of how and why the project delivered its outcomes. Conclusion: The case study approach will enable us to evaluate the impact of this collaborative project in constituent ambulance services as well as the initiative as a whole. It will enable us to show whether and to what extent the project has had an impact, but also how and why this has happened.
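
    The time series element mentioned above is commonly implemented as a segmented (interrupted) time series regression; the sketch below uses simulated monthly data and a generic model form, not the ASCQI analysis itself.

```python
# Segmented (interrupted) time series regression, a common way to estimate the
# effect of an improvement initiative on a monthly care-bundle delivery rate.
# The data are simulated; the model form is generic, not the ASCQI analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(24)
intervention = (months >= 12).astype(int)           # initiative starts at month 12
time_after = np.where(intervention, months - 12, 0)
rate = 60 + 0.2 * months + 5 * intervention + 0.8 * time_after + rng.normal(0, 2, 24)

df = pd.DataFrame({"rate": rate, "month": months,
                   "intervention": intervention, "time_after": time_after})

# "intervention" captures the level change, "time_after" the slope change.
model = smf.ols("rate ~ month + intervention + time_after", data=df).fit()
print(model.params)
```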

    Development and Clinical Evaluation of an AI Support Tool for Improving Telemedicine Photo Quality

    Telemedicine utilization accelerated during the COVID-19 pandemic, and skin conditions were a common use case. However, the quality of photographs sent by patients remains a major limitation. To address this issue, we developed TrueImage 2.0, an artificial intelligence (AI) model for assessing patient photo quality for telemedicine and providing real-time feedback to patients for photo quality improvement. TrueImage 2.0 was trained on 1700 telemedicine images annotated by clinicians for photo quality. On a retrospective dataset of 357 telemedicine images, TrueImage 2.0 effectively identified poor-quality images (receiver operating characteristic area under the curve (ROC-AUC) = 0.78) and the reason for poor quality (blurry: ROC-AUC = 0.84; lighting issues: ROC-AUC = 0.70). Performance is consistent across age, gender, and skin tone. Next, we assessed whether patient-TrueImage 2.0 interaction led to an improvement in submitted photo quality through a prospective clinical pilot study with 98 patients. TrueImage 2.0 reduced the number of patients with a poor-quality image by 68.0%. Comment: 24 pages, 7 figures.
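
    The detection results above are standard binary-classification metrics; the sketch below shows how such ROC-AUC figures are typically computed, using placeholder labels and scores rather than TrueImage data.

```python
# How the reported ROC-AUC figures are typically computed for a binary
# "poor quality" detector. Labels and scores here are placeholders, not TrueImage data.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # 1 = poor-quality photo
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])   # predicted probability of poor quality

print(f"ROC-AUC = {roc_auc_score(y_true, y_score):.2f}")

# Per-reason detectors (e.g. "blurry", "lighting issue") are evaluated the same
# way, each as its own binary problem against its own ground-truth labels.
```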

    MS

    Several methods exist for monitoring software development, but few formal evaluation methods have been applied to measure and address clinical software application problems once the software has been implemented in the clinical setting. A standardized software problem classification system was developed and implemented at the University of Utah Health Sciences Center (UUHSC). External validity was measured by a survey of 14 University Healthcare Consortium (UHC) hospitals. Internal validation was accomplished by an in-depth analysis of problem details; revision of the problem ticket format; verification from staff within the information systems department; and mapping of old problems to the new classification system. Cohen's Kappa statistic of agreement, used for reliability testing of the new classification system, revealed good agreement (Kappa = 0.6162) among Help Desk agents in the consistency of classifying problem calls. A monthly quality improvement report template with the following categories was developed from the new classification system: top 25 problems; unplanned server downtimes; problem summaries; customer satisfaction survey results; top problem details; case analyses; and follow-up of case analyses. Continuous Quality Improvement (CQI) methodology was applied to problem reporting within the Office of Information Resources (OIR), and a web-based ticket entry system was implemented. The new system has resulted in the following benefits: reduction in problem resolution times by one third; improved problem ticket information; a shift of 2 FTEs from call center to dispatch due to the increased efficiency of the Help Desk; and a trend toward improved customer satisfaction as measured by an online survey. The study provided an internal quality model for the OIR department and the UUHSC. The quality improvement report template provided a method for tracking and trending software problems for use in evaluation and quality improvement studies, and supplied data for the analysis and improvement of customer satisfaction. The study has further potential as a model for information system departments at other health care institutions implementing quality improvement methods. There is potential for improvement in the information technology, social, organizational, and cultural aspects as key issues emerge over time, and many consequences of change can be studied from the data collected.
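
    Inter-rater agreement of the kind reported (Kappa = 0.6162) can be computed from any two raters' labels; a minimal sketch with made-up classifications follows.

```python
# Cohen's Kappa for agreement between two Help Desk agents classifying the same
# problem calls. The labels below are made up; only the method mirrors the study.
from sklearn.metrics import cohen_kappa_score

agent_a = ["network", "printer", "login", "network", "application", "printer", "login"]
agent_b = ["network", "printer", "login", "printer", "application", "printer", "network"]

kappa = cohen_kappa_score(agent_a, agent_b)
print(f"Cohen's Kappa = {kappa:.4f}")  # values around 0.6 are usually read as good agreement
```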

    Heuristic Approaches for Generating Local Process Models through Log Projections

    Local Process Model (LPM) discovery is focused on the mining of a set of process models where each model describes the behavior represented in the event log only partially, i.e. subsets of possible events are taken into account to create so-called local process models. Often such smaller models provide valuable insights into the behavior of the process, especially when no adequate and comprehensible single overall process model exists that is able to describe the traces of the process from start to end. The practical application of LPM discovery is, however, hindered by computational issues in the case of logs with many activities (problems may already occur when there are more than 17 unique activities). In this paper, we explore three heuristics to discover subsets of activities that lead to useful log projections, with the goal of speeding up LPM discovery considerably while still finding high-quality LPMs. We found that a Markov clustering approach to creating projection sets results in the largest improvement in execution time, with the discovered LPMs still being better than those obtained with randomly generated activity sets of the same size. Another heuristic, based on log entropy, yields a more moderate speedup, but enables the discovery of higher-quality LPMs. The third heuristic, based on relative information gain, shows unstable performance: for some data sets the speedup and LPM quality are higher than with the log-entropy-based method, while for other data sets there is no speedup at all. Comment: paper accepted and to appear in the proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM), special session on Process Mining, part of the Symposium Series on Computational Intelligence (SSCI).
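
    The abstract does not spell out the scoring functions; the sketch below is one plausible reading of an entropy-based ranking of activities used to pick a small projection set before LPM discovery, with all names and data invented.

```python
# One plausible reading of an entropy-based projection heuristic: score each
# activity by the Shannon entropy of its directly-follows successors and keep
# the most "structured" (lowest-entropy) activities as a projection set before
# running LPM discovery. This is an illustration, not the paper's exact method.
import math
from collections import Counter, defaultdict

log = [["a", "b", "c", "a", "b", "c"],
       ["a", "b", "c", "d", "e"],
       ["a", "c", "b", "d", "f"],
       ["a", "b", "c", "e", "d"]]

successors = defaultdict(Counter)
for trace in log:
    for cur, nxt in zip(trace, trace[1:]):
        successors[cur][nxt] += 1

def entropy(counter: Counter) -> float:
    total = sum(counter.values())
    return -sum((n / total) * math.log2(n / total) for n in counter.values())

scores = {act: entropy(cnt) for act, cnt in successors.items()}
projection = sorted(scores, key=scores.get)[:3]   # keep the 3 lowest-entropy activities
print(scores)
print("projection set:", projection)
```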

    Organisational culture and quality improvement: a study

    The initial direction of this research was the application of quality tools and techniques within the framework of the EFQM Model for Business Excellence. Three quality improvement projects managed by the author (Cost of Quality, BPR and Benchmarking) sought to identify the key elements of a process improvement methodology. However, the completion of the three case studies led the author to review the whole approach of the research. The review led to the need to develop an understanding of the culture and the environment of an organisation as a precursor to implementing quality improvement. The ability of an organisation to manage the process of continuous improvement or TQM implementation was fundamentally dependent on its culture. Organisational culture is the bedrock upon which organisational change is based, and an understanding of the culture can help the practitioner focus on key change issues at the outset. The main work in the research then set about developing and testing a model of organisational culture and climate which would help practitioners develop a fuller understanding of organisational culture and the internal environment before interventions are carried out. A process for developing an understanding of organisational culture and climate was derived using information obtained from the culture, quality and climate literature and the review of the case studies. This process included the use of tools and techniques such as a multi-item questionnaire and focus groups. Focus groups were used to identify key issues within Lloyds TSB and to help develop a multi-item questionnaire, termed PCOC. The PCOC questionnaire was then tested in four different Business Areas of Lloyds TSB and the results were analysed and compared to identify similarities and differences across Business Areas. The implications for the implementation of quality improvement were identified and recommendations for managing change were made.

    Assessment of foodservice quality and identification of improvement strategies using hospital foodservice quality model

    The purposes of this study were to assess hospital foodservice quality and to identify causes of quality problems and improvement strategies. Based on a review of the literature, hospital foodservice quality was defined and the Hospital Foodservice Quality model was presented. The study was conducted in two steps. In Step 1, the nutritional standards specified in diet manuals and the nutrients of planned menus, served meals, and consumed meals for regular, diabetic, and low-sodium diets were assessed in three general hospitals. Quality problems were found in all three hospitals, since patients consumed less than their nutritional requirements. Considering the effects of the four gaps in the Hospital Foodservice Quality model, Gaps 3 and 4 were selected as critical control points (CCPs) for hospital foodservice quality management. In Step 2, the causes of the gaps and the improvement strategies at the CCPs were labeled as "quality hazards" and "corrective actions", respectively, and were identified using a case study. At Gap 3, inaccurate forecasting and a lack of control during production were identified as quality hazards; the corrective actions proposed were establishing an accurate forecasting system, improving standardized recipes, emphasizing the use of standardized recipes, and conducting employee training. At Gap 4, the quality hazards were low-preference menus, inconsistent menu quality, a lack of menu variety, improper food temperatures, and patients' lack of understanding of their nutritional requirements. To reduce Gap 4, dietary departments should conduct patient surveys on menu preferences on a regular basis, develop new menus, especially for therapeutic diets, maintain food temperatures during distribution, provide more choices, conduct meal rounds, and provide nutrition education and counseling. The Hospital Foodservice Quality model was a useful tool for identifying causes of foodservice quality problems and improvement strategies from a holistic point of view.
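
    As a simple illustration of the Step 1 assessment (with invented figures, not the study's data), consumed nutrients can be compared against the nutritional standard to flag quality problems.

```python
# Illustrative Step 1-style check: compare consumed nutrients against the
# nutritional standard for a diet. All figures are invented, not the study's data.
requirements = {"energy_kcal": 1800, "protein_g": 75, "calcium_mg": 700}
consumed = {"energy_kcal": 1450, "protein_g": 58, "calcium_mg": 690}

for nutrient, required in requirements.items():
    ratio = consumed[nutrient] / required
    flag = "quality problem" if ratio < 0.9 else "ok"   # under-consumption threshold assumed at 90%
    print(f"{nutrient}: consumed {ratio:.0%} of requirement -> {flag}")
```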

    Applying Quality Function Deployment to Social Housing?

    Purpose of this paper: This paper focuses on the application of Quality Function Deployment (QFD) in a Housing Association located in the UK. Facing the problem of improving a company's performance, practitioners and academics have fashioned and applied a variety of models, theories and techniques. Design/methodology/approach: The research questions were developed from a review of the quality and process improvement literature and tested using evidence from field-based, action research within a UK Housing Association company. The case study provides insight into the benefits and challenges arising from the application of QFD. Findings: The results provided insight into the benefits and challenges arising from the application of a specific tool, QFD. The primary findings were: i) QFD can be successfully adapted, applied and utilised within the challenging environment of social housing and in other sectors, such as professional services; ii) the model can be modified for use with most processes/sub-processes, but it must include both external and internal requirements and, to be useful, employ appropriately detailed process parameters. Practical implications: The conclusions drawn add to ongoing commentaries on aspects of quality improvement, especially the application of QFD within the service sector. The authors develop questions for future research regarding improvement projects. Originality/value: The conclusion proposes that the implementation of QFD should have a positive impact upon a company if approached in the right manner. It provides a useful mechanism for developing an evidence-based strategy of operational change, control and improvement. The research proposes questions for future research into aspects of operational quality and efficiency.
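
    The adapted QFD matrix itself is not reproduced in the abstract, but the core house-of-quality calculation, weighting process parameters by customer-requirement importance and relationship strength, can be sketched as follows; the tenant requirements, parameters and scores are invented.

```python
# Core QFD ("house of quality") calculation: technical priority = sum over
# customer requirements of (importance x relationship strength). The tenant
# requirements, process parameters and scores below are invented for illustration.
import numpy as np

requirements = ["Repairs done right first time", "Short waiting time", "Clear communication"]
importance = np.array([5, 4, 3])            # customer-assigned importance (1-5)

parameters = ["First-visit fix rate", "Average response days", "Call-back within 24h"]
# Relationship matrix: rows = requirements, columns = process parameters (9/3/1/0 scale).
relationship = np.array([[9, 1, 3],
                         [3, 9, 1],
                         [1, 3, 9]])

priority = importance @ relationship        # weighted column sums
for param, score in sorted(zip(parameters, priority), key=lambda x: -x[1]):
    print(f"{param}: {score}")
```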