Process of designing robust, dependable, safe and secure software for medical devices: Point of care testing device as a case study
This article has been made available through the Brunel Open Access Publishing Fund. Copyright © 2013 Sivanesan Tulasidas et al. This paper presents a holistic methodology for the design of medical device software, encompassing a new way of eliciting requirements, a system design process, security design guidelines, cloud architecture design, a combinatorial testing process and agile project management. The paper uses point of care diagnostics as a case study, where the software and hardware must be robust and reliable to provide accurate diagnosis of diseases. As software and software-intensive systems become increasingly complex, the impact of failures can lead to significant property damage or damage to the environment. Within the medical diagnostic device software domain, such failures can result in misdiagnosis leading to clinical complications and, in some cases, death. Software faults can arise from the interaction among the software, the hardware, third-party software and the operating environment. Unanticipated environmental changes and latent coding errors lead to operational faults despite the fact that significant effort has usually been expended on the design, verification and validation of the software system. It is becoming increasingly apparent that one needs to adopt different approaches which will guarantee that a complex software system meets all safety, security, and reliability requirements, in addition to complying with standards such as IEC 62304. Many initiatives have been taken to develop safety- and security-critical systems, at different development phases and in different contexts, ranging from infrastructure design to device design, and different approaches are implemented to design error-free software for safety-critical systems.
By adopting the strategies and processes presented in this paper, one can overcome the challenges in developing error-free software for medical devices (or safety-critical systems).
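One of the techniques the abstract names is a combinatorial testing process. A minimal sketch of the idea, using hypothetical device test parameters (the parameter names and greedy pairwise algorithm below are illustrative assumptions, not the paper's method):

```python
from itertools import product, combinations

# Hypothetical test parameters for a point-of-care device (illustrative names).
params = {
    "sample_type": ["blood", "saliva"],
    "connectivity": ["offline", "wifi", "cellular"],
    "battery": ["low", "normal"],
}

all_rows = list(product(*params.values()))  # exhaustive test suite

def pairs_of(row):
    """All (parameter-index, value) pairs a single test row covers."""
    return {((i, row[i]), (j, row[j])) for i, j in combinations(range(len(row)), 2)}

# Greedy pairwise reduction: repeatedly pick the row covering the most
# still-uncovered value pairs, until every pair appears in some test.
uncovered = set().union(*(pairs_of(r) for r in all_rows))
suite = []
while uncovered:
    best = max(all_rows, key=lambda r: len(pairs_of(r) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"exhaustive: {len(all_rows)} tests, pairwise: {len(suite)} tests")
```

Pairwise coverage exercises every two-way parameter interaction with far fewer tests than the full cartesian product, which is the usual motivation for combinatorial testing in device software.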
How to Create an Innovation Accelerator
Too many policy failures are fundamentally failures of knowledge. This has
become particularly apparent during the recent financial and economic crisis,
which is questioning the validity of mainstream scholarly paradigms. We propose
to pursue a multi-disciplinary approach and to establish new institutional
settings which remove or reduce obstacles impeding efficient knowledge
creation. We provide suggestions on (i) how to modernize and improve the
academic publication system, and (ii) how to support scientific coordination,
communication, and co-creation in large-scale multi-disciplinary projects. Both
constitute important elements of what we envision to be a novel ICT
infrastructure called "Innovation Accelerator" or "Knowledge Accelerator".
Comment: 32 pages, Visioneer White Paper, see http://www.visioneer.ethz.c
A study of the efficacy of a reliability management system - with suggestions for improved data collection and decision making.
Master's thesis in Risk Management. Product reliability is very important, especially from the perspective of new product development. Making highly reliable drilling and well equipment is an expensive and time-consuming process, but ignoring product reliability could prove even more costly. Manufacturers therefore need to decide on a reliability performance target that strikes a proper balance between time, cost and reliability to ensure the desired results. A reliability management system is a tool that manufacturers can use to manage this process and produce reliable equipment. However, if this system is not well structured and lacks important features, it can affect the outcomes of reliability analysis and decision making. A lot of research has been done on creating good reliability and maintenance databases to improve systems reliability in the petroleum industry; the Offshore & Onshore Reliability Data (OREDA) project and ISO 14224 are results of such research. The main objective of this research is to analyze the existing reliability management system (RMS) at Petroleum Technology Company (PTC) in terms of its structure, features, functionality, and the quality of the data recorded in the RMS, and how these affect decision making. The research was motivated by the following issues: 1) PTC's reliability management system is not automated in terms of extracting data from other sources within the company, 2) PTC lacks a dedicated platform for failure reporting of its equipment, and 3) the activities related to data collection and management are not well organized and hence demand more effort. To analyze these issues, a literature study was performed to review the existing standards in the industry. ISO 14224 and OREDA define a very structured database that gives easy access to reliability and maintenance data. The OREDA database has a well-defined taxonomy, boundaries and database structure.
Also, it has a well-organized procedure in place to collect and store reliability data. Quality assessment of the collected data is done through predefined procedure guidelines, and OREDA has a very consistent list of codes for storing information in coded form in the reliability and maintenance database. By reviewing the existing standards in the industry, a few shortcomings were identified in both the RMS and PTC's failure reporting procedures. It is observed that data from the sources is collected by a responsible person, but the collection method is usually neither tested nor planned. Data collection sources, methods and procedures, within the company or outside it, lack well-defined criteria and data quality assurance processes. Currently, the company uses Field Service Reports (FSR) and the company's other databases as data sources for the RMS. The company cannot access clients' systems that contain equipment utilization and process-related information. This can lead to missing information or ambiguous data, because the person responsible for data entry sometimes needs to make assumptions to complete the missing operational and environmental data.
The RMS database structure lacks a well-defined taxonomy, design parameters, and adequate failure mode classification. Failure mode classification is an important aspect of a high-quality database, since it can help in identifying the need for changes to maintenance periodicities or for additional checks. Companies participating in the Offshore & Onshore Reliability Data (OREDA) project, e.g. Statoil, can calculate failure rates for selected data populations within well-defined boundaries of manufacturer, design and operational parameters. These features are missing in the RMS database.
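The failure-rate calculation described here can be sketched in a few lines: aggregate failures and operating time over a population selected by well-defined boundaries, then take their ratio. The records and field names below are hypothetical, chosen only to illustrate the OREDA-style computation:

```python
# Illustrative failure-rate calculation: lambda = total failures / total
# operating hours, restricted to a population within chosen boundaries
# (equipment class, manufacturer, ...). All data here is made up.
records = [
    {"equipment": "gauge", "manufacturer": "A", "op_hours": 8760, "failures": 2},
    {"equipment": "gauge", "manufacturer": "A", "op_hours": 4380, "failures": 1},
    {"equipment": "gauge", "manufacturer": "B", "op_hours": 8760, "failures": 5},
]

def failure_rate(records, **boundaries):
    """Failures per operating hour for the population matching all boundaries."""
    pop = [r for r in records if all(r[k] == v for k, v in boundaries.items())]
    hours = sum(r["op_hours"] for r in pop)
    fails = sum(r["failures"] for r in pop)
    return fails / hours

rate = failure_rate(records, equipment="gauge", manufacturer="A")
print(f"{rate * 1e6:.1f} failures per 10^6 operating hours")  # 228.3 for this data
```

Restricting the population by manufacturer and design parameters is what makes the resulting rate comparable across data sources, which is the feature the thesis notes is missing from the RMS.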
It is recommended that PTC consider developing a failure reporting database to handle their failure event data in an organized way. For this purpose, the failure reporting, analysis, and corrective action system (FRACAS) technique is suggested. Data from the FRACAS database can be used effectively to verify failure modes and failure causes in the failure mode, effects and criticality analysis (FMECA). The Failure Review Board in the FRACAS process includes personnel from mixed disciplines (design, manufacturing, systems, quality, and reliability engineering) as well as leadership (technical or managerial leads), to make sure that a well-rounded discussion takes place for each failure-related issue. The Failure Review Board (FRB) analyzes the failures in terms of time, cost and required corrective actions, and finally management makes decisions on the basis of the identified corrective actions.
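The FRACAS workflow described above can be sketched as a simple data structure plus a review step. The field names, statuses, and example values are assumptions for illustration, not PTC's actual schema:

```python
# Minimal sketch of a FRACAS-style failure record and a Failure Review Board
# step. Statuses and fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class FailureReport:
    report_id: str
    equipment: str
    failure_mode: str        # feeds failure-mode verification in FMECA
    failure_cause: str
    status: str = "open"     # open -> corrective_action -> closed
    corrective_actions: list = field(default_factory=list)

def frb_review(report, action, est_cost, est_days):
    """FRB step: attach a proposed corrective action with its time/cost
    estimate; management then decides on closure based on this record."""
    report.corrective_actions.append(
        {"action": action, "cost": est_cost, "days": est_days}
    )
    report.status = "corrective_action"
    return report

r = FailureReport("FR-001", "downhole gauge", "seal leak", "thermal cycling")
frb_review(r, "requalify seal material", est_cost=12000, est_days=30)
print(r.status, len(r.corrective_actions))
```

Keeping the time/cost estimate on the same record as the failure mode is what lets the FRACAS database later feed both the FMECA verification and management's corrective-action decision.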
Data quality has a high impact on the outcomes of reliability analysis performed through the reliability management system. To achieve good data quality, data collection procedures and process management should be well organized, and it is crucial to perform quality assessment on the collected data. A data mining technique is discussed as part of the suggestions for improving data quality in the RMS database. Once data is stored in the RMS database, a data mining method, data quality mining, can help to assess the quality of the data. This is done by applying a data mining (DM) tool to look for interesting patterns in the data for the purpose of quality assessment. Various data mining models are available on the market, but PTC needs to select the DM model that best suits its business objectives.
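In its simplest form, data quality mining means scanning stored records for suspicious patterns. A minimal sketch, assuming made-up RMS rows and illustrative rules (missing fields, impossible values, duplicate identifiers):

```python
# Sketch of rule-based data quality mining over stored RMS records.
# Rows, field names and rules are illustrative assumptions.
from collections import Counter

rows = [
    {"id": 1, "failure_mode": "leak", "op_hours": 5000},
    {"id": 2, "failure_mode": "",     "op_hours": 5000},   # missing mode
    {"id": 3, "failure_mode": "leak", "op_hours": -10},    # impossible value
    {"id": 1, "failure_mode": "leak", "op_hours": 5000},   # duplicate id
]

issues = []
for r in rows:
    if not r["failure_mode"]:
        issues.append((r["id"], "missing failure_mode"))
    if r["op_hours"] < 0:
        issues.append((r["id"], "negative op_hours"))
for rec_id, n in Counter(r["id"] for r in rows).items():
    if n > 1:
        issues.append((rec_id, "duplicate record id"))

print(issues)
```

A real DM tool would learn such patterns rather than hard-code them, but the output is the same kind of artifact: a quality report flagging records that would distort a failure-rate analysis.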
The RMS database is hard-wired, so it is difficult to change its features and database structure. However, if PTC emphasizes improving failure reporting procedures and the data quality in data sources located within the company, this will directly and positively affect the data quality in the RMS and the results of data analysis in the RMS. This, in turn, can improve its decision-making process regarding new product development and the redesign of existing products.
THE ROLE OF THE SEMANTIC WEB IN STRUCTURING ORGANIZATIONAL KNOWLEDGE
The present paper is a component of an exploratory research project focused on discovering new ways to build, organize and consolidate organizational memory for an economic entity by means of the new "Semantic Web" technologies. Keywords: organizational memory, organizational knowledge, semantic web, knowledge management
Process Mining Concepts for Discovering User Behavioral Patterns in Instrumented Software
Process Mining is a technique for discovering "in-use" processes from traces emitted to event logs. Researchers have recently explored applying this technique to documenting processes discovered in software applications. However, the requirements for emitting events to support Process Mining against software applications have not been well documented. Furthermore, the linking of end-user intentional behavior to software quality, as demonstrated in the discovered processes, has not been well articulated. After evaluating the literature, this thesis suggests focusing on user goals and actual, in-use processes as an input to an Agile software development life cycle in order to improve software quality. It also provides suggestions for instrumenting software applications to support Process Mining techniques.
Integration of decision support systems to improve decision support performance
Decision support systems (DSS) are a well-established research and development area. Traditional isolated, stand-alone DSSs have recently been facing new challenges. In order to improve the performance of DSS to meet these challenges, research has been actively carried out to develop integrated decision support systems (IDSS). This paper reviews the current research efforts with regard to the development of IDSS. The focus of the paper is on the integration aspect of IDSS through multiple perspectives, and the technologies that support this integration. More than 100 papers and software systems are discussed. Current research efforts and the development status of IDSS are explained, compared and classified. In addition, future trends and challenges in integration are outlined. The paper concludes that by addressing integration, better support will be provided to decision makers, with the expectation of both better decisions and improved decision-making processes.