
    Correct and Control Complex IoT Systems: Evaluation of a Classification for System Anomalies

    In practice, there are deficiencies in precise inter-team communication about system anomalies when troubleshooting and postmortem analysis are performed across the different teams operating complex IoT systems. We evaluate the quality in use of an adaptation of IEEE Std. 1044-2009 with the objective of differentiating the handling of fault detection and fault reaction from the handling of a defect and its options for defect correction. We extended the scope of IEEE Std. 1044-2009 from anomalies related to software only to anomalies related to complex IoT systems. To evaluate the quality in use of our classification, a study was conducted at Robert Bosch GmbH. We applied our adaptation to a postmortem analysis of an IoT solution and evaluated the quality in use by conducting interviews with three stakeholders. Our adaptation was applied effectively, and inter-team communication as well as iterative and inductive learning for product improvement were enhanced. Further training and practice are required. Comment: Submitted to QRS 2020 (IEEE Conference on Software Quality, Reliability and Security)
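The core distinction the abstract describes can be sketched in code: keeping the run-time record of a fault (how it was detected and how the system reacted) separate from the record of the underlying defect (the flaw itself and the options for correcting it), so that different teams can talk about each in its own terms. The class and field names below are illustrative assumptions, not taken from IEEE Std. 1044-2009 or the paper.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical sketch: a fault record (run-time detection and reaction) is
# modelled separately from a defect record (the underlying flaw and its
# correction options). All names are illustrative, not from the standard.

class FaultReaction(Enum):
    RETRY = "retry"
    FAILOVER = "failover"
    DEGRADE = "degrade gracefully"
    SHUTDOWN = "safe shutdown"

class DefectDisposition(Enum):
    FIX = "correct the defect"
    WORKAROUND = "document a workaround"
    DEFER = "defer to a later release"

@dataclass
class FaultRecord:           # what was observed and how the system reacted
    detected_by: str         # e.g. a watchdog, a monitoring rule
    reaction: FaultReaction

@dataclass
class DefectRecord:          # the underlying flaw, handled by another team
    component: str           # may be hardware, firmware, a cloud service, ...
    disposition: DefectDisposition
    related_faults: list = field(default_factory=list)

# One defect can surface as many faults across the IoT system:
defect = DefectRecord("gateway-firmware", DefectDisposition.FIX)
defect.related_faults.append(FaultRecord("edge watchdog", FaultReaction.FAILOVER))
```

Linking many fault records to one defect record is what would let a postmortem trace repeated symptoms back to a single correction decision.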

    Mechatronics & the cloud

    Conventionally, the engineering design process has assumed that the design team is able to exercise control over all elements of the design, either directly or, in the case of sub-systems, indirectly through their specifications. The introduction of Cyber-Physical Systems (CPS) and the Internet of Things (IoT) means that a design team’s ability to control all elements of a system no longer holds, particularly as the actual system configuration may be dynamically reconfigured in real time according to user (and vendor) context and need. Additionally, the integration of the Internet of Things with elements of Big Data means that information becomes a commodity to be autonomously traded by and between systems, again according to context and need, all of which has implications for the privacy of system users. The paper therefore considers the relationship between mechatronics and cloud-based technologies in relation to issues such as the distribution of functionality and user privacy.

    SensorCloud: Towards the Interdisciplinary Development of a Trustworthy Platform for Globally Interconnected Sensors and Actuators

    Although Cloud Computing promises to lower IT costs and increase users' productivity in everyday life, the unattractive aspect of this new technology is that the user no longer owns all the devices which process personal data. To lower scepticism, the SensorCloud project investigates techniques to understand and compensate for these adoption barriers in a scenario consisting of cloud applications that utilize sensors and actuators placed in private places. This work provides an interdisciplinary overview of the social and technical core research challenges for the trustworthy integration of sensor and actuator devices with the Cloud Computing paradigm. Most importantly, these challenges include i) ease of development, ii) security and privacy, and iii) the social dimensions of a cloud-based system which integrates into private life. When these challenges are tackled in the development of future cloud systems, the attractiveness of new use cases in a sensor-enabled world will be considerably increased for users who currently do not trust the Cloud. Comment: 14 pages, 3 figures, published as a technical report of the Department of Computer Science of RWTH Aachen University

    Enforcing reputation constraints on business process workflows

    The problem of trust in determining the flow of execution of business processes has been at the centre of research interest over the last decade as business processes become a de facto model of Internet-based commerce, particularly with the increasing popularity of Cloud computing. One of the main measures of trust is reputation, where the quality of services as provided to their clients can be used as the main factor in calculating service and service provider reputation values. The work presented here contributes to solving this problem by defining a model for the calculation of service reputation levels in a BPEL-based business workflow. These levels of reputation are then used to control the execution of the workflow based on service-level agreement constraints provided by the users of the workflow. The main contribution of the paper is first to present a formal meaning for BPEL processes, constrained by reputation requirements from the users, and then to demonstrate that these requirements can be enforced using a reference architecture with a case scenario from the domain of distributed map processing. Finally, the paper discusses the possible threats that can be launched on such an architecture.
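The mechanism described above, reputation values computed from client quality ratings and then used as an execution constraint, can be sketched minimally. This is not the paper's formal BPEL semantics: the exponential-moving-average update rule, the function names, and the map-tiling service names are all assumptions for illustration.

```python
# Illustrative sketch: a service's reputation is a moving average of client
# quality ratings in [0, 1], and the workflow engine only invokes services
# whose reputation satisfies the user's SLA threshold. The averaging rule
# and all names are assumptions, not the paper's model.

def update_reputation(current: float, rating: float, weight: float = 0.1) -> float:
    """Blend a new client rating into the running reputation value."""
    return (1 - weight) * current + weight * rating

def select_service(candidates: dict, sla_threshold: float) -> str:
    """Pick the highest-reputation service that meets the SLA constraint."""
    eligible = {name: rep for name, rep in candidates.items()
                if rep >= sla_threshold}
    if not eligible:
        raise RuntimeError("no service meets the reputation constraint")
    return max(eligible, key=eligible.get)

# Example inspired by the paper's distributed map-processing scenario:
services = {"map_tiler_a": 0.92, "map_tiler_b": 0.78, "map_tiler_c": 0.55}
choice = select_service(services, sla_threshold=0.7)
```

Raising `sla_threshold` shrinks the eligible set, which is exactly how a user-supplied constraint would steer workflow execution away from poorly rated providers.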

    Attack-Surface Metrics, OSSTMM and Common Criteria Based Approach to “Composable Security” in Complex Systems

    In recent studies on Complex Systems and Systems-of-Systems theory, a huge effort has been put into coping with behavioral problems, i.e. the possibility of controlling a desired overall or end-to-end behavior by acting on the individual elements that constitute the system itself. This problem is particularly important in “SMART” environments, where the huge number of devices, their significant computational capabilities and their tight interconnection produce a complex architecture for which it is difficult to predict (and control) a desired behavior; furthermore, if the scenario is allowed to evolve dynamically through the modification of both topology and subsystem composition, then the control problem becomes a real challenge. In this perspective, the purpose of this paper is to cope with a specific class of control problems in complex systems, the “composability of security functionalities”, recently introduced by European funded research through the pSHIELD and nSHIELD projects (ARTEMIS-JU programme). In a nutshell, the objective of this research is to define a control framework that, given a target security level for a specific application scenario, is able to i) discover the system elements, ii) quantify the security level of each element as well as its contribution to the security of the overall system, and iii) compute the control action to be applied on such elements to reach the security target. The main innovations proposed by the authors are: i) the definition of a comprehensive methodology to quantify the security of a generic system independently from the technology and the environment and ii) the integration of the derived metrics into a closed-loop scheme that allows real-time control of the system. The solution described in this work starts from the proof-of-concept work performed in the early phase of the pSHIELD research and enriches it through an innovative metric with a sound foundation, able to potentially cope with any kind of application scenario (railways, automotive, manufacturing, ...).
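The three steps named in the abstract (quantify each element's security level, aggregate them into a system-level value, and compute a control action toward a target) admit a minimal closed-loop sketch. The aggregation rule (weakest link dominates) and the proportional adjustment step are assumptions chosen for illustration, not the pSHIELD/nSHIELD metrics.

```python
# Minimal closed-loop sketch of the control framework described above.
# Assumptions: security levels are scalars in [0, 1], the overall system
# level is dominated by its weakest element, and the control action raises
# sub-target elements proportionally to their gap from the target.

def system_security(elements: dict) -> float:
    """Aggregate per-element levels; the weakest element dominates."""
    return min(elements.values())

def control_step(elements: dict, target: float, gain: float = 0.5) -> dict:
    """One control action: raise each below-target element toward the target."""
    return {
        name: level + gain * (target - level) if level < target else level
        for name, level in elements.items()
    }

# Iterate the loop until the measured system level reaches the target:
elements = {"sensor_node": 0.4, "gateway": 0.7, "cloud_backend": 0.9}
while system_security(elements) < 0.8 - 1e-6:
    elements = control_step(elements, target=0.8)
```

The weakest-link aggregation makes the control action concentrate effort on the least secure elements, which matches the abstract's emphasis on each element's contribution to overall system security.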

    Support and Assessment for Fall Emergency Referrals (SAFER 1) trial protocol. Computerised on-scene decision support for emergency ambulance staff to assess and plan care for older people who have fallen: evaluation of costs and benefits using a pragmatic cluster randomised trial

    Background: Many emergency ambulance calls are for older people who have fallen. As half of them are left at home, a community-based response may often be more appropriate than hospital attendance. The SAFER 1 trial will assess the costs and benefits of a new healthcare technology - hand-held computers with computerised clinical decision support (CCDS) software - to help paramedics decide who needs hospital attendance, and who can be safely left at home with referral to community falls services. Methods/Design: Pragmatic cluster randomised trial with a qualitative component. We shall allocate 72 paramedics ('clusters') at random between receiving the intervention and a control group delivering care as usual, of whom we expect 60 to complete the trial. Patients are eligible if they are aged 65 or older, live in the study area but not in residential care, and are attended by a study paramedic following an emergency call for a fall. Seven to 10 days after the index fall we shall offer patients the opportunity to opt out of further follow up. Continuing participants will receive questionnaires after one and 6 months, and we shall monitor their routine clinical data for 6 months. We shall interview 20 of these patients in depth. We shall conduct focus groups or semi-structured interviews with paramedics and other stakeholders. The primary outcome is the interval to the first subsequent reported fall (or death). We shall analyse this and other measures of outcome, process and cost by 'intention to treat'. We shall analyse qualitative data thematically. Discussion: Since the SAFER 1 trial received funding in August 2006, implementation has come to terms with ambulance service reorganisation and a new national electronic patient record in England. In response to these hurdles the research team has adapted the research design, including aspects of the intervention, to meet the needs of the ambulance services. 
In conclusion, this complex emergency care trial will provide rigorous evidence on the clinical and cost effectiveness of CCDS for paramedics in the care of older people who have fallen.

    Enhancing knowledge management in online collaborative learning

    This study aims to explore two crucial aspects of collaborative work and learning: on the one hand, the importance of enabling collaborative learning applications to capture and structure the information generated by group activity and, on the other hand, the extraction of the relevant knowledge in order to provide learners and tutors with efficient awareness, feedback and support as regards group performance and collaboration. To this end, in this paper we first propose a conceptual model for data analysis and management that identifies and classifies the many kinds of indicators that describe collaboration and learning into high-level aspects of collaboration. Then, we provide a computational platform that, as a first step, collects and classifies both the event information generated asynchronously from the users' actions and the labeled dialogues from the synchronous collaboration according to these indicators. This information is then analyzed in subsequent steps to eventually extract and present to participants the relevant knowledge about the collaboration. The ultimate aim of this platform is to efficiently embed information and knowledge into collaborative learning applications. Finally, we suggest a generalization of our approach for use in diverse collaborative learning situations and domains.
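The first step of the platform described above, collecting raw events from group activity and classifying them under high-level collaboration indicators, can be sketched as a simple mapping-and-counting pass. The indicator names and the event-to-indicator mapping below are hypothetical, not the paper's conceptual model.

```python
# Hypothetical sketch: raw user events are classified under high-level
# collaboration indicators and counted. The indicator names and the
# mapping are illustrative assumptions, not the paper's model.

INDICATOR_OF = {
    "post_message": "participation",
    "reply": "interaction",
    "edit_document": "task_performance",
    "vote": "decision_making",
}

def classify_events(events):
    """Count events per indicator, ignoring event types outside the model."""
    counts = {}
    for event_type in events:
        indicator = INDICATOR_OF.get(event_type)
        if indicator is not None:
            counts[indicator] = counts.get(indicator, 0) + 1
    return counts

# One student's asynchronous activity; "login" has no indicator and is dropped:
summary = classify_events(["post_message", "reply", "reply", "login"])
```

Aggregating such counts per learner or per group is what would feed the awareness and feedback views the platform presents to participants and tutors.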

    Project Quality of Offshore Virtual Teams Engaged in Software Requirements Analysis: An Exploratory Comparative Study

    Off-shore software development companies in countries such as India use a global delivery model in which the initial requirements analysis phase of software projects is executed at client locations to leverage frequent and deep interaction between user and developer teams. Subsequent phases such as design, coding and testing are completed at off-shore locations. Emerging trends indicate an increasing interest in off-shoring even the requirements analysis phase using computer-mediated communication. We conducted an exploratory research study involving students from the Management Development Institute (MDI), India and Marquette University (MU), USA to determine the quality of such off-shored requirements analysis projects. Our findings suggest that the project quality of teams engaged in pure off-shore mode is comparable to that of teams engaged in collocated mode. However, the effect of controls such as user project monitoring on the quality of off-shored projects needs to be studied further.