
    Methods of Technical Prognostics Applicable to Embedded Systems

    The main aim of the thesis is to provide a comprehensive overview of technical prognostics, which is applied in condition-based maintenance based on continuous device monitoring and remaining-useful-life estimation, especially in the field of complex equipment and machinery. Technical prognostics is still an evolving discipline with a limited number of real applications, and it is not as well developed as technical diagnostics, which is fairly well mapped and deployed in real systems. The thesis provides an overview of the basic methods applicable to predicting remaining useful life, together with metrics that allow the different approaches to be compared both in terms of accuracy and in terms of computational/deployment cost. One of the research cores consists of recommendations and a guide for selecting an appropriate prognostic method with regard to prognostic criteria. The second research core describes the particle-filtering framework suitable for model-based prognostics and its applicability, with verification of the implementations and a comparison. The main research topic of the thesis is a case study on the very topical subject of Li-Ion battery health monitoring and prognostics with respect to continuous monitoring. The case study demonstrates the model-based prognostic process and compares possible approaches for estimating both the runtime before discharge and the capacity fade. The proposed methodology is verified on real measured data.
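The particle-filtering approach mentioned above can be illustrated with a minimal bootstrap filter tracking battery capacity fade. This is a hedged sketch under illustrative assumptions: the linear fade model, every numeric parameter, and the end-of-life threshold below are invented for the example and are not taken from the thesis.

```python
import math
import random

N = 500            # number of particles
FADE = 0.001       # assumed mean capacity loss per charge cycle
PROC_STD = 0.0005  # process noise on the degradation step
MEAS_STD = 0.01    # measurement noise on observed capacity
EOL = 0.8          # end-of-life threshold (fraction of nominal capacity)

def pf_step(particles, measured_capacity):
    """One predict / update / resample cycle of the bootstrap filter."""
    # Predict: propagate each particle through the degradation model.
    predicted = [p - random.gauss(FADE, PROC_STD) for p in particles]
    # Update: weight particles by the likelihood of the measurement.
    weights = [math.exp(-0.5 * ((measured_capacity - p) / MEAS_STD) ** 2)
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new, equally weighted set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(predicted))

def remaining_cycles(particles):
    """Extrapolate the mean state to the EOL threshold under the assumed model."""
    mean = sum(particles) / len(particles)
    return max(0.0, (mean - EOL) / FADE)

random.seed(0)
particles = [1.0] * N           # start at full nominal capacity
for cycle in range(1, 51):      # simulate 50 charge cycles
    truth = 1.0 - 0.001 * cycle  # hypothetical true capacity fade
    particles = pf_step(particles, truth + random.gauss(0, MEAS_STD))
print(round(remaining_cycles(particles)))  # estimated cycles until EOL
```

The same predict/update/resample loop generalizes to nonlinear, non-Gaussian degradation models, which is what makes the framework attractive for model-based prognostics.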

    Searching Data: A Review of Observational Data Retrieval Practices in Selected Disciplines

    A cross-disciplinary examination of the user behaviours involved in seeking and evaluating data is surprisingly absent from the research data discussion. This review explores the data retrieval literature to identify commonalities in how users search for and evaluate observational research data. Two analytical frameworks, rooted in information retrieval and science and technology studies, are used to identify key similarities in practices as a first step toward developing a model describing data retrieval.

    Dynamic instrumentation in Kieker using runtime bytecode modification

    Software systems need constant quality assurance - this holds true in the development phase as well as the production phase. One aspect of quality is the performance of specific software modules. Kieker provides a framework to measure and diagnose runtime information of instrumented software methods. In its current state, Kieker only allows inserting probes before application start. This thesis proposes an alternative concept that extends Kieker's instrumentation functionality by allowing probes to be inserted during runtime. This is done using a technology known as Bytecode Instrumentation (BCI), which makes it possible to change the binary code of classes during execution; the software is thus "reprogrammed" at runtime to provide the measurement logic. The approach is carried over from another monitoring framework, AIM (Adaptable Instrumentation and Monitoring), which already features an established implementation of this technology. Hence, this thesis aims to combine the benefits of both frameworks. The alternative concept is compared against Kieker's traditional way of performance measurement by means of an experimental evaluation, which investigates the impact on (1) overhead, (2) turnaround time, and (3) reliability in terms of lost transactions. The results show a reduction of overhead, unfortunately at the cost of turnaround time. Reliability also drops due to an increase in lost transactions.
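Kieker and AIM implement the probe insertion by rewriting JVM bytecode, but the core idea - wrapping a method with measurement logic while the program is running, and removing the probe again later - can be sketched language-neutrally. The following Python sketch uses runtime function replacement as a stand-in for BCI; all names are illustrative and none are Kieker APIs.

```python
import functools
import time

records = []  # collected monitoring records: (method name, duration in seconds)

def instrument(cls, method_name):
    """Replace cls.method_name with a timing probe around the original."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def probe(*args, **kwargs):
        start = time.perf_counter()
        try:
            return original(*args, **kwargs)
        finally:
            records.append((method_name, time.perf_counter() - start))

    setattr(cls, method_name, probe)

def uninstrument(cls, method_name):
    """Restore the original method, removing the probe at runtime."""
    setattr(cls, method_name, getattr(cls, method_name).__wrapped__)

class Service:
    def work(self):
        return sum(range(1000))

svc = Service()
instrument(Service, "work")   # probe inserted while the system is "live"
svc.work()                    # this call is measured
uninstrument(Service, "work")
svc.work()                    # this call is no longer measured
print(len(records))           # -> 1
```

Real BCI operates one level lower (on class files loaded into the JVM), but the lifecycle shown here - attach probe, collect records, detach probe, all without restarting the application - is exactly the capability the thesis adds to Kieker.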

    A comparative evaluation of dynamic visualisation tools

    Despite their potential applications in software comprehension, it appears that dynamic visualisation tools are seldom used outside the research laboratory. This paper presents an empirical evaluation of five dynamic visualisation tools - AVID, Jinsight, jRMTool, Together ControlCenter diagrams and Together ControlCenter debugger. The tools were evaluated on a number of general software comprehension and specific reverse engineering tasks using the HotDraw object-oriented framework. The tasks considered typical comprehension issues, including identification of software structure and behaviour, design pattern extraction, extensibility potential, maintenance issues, functionality location, and runtime load. The results revealed that the level of abstraction employed by a tool affects its success in different tasks, and that tools were more successful in addressing specific reverse engineering tasks than general software comprehension activities. It was found that no one tool performs well in all tasks, and some tasks were beyond the capabilities of all five tools. This paper concludes with suggestions for improving the efficacy of such tools.

    Towards evaluation design for smart city development

    Smart city developments integrate digital, human, and physical systems in the built environment. With growing urbanization and widespread developments, identifying suitable evaluation methodologies is important. Case-study research across five UK cities - Birmingham, Bristol, Manchester, Milton Keynes and Peterborough - revealed that city evaluation approaches were principally project-focused, with city-level evaluation plans at early stages. Key challenges centred on selecting suitable evaluation methodologies to evidence urban value and outcomes while addressing city authority requirements. Recommendations for evaluation design draw on urban studies and measurement frameworks, capitalizing on big data opportunities and developing appropriate, valid, credible integrative approaches across projects, programmes and city-level developments.

    Runtime Enforcement for Component-Based Systems

    Runtime enforcement is an increasingly popular and effective dynamic validation technique that aims to ensure the correct runtime behavior (w.r.t. a formal specification) of systems using a so-called enforcement monitor. In this paper we introduce runtime enforcement of specifications on component-based systems (CBS) modeled in the BIP (Behavior, Interaction and Priority) framework. BIP is a powerful and expressive component-based framework for the formal construction of heterogeneous systems. However, because of BIP's expressiveness, it remains difficult to enforce complex behavioral properties at design time. First, we propose a theoretical runtime enforcement framework for CBS in which we delineate a hierarchy of sets of enforceable properties (i.e., properties that can be enforced) according to the number of observational steps a system is allowed to deviate from the property (the notion of k-step enforceability). To ensure observational equivalence between the correct executions of the initial system and the monitored system, we show that (i) only stutter-invariant properties should be enforced on CBS with our monitors, and (ii) safety properties are 1-step enforceable. Given an abstract enforcement monitor (as a finite-state machine) for some 1-step enforceable specification, we formally instrument (at relevant locations) a given BIP system to integrate the monitor. At runtime, the monitor observes and automatically avoids any error in the behavior of the system w.r.t. the specification. Our approach is fully implemented in an available tool that we used to (i) avoid deadlock occurrences on a dining philosophers benchmark, and (ii) ensure the correct placement of robots on a map.
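As an illustration of the enforcement-monitor idea (a toy sketch, not the paper's BIP implementation), a safety property can be enforced by a finite-state machine that suppresses any event whose execution would violate the property. The property, states, and event names below are invented for the example.

```python
# Safety property (illustrative): "a resource is never released before it is
# acquired, and never acquired twice in a row". Missing transitions are
# violations.
TRANSITIONS = {
    ("free", "acquire"): "held",
    ("held", "release"): "free",
}

class EnforcementMonitor:
    def __init__(self, initial="free"):
        self.state = initial

    def allow(self, event):
        """Return True and advance iff the event keeps the run safe.

        Suppressing a bad event here mirrors 1-step enforcement of a
        safety property: the monitor deviates from the system by at most
        one observational step."""
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            return False          # suppress the violating step
        self.state = nxt
        return True

monitor = EnforcementMonitor()
trace = ["acquire", "acquire", "release", "release", "acquire"]
executed = [e for e in trace if monitor.allow(e)]
print(executed)  # -> ['acquire', 'release', 'acquire']
```

The filtered trace satisfies the property by construction; the paper's contribution is doing this for expressive BIP models, where the monitor must be woven into component interactions rather than a flat event stream.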

    Geo-cultural influences and critical factors in inter-firm collaboration

    Get PDF
    Inter-firm collaboration and other forms of inter-organisational activity are increasingly the means by which technological innovation occurs. This paper draws on evidence from two studies of the same set of firms to examine the conduct of collaborations over time across different contexts. The purpose is to examine the critical factors associated with successful collaboration and explore the importance of the geo-cultural context in understanding the conduct of inter-firm collaboration. The conceptual framework draws on two main sources: Storper's concept of 'conventions' of identity and participation, and Lorenz's classification of different types of knowledge. These are used to indicate the kinds and sources of adjustments required for successful collaboration.

    COST Action IC 1402 ArVI: Runtime Verification Beyond Monitoring -- Activity Report of Working Group 1

    This report presents the activities of the first working group of the COST Action ArVI, Runtime Verification beyond Monitoring. The report aims to provide an overview of some of the major core aspects involved in Runtime Verification, the field of research dedicated to the analysis of system executions; it is often seen as a discipline that studies how a system run satisfies or violates correctness properties. The report presents a taxonomy of Runtime Verification (RV) and the terminology involved with the main concepts of the field. It also develops the concept of instrumentation, the various ways to instrument systems, and the fundamental role of instrumentation in designing an RV framework. We also discuss how RV interplays with other verification techniques such as model checking, deductive verification, model learning, testing, and runtime assertion checking. Finally, we propose challenges in monitoring quantitative and statistical data beyond detecting property violations.
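A minimal illustration of the monitoring concept described above: an RV monitor consumes the event trace emitted by an instrumented system and produces a verdict. The property and event names are assumptions made for this example, not taken from the report.

```python
def monitor(trace):
    """Check the safety property 'every response is preceded by a matching
    request'. Return the index of the first violating event, or -1 if the
    (finite) trace satisfies the property."""
    pending = set()
    for i, (kind, ident) in enumerate(trace):
        if kind == "request":
            pending.add(ident)        # remember the outstanding request
        elif kind == "response":
            if ident not in pending:
                return i              # violation detected at runtime
            pending.remove(ident)
    return -1

ok = [("request", 1), ("response", 1), ("request", 2)]
bad = [("request", 1), ("response", 2)]
print(monitor(ok), monitor(bad))  # -> -1 1
```

In a deployed RV framework the trace would not be a list but a stream of events produced by instrumentation, and the monitor would emit its verdict online, which is precisely the instrumentation/monitor split the report's taxonomy describes.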
    • 
