
    Integration and coordination in a cognitive vision system

    In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistant technology provides an ideal test bed for complex computer vision systems including real-time components, human-computer interaction, dynamic 3-D environments, and information retrieval aspects. In our scenario, the user wears an augmented reality device that supports her/him in everyday tasks by presenting information triggered by perceptual and contextual cues. The system integrates a wide variety of visual functions, including localization, object tracking and recognition, action recognition, and interactive object learning. We show how different kinds of system behavior are realized using the Active Memory Infrastructure, which provides the technical basis for distributed computation and a data- and event-driven integration approach.
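
    As an illustration of the data- and event-driven integration style described in this abstract, the following is a minimal, self-contained sketch of an event-notifying shared memory. The ActiveMemory class, event names, and payloads are invented for illustration; this is not the actual Active Memory Infrastructure API.

    # Hedged sketch: components coordinate only through a shared memory that
    # notifies subscribers when new entries arrive (data- and event-driven).
    from collections import defaultdict

    class ActiveMemory:
        """Shared memory that notifies subscribers on insertion (illustrative)."""
        def __init__(self):
            self.entries = []
            self.subscribers = defaultdict(list)   # event type -> callbacks

        def subscribe(self, event_type, callback):
            self.subscribers[event_type].append(callback)

        def insert(self, event_type, payload):
            self.entries.append((event_type, payload))
            for callback in self.subscribers[event_type]:
                callback(payload)                   # event-driven coordination

    # Two hypothetical components: a recognizer produces events, the AR display reacts.
    memory = ActiveMemory()
    memory.subscribe("object.recognized",
                     lambda obj: print(f"AR display: show label for {obj['name']}"))
    memory.insert("object.recognized", {"name": "coffee mug", "pose": (0.3, 1.2, 0.8)})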

    A methodology for software performance modeling and its application to a border inspection system

    It is essential that software systems meet their performance objectives. Many factors affect software performance, and it is fundamental to identify those factors and the magnitude of their effects early in the software lifecycle, in order to avoid costly and extensive changes to software design, implementation, or requirements. In the last decade, the development of techniques and methodologies for performance analysis in the early stages of the software lifecycle has gained a lot of attention within the research community. Different approaches to evaluating software performance have been developed, each characterized by a particular software specification and performance modeling notation. In this thesis we present a methodology for predictive performance modeling and analysis of software systems. We use the Unified Modeling Language (UML) as the software modeling notation and Layered Queuing Networks (LQN) as the performance modeling notation. Our focus is on the definition of a UML-to-LQN transformation. We extend existing approaches by applying the transformation to a different set of UML diagrams, and we propose a few extensions to the current UML Profile for Schedulability, Performance, and Time, which we use to annotate UML diagrams with performance-related information. We test the applicability of our methodology on the performance evaluation of a complex software system used at border entry ports to grant or deny access to incoming travelers.
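
    To make the target notation concrete, the following is a minimal sketch of the kind of layered model a UML-to-LQN transformation might emit: tasks with host demands and synchronous calls into lower layers. The task names, service demands, and the contention-free demand calculation are assumptions for illustration, not the thesis' transformation or a real LQN solver.

    # Hedged sketch of a toy layered model (illustrative, not an LQN tool).
    class Task:
        def __init__(self, name, host_demand_ms, calls=None):
            self.name = name
            self.host_demand_ms = host_demand_ms
            self.calls = calls or []                # list of (callee Task, mean calls)

        def total_demand_ms(self):
            """Contention-free end-to-end demand: own demand plus called layers."""
            return self.host_demand_ms + sum(
                n * callee.total_demand_ms() for callee, n in self.calls)

    # Hypothetical layers of a border-inspection-like system (numbers invented).
    database   = Task("TravelerDB",      host_demand_ms=12)
    webservice = Task("InspectionLogic", host_demand_ms=8,  calls=[(database, 2)])
    kiosk      = Task("BorderKiosk",     host_demand_ms=5,  calls=[(webservice, 1)])

    print(f"Lower bound on response time: {kiosk.total_demand_ms()} ms")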

    Independent verification of specification models for large software systems at the early phases of development lifecycle

    One of the major challenges facing the software industry in general, and IV&V (Independent Verification and Validation) analysts in particular, is to find ways of analyzing the dynamic behavior of requirement specifications of large software systems early in the development lifecycle. Such analysis can significantly improve the performance and reliability of the developed systems. This dissertation addresses the problem of developing an IV&V framework for extracting the semantics of dynamic behavior from requirement specifications based on (1) SART (Structured Analysis with Real-Time) models and (2) UML (Unified Modeling Language) models. For SART, the framework presented here defines a direct mapping from SART specification models to CPN (Colored Petri Net) models. The semantics of the SART hierarchy at the individual levels are preserved in the mapping, which makes it easy for the analyst to perform the analysis and trace back to the corresponding SART model. CPN was selected because it supports rigorous dynamic analysis. A large-scale case study based on a component of the NASA EOS system was performed as a proof of concept. For UML specifications, an approach based on metamodels is presented. A special type of metamodel, called a dynamic metamodel (DMM), is introduced. This approach holds several advantages over a direct mapping of UML to CPN: the mapping rules for generating the DMM are not CPN-specific, so they would not change if a language other than CPN were used, and DMM development is more flexible because other types of models can be added to the existing UML models. A simple example of a pacemaker is used to illustrate the concepts of the DMM.
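
    To illustrate the kind of executable semantics a Petri-net mapping provides, the following is a minimal sketch of a plain place/transition net executing a two-state specification. The places, transitions, and pacemaker-like behavior are invented for illustration; this is an ordinary token game, not Colored Petri Nets, and it does not reproduce the dissertation's mapping rules.

    # Hedged sketch: a two-place net with a marking and guarded transition firing.
    places = {"Sensing": 1, "Pacing": 0}            # marking: tokens per place

    transitions = {
        "beat_missed":   ({"Sensing": 1}, {"Pacing": 1}),   # (consume, produce)
        "beat_detected": ({"Pacing": 1},  {"Sensing": 1}),
    }

    def fire(name):
        """Fire a transition if enabled; return True on success."""
        consume, produce = transitions[name]
        if all(places[p] >= n for p, n in consume.items()):
            for p, n in consume.items():
                places[p] -= n
            for p, n in produce.items():
                places[p] += n
            return True
        return False

    fire("beat_missed")
    print(places)   # {'Sensing': 0, 'Pacing': 1}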

    Big continuous data: dealing with velocity by composing event streams

    The rate at which we produce data is growing steadily, creating ever larger streams of continuously evolving data. Online news, micro-blogs, and search queries are just a few examples of these continuous streams of user activities. The value of these streams lies in their freshness and their relatedness to ongoing events. Modern applications consuming these streams need to extract behaviour patterns that can be obtained by aggregating and mining, statically and dynamically, huge event histories. An event is the notification that a happening of interest has occurred. Event streams must be combined or aggregated to produce more meaningful information: by combining and aggregating them, either from multiple producers or from a single one during a given period of time, a limited set of events describing meaningful situations can be notified to consumers. With their volume and continuous production, event streams relate mainly to two of the characteristics given to Big Data by the 5V’s model: volume and velocity. Techniques such as complex pattern detection, event correlation, event aggregation, event mining, and stream processing have been used for composing events. Nevertheless, to the best of our knowledge, few approaches integrate different composition techniques (online and post-mortem) for dealing with Big Data velocity. This chapter gives an analytical overview of event stream processing and composition approaches: complex event languages, services, and event querying systems on distributed logs. Our analysis underlines the challenges introduced by Big Data velocity and volume and uses them as a reference for identifying the scope and limitations of results stemming from different disciplines: networks, distributed systems, stream databases, event composition services, and data mining on traces.
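
    To illustrate one of the composition techniques listed above, the following is a minimal sketch of online aggregation of an event stream over tumbling time windows. The event schema, producer names, and window length are assumptions for illustration, not a specific system's API.

    # Hedged sketch: count events per producer within fixed (tumbling) time windows.
    from collections import Counter

    WINDOW_S = 60   # tumbling window length in seconds (assumed)

    def aggregate(events):
        """Group (timestamp, producer) events into per-window counts per producer."""
        windows = {}
        for ts, producer in events:
            window_start = ts - (ts % WINDOW_S)
            windows.setdefault(window_start, Counter())[producer] += 1
        return windows

    # Hypothetical stream of user-activity notifications.
    stream = [(0, "news"), (12, "blog"), (61, "news"), (75, "news"), (119, "blog")]
    for start, counts in sorted(aggregate(stream).items()):
        print(f"[{start}s, {start + WINDOW_S}s): {dict(counts)}")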