
    Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies

    In motion analysis and understanding, it is important to be able to fit a suitable model or structure to a temporal series of observed data, in order to describe motion patterns in a compact way and to discriminate between them. In an unsupervised context, i.e., when no prior model of the moving object(s) is available, such a structure has to be learned from the data in a bottom-up fashion. In recent times, volumetric approaches, in which the motion is captured from a number of cameras and a voxel-set representation of the body is built from the camera views, have gained ground due to attractive features such as inherent view-invariance and robustness to occlusions. Automatic, unsupervised segmentation of moving bodies along entire sequences, in a temporally coherent and robust way, has the potential to provide a means of constructing a bottom-up model of the moving body and of tracking motion cues that may later be exploited for motion classification. Spectral methods such as locally linear embedding (LLE) can be useful in this context, as they preserve "protrusions", i.e., high-curvature regions of the 3D volume of articulated shapes, while improving their separation in a lower-dimensional space, making them easier to cluster. In this paper we therefore propose a spectral approach to unsupervised, temporally coherent body-protrusion segmentation along time sequences. Volumetric shapes are clustered in an embedding space, clusters are propagated in time to ensure coherence, and merged or split to accommodate changes in the body's topology. Experiments on both synthetic and real sequences of dense voxel-set data support the ability of the proposed method to cluster body parts consistently over time in a totally unsupervised fashion, its robustness to sampling density and shape quality, and its potential for bottom-up model construction. (Comment: 31 pages, 26 figures.)
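
    As a rough sketch of the core idea only (the paper's contribution additionally includes temporal cluster propagation and merge/split handling, which is omitted here), the following embeds a single frame's voxel cloud with LLE and clusters it in the embedding space. The toy data and all parameter values are assumptions for illustration, not the paper's settings.

```python
# Minimal sketch (not the authors' full pipeline): embed a voxel point cloud
# with locally linear embedding, then cluster in the embedding space, where
# protrusions (limbs, head) separate more cleanly than in Euclidean space.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

def segment_protrusions(voxels, n_parts=5, n_neighbors=10, n_components=3):
    """voxels: (N, 3) array of occupied voxel centers for one frame."""
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                 n_components=n_components)
    embedded = lle.fit_transform(voxels)      # (N, n_components) embedding
    labels = KMeans(n_clusters=n_parts, n_init=10).fit_predict(embedded)
    return labels                             # per-voxel body-part label

# Toy "body": three Gaussian blobs standing in for a trunk and two limbs.
rng = np.random.default_rng(0)
blobs = np.concatenate([rng.normal(c, 0.3, (200, 3))
                        for c in ([0, 0, 0], [2, 0, 0], [0, 2, 0])])
print(segment_protrusions(blobs, n_parts=3)[:10])
```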

    Stochastic models for quality of service of component connectors

    The intensifying need for scalable software has motivated modular development and the use of systems distributed over networks to implement large-scale applications. In service-oriented computing, distributed services are composed to provide large-scale services with a specific functionality; in this way, the reusability of existing services can be increased. However, due to the heterogeneity of distributed software systems, software composition is not easy and requires additional mechanisms to impose some form of coordination on a distributed software system. Besides functional correctness, a composed service must satisfy various quantitative requirements for its clients, generically called its quality of service (QoS). In particular, it is difficult to obtain the overall QoS of a composed service even when the QoS information of its constituent distributed services is given. In this thesis, we propose Stochastic Reo to specify software composition with QoS aspects, together with compositional semantic models. These models also serve as intermediate models from which corresponding stochastic models are generated for practical analysis. Based on this, we have implemented the tool Reo2MC. Using Reo2MC, we have modeled and analyzed an industrial software system, the ASK system. The analysis results provided the most cost-effective resource utilization and some suggestions to improve the performance of the system.
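
    For illustration, here is a minimal sketch, with an invented three-state model and assumed rates (it is not Reo2MC output), of the kind of CTMC analysis that such generated stochastic models support: build the generator matrix Q and solve pi Q = 0 for the steady-state distribution.

```python
# Hedged sketch: steady-state analysis of a tiny CTMC for two services
# composed in sequence. States and rates are invented for the example.
import numpy as np

# States: 0 = idle, 1 = service A busy, 2 = service B busy.
lam, mu_a, mu_b = 2.0, 3.0, 4.0   # assumed request arrival / processing rates
Q = np.array([[-lam,   lam,   0.0],    # idle --(request)--> A busy
              [ 0.0, -mu_a,  mu_a],    # A finishes, hands off to B
              [mu_b,   0.0, -mu_b]])   # B finishes, back to idle

# Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
# with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(dict(zip(["idle", "A busy", "B busy"], pi.round(4))))
```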

    Stochastic Reo: a case study

    QoS analysis of coordinated distributed autonomous services is currently of interest in the area of service-oriented computing and calls for new technologies and supporting tools. In previous work, the first three authors proposed a compositional automata model to provide semantics for Stochastic Reo, a channel-based coordination language that supports the specification of QoS values (such as request arrival or processing rates). Furthermore, translations from this automata model into stochastic models, such as continuous-time Markov chains (CTMCs) and interactive Markov chains (IMCs), have also been presented. Based on those results, we describe in this paper a case study of QoS analysis. We analyze a certain instance of the ASK system, an industrial software system for connecting people offering professional services to clients requiring those services. We develop a model of the ASK system using Stochastic Reo. The distributions used in this model were obtained by applying statistical analysis techniques to raw values taken from the logs of an actual running ASK system. These distributions feed the CTMC model derived for the ASK system, which is used to analyze and improve the performance of the system under the assumption that the distributions are exponential. In practice, this is not always the case. Thus, we also carry out a simulation-based analysis with a Reo simulator that can deal with non-exponential distributions. Compared to the analysis of the derived CTMC model, the simulation is approximate, but it reveals valuable insight into the behavior of the system. The outcome of both analyses helps both the developers and the deployers of the ASK system to improve its performance.
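
    As a minimal sketch of the simulation side (not the paper's Reo simulator itself), the following estimates the mean queueing delay at a single server with exponential arrivals and non-exponential (lognormal) service times via Lindley's recursion; all distribution parameters are assumptions chosen only so the queue is stable.

```python
# Hedged sketch: discrete-event estimate of mean waiting time in a
# single-server queue where service times are lognormal, i.e. the kind of
# non-exponential case that falls outside a CTMC analysis.
import random

def mean_wait(n=100_000, arrival_rate=1.0, seed=42):
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        total += wait
        service = rng.lognormvariate(-0.5, 0.8)         # assumed parameters
        interarrival = rng.expovariate(arrival_rate)
        wait = max(0.0, wait + service - interarrival)  # Lindley recursion
    return total / n

print(f"estimated mean queueing delay: {mean_wait():.3f}")
```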

    Third Dutch model checking day, Eindhoven, November 7, 2001 : proceedings

    This report contains the preliminary proceedings of the third Dutch Model Checking Day, held on 7 November 2001 at the Technische Universiteit Eindhoven. Model checking is an automatic technique for verifying hardware and software systems. The advance of research in this area over the past few years has led to a significant improvement of model checking tools. Successful applications of model checking have been reported in the verification of a wide variety of systems, like complex sequential circuit designs and communication protocols. Important evidence of the great practical potential of model checking is the development of in-house model checking tools within major companies from the information and telecommunication industry. The objective of the Model Checking Day was to bring together researchers and practitioners from academia and industry who are interested in model checking. The presentations featured both practical and theoretical advances in the area, including new techniques and methodologies, as well as experience with their application in various areas, such as embedded systems, communication protocols, hardware components, production processes, etc. Besides this, the Model Checking Day provided an opportunity to exchange experiences and to discuss new ideas and the latest developments in the area. These proceedings contain contributions related to the presentations on this day; details are given in the table of contents. The Model Checking Day received generous support from the Formal Methods Group of the Technische Universiteit Eindhoven and the research school IPA (Institute for Programming research and Algorithmics). At this point I would like to thank the members of the program committee, Dragan Bosnacki (TU/e Computer Science), Leszek Holenderski (Philips Research) and Jeroen Voeten (TU/e Electrical Engineering), and the secretary, Elize Russell (TU/e Computer Science), for all their work.

    Representation Learning: A Review and New Perspectives

    The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide, to varying degrees, the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms that implement such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, about computing representations (i.e., inference), and about the geometrical connections between representation learning, density estimation, and manifold learning.
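
    As a toy illustration of one of the surveyed ideas (not code from the paper): a linear auto-encoder learns a compressed representation purely by minimizing reconstruction error. The synthetic data, dimensions, and learning rate below are all invented for the example.

```python
# Toy sketch: a linear auto-encoder trained by gradient descent recovers a
# 2-D code for data that lies on a 2-D subspace of R^10, the simplest
# instance of unsupervised representation learning.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))  # data on a 2-D subspace
X -= X.mean(axis=0)

d_in, d_code, lr = 10, 2, 0.01
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))

for step in range(2000):
    code = X @ W_enc                      # encode
    err = code @ W_dec - X                # reconstruction error
    # Gradients of the mean squared reconstruction loss
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

print("final reconstruction MSE:", np.mean((X @ W_enc @ W_dec - X) ** 2))
```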

    Design of testbed and emulation tools

    The research summarized here was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease moving to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.

    Design and Development of Software Tools for Bio-PEPA

    This paper surveys the design of software tools for the Bio-PEPA process algebra. Bio-PEPA is a high-level language for modelling biological systems such as metabolic pathways and other biochemical reaction networks. By providing tools for this modelling language we hope to allow easier use of a range of simulators and model-checkers, thereby freeing the modeller from the responsibility of developing a custom simulator for the problem of interest. Further, by providing mappings to a range of different analysis tools, the Bio-PEPA language allows modellers to compare analysis results computed by independent numerical analysers, which enhances the reliability and robustness of the results.
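
    For flavour only (the snippet below is not Bio-PEPA syntax and not part of its toolchain), here is the kind of stochastic simulation that such backend simulators perform, and that modellers would otherwise hand-write: a minimal Gillespie-style run of a single degradation reaction A -> B with an assumed rate constant.

```python
# Hedged sketch: Gillespie stochastic simulation of the reaction A -> B.
# Each firing happens after an exponential waiting time with rate k * #A.
import random

def gillespie(a0=100, k=0.1, t_end=50.0, seed=1):
    rng = random.Random(seed)
    t, a, trajectory = 0.0, a0, [(0.0, a0)]
    while t < t_end and a > 0:
        propensity = k * a               # rate of the next A -> B firing
        t += rng.expovariate(propensity) # exponential waiting time
        a -= 1                           # one molecule of A converts to B
        trajectory.append((t, a))
    return trajectory

for t, a in gillespie()[::20]:
    print(f"t = {t:6.2f}  A = {a}")
```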

    Automatic specification of reliability models for fault-tolerant computers

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or to repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Model specification is therefore a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the need to automate model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with these characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
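
    As a hedged sketch of the kind of model ARM targets (not ARM's SDL or its output format), the following computes transient state probabilities for a tiny reliability model, an active unit with one standby spare, using assumed failure and repair rates; the probability mass in the absorbing "failed" state at time t gives the unreliability.

```python
# Hedged sketch: transient analysis of a small Markov reliability model.
# p(t) = p(0) * exp(Q t) gives the state probabilities at time t.
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1     # assumed failure and repair rates (per hour)
# States: 0 = both units good, 1 = one good, 2 = system failed (absorbing).
Q = np.array([[-lam,        lam,  0.0],
              [  mu, -(mu + lam), lam],
              [ 0.0,        0.0,  0.0]])

p0 = np.array([1.0, 0.0, 0.0])   # start with both units operational
for t in (100.0, 1000.0, 10000.0):
    p_t = p0 @ expm(Q * t)       # transient state probabilities
    print(f"t = {t:7.0f} h  P(failed) = {p_t[2]:.6f}  "
          f"reliability = {1 - p_t[2]:.6f}")
```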