
    A Feature Computation Tree Model to Specify Requirements and Reuse

    A large subset of requirements for complex systems, services, and product lines is traditionally specified by hierarchical structures of features. Features are usually gathered and represented in the form of a feature tree. The feature tree is a structural model: it captures mainly composition and specialization relations between features and does not allow requirements to be specified as ordering relations on functional features. Use case scenarios are usually employed to specify such ordering relations. However, use case scenarios comprise isolated sequences of features, and therefore they may be inconsistent and may even contradict each other and the feature tree. Moreover, some use case scenarios defining relations on features may be incomplete. To support consistent specification of requirements, we suggest using a pair of related models: a feature tree model and a feature computation tree model. This pair of related models provides the basis for a method of consistency checking of requirements. It introduces a unified view of the system's behavior at the requirements specification stage and facilitates the specification of forbidden sequences and the construction of complete sequences from incomplete ones. It allows designers to specify the desired reuse precisely and to discover that a certain kind of reuse is not possible. Recognizing already at the requirements engineering stage that a subsystem cannot be reused without modification saves development effort and money. The proposed method and models are explained using a case study of the design of a system for electronic card production.
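    The following sketch illustrates, in Python, the kind of pairing the abstract describes: a structural feature tree alongside a computation tree whose root-to-leaf paths are the allowed feature orderings, with a simple consistency check for use-case scenarios. The classes, the scenario check, and the card-production feature names are illustrative assumptions, not the authors' formal model.

```python
# Illustrative sketch only: a feature tree paired with a feature computation
# tree, used to check use-case scenarios for consistency. Names are assumed.

class FeatureNode:
    """Node of the structural feature tree (composition/specialization)."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def all_features(self):
        yield self.name
        for child in self.children:
            yield from child.all_features()


class ComputationNode:
    """Node of the feature computation tree: each root-to-leaf path is an
    allowed ordering (sequence) of features."""
    def __init__(self, feature, successors=()):
        self.feature = feature
        self.successors = list(successors)

    def paths(self, prefix=()):
        prefix = prefix + (self.feature,)
        if not self.successors:
            yield prefix
        for succ in self.successors:
            yield from succ.paths(prefix)


def check_scenario(scenario, feature_tree, computation_tree):
    """Return a list of problems found in a use-case scenario (a sequence of
    feature names): unknown features, or an ordering that is not a prefix of
    any allowed computation path."""
    problems = []
    known = set(feature_tree.all_features())
    for feature in scenario:
        if feature not in known:
            problems.append(f"unknown feature: {feature}")
    allowed = list(computation_tree.paths())
    if not any(path[:len(scenario)] == tuple(scenario) for path in allowed):
        problems.append(f"ordering {scenario} matches no computation path")
    return problems


# Toy card-production example; feature names are made up for illustration.
features = FeatureNode("produce_card", [FeatureNode("personalize"),
                                        FeatureNode("print"),
                                        FeatureNode("verify")])
computations = ComputationNode("personalize",
                               [ComputationNode("print", [ComputationNode("verify")])])

print(check_scenario(["personalize", "print"], features, computations))  # [] -> consistent
print(check_scenario(["print", "personalize"], features, computations))  # forbidden ordering reported
```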

    Advanced service monitoring configurations with SLA decomposition and selection

    Service Level Agreements (SLAs) for software services aim to clearly identify the service level commitments established between service requesters and providers. The agreed commitments, however, can be expressed in complex notations as combinations of expressions that need to be evaluated and monitored efficiently. Dynamically allocating the responsibility for monitoring SLAs (and often different parts within them) to different monitoring components is necessary, since both the SLAs and the components available to monitor them may change during the operation of a service-based system. In this paper we discuss an approach to supporting this dynamic configuration and, in particular, how SLAs expressed in higher-level notations can be efficiently decomposed and appropriate monitoring components dynamically allocated for each part of the agreements. The approach is illustrated with mechanical support in the form of a configuration service that can be incorporated into SLA-based service monitoring infrastructures.
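    A minimal Python sketch of the configuration idea described in the abstract: an SLA expressed as a tree of expressions is decomposed into atomic terms, and each term is allocated to a monitoring component that declares it can observe the corresponding metric. The data model, the capability sets, and the allocation strategy are assumptions made for illustration; they are not the paper's configuration service API.

```python
# Illustrative sketch only: decompose an SLA expression tree into atomic
# terms and allocate each term to a capable monitoring component.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class SlaTerm:
    """Either an atomic term (e.g. 'availability >= 99.9') or a boolean
    combination of sub-terms."""
    expression: str
    operator: Optional[str] = None          # e.g. "AND", "OR"; None for atomic terms
    children: List["SlaTerm"] = field(default_factory=list)

    def atomic_terms(self):
        if not self.children:
            yield self
        for child in self.children:
            yield from child.atomic_terms()


@dataclass
class MonitorComponent:
    name: str
    capabilities: set                        # metrics this component can observe


def configure_monitoring(sla: SlaTerm, monitors: List[MonitorComponent]) -> Dict[str, str]:
    """Allocate each atomic SLA term to the first monitor able to observe the
    metric it refers to; raise if no capable monitor is available."""
    allocation = {}
    for term in sla.atomic_terms():
        metric = term.expression.split()[0]  # crude assumption: metric name is the first token
        capable = next((m for m in monitors if metric in m.capabilities), None)
        if capable is None:
            raise ValueError(f"no monitor available for term: {term.expression}")
        allocation[term.expression] = capable.name
    return allocation


sla = SlaTerm("top", "AND", [SlaTerm("availability >= 99.9"),
                             SlaTerm("response_time <= 200ms")])
monitors = [MonitorComponent("probe-1", {"availability"}),
            MonitorComponent("probe-2", {"response_time", "throughput"})]
print(configure_monitoring(sla, monitors))
# {'availability >= 99.9': 'probe-1', 'response_time <= 200ms': 'probe-2'}
```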

    The Swapping Constraint

    Triviality arguments against the computational theory of mind claim that computational implementation is trivial and thus does not serve as an adequate metaphysical basis for mental states. It is common to take computational implementation to consist in a mapping from physical states to abstract computational states. In this paper, I propose a novel constraint on the kinds of physical states that can implement computational states, which helps to specify what it is for two physical states to non-trivially implement the same computational state.

    Improving Software Performance in the Compute Unified Device Architecture

    This paper analyzes several aspects of improving software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit's (GPU's) memory management, illustrated through transpose kernels. We also benchmark and evaluate the performance of progressively optimizing a matrix transpose application in CUDA. One particular interest was to investigate how well the optimization techniques applied to software written in CUDA scale to the latest generation of general-purpose graphics processing units (GPGPUs), such as the Fermi architecture implemented in the GTX480, compared with the previous architecture implemented in the GTX280. Lately, there has been considerable interest in the literature in this type of optimization analysis, but to the best of our knowledge none of the works so far has validated whether the optimizations apply to a GPU of the latest Fermi architecture and how well the Fermi architecture scales with these performance-improving techniques.
    Keywords: Compute Unified Device Architecture, Fermi Architecture, Naive Transpose, Coalesced Transpose, Shared Memory Copy, Loop in Kernel, Loop over Kernel
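    The sketch below contrasts a naive transpose kernel with a tiled, shared-memory transpose whose global reads and writes are both coalesced, which is the kind of optimization the abstract benchmarks. It is written with Numba's CUDA bindings rather than the CUDA C kernels evaluated in the paper; the kernel names, the 32x32 tile size, and the test matrix shape are illustrative choices.

```python
# Illustrative sketch only: naive vs. shared-memory (coalesced) matrix
# transpose using Numba's CUDA bindings. Requires a CUDA-capable GPU.

import numpy as np
from numba import cuda, float32

TILE = 32  # assumed tile/block edge; must divide into the launch configuration below


@cuda.jit
def transpose_naive(src, dst):
    # Reads from src are coalesced, but writes stride through dst,
    # which wastes memory bandwidth.
    x, y = cuda.grid(2)
    if y < src.shape[0] and x < src.shape[1]:
        dst[x, y] = src[y, x]


@cuda.jit
def transpose_tiled(src, dst):
    # Stage a tile in shared memory so both global reads and writes are coalesced.
    tile = cuda.shared.array(shape=(TILE, TILE), dtype=float32)
    tx, ty = cuda.threadIdx.x, cuda.threadIdx.y
    x = cuda.blockIdx.x * TILE + tx
    y = cuda.blockIdx.y * TILE + ty
    if y < src.shape[0] and x < src.shape[1]:
        tile[ty, tx] = src[y, x]
    cuda.syncthreads()
    # Swap block coordinates for the write so consecutive threads
    # write consecutive dst elements.
    x = cuda.blockIdx.y * TILE + tx
    y = cuda.blockIdx.x * TILE + ty
    if y < dst.shape[0] and x < dst.shape[1]:
        dst[y, x] = tile[tx, ty]


if __name__ == "__main__":
    a = np.random.rand(1024, 2048).astype(np.float32)
    d_a = cuda.to_device(a)
    d_out = cuda.device_array((a.shape[1], a.shape[0]), dtype=np.float32)
    grid = ((a.shape[1] + TILE - 1) // TILE, (a.shape[0] + TILE - 1) // TILE)
    transpose_tiled[grid, (TILE, TILE)](d_a, d_out)
    assert np.allclose(d_out.copy_to_host(), a.T)
```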