
    An Empirical Study on Decision making for Quality Requirements

    [Context] Quality requirements are important for product success yet are often handled poorly. Problems with scope decisions lead to delayed handling and an unbalanced scope. [Objective] This study characterizes the scope decision process to understand the factors and properties that influence scope decisions for quality requirements. [Method] We studied one company's scope decision process over a period of five years. We analyzed the decision artifacts and interviewed experienced engineers involved in the scope decision process. [Results] Features that explicitly address quality aspects are a minor part (4.41%) of all features handled. The phase of the product line seems to influence the prevalence and acceptance rate of quality features. Lastly, relying on external stakeholders and upfront analysis seems to lead to long lead times and an insufficient quality requirements scope. [Conclusions] There is a need to make quality more explicit in the scope decision process. We propose a scope decision process at a strategic level and a tactical level: the former to address long-term planning, the latter to cater for a speedy process. Furthermore, we believe it is key to balance stakeholder input with feedback from usage and the market in a more direct way than through a long plan-driven process.

    WSMO-Lite and hRESTS: lightweight semantic annotations for Web services and RESTful APIs

    Service-oriented computing has brought special attention to service description, especially in connection with semantic technologies. The expected proliferation of publicly accessible services can benefit greatly from tool support and automation, both of which are the focus of Semantic Web Service (SWS) frameworks that especially address service discovery, composition, and execution. As the first SWS standard, in 2007 the World Wide Web Consortium produced a lightweight bottom-up specification called SAWSDL for adding semantic annotations to WSDL service descriptions. Building on SAWSDL, this article presents WSMO-Lite, a lightweight ontology of Web service semantics that distinguishes four semantic aspects of services: function, behavior, information model, and nonfunctional properties, which together form a basis for semantic automation. With the WSMO-Lite ontology, SAWSDL descriptions enable semantic automation beyond the simple input/output matchmaking supported by SAWSDL itself. Further, to broaden the reach of WSMO-Lite and SAWSDL tools to the increasingly common RESTful services, the article adds hRESTS and MicroWSMO, two HTML microformats that mirror WSDL and SAWSDL in the documentation of RESTful services, enabling RESTful services to be combined with WSDL-based ones in a single semantic framework. To demonstrate the feasibility and versatility of this approach, the article presents common algorithms for Web service discovery and composition adapted to WSMO-Lite.
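
    The hRESTS microformat makes this concrete: a plain HTML page documenting a RESTful service is annotated with class names such as service, operation, method, and address, so that a machine client can recover a WSDL-like structure from human-readable documentation. Below is a minimal sketch in Python, using only the standard library; the class names follow hRESTS as presented in the article, but the hotel-booking service, its URL, and the parser are illustrative assumptions, not the article's reference implementation.

        # Parse an hRESTS-annotated HTML fragment with the standard library.
        # The hotel-booking service and its URL are hypothetical examples;
        # the class names (service, operation, method, address, label)
        # follow the hRESTS microformat described in the article.
        from html.parser import HTMLParser

        HRESTS_DOC = """
        <div class="service" id="svc">
          <span class="label">HotelBooking</span>
          <div class="operation" id="op1">
            <span class="label">checkAvailability</span>
            <code class="method">GET</code>
            <code class="address">http://example.com/hotels/{id}/availability</code>
          </div>
        </div>
        """

        class HRESTSParser(HTMLParser):
            """Collects hRESTS-classed elements and their text content."""
            INTEREST = {"service", "operation", "method", "address", "label"}

            def __init__(self):
                super().__init__()
                self.stack = []   # hRESTS class (or None) per open element
                self.found = []   # (class, text) pairs in document order

            def handle_starttag(self, tag, attrs):
                classes = (dict(attrs).get("class") or "").split()
                hit = next((c for c in classes if c in self.INTEREST), None)
                self.stack.append(hit)

            def handle_endtag(self, tag):
                if self.stack:
                    self.stack.pop()

            def handle_data(self, data):
                if self.stack and self.stack[-1] and data.strip():
                    self.found.append((self.stack[-1], data.strip()))

        parser = HRESTSParser()
        parser.feed(HRESTS_DOC)
        for cls, text in parser.found:
            print(f"{cls:9s} {text}")
        # label     HotelBooking
        # label     checkAvailability
        # method    GET
        # address   http://example.com/hotels/{id}/availability

    On top of such a structure, MicroWSMO would then attach links from these elements to ontology concepts, mirroring SAWSDL's model references, so the same discovery and composition algorithms can treat RESTful and WSDL-based services uniformly.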

    Intelligent Design

    When designers obtain exclusive intellectual property (IP) rights in the functional aspects of their creations, they can wield these rights to increase both the costs to their competitors and the prices that consumers must pay for their goods. IP rights and the costs they entail are justified when they create incentives for designers to invest in new, socially valuable designs. But the law must be wary of allowing rights to be misused. Accordingly, IP law has employed a series of doctrinal and costly screens to channel designs into the appropriate regime—copyright law, design patent law, or utility patent law—depending upon the type of design. Unfortunately, those screens are no longer working. Designers are able to obtain powerful IP protection over the utilitarian aspects of their creations without demonstrating that they have made socially valuable contributions. They are also able to do so without paying substantial fees that might weed out weaker, socially costly designs. This is bad for competition and bad for consumers. In this Article, we integrate theories of doctrinal and costly screens and explore their roles in channeling IP rights. We explain the inefficiencies that have arisen through the misapplication of these screens in copyright and design patent laws. Finally, we propose a variety of solutions that would move design protection toward a successful channeling regime, balancing the law’s needs for incentives and competition. These proposals include improving doctrinal screens to weed out functionality, making design protection more costly, and preventing designers from obtaining multiple forms of protection for the same design.

    Integrating Fuzzy Decisioning Models With Relational Database Constructs

    Human learning and classification is a nebulous area in computer science. Classic decisioning problems can be solved given enough time and computational power, but discrete algorithms cannot easily solve fuzzy problems. Fuzzy decisioning can resolve more real-world fuzzy problems, but existing algorithms are often slow, cumbersome, and unable to give responses within a reasonable timeframe to anything other than predetermined, small-dataset problems. We have developed a database-integrated, highly scalable solution for training and using fuzzy decision models on large datasets. The Fuzzy Decision Tree algorithm integrates Quinlan's ID3 decision-tree algorithm with fuzzy set theory and fuzzy logic. In existing research, when applied to the microRNA prediction problem, Fuzzy Decision Tree outperformed other machine learning algorithms including Random Forest, C4.5, SVM, and kNN. In this research, we propose that the effectiveness with which large-dataset fuzzy decisions can be resolved via the Fuzzy Decision Tree algorithm is significantly improved when using a relational database as the storage unit for the fuzzy ID3 objects, versus traditional storage objects. Furthermore, we demonstrate that pre-processing certain pieces of the decisioning within the database layer can lead to much swifter membership determinations, especially on Big Data datasets. The proposed algorithm uses concepts inherent to databases: separated schemas, indexing, partitioning, pipe-and-filter transformations, pre-processing of data, materialized and regular views, etc., to present a model with the potential to learn from itself. Further, this work presents a general application model for re-architecting Big Data applications to present decisioned results efficiently: lowering the volume of data handled by the application itself and significantly decreasing response wait times, while retaining the flexibility and permanence of a standard relational SQL database, supplying optimal user satisfaction in today's Data Analytics world. We experimentally demonstrate the effectiveness of our approach.
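
    To illustrate the core idea of pushing fuzzy membership computation into the database layer, here is a minimal sketch in Python using SQLite from the standard library. The samples table, the triangular "low" membership function, and its breakpoints are hypothetical stand-ins; the paper's actual fuzzy ID3 schema and pre-processing pipeline are not reproduced here.

        # A minimal sketch of in-database fuzzy membership pre-processing.
        # The table, attribute, and breakpoints (a=0, b=1, c=4) are
        # hypothetical; the paper's real schema is not shown here.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE samples (id INTEGER PRIMARY KEY, expr REAL)")
        conn.executemany("INSERT INTO samples (expr) VALUES (?)",
                         [(0.5,), (2.0,), (3.5,), (6.0,)])

        # Triangular membership mu_low(x): rises 0->1 on [0,1], falls 1->0
        # on [1,4], and is 0 elsewhere. Evaluating it as a SQL expression in
        # a view keeps the bulk computation next to the data, in the spirit
        # of the article's database-layer pre-processing.
        conn.execute("""
            CREATE VIEW fuzzy_samples AS
            SELECT id, expr,
                   CASE
                       WHEN expr <= 0 OR expr >= 4 THEN 0.0
                       WHEN expr < 1 THEN (expr - 0) / (1 - 0)
                       ELSE (4 - expr) / (4 - 1)
                   END AS mu_low
            FROM samples
        """)

        for row in conn.execute("SELECT id, expr, mu_low FROM fuzzy_samples"):
            print(row)
        # (1, 0.5, 0.5)
        # (2, 2.0, 0.6666666666666666)
        # (3, 3.5, 0.16666666666666666)
        # (4, 6.0, 0.0)

    A fuzzy ID3 split would then aggregate these pre-computed memberships (for example, SUM(mu_low) per candidate branch) instead of crisp row counts when computing entropy, which is the kind of in-database pre-processing the abstract credits with swifter membership determinations.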