
    Gouy-Stodola Theorem as a variational principle for open systems

    Recent research on non-equilibrium and far-from-equilibrium systems has proved useful for applications across many disciplines. Science and engineering require a general principle for approaching all these phenomena with a single method of analysis: a variational principle would play this fundamental role. Here, the Gouy-Stodola theorem is proposed as this general variational principle, both by proving that it satisfies the above requirements and by relating it to a statistical result on entropy production.
    Comment: arXiv admin note: text overlap with arXiv:1101.131
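    For context, the standard statement of the Gouy-Stodola theorem (a textbook formulation, not a result quoted from this paper) relates the rate of lost available work to the environment temperature and the total entropy production rate:

```latex
\dot{W}_{\text{lost}} = T_0 \, \dot{S}_{\text{gen}}, \qquad \dot{S}_{\text{gen}} \ge 0
```

    Here \(T_0\) is the temperature of the environment and \(\dot{S}_{\text{gen}}\) the entropy generation rate of the open system; the non-negativity of \(\dot{S}_{\text{gen}}\) is what makes the theorem a candidate extremal (variational) statement.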

    A Study of Text Mining Framework for Automated Classification of Software Requirements in Enterprise Systems

    Text classification is a rapidly evolving area of data mining, while requirements engineering is a less-explored area of software engineering that deals with the process of defining, documenting, and maintaining a software system's requirements. When researchers decided to blend these two streams, research emerged on automating the classification of software requirements statements into categories easily comprehensible to developers, enabling faster development and delivery; until now this has mostly been done manually by software engineers, indeed a tedious job. However, most of that research focused on the classification of non-functional requirements, which pertain to intangible features such as security, reliability, and quality. It is a challenging task to automatically classify functional requirements, those pertaining to how the system will function, especially requirements belonging to different and large enterprise systems; this requires exploiting text mining capabilities. This thesis investigates the results of text classification applied to functional software requirements by creating a framework in R that uses algorithms and techniques such as k-nearest neighbors, support vector machines, boosting, bagging, maximum entropy, neural networks, and random forests in an ensemble approach. The study was conducted by collecting and visualizing relevant enterprise data that had been manually classified previously and was subsequently used to train the model. Key factors in training included the frequency of terms in the documents and the level of cleanliness of the data. The model was applied to test data and validated for analysis by studying and comparing parameters such as precision, recall, and accuracy.
    Dissertation/Thesis: Masters Thesis, Engineering, 201
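    The ensemble idea described in this abstract can be sketched as follows. This is a minimal illustration in Python with scikit-learn rather than the thesis's R framework; the requirement statements, category labels, and choice of three learners are invented for the example.

```python
# Hypothetical sketch: classify requirement statements with several
# learners over term frequencies and combine predictions by majority vote.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

# Toy corpus of manually labelled functional requirements (invented;
# real data would be enterprise requirement statements).
texts = [
    "The system shall generate a monthly sales report",
    "The system shall export reports to PDF",
    "Users shall authenticate with a password",
    "The system shall lock accounts after failed logins",
]
labels = ["reporting", "reporting", "security", "security"]

vectorizer = TfidfVectorizer()  # term frequencies drive the model
X = vectorizer.fit_transform(texts)

# Ensemble over a subset of the algorithms named in the abstract.
ensemble = VotingClassifier([
    ("knn", KNeighborsClassifier(n_neighbors=1)),
    ("svm", LinearSVC()),
    ("rf", RandomForestClassifier(random_state=0)),
])
ensemble.fit(X, labels)

new = vectorizer.transform(["The system shall reset passwords via email"])
print(ensemble.predict(new)[0])
```

    The hard-voting combiner returns whichever category a majority of the base learners predict, which is one simple way to realise the "ensemble approach" the abstract mentions.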

    Towards an Approach for Analysing the Strategic Alignment of Software Requirements using Quantified Goal Graphs

    Analysing the strategic alignment of software requirements primarily provides assurance to stakeholders that the software-to-be will add value to the organisation. Additionally, such analysis can improve a requirement by disambiguating its purpose and value, thereby supporting validation and value-oriented decisions in requirements engineering processes, such as prioritisation, release planning, and trade-off analysis. We review current approaches that could enable such an analysis. We focus on Goal Oriented Requirements Engineering methodologies, since goal graphs are well suited for relating software goals to business goals. However, we argue that unless the extent of goal-goal contribution is quantified with verifiable metrics, goal graphs are not sufficient for demonstrating the strategic alignment of software requirements. Since the concept of goal contribution is predictive, what results is a forecast of the benefits of implementing software requirements. Thus, we explore how the description of the contribution relationship can be enriched with concepts such as uncertainty and confidence, non-linear causation, and utility. We introduce the approach using an example software project from Rolls-Royce.
    Comment: arXiv admin note: text overlap with arXiv:1211.625
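    One very small way to picture a quantified, confidence-weighted contribution relationship (a hypothetical sketch, not the paper's method; the function, numbers, and goal names are invented):

```python
# Forecast the benefit delivered to a business goal as the
# confidence-weighted sum of the quantified contributions of the
# software goals that support it.
def forecast(contributions):
    """contributions: list of (contribution, confidence) pairs in [0, 1]."""
    return sum(c * conf for c, conf in contributions)

# Two software goals contribute to e.g. "reduce support cost", each
# with an estimated contribution and a confidence in that estimate.
print(forecast([(0.4, 0.9), (0.3, 0.5)]))
```

    Because the contribution estimates are predictions, the result is a forecast rather than a measurement, which matches the abstract's framing.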

    Automated Natural Language Requirements Analysis using General Architecture for Text Engineering (GATE) Framework

    Stakeholders exchange ideas and describe system requirements in natural language at the early stages of software development. These software requirements tend to be unclear, incomplete, and inconsistent, yet better quality and lower cost of system development rest on clear, complete, and consistent requirements statements. Requirements boilerplates are an effective way to minimise ambiguity in natural language requirements, but manually checking natural language requirements for conformance with a boilerplate is a time-consuming and difficult task. This paper aims to automate the requirements analysis phase using a language processing tool. We propose a natural language requirements analysis model and present an approach based on the open source General Architecture for Text Engineering (GATE) framework for automatically checking natural language requirements against boilerplates for conformance. The evaluation of the proposed approach shows that the GATE framework is only capable of detecting ambiguity in natural language requirements. We also present rules to minimise ambiguity, incompleteness, and inconsistency.
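    The boilerplate-conformance idea can be illustrated outside GATE with a plain regular expression. This is a deliberately simplified sketch, not the paper's pipeline: the Rupp-style pattern "The <system> shall <verb> <object>" and both example statements are assumptions for illustration.

```python
# Check whether a requirement statement conforms to a simple
# "The <system> shall <verb> <object>" boilerplate.
import re

BOILERPLATE = re.compile(
    r"^The\s+\w[\w ]*\s+shall\s+\w+\s+.+$", re.IGNORECASE
)

def conforms(requirement: str) -> bool:
    """Return True when the statement matches the boilerplate."""
    return bool(BOILERPLATE.match(requirement.strip()))

print(conforms("The system shall log every transaction"))   # True
print(conforms("Logging would be nice to have sometimes"))  # False
```

    A real boilerplate checker (as in GATE) would use tokenisation and annotation rules rather than one regex, but the conformance decision it automates has this shape.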

    Flexible Ambiguity Resolution and Incompleteness Detection in Requirements Descriptions via an Indicator-Based Configuration of Text Analysis Pipelines

    Natural language software requirements descriptions enable end users to formulate their wishes and expectations for a future software product without much prior knowledge of requirements engineering. However, these descriptions are susceptible to linguistic inaccuracies, such as ambiguities and incompleteness, that can harm the development process. A number of software solutions can detect deficits in requirements descriptions and partially resolve them, but they are often hard to use and not suitable for end users. For this reason, we develop a software system that helps end users create unambiguous and complete requirements descriptions by combining existing expert tools and controlling them with automatic compensation strategies. To recognise when individual compensation methods are necessary for a description, we have developed linguistic indicators, which we present in this paper. Based on these indicators, the whole text analysis pipeline is configured ad hoc and thus adapted to the individual circumstances of a requirements description.

    Towards advanced data skills for information systems graduates

    This study explores advanced data skills, in particular the technical skills expected of a data engineering (DE) professional. Descriptions and requirements for DE jobs, one each from 50 US companies, were collected from three job sites. An automated text analysis was performed on the job descriptions to extract a list of the technical skills and competencies required for DE jobs in industry. The extracted information was then manually synthesized and categorized by the author. The findings of the study are intended to be of use to information systems graduates looking to pursue advanced data education and to curricula committees offering advanced data courses and specializations.
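    The automated text analysis step can be pictured as counting candidate skill terms across postings. The skill vocabulary and the two job-description snippets below are invented examples, not the study's data.

```python
# Minimal sketch: count how often candidate technical skills occur
# across job descriptions, one count per skill per posting.
from collections import Counter

SKILLS = {"python", "sql", "spark", "airflow", "kafka"}

descriptions = [
    "Build ETL pipelines in Python and SQL; schedule jobs with Airflow.",
    "Experience with Spark and SQL required; Kafka is a plus.",
]

counts = Counter()
for text in descriptions:
    tokens = {t.strip(".,;").lower() for t in text.split()}
    counts.update(tokens & SKILLS)  # deduplicated within a posting

print(counts.most_common())
```

    Ranking the resulting counts yields the kind of skill list that the study then synthesized and categorized manually.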