
    A POS Tagging Approach to Capture Security Requirements within an Agile Software Development Process

    Software use is an inescapable reality. Computer systems are embedded into devices from the mundane to the complex and significantly impact daily life. Increased use expands the opportunity for malicious use, which threatens security and privacy. Factors such as high-profile data breaches, rising costs due to security incidents, competitive advantage and pending legislation are driving software developers to integrate security into software development rather than adding it after a product has been developed. Security requirements must be elicited, modeled, analyzed, documented and validated beginning at the initial phases of the software engineering process rather than being added at later stages. However, approaches to developing security requirements have been lacking, which presents barriers to integrating security requirements during the requirements phase of software development. In particular, software development organizations working within short development lifecycles (often characterized as agile) and with minimal resources need a lightweight, practical approach to security requirements engineering that can be easily integrated into existing agile processes. In this thesis, we present an approach for eliciting, analyzing, prioritizing and developing security requirements that can be integrated into existing software development lifecycles for small, agile organizations. The approach is based on identifying candidate security goals, categorizing security goals based on security principles, understanding stakeholder goals to develop preliminary security requirements, and prioritizing preliminary security requirements. The identification activity consists of part-of-speech (POS) tagging of requirements-related artifacts for security terminology to discover candidate security goals. The categorization activity applies a general security principle to each candidate goal. Elicitation activities are undertaken to gain a deeper understanding of the security goals from stakeholders. Elicited goals are prioritized using risk management techniques, and security requirements are developed from validated goals. Security goals may fail the validation activity, requiring further iterations of analysis, elicitation and prioritization until stakeholders are satisfied with, or have eliminated, the security requirement. Finally, the approach outputs candidate security requirements that can be further modeled, defined and validated using other approaches. A security requirements repository is integrated into the proposed approach for future refinement and reuse of security requirements. We validate the framework through an industrial case study with a small, agile software development organization.
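
    The identification step lends itself to a short illustration. The sketch below is a minimal, hypothetical take on POS-tagging requirements text against a seed security lexicon using NLTK; the lexicon, the noun/verb filter and the function names are assumptions for the example, not taken from the thesis.

        # A minimal sketch of the identification activity: POS-tag requirements
        # sentences and flag those whose nouns or verbs hit a small security
        # lexicon. The lexicon is an illustrative assumption.
        import nltk

        # One-time model downloads, if not already present:
        # nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

        SECURITY_TERMS = {  # hypothetical seed lexicon
            "password", "encryption", "authentication", "authorization",
            "audit", "confidentiality", "integrity", "availability",
            "encrypt", "authenticate", "authorize", "log",
        }

        def candidate_security_goals(requirements):
            """Return sentences whose nouns or verbs match the security lexicon."""
            candidates = []
            for sentence in requirements:
                tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
                hits = [tok for tok, tag in tagged
                        if tag.startswith(("NN", "VB")) and tok.lower() in SECURITY_TERMS]
                if hits:
                    candidates.append((sentence, hits))
            return candidates

        reqs = [
            "The system shall encrypt stored user data.",
            "The UI shall display the current date.",
        ]
        for sentence, hits in candidate_security_goals(reqs):
            print(sentence, "->", hits)

    Sentences with no lexicon hits (like the second requirement above) are simply not proposed as candidate goals, which is what keeps such a filter light enough for an agile process.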

    Collaborative knowledge as a service applied to the disaster management domain

    Cloud computing offers services which promise to meet continuously increasing computing demands by using a large number of networked resources. However, data heterogeneity remains a major hurdle for data interoperability and data integration. In this context, a Knowledge as a Service (KaaS) approach has been proposed with the aim of generating knowledge from heterogeneous data and making it available as a service. In this paper, a Collaborative Knowledge as a Service (CKaaS) architecture is proposed, with the objective of satisfying consumer knowledge needs by integrating disparate cloud knowledge through collaboration among distributed KaaS entities. The NIST cloud computing reference architecture is extended by adding a KaaS layer that integrates diverse sources of data stored in a cloud environment. CKaaS implementation is domain-specific; therefore, this paper presents its application to the disaster management domain. A use case demonstrates the collaboration of knowledge providers and shows how CKaaS operates with simulation models.
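
    As a rough illustration of the collaboration idea, the sketch below models a broker that fans a consumer's query out to registered KaaS providers and merges the non-empty answers. The provider names, the query matching and the merge rule are assumptions for the example, not the paper's implementation.

        # An illustrative sketch of CKaaS-style collaboration: a broker queries
        # several distributed KaaS providers and merges their partial answers.
        from typing import Callable, Dict, List

        KaaSProvider = Callable[[str], List[str]]  # query -> knowledge items

        def flood_kaas(query: str) -> List[str]:
            return ["flood-risk map for region X"] if "flood" in query else []

        def logistics_kaas(query: str) -> List[str]:
            return ["evacuation routes for region X"] if "flood" in query else []

        class CKaaSBroker:
            """Collaborates with registered KaaS entities to answer one request."""
            def __init__(self) -> None:
                self.providers: Dict[str, KaaSProvider] = {}

            def register(self, name: str, provider: KaaSProvider) -> None:
                self.providers[name] = provider

            def answer(self, query: str) -> Dict[str, List[str]]:
                # Fan the query out; keep only providers with relevant knowledge.
                results = {name: p(query) for name, p in self.providers.items()}
                return {name: items for name, items in results.items() if items}

        broker = CKaaSBroker()
        broker.register("flood-simulation", flood_kaas)
        broker.register("logistics", logistics_kaas)
        print(broker.answer("flood response plan for region X"))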

    Social media analytics: a survey of techniques, tools and platforms

    This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, Really Simple Syndication (RSS) feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and news services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis, and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discusses the requirements of an experimental computational environment for social media research and presents, as an illustration, the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques presented in this paper are valid at the time of writing (June 2014), but they are subject to change since social media data scraping APIs are evolving rapidly.
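
    As a self-contained taste of the analysis step the survey covers, the sketch below scores post sentiment with a toy word lexicon. The word lists are assumptions made up for the example; real retrieval would go through the platform APIs (which, as the authors note, change rapidly) and serious scoring through a trained model.

        # A minimal lexicon-based sentiment scorer for short social media posts.
        # The lexicons are toy assumptions, not from the survey.
        POSITIVE = {"good", "great", "love", "excellent", "happy"}
        NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

        def sentiment(post: str) -> int:
            """Crude polarity: positive word hits minus negative word hits."""
            words = post.lower().split()
            return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

        posts = [
            "love the new release, great work",
            "terrible outage today, awful support",
        ]
        for p in posts:
            print(f"{sentiment(p):+d}  {p}")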

    Recursive InterNetwork Architecture: investigating RINA as an alternative to TCP/IP (IRATI)

    Driven by the requirements of emerging applications and networks, the Internet has become an architectural patchwork of growing complexity which strains to cope with change. Moore’s law prevented us from recognising that the problem does not lie in the high demands of today’s applications but in the flaws of the Internet’s original design. The Internet needs to move beyond TCP/IP to prosper in the long term; TCP/IP has outlived its usefulness. The Recursive InterNetwork Architecture (RINA) is a new internetwork architecture whose fundamental principle is that networking is only interprocess communication (IPC). RINA reconstructs the overall structure of the Internet as a model comprising a single repeating layer, the DIF (Distributed IPC Facility), which is the minimal set of components required to allow distributed IPC between application processes. RINA inherently supports mobility, multi-homing and Quality of Service without the need for extra mechanisms, provides a secure and configurable environment, encourages a more competitive marketplace and allows for seamless adoption. RINA is the best choice for next-generation networks due to its sound theory, its simplicity and the features it enables. IRATI’s goal is to explore this new architecture further. IRATI will advance the state of the art of RINA towards an architecture reference model and specifications that are closer to enabling implementations deployable in production scenarios. The design and implementation of a RINA prototype on top of Ethernet will permit the experimentation and evaluation of RINA in comparison to TCP/IP. IRATI will use the OFELIA testbed to carry out its experimental activities. Both projects will benefit from the collaboration: IRATI will gain access to a large-scale testbed with a controlled network, while OFELIA will get a unique use case to validate the facility, namely experimentation with a non-IP-based Internet.
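
    The single-repeating-layer idea can be sketched in a few lines. The toy model below is a conceptual assumption, not the IRATI prototype: a DIF allocates a flow between two processes and may recursively request a supporting flow from the DIF beneath it, down to a shim DIF over Ethernet.

        # A conceptual sketch of RINA's repeating layer: every DIF exposes the
        # same flow-allocation abstraction and may stack on a lower-rank DIF.
        from typing import Optional

        class DIF:
            """Distributed IPC Facility: allocates flows, recursively layered."""
            def __init__(self, name: str, lower: Optional["DIF"] = None) -> None:
                self.name = name
                self.lower = lower  # None for the lowest layer (e.g. over Ethernet)

            def allocate_flow(self, src: str, dst: str) -> str:
                # Each layer may in turn request a flow from the DIF beneath it.
                carrier = (f" over [{self.lower.allocate_flow(src, dst)}]"
                           if self.lower else "")
                return f"{self.name}: {src}->{dst}{carrier}"

        ethernet_dif = DIF("shim-dif-eth")            # shim DIF over Ethernet
        backbone_dif = DIF("backbone", ethernet_dif)  # provider DIF
        app_dif = DIF("app-dif", backbone_dif)        # application-facing DIF
        print(app_dif.allocate_flow("client", "server"))

    The point of the sketch is that nothing changes between levels: the application-facing layer, the provider layer and the shim over Ethernet are all instances of the same DIF abstraction.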

    XBRL: The Views of Stakeholders


    A commentary on standardization in the Semantic Web, Common Logic and MultiAgent Systems

    Given the ubiquity of the Web, the Semantic Web (SW) offers MultiAgent Systems (MAS) a most wide-ranging platform by which they could intercommunicate. It can be argued, however, that MAS require levels of logic that the current Semantic Web has yet to provide. As ISO Common Logic (CL, ISO/IEC 24707:2007) provides a first-order logic capability for MAS in an interoperable way, it seems natural to investigate how CL may itself integrate with the SW, thus providing a more expressive means by which MAS can interoperate effectively across the SW. A commentary is accordingly presented on how this may be achieved. Whilst it notes that certain limitations remain to be addressed, the commentary proposes that standardising the SW with CL provides the vehicle by which MAS can achieve their potential.
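
    The expressiveness gap the commentary discusses can be hinted at with a toy example: Semantic Web data is essentially binary triples, while Common Logic lets agents state quantified first-order rules over them. The sketch below is purely illustrative (the relation and facts are invented) and applies one such rule, transitivity, to a tiny triple store.

        # Toy illustration: triples as data, plus one first-order rule of the
        # form forall x,y,z. R(x,y) & R(y,z) -> R(x,z), applied by one
        # forward-chaining pass.
        triples = {
            ("agent:a1", "worksWith", "agent:a2"),
            ("agent:a2", "worksWith", "agent:a3"),
        }

        def apply_transitivity(facts):
            """Derive R(x,z) wherever R(x,y) and R(y,z) hold for R = worksWith."""
            derived = {(x, p, z)
                       for (x, p, y) in facts
                       for (y2, p2, z) in facts
                       if y == y2 and p == p2 == "worksWith"}
            return facts | derived

        print(sorted(apply_transitivity(triples)))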