
    Innovative Applications of Artificial Intelligence Techniques in Software Engineering

    Artificial Intelligence (AI) techniques have been successfully applied in many areas of software engineering. The complexity of software systems has limited the application of AI techniques in many real-world applications. This talk provides an insight into applications of AI techniques in software engineering and how innovative applications of AI can assist in achieving ever more competitive and firm schedules for software development projects as well as Information Technology (IT) management. The pros and cons of using AI techniques are investigated, and specifically the application of AI in IT management, software application development and software security is considered. Organisations that build software applications do so in an environment characterised by limited resources and increased pressure to reduce costs and shorten development schedules. Organisations need to build software applications adequately and quickly. One approach to achieving this is to use automated software development tools from the initial stage of software design through to software testing and installation. Considering software testing as an example, automated software systems can assist in most software testing phases. On the other hand, data security, availability, privacy and integrity are very important issues in the success of a business operation. Data security and privacy policies in business are governed by business requirements and government regulations. AI can also assist in software security, privacy and reliability. Implementing data security using encryption solutions remains at the forefront of data security, but many such solutions are expensive, disruptive and resource intensive. AI can be used for data classification in organisations: it can assist in identifying and encrypting only the relevant data, thereby saving time and processing power. Without data classification, organisations using encryption would simply encrypt everything and consequently impact users more than necessary. Data classification is essential and can assist organisations with their data security, privacy and accessibility needs. This talk explores the use of AI techniques (such as fuzzy logic) for data classification and suggests a method that can determine requirements for the classification of organisations' data for security and privacy, based on organisational needs and government policies. Finally, the application of FCM (Fuzzy Cognitive Maps) in IT management is discussed.
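    The fuzzy-logic classification idea described in this abstract can be illustrated with a minimal, hypothetical sketch: score each record's sensitivity, map the score onto fuzzy categories, and encrypt only records graded as confidential. The membership functions, feature names, and weights below are invented for illustration and are not the talk's actual method.

```python
# Hypothetical sketch: fuzzy scoring of records to decide which ones
# warrant encryption. Membership functions and weights are illustrative.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def sensitivity(record):
    """Crisp sensitivity score in [0, 100] from illustrative features."""
    score = 0
    if record.get("contains_pii"):
        score += 50
    if record.get("regulated"):  # e.g. subject to government policy
        score += 30
    score += min(record.get("access_restrictions", 0) * 10, 20)
    return score

def classify(record):
    """Map a record to 'public', 'internal', or 'confidential' by highest membership."""
    x = sensitivity(record)
    grades = {
        "public":       triangular(x, -1, 0, 40),
        "internal":     triangular(x, 20, 50, 80),
        "confidential": triangular(x, 60, 100, 101),
    }
    return max(grades, key=grades.get)

records = [
    {"id": 1, "contains_pii": True, "regulated": True, "access_restrictions": 2},
    {"id": 2, "contains_pii": False, "regulated": False, "access_restrictions": 0},
]
# Encrypt only the records classified as confidential.
to_encrypt = [r["id"] for r in records if classify(r) == "confidential"]
print(to_encrypt)  # → [1]
```

    Encrypting only the records selected this way is what saves the time and processing power the abstract refers to, compared with encrypting everything.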

    Using Machine Learning to Assist with the Selection of Security Controls During Security Assessment

    In many domains such as healthcare and banking, IT systems need to fulfill various requirements related to security. The elaboration of security requirements for a given system is in part guided by the controls envisaged by the applicable security standards and best practices. An important difficulty that analysts have to contend with during security requirements elaboration is sifting through a large number of security controls and determining which ones have a bearing on the security requirements for a given system. This challenge is often exacerbated by the scarce security expertise available in most organizations. [Objective] In this article, we develop automated decision support for the identification of security controls that are relevant to a specific system in a particular context. [Method and Results] Our approach, which is based on machine learning, leverages historical data from security assessments performed over past systems in order to recommend security controls for a new system. We operationalize and empirically evaluate our approach using real historical data from the banking domain. Our results show that, when one excludes security controls that are rare in the historical data, our approach has an average recall of ≈ 94% and average precision of ≈ 63%. We further examine through a survey the perceptions of security analysts about the usefulness of the classification models derived from historical data. [Conclusions] The high recall – indicating only a few relevant security controls are missed – combined with the reasonable level of precision – indicating that the effort required to confirm recommendations is not excessive – suggests that our approach is a useful aid to analysts for more efficiently identifying the relevant security controls, and also for decreasing the likelihood that important controls would be overlooked. 
Further, our survey results suggest that the generated classification models help provide a documented and explicit rationale for choosing the applicable security controls.
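    The core idea of recommending controls from historical assessments can be sketched with a deliberately simplified frequency model: for each (system feature, control) pair, estimate how often that control was applied to past systems sharing the feature, and recommend controls whose estimate clears a threshold. This is a hypothetical stand-in, not the authors' actual machine-learning method, and the features, controls, and data are invented.

```python
# Hypothetical sketch: recommend security controls for a new system from
# historical assessments, via per-feature control frequencies.

from collections import defaultdict

def train(history):
    """Count, per context feature, how often each control was applied
    to past systems that had that feature."""
    feat_count = defaultdict(int)
    pair_count = defaultdict(int)
    for features, controls in history:
        for f in features:
            feat_count[f] += 1
            for c in controls:
                pair_count[(f, c)] += 1
    return feat_count, pair_count

def recommend(model, features, threshold=0.5):
    """Recommend every control whose estimated relevance, given any of
    the new system's features, reaches the threshold."""
    feat_count, pair_count = model
    scores = defaultdict(float)
    for (f, c), n in pair_count.items():
        if f in features:
            scores[c] = max(scores[c], n / feat_count[f])
    return sorted(c for c, s in scores.items() if s >= threshold)

# Toy historical data: (system context features, controls applied).
history = [
    ({"internet_facing", "stores_pii"}, {"encryption_at_rest", "waf"}),
    ({"internet_facing"}, {"waf"}),
    ({"internal_only"}, {"access_logging"}),
]
model = train(history)
print(recommend(model, {"internet_facing", "stores_pii"}))
# → ['encryption_at_rest', 'waf']
```

    Lowering the threshold trades precision for recall, mirroring the abstract's emphasis on high recall (few relevant controls missed) at a reasonable level of precision.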

    Electronic Security Implications of NEC: A Tactical Battlefield Scenario

    In [1] three principal themes are identified by the UK MoD (Ministry of Defence) in order to deliver the vision of NEC (Network Enabled Capability): Networks, People and Information. It is the security of information that is discussed in this article. The drive towards NEC is due to many factors; one defining factor is to provide an increase in operational tempo, in effect placing one ahead of the enemy in terms of acting within their OODA (Observe, Orient, Decide, Act) loop. However, as technical and procedural systems are being advanced to achieve the vision of NEC, what impact does this have on the traditional information security triangle of preserving the confidentiality, integrity and availability of information? And how does this influence current security engineering and accreditation practices, particularly in light of the proliferation problem? This article describes research conducted into answering these questions, building upon the findings of the NITEworks® [2] ISTAR (Intelligence, Surveillance, Target Acquisition and Reconnaissance) Theme studies and focusing on a tactical battlefield scenario. This scenario relates to the IFPA (Indirect Fire Precision Attack) [3] project, where the efficient synchronisation of potentially numerous sources of information is required, providing real-time decisions and delivery of effects in accordance with the requirements of NEC. It is envisaged that the IFPA systems will consist of numerous sub-systems, each of which will provide a unique effecting capability to the UK army with differing levels of speed, accuracy and range.

    What Works Better? A Study of Classifying Requirements

    Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Naive Bayes for the sub-classification of NFRs. (Comment: 7 pages, the 25th IEEE International Conference on Requirements Engineering (RE'17).)
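    To make the FR/NFR classification task concrete, the following is a minimal, hypothetical multinomial Naive Bayes sketch. Simple lowercasing and tokenization stand in for the paper's preprocessing step, and the tiny training set is invented; the paper's actual pipeline and data differ.

```python
# Hypothetical sketch: multinomial Naive Bayes for FR vs. NFR requirements,
# with trivial normalization as a stand-in for real preprocessing.

import math
import re
from collections import Counter, defaultdict

def preprocess(text):
    """Normalize a requirement: lowercase and keep word tokens only."""
    return re.findall(r"[a-z]+", text.lower())

def train(samples):
    word_counts = defaultdict(Counter)  # label -> token frequency counts
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        tokens = preprocess(text)
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Pick the label maximizing log prior + log likelihood with Laplace smoothing."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in preprocess(text):
            lp += math.log((word_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

samples = [
    ("The system shall export reports as PDF", "FR"),
    ("The user shall be able to search orders by date", "FR"),
    ("The system shall respond within 2 seconds", "NFR"),
    ("The interface shall be usable by novice users", "NFR"),
]
model = train(samples)
print(classify(model, "Pages shall load within 3 seconds"))  # → NFR
```

    The same scheme extends to the paper's NFR sub-categories (usability, availability, performance) by training with those finer-grained labels instead of the binary FR/NFR split.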

    A Survey on Service Composition Middleware in Pervasive Environments

    The development of pervasive computing has shed light on a challenging problem: how to dynamically compose services in heterogeneous and highly changing environments? We propose a survey that defines service composition as a sequence of four steps: translation, generation, evaluation, and finally execution. With this simple yet powerful model we describe the major service composition middleware. Then, a classification of these service composition middleware according to pervasive requirements - interoperability, discoverability, adaptability, context awareness, QoS management, security, spontaneous management, and autonomous management - is given. The classification highlights what has been done and what remains to be done to develop service composition in pervasive environments.
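    The survey's four-step composition model can be sketched as a tiny pipeline. Everything here (the goal syntax, the service registry, the QoS scoring) is invented for illustration; real middleware implements each step far more richly.

```python
# Hypothetical sketch of the four-step service composition model:
# translation, generation, evaluation, execution.

def translate(user_goal):
    """Translation: map a user-level goal onto abstract service tasks."""
    return user_goal.split(" then ")

def generate(tasks, registry):
    """Generation: enumerate every candidate plan from available services."""
    plans = [[]]
    for task in tasks:
        plans = [plan + [svc] for plan in plans for svc in registry.get(task, [])]
    return plans

def evaluate(plans):
    """Evaluation: select the plan with the best aggregate QoS score."""
    return max(plans, key=lambda plan: sum(svc["qos"] for svc in plan))

def execute(plan):
    """Execution: invoke the selected services in order (names only here)."""
    return [svc["name"] for svc in plan]

# Toy registry of concrete services per abstract task.
registry = {
    "locate": [{"name": "gps_service", "qos": 0.9},
               {"name": "wifi_locator", "qos": 0.6}],
    "notify": [{"name": "sms_gateway", "qos": 0.8}],
}
plan = evaluate(generate(translate("locate then notify"), registry))
print(execute(plan))  # → ['gps_service', 'sms_gateway']
```

    The pervasive requirements in the classification (adaptability, context awareness, QoS management, and so on) correspond to making each of these four steps dynamic rather than fixed, e.g. re-running evaluation when the environment changes.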