
    Determining Training Needs for Cloud Infrastructure Investigations using I-STRIDE

    As more businesses and users adopt cloud computing services, security vulnerabilities will be increasingly found and exploited. There are many technological and political challenges where the investigation of potentially criminal incidents in the cloud is concerned. Security experts, however, must still be able to acquire and analyze data in a methodical, rigorous and forensically sound manner. This work applies the STRIDE asset-based risk assessment method to cloud computing infrastructure for the purpose of identifying and assessing an organization's ability to respond to and investigate breaches in cloud computing environments. An extension to the STRIDE risk assessment model is proposed to help organizations quickly respond to incidents while ensuring acquisition and integrity of the largest amount of digital evidence possible. Further, the proposed model allows organizations to assess the needs and capacity of their incident responders before an incident occurs.
    Comment: 13 pages, 3 figures, 3 tables, 5th International Conference on Digital Forensics and Cyber Crime; Digital Forensics and Cyber Crime, pp. 223-236, 201
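    The abstract does not spell out what an extended assessment record looks like. As a rough, hypothetical Python sketch only, the code below pairs each of the six STRIDE threat categories with per-asset fields for evidence acquisition and responder capability, in the spirit of the investigation-focused extension described above; the field names, thresholds, and gap-finding rule are assumptions, not the paper's.

```python
from dataclasses import dataclass
from enum import Enum

class Stride(Enum):
    """The six STRIDE threat categories."""
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"

@dataclass
class AssetThreat:
    """One row of an I-STRIDE-style assessment: a cloud asset, a threat
    against it, and how well the organization could investigate it."""
    asset: str                   # e.g. "VM disk image", "hypervisor log"
    threat: Stride
    risk: float                  # assessed likelihood x impact, in [0, 1]
    evidence_acquirable: bool    # can forensically sound evidence be collected?
    responder_capability: float  # self-assessed investigator readiness, in [0, 1]

def investigation_gaps(assessment: list[AssetThreat]) -> list[AssetThreat]:
    """Return high-risk threats the organization is poorly placed to investigate."""
    return [t for t in assessment
            if t.risk > 0.5
            and (not t.evidence_acquirable or t.responder_capability < 0.5)]
```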

    Derivation of diagnostic models based on formalized process knowledge

    © IFAC. Industrial systems are vulnerable to faults. Early and accurate detection and diagnosis in production systems can minimize down-time, increase the safety of plant operation, and reduce manufacturing costs. Knowledge- and model-based approaches to automated fault detection and diagnosis have been demonstrated to be suitable for fault cause analysis within a broad range of industrial processes and research case studies. However, the implementation of these methods demands a complex and error-prone development phase, especially due to the extensive effort required to derive and validate the models. To reduce such modeling complexity, this paper presents a structured causal modeling approach that supports the derivation of diagnostic models from formalized process knowledge. The method described herein exploits the Formalized Process Description Guideline VDI/VDE 3682 to establish causal relations among key process variables, develops an extension of the Signed Digraph model combined with fuzzy set theory to allow more accurate causality descriptions, and proposes a representation of the resulting diagnostic model in CAEX/AutomationML targeting dynamic data access, portability, and seamless information exchange.
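    The abstract names the ingredients (a Signed Digraph whose causal arcs carry fuzzy strengths) without giving the construction. The sketch below is a minimal, hypothetical Python illustration of that idea: edges record the sign and a fuzzy confidence of a causal influence, and candidate root causes of a symptom are ranked by min-max composition of those confidences. It is not the VDI/VDE 3682-derived model of the paper, and all variable names and numbers are invented.

```python
from collections import defaultdict

class FuzzySignedDigraph:
    """Process variables as nodes; each edge stores the sign of the causal
    influence and a fuzzy confidence in [0, 1]."""
    def __init__(self):
        self.edges = defaultdict(list)   # cause -> [(effect, sign, strength)]

    def add_causality(self, cause, effect, sign, strength):
        self.edges[cause].append((effect, sign, strength))

    def explain(self, symptom):
        """Rank candidate root causes of a deviation in `symptom` by fuzzy
        path strength (min along a path, max over alternative paths).
        Deviation direction (the signs) is ignored here for brevity."""
        best = {}
        def walk(node, conf, seen):
            for cause, nbrs in self.edges.items():
                for effect, _sign, strength in nbrs:
                    if effect == node and cause not in seen:
                        c = min(conf, strength)
                        if c > best.get(cause, 0.0):
                            best[cause] = c
                            walk(cause, c, seen | {cause})
        walk(symptom, 1.0, {symptom})
        return sorted(best.items(), key=lambda kv: -kv[1])

g = FuzzySignedDigraph()
g.add_causality("valve_position", "flow_rate", +1, 0.9)
g.add_causality("flow_rate", "tank_level", +1, 0.8)
print(g.explain("tank_level"))   # [('flow_rate', 0.8), ('valve_position', 0.8)]
```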

    The Relationship between Fuzzy Reasoning and Its Temporal Characteristics for Knowledge Management

    Knowledge management systems based on artificial reasoning (KMAR) try to provide computers with the capability to perform various intelligent tasks for which their human users resort to their knowledge and collective intelligence. There is a need to incorporate aspects of time and imprecision into knowledge management systems, considering appropriate semantic foundations. The aim of this paper is to present FRTES, a real-time fuzzy expert system embedded in a knowledge management system. Our expert system is a special possibilistic expert system, developed in order to focus on fuzzy knowledge.
    Keywords: Knowledge Management, Artificial Reasoning, predictability
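    Since the abstract stresses temporal characteristics without showing how fuzzy reasoning interacts with a deadline, here is a deliberately tiny, hypothetical Python sketch of that interaction: rules are evaluated only while a time budget lasts, so the system always returns some (possibly degraded) conclusion on time. It is not FRTES itself; the membership shapes, rules, and deadline are invented.

```python
import time

def triangular(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_rules(temperature, rules, deadline_s=0.01):
    """Evaluate fuzzy rules only while the time budget lasts and return the
    strongest conclusion reached so far (graceful degradation, not full inference)."""
    start = time.monotonic()
    best = (None, 0.0)
    for _label, (a, b, c), conclusion in rules:
        if time.monotonic() - start > deadline_s:
            break                       # deadline hit: act on what we already have
        degree = triangular(temperature, a, b, c)
        if degree > best[1]:
            best = (conclusion, degree)
    return best

rules = [
    ("cold", (0, 10, 20), "increase heating"),
    ("warm", (15, 22, 30), "hold"),
    ("hot",  (25, 35, 45), "increase cooling"),
]
print(fire_rules(27.0, rules))   # ('hold', 0.375) when the deadline is comfortably met
```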

    Potentially Polluting Marine Sites GeoDB: An S-100 Geospatial Database as an Effective Contribution to the Protection of the Marine Environment

    Potentially Polluting Marine Sites (PPMS) are objects on, or areas of, the seabed that may release pollution in the future. A rationale for, and design of, a geospatial database to inventory and manipulate PPMS is presented. Built as an S-100 Product Specification, it is specified through human-readable UML diagrams and implemented through machine-readable GML files, and includes auxiliary information such as pollution-control resources and potentially vulnerable sites in order to support analyses of the core data. The design and some aspects of implementation are presented, along with metadata requirements and structure, and a perspective on potential uses of the database.
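    As a purely illustrative Python sketch of the kind of feature records and auxiliary links such an inventory might hold (the attribute names and the naive proximity query are invented, not the S-100 Product Specification itself):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PotentiallyPollutingMarineSite:
    site_id: str
    site_type: str                 # e.g. "wreck", "dumping ground"
    position: tuple                # (latitude, longitude), WGS 84 assumed
    pollutants: List[str] = field(default_factory=list)
    estimated_volume_m3: Optional[float] = None   # unknown values stay None

@dataclass
class VulnerableSite:
    site_id: str
    name: str
    position: tuple

def sites_threatening(ppms, asset: VulnerableSite, radius_deg: float = 0.5):
    """Naive proximity query: PPMS inside a lat/lon box around a vulnerable site."""
    lat, lon = asset.position
    return [s for s in ppms
            if abs(s.position[0] - lat) <= radius_deg
            and abs(s.position[1] - lon) <= radius_deg]
```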

    AsterixDB: A Scalable, Open Source BDMS

    AsterixDB is a new, full-function BDMS (Big Data Management System) with a feature set that distinguishes it from other platforms in today's open source Big Data ecosystem. Its features make it well-suited to applications like web data warehousing, social data storage and analysis, and other use cases related to Big Data. AsterixDB has a flexible NoSQL-style data model; a query language that supports a wide range of queries; a scalable runtime; partitioned, LSM-based data storage and indexing (including B+-tree, R-tree, and text indexes); support for external as well as natively stored data; a rich set of built-in types; support for fuzzy, spatial, and temporal types and queries; a built-in notion of data feeds for ingestion of data; and transaction support akin to that of a NoSQL store. Development of AsterixDB began in 2009 and led to a mid-2013 initial open source release. This paper is the first complete description of the resulting open source AsterixDB system. Covered herein are the system's data model, its query language, and its software architecture. Also included are a summary of the current status of the project and a first glimpse into how AsterixDB performs when compared to alternative technologies (a parallel relational DBMS, a popular NoSQL store, and a popular Hadoop-based SQL data analytics platform) for operations that AsterixDB and each alternative can both perform. Also included is a brief description of some initial trials that the system has undergone and the lessons learned (and plans laid) based on those early "customer" engagements.
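    As a hedged sketch of what interacting with such a system can look like, the Python below posts SQL++ statements to AsterixDB's HTTP query service. The endpoint (port 19002, /query/service), multi-statement requests, and all dataverse, type, and dataset names are assumptions to verify against your installation and release, not details taken from the paper.

```python
import requests

# Assumed location of the AsterixDB HTTP query service; verify the port and
# path against your installation and release.
ASTERIXDB = "http://localhost:19002/query/service"

def run(statement: str) -> dict:
    """Send a SQL++ statement (or semicolon-separated batch) and return the JSON reply."""
    resp = requests.post(ASTERIXDB, data={"statement": statement})
    resp.raise_for_status()
    return resp.json()

# Hypothetical dataverse with an open (NoSQL-style) record type and a dataset.
run("CREATE DATAVERSE demo IF NOT EXISTS;")
run("USE demo; CREATE TYPE MessageType AS { id: int, sender: string, body: string };")
run("USE demo; CREATE DATASET Messages(MessageType) PRIMARY KEY id;")

# A simple aggregate query over the fully qualified dataset.
result = run("SELECT m.sender, COUNT(*) AS n FROM demo.Messages m GROUP BY m.sender;")
print(result.get("results"))
```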

    URBANO: A Tour-Guide Robot Learning to Make Better Speeches

    Thanks to the numerous efforts being made to develop autonomous robots, increasingly intelligent and cognitive skills are becoming possible. This paper proposes an automatic presentation generator for a robot guide, treated as one more cognitive skill. Presentations are made up of groups of paragraphs. The selection of the best paragraphs is based on a semantic understanding of the characteristics of the paragraphs, on the restrictions defined for the presentation, and on the quality criteria appropriate for a public presentation. This work is part of the ROBONAUTA project of the Intelligent Control Research Group at the Universidad Politécnica de Madrid to create "awareness" in a robot guide. The software developed in the project has been verified on the tour-guide robot Urbano. The most important aspect of this proposal is that the design uses learning as the means to optimize the quality of the presentations. To achieve this goal, the system has to perform optimized decision making in different phases. The quality index of a presentation is modeled using fuzzy logic and represents the beliefs of the robot about what is good, bad, or indifferent in a presentation. This fuzzy system is used to select the most appropriate group of paragraphs for a presentation. The beliefs of the robot continue to evolve in order to coincide with the opinions of the public, using a genetic algorithm to evolve the rules. With this tool, the tour-guide robot gives a presentation that satisfies the objectives and restrictions, and automatically identifies the best paragraphs in order to find the most suitable set of contents for every public profile.
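    To make the two mechanisms the abstract combines more concrete, here is a toy, hypothetical Python sketch: a fuzzy quality index aggregates judgments about candidate paragraphs, and a crude evolutionary step keeps a mutated rule weighting only when the audience rates the result higher. The features, weights, and update rule are invented and far simpler than the genetic algorithm described above.

```python
import random

def fuzzy_quality(paragraph, weights):
    """Aggregate fuzzy degrees of desirable features into one quality score in [0, 1]."""
    degrees = {
        "short_enough": max(0.0, 1.0 - len(paragraph["text"]) / 600),
        "on_topic":     paragraph["relevance"],        # assumed to be given in [0, 1]
        "engaging":     paragraph["anecdote_score"],   # assumed to be given in [0, 1]
    }
    return sum(weights[k] * degrees[k] for k in degrees) / sum(weights.values())

def best_presentation(paragraphs, weights, k=3):
    """Pick the k paragraphs the current fuzzy beliefs rate highest."""
    return sorted(paragraphs, key=lambda p: fuzzy_quality(p, weights), reverse=True)[:k]

def evolve(weights, audience_rating, mutation=0.1):
    """One crude evolutionary step: keep a random mutation of the rule weights
    only if the audience (a callback rating a presentation built with the
    given weights) scores the result higher."""
    candidate = {k: max(0.0, w + random.uniform(-mutation, mutation))
                 for k, w in weights.items()}
    return candidate if audience_rating(candidate) > audience_rating(weights) else weights

weights = {"short_enough": 1.0, "on_topic": 2.0, "engaging": 1.0}
```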

    Automated software quality visualisation using fuzzy logic techniques

    In the past decade there has been a concerted effort by the software industry to improve the quality of its products. This has led to the inception of various techniques with which to control and measure the process involved in software development. Methods like the Capability Maturity Model have introduced processes and strategies that require measurement in the form of software metrics. With the ever-increasing number of software metrics introduced by capability-based processes, software development organisations are finding it more difficult to understand and interpret metric scores. This is particularly problematic for senior management and project managers, for whom analysis of the raw data is not feasible. This paper proposes a method with which to visually represent metric scores so that managers can easily see how their organisation is performing relative to the quality goals set for each type of metric. Acting primarily as a proof of concept and prototype, we suggest ways in which real customer needs can be translated into a feasible technical solution. The solution itself visualises metric scores in the form of a tree structure and utilises fuzzy logic techniques, XGMML, Web Services and the .NET Framework. Future work is proposed to extend the system beyond the prototype stage and to overcome a problem with the masking of poor scores.
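    Below is a minimal, hypothetical Python sketch of the core mapping such a visualisation needs, assuming trapezoidal fuzzy sets over a normalised 0-100 metric score; the band boundaries and labels are invented rather than taken from the paper, and a real tool would feed the winning label into the node colours of the XGMML tree.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def quality_label(score):
    """Fuzzy classification of a normalised metric score in [0, 100]."""
    memberships = {
        "poor":       trapezoid(score, -1, 0, 30, 50),
        "acceptable": trapezoid(score, 30, 50, 65, 80),
        "good":       trapezoid(score, 65, 80, 100, 101),
    }
    return max(memberships, key=memberships.get), memberships

label, detail = quality_label(72)   # 'acceptable', with some membership in 'good' too
```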

    Creating information delivery specifications using linked data

    The use of Building Information Management (BIM) has become mainstream in many countries. Exchanging data in open standards like the Industry Foundation Classes (IFC) is seen as the only workable solution for collaboration. To define the information needed for collaboration, many organizations are now documenting what kind of data they need for their purposes. Currently, practitioners often define their requirements (a) in a format that cannot be read by a computer and (b) by creating their own definitions that are not shared. This paper proposes a bottom-up solution for defining new building concepts and their properties as linked data. The authors have created a prototype implementation and will elaborate on the capturing of information specifications in future work.
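    As a hedged illustration of the bottom-up, shared-definition idea (using the rdflib package in Python; the namespace URI, concept, and property names are invented rather than taken from the paper or any standard vocabulary), a practitioner could publish a new building concept and the property a delivery must include as machine-readable linked data:

```python
# Requires rdflib 6+ (serialize() returns a string there).
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("https://example.org/ids#")   # assumed namespace, not a real vocabulary

g = Graph()
g.bind("ex", EX)

# Define a new concept and a property bottom-up...
g.add((EX.FireDoor, RDF.type, RDFS.Class))
g.add((EX.FireDoor, RDFS.label, Literal("Fire door")))
g.add((EX.fireResistanceMinutes, RDF.type, RDF.Property))
g.add((EX.fireResistanceMinutes, RDFS.domain, EX.FireDoor))
g.add((EX.fireResistanceMinutes, RDFS.label, Literal("Fire resistance (minutes)")))

# ...and state, as data, that a delivery specification requires that property.
g.add((EX.HandoverSpec, EX.requiresProperty, EX.fireResistanceMinutes))

print(g.serialize(format="turtle"))
```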