
    Multi-level agent-based modeling - A literature survey

    During the last decade, multi-level agent-based modeling has received significant and rapidly increasing interest. In this article we present a comprehensive and structured review of the literature on the subject. We present the main theoretical contributions and application domains of this concept, with an emphasis on social, flow, biological and biomedical models.
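    The survey's core notion, levels that interact through upward aggregation and downward feedback, can be pictured with a toy model. The sketch below is an illustration only, not a model from the surveyed literature; the class names and the opinion-drift rule are invented for the example.

```python
import random

class Agent:
    """Micro-level agent holding a binary opinion."""
    def __init__(self):
        self.opinion = random.choice([0, 1])

    def step(self, macro_pressure):
        # Micro dynamics are coupled to the macro level: agents
        # drift toward opinion 1 with the aggregate as probability.
        if random.random() < macro_pressure:
            self.opinion = 1

class Population:
    """Macro-level entity aggregating micro-level state."""
    def __init__(self, n):
        self.agents = [Agent() for _ in range(n)]

    def macro_state(self):
        # Upward causation: aggregate micro states into a macro observable.
        return sum(a.opinion for a in self.agents) / len(self.agents)

    def step(self):
        # Downward causation: the macro observable feeds back into
        # each agent's micro-level update rule.
        pressure = self.macro_state()
        for agent in self.agents:
            agent.step(pressure)

pop = Population(100)
for _ in range(20):
    pop.step()
print(f"final share of opinion 1: {pop.macro_state():.2f}")
```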

    REMIND: A Framework for the Resilient Design of Automotive Systems

    In recent years, great effort has been spent on enhancing the security and safety of vehicular systems. Current advances in information and communication technology have increased the complexity of these systems and led to extended functionality towards self-driving and greater connectivity. Unfortunately, these advances open the door to diverse and newly emerging attacks that undermine the security and, thus, the safety of vehicular systems. In this paper, we contribute to supporting the design of resilient automotive systems. We review and analyze scientific literature on resilience techniques, fault tolerance, and dependability. As a result, we present the REMIND resilience framework, which provides techniques for attack detection, mitigation, recovery, and resilience endurance. Moreover, we provide guidelines on how the REMIND framework can be used against common security threats and attacks, and we further discuss the trade-offs involved in applying these guidelines.
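    As a loose illustration of how the four phases named in the abstract could be chained at runtime, consider the sketch below. The event format, the rate threshold, and all handler behaviour are invented for the example; REMIND itself is a catalogue of techniques, not this code.

```python
# Hypothetical resilience loop walking an event through four
# REMIND-style phases: detect, mitigate, recover, endure.

def detect(event):
    # e.g. flag CAN traffic arriving far faster than its expected period
    return event.get("rate_hz", 0) > 100

def mitigate(event):
    print(f"dropping suspicious traffic from {event['source']}")

def recover(event):
    print(f"restarting ECU service {event['source']}")

def endure(event):
    print("degraded mode: limp-home with reduced functionality")

def resilience_loop(event):
    """Apply the phases in order when an attack is detected."""
    if detect(event):
        mitigate(event)
        recover(event)
        endure(event)

resilience_loop({"source": "ECU-7", "rate_hz": 250})
```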

    The future of Cybersecurity in Italy: Strategic focus area

    This volume has been created as a continuation of the previous one, with the aim of outlining a set of focus areas and actions that the Italian national research community considers essential. The book touches on many aspects of cyber security, ranging from the definition of the infrastructure and controls needed to organise cyberdefence to the actions and technologies to be developed to be better protected, and from the identification of the main technologies to be defended to the proposal of a set of horizontal actions for training, awareness raising, and risk management.

    A framework for designing cloud forensic‑enabled services (CFeS)

    Cloud computing is used by consumers to access cloud services. Malicious actors exploit vulnerabilities in cloud services to attack consumers. The link between these two assumptions is the cloud service. Although cloud forensics assists in investigating and solving cloud-based cyber-crimes, in many cases the design and implementation of cloud services lags behind. Software designers and engineers should focus their attention on designing and implementing cloud services that can be investigated in a forensically sound manner. This paper presents a methodology that aims to assist designers in designing cloud forensic-enabled services. The methodology supports the design of cloud services through a number of steps that make the services cloud forensic-enabled. It consists of a set of cloud forensic constraints, a modelling language expressed through a conceptual model, and a process based on the concepts identified and presented in the model. The main advantage of the proposed methodology is that it correlates cloud services’ characteristics with the cloud investigation, while giving software engineers the ability to design and implement cloud forensic-enabled services via a set of predefined forensic-related tasks.
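    One way to picture a "forensic-enabled by construction" check is as a set of constraints evaluated against a service design. The sketch below is a loose illustration under invented names (ServiceDesign and the three constraints); it does not reproduce the paper's actual constraint set or modelling language.

```python
from dataclasses import dataclass

@dataclass
class ServiceDesign:
    """Hypothetical description of a cloud service's forensic features."""
    name: str
    keeps_audit_log: bool = False
    log_integrity_protected: bool = False
    data_location_known: bool = False

# Illustrative forensic constraints, each a predicate over the design.
CONSTRAINTS = {
    "traceability": lambda s: s.keeps_audit_log,
    "evidence integrity": lambda s: s.log_integrity_protected,
    "data provenance": lambda s: s.data_location_known,
}

def forensic_readiness_report(service):
    """Check a service design against each forensic constraint."""
    return {name: check(service) for name, check in CONSTRAINTS.items()}

svc = ServiceDesign("object-store", keeps_audit_log=True)
for constraint, satisfied in forensic_readiness_report(svc).items():
    print(f"{constraint}: {'OK' if satisfied else 'MISSING'}")
```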

    Evidential Reasoning & Analytical Techniques In Criminal Pre-Trial Fact Investigation

    This thesis is the work of the author and is concerned with the development of a neo-Wigmorean approach to evidential reasoning in police investigation. The thesis evolved out of dissatisfaction with cardinal aspects of traditional approaches to police investigation, practice and training. Five main weaknesses were identified: firstly, the lack of a theoretical foundation for police training and practice in the investigation of crime and evidence management; secondly, evidence was treated on the basis of its source rather than its inherent capacity for generating questions; thirdly, the role of inductive elimination was underused and misunderstood; fourthly, concentration on single, isolated cases rather than on the investigation of multiple cases; and fifthly, the credentials of evidence were often assumed rather than considered, assessed and reasoned within the context of argumentation. Inspiration was drawn from three sources: firstly, John Henry Wigmore provided new insights into the nature of evidential reasoning and formal methods for the construction of arguments; secondly, developments in biochemistry provided new insights into natural methods of storing and using information; thirdly, the science of complexity provided new insights into the complex nature of collections of data that could be developed into complex systems of information and evidence. The thesis is an application of a general methodology supported by new diagnostic and analytical techniques. The methodology was embodied in a software system called the Forensic Led Intelligence System (FLINTS). My standpoint is that of a forensic investigator with an interest in how evidential reasoning can improve the operation we call investigation. New areas of evidential reasoning are in progress and are discussed, including a new application in software designed by the author: MAVERICK. There are three main themes: firstly, how a broadened conception of evidential reasoning supported by new diagnostic and analytical techniques can improve the investigation and discovery process; secondly, how a greater understanding of the roles and effects of different styles of reasoning can assist the user; and thirdly, a range of concepts and tools for the combination, comparison, construction and presentation of evidence in imaginative ways. Taken together, these are intended to provide examples of a new approach to the science of evidential reasoning. Originality lies in four key areas: 1) extending and developing Wigmorean techniques for police investigation and evidence management; 2) developing existing approaches to single-case analysis and introducing an intellectual model for multi-case analysis; 3) introducing a new model for police training in investigative evidential reasoning; and 4) introducing a new software system, FLINTS, to manage evidence in multi-case approaches using forensic scientific evidence.

    The Proceedings of 14th Australian Digital Forensics Conference, 5-6 December 2016, Edith Cowan University, Perth, Australia

    Conference Foreword. This is the fifth year that the Australian Digital Forensics Conference has been held under the banner of the Security Research Institute, which is in part due to the success of the security conference program at ECU. As with previous years, the conference continues to attract quality papers, with a number from local and international authors. Eleven papers were submitted and, following a double-blind peer review process, eight were accepted for final presentation and publication. Conferences such as these are simply not possible without willing volunteers who follow through on the commitment they initially made, and I would like to take this opportunity to thank the conference committee for their tireless efforts in this regard. These efforts have included, but not been limited to, the reviewing and editing of the conference papers, and helping with the planning, organisation and execution of the conference. Particular thanks go to those international reviewers who took the time to review papers for the conference, irrespective of the fact that they are unable to attend this year. To our sponsors and supporters, a vote of thanks for both the financial and moral support provided to the conference. Finally, to the student volunteers and staff of the ECU Security Research Institute, your efforts as always are appreciated and invaluable. Yours sincerely, Conference Chair Professor Craig Valli, Director, Security Research Institute.

    Performance modelling and optimization for video-analytic algorithms in a cloud-like environment using machine learning

    CCTV cameras produce a large amount of video surveillance data per day, and analysing it requires significant computing resources that often need to be scalable. The emergence of the Hadoop distributed processing framework has had a significant impact on various data-intensive applications, as distributed processing increases the processing capability of the applications it serves. Hadoop is an open-source implementation of the MapReduce programming model. It automates the creation of tasks for each function, distributes data, parallelises execution and handles machine failures, relieving users of the complexity of managing the underlying processing so that they can focus on building their applications. In a practical deployment, the challenge of a Hadoop-based architecture is that it requires several scalable machines for effective processing, which in turn adds hardware investment cost to the infrastructure. Although a cloud infrastructure offers scalable and elastic utilization of resources, where users can scale the number of Virtual Machines (VMs) up or down as required, a user such as a CCTV system operator intending to use a public cloud would wish to know what cloud resources (i.e. how many VMs) need to be deployed so that the processing can be done in the fastest (or within a known time constraint) and most cost-effective manner. Often such resources will also have to satisfy practical, procedural and legal requirements. The capability to model a distributed processing architecture in which the resource requirements can be effectively and optimally predicted would thus be a useful tool. The literature offers no clear and comprehensive modelling framework that provides proactive resource allocation mechanisms to satisfy a user's target requirements, especially for a processing-intensive application such as video analytics. In this thesis, with the aim of closing the above research gap, the research first examines the current legal practices and requirements for implementing a video surveillance system within a distributed processing and data storage environment, since the legal validity of data gathered or processed within such a system is vital to the system's applicability in such domains. The thesis then presents a comprehensive framework for the performance modelling and optimization of resource allocation when deploying a scalable distributed video analytic application in a Hadoop-based framework, running on a virtualised cluster of machines. The proposed modelling framework investigates the use of several machine learning algorithms, such as decision trees (M5P, RepTree), linear regression, the Multi-Layer Perceptron (MLP) and the ensemble bagging classifier, to model and predict the execution time of video analytic jobs based on infrastructure-level as well as job-level parameters. Further, in order to allocate resources under constraints so as to obtain optimal performance in terms of job execution time, a Genetic Algorithm (GA) based optimization technique is proposed. Experimental results demonstrate the proposed framework's capability to predict the job execution time of a given video analytic task from infrastructure and input-data-related parameters, and its ability to determine the minimum job execution time given constraints on these parameters. The thesis thereby contributes to the state of the art in distributed video analytics: design, implementation, performance analysis and optimisation.
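    A minimal sketch of the two-stage idea, fit a regressor mapping (number of VMs, input size) to job time, then search for the VM count that minimises predicted time under a cost cap, might look as follows. It assumes scikit-learn, synthetic training data, and a brute-force sweep standing in for the thesis's genetic algorithm; none of the parameter names come from the thesis.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic history of past jobs: [num_vms, input_gb] -> execution minutes.
X = rng.uniform([1, 1], [32, 100], size=(500, 2))
y = 5 + X[:, 1] * 2.0 / X[:, 0] + rng.normal(0, 0.5, 500)  # toy cost model

# Stage 1: learn a performance model (bagged trees, one of the ensemble
# options the thesis evaluates alongside M5P, RepTree, MLP, etc.).
model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=20).fit(X, y)

# Stage 2: optimise the VM count under a budget constraint.
# A brute-force sweep stands in for the thesis's GA-based optimiser.
input_gb, cost_per_vm, budget = 60, 3.0, 60.0
best = min(
    (n for n in range(1, 33) if n * cost_per_vm <= budget),
    key=lambda n: model.predict([[n, input_gb]])[0],
)
print(f"deploy {best} VMs, predicted time "
      f"{model.predict([[best, input_gb]])[0]:.1f} min")
```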

    TLAD 2011 Proceedings: 9th international workshop on teaching, learning and assessment of databases (TLAD)

    This is the ninth in the series of highly successful international workshops on the Teaching, Learning and Assessment of Databases (TLAD 2011), which is once again held as a workshop of BNCOD 2011, the 28th British National Conference on Databases. TLAD 2011 is held on 11th July at Manchester University, just before BNCOD, and hopes to be just as successful as its predecessors. The teaching of databases is central to all Computing Science, Software Engineering, Information Systems and Information Technology courses, and this year the workshop aims to continue the tradition of bringing together both database teachers and researchers, in order to share good learning, teaching and assessment practice and experience, and to further the growing community amongst database academics. As well as attracting academics from the UK community, the workshop has also been successful in attracting academics from the wider international community, through serving on the programme committee, and attending and presenting papers. Due to the healthy number of high-quality submissions this year, the workshop will present eight peer-reviewed papers. Of these, six will be presented as full papers and two as short papers. These papers cover a number of themes, including the teaching of data mining and data warehousing, databases and the cloud, and novel uses of technology in teaching and assessment. It is expected that these papers will stimulate discussion at the workshop itself and beyond. This year, the focus on providing a forum for discussion is enhanced through a panel discussion on assessment in database modules, with David Nelson (of the University of Sunderland), Al Monger (of Southampton Solent University) and Charles Boisvert (of Sheffield Hallam University) as the expert panel.

    Method of Information Security Risk Analysis for Virtualized System

    The growing use of Information Technology (IT) in the daily operations of enterprises has placed the value, and the vulnerability, of information at the peak of interest. Moreover, distributed computing has revolutionized the outsourcing of computing functions, allowing flexible IT solutions. Since the concept of information goes beyond traditional text documents, reaching into manufacturing, machine control and, to a certain extent, reasoning, maintaining appropriate information security is a great responsibility. Information Security (IS) risk analysis and maintenance require extensive knowledge of the assets possessed, as well as of the technologies behind them, in order to recognize the threats and vulnerabilities the infrastructure faces. A formal way of describing the infrastructure, the Enterprise Architecture (EA), offers a multi-perspective view of the whole enterprise, linking together business processes and the infrastructure. Several IS risk analysis solutions based on the EA exist; however, the lack of IS risk analysis methods for virtualization technologies complicates the procedure, reducing the availability of such analysis. The dissertation consists of an introduction, three main chapters and general conclusions. The first chapter introduces the problem of information security risk analysis and its automation, and discusses state-of-the-art methodologies and their implementations for automated information security risk analysis. The second chapter proposes a novel method for the risk analysis of virtualization components based on the most recent data, including threat classification and specification, control means and impact metrics. The third chapter presents an experimental evaluation of the proposed method, implementing it in the Cyber Security Modeling Language (CySeMoL) and comparing the analysis results to well-calibrated expert knowledge. It is concluded that the automation of virtualization solution risk analysis provides sufficient data for the adjustment and implementation of security controls to maintain an optimum security level.
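    To make the shape of such an analysis concrete, here is a toy risk calculation over virtualization components. The component list, the likelihood and impact numbers, and the per-control discount are all invented for illustration and have no connection to CySeMoL's actual probabilistic model.

```python
# Toy example: score each virtualization component as
# risk = likelihood * impact, discounted by deployed controls.
components = {
    # name: (attack likelihood 0-1, impact 1-10, controls in place)
    "hypervisor": (0.2, 9, {"patching", "secure-boot"}),
    "vm-guest": (0.5, 6, {"patching"}),
    "virtual-switch": (0.4, 7, set()),
}

# Assume each deployed control cuts the likelihood by a fixed factor.
CONTROL_DISCOUNT = 0.7

def risk(likelihood, impact, controls):
    return likelihood * (CONTROL_DISCOUNT ** len(controls)) * impact

# Rank components so the riskiest receive controls first.
ranked = sorted(
    ((name, risk(*params)) for name, params in components.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name:15s} risk={score:.2f}")
```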