Formally Specifying and Proving Operational Aspects of Forensic Lucid in Isabelle
A Forensic Lucid intensional programming language has been proposed for
intensional cyberforensic analysis. In large part, the language is based on
various predecessor and codecessor Lucid dialects bound by the higher-order
intensional logic (HOIL) that is behind them. This work formally specifies the
operational aspects of the Forensic Lucid language and compiles a theory of its
constructs using Isabelle, a proof assistant system.

Comment: 23 pages, 3 listings, 3 figures, 1 table, 1 Appendix with theorems,
pp. 76--98. TPHOLs 2008 Emerging Trends Proceedings, August 18-21, Montreal,
Canada. Editors: Otmane Ait Mohamed, Cesar Munoz, and Sofiene Tahar. The
individual paper's PDF is at
http://users.encs.concordia.ca/~tphols08/TPHOLs2008/ET/76-98.pd
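To give a rough flavour of what mechanizing a language's operational semantics in a proof assistant looks like, the following is a minimal sketch in Lean (the paper itself uses Isabelle; the tiny expression language and all names here are hypothetical illustrations, not Forensic Lucid constructs):

```lean
-- Hypothetical fragment: a tiny expression language and its evaluator,
-- sketching the style of a mechanized operational semantics.
inductive Expr where
  | lit : Nat → Expr
  | add : Expr → Expr → Expr

def eval : Expr → Nat
  | .lit n   => n
  | .add a b => eval a + eval b

-- A small property about the semantics, proved by computation.
theorem eval_add (a b : Expr) : eval (.add a b) = eval a + eval b := rfl
```

A full mechanization would define the language's syntax and evaluation rules in this style and then compile a theory of proved properties about them.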
Towards Improving Validation, Verification, Crash Investigations, and Event Reconstruction of Flight-Critical Systems with Self-Forensics
This paper introduces a novel concept of self-forensics to complement the
standard autonomic self-CHOP properties of self-managed systems, to be
specified in the Forensic Lucid language. We argue that self-forensics, with
the forensics taken out of the cybercrime domain, is applicable to
"self-dissection" of autonomous software and hardware in flight-critical
systems, enabling automated incident and anomaly analysis and event
reconstruction by engineering teams in a variety of incident scenarios,
during design and testing as well as on actual flight data.

Comment: 10 pages; a white discussion paper submitted in response to NASA's
RFI NNH09ZEA001L at
http://prod.nais.nasa.gov/cgi-bin/eps/synopsis.cgi?acqid=13449
Validating digital forensic evidence
This thesis was submitted for the degree of Doctor of Philosophy and awarded by
Brunel University.

This dissertation focuses on the forensic validation of computer evidence. It is
a burgeoning field, by necessity, and there have been significant advances in
the detection and gathering of evidence related to electronic crimes. What makes
the computer forensics field similar to other forensic fields is that
considerable emphasis is placed on the validity of the digital evidence. It is
not just the methods used to collect the evidence that are a concern. Also
problematic is that perpetrators of digital crimes may engage in what is called
anti-forensics: digital forensic evidence techniques are deliberately thwarted
and corrupted by those under investigation. In traditional forensics the link
between evidence and a perpetrator's actions is often straightforward: a
fingerprint on an object indicates that someone has touched the object.
Anti-forensic activity would be the equivalent of having the ability to change
the nature of the fingerprint before, or during, the investigation, thus making
the forensic evidence collected invalid or less reliable. This thesis reviews
existing security models and digital forensics, paying particular attention to
anti-forensic activity that affects the validity of data collected in the form
of digital evidence. The thesis builds on the current models in this field and
suggests a tentative first-step model to manage and detect the possibility of
anti-forensic activity. The model is concerned with stopping anti-forensic
activity, and thus is not a forensic model in the normal sense; it is what will
be called a "meta-forensic" model. A meta-forensic approach is one intended to
stop attempts to invalidate digital forensic evidence. This thesis proposes a
formal procedure and guides forensic examiners to look at evidence in a
meta-forensic way.
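One basic building block of evidence validation is an integrity check: recording a cryptographic digest of each evidence item at acquisition and re-checking it later, so that anti-forensic tampering becomes detectable. The following is a minimal sketch of that idea (the evidence bytes and names here are hypothetical, not the thesis's model):

```python
import hashlib

def fingerprint(evidence: bytes) -> str:
    """SHA-256 digest recorded when the evidence is first acquired."""
    return hashlib.sha256(evidence).hexdigest()

def verify(evidence: bytes, recorded_digest: str) -> bool:
    """Re-hash the item and compare with the digest on record."""
    return fingerprint(evidence) == recorded_digest

# Hypothetical evidence item: record a digest at acquisition, re-check later.
original = b"disk image, sector 42"
digest = fingerprint(original)

assert verify(original, digest)               # untouched evidence validates
assert not verify(b"tampered bytes", digest)  # any alteration is detected
```

A meta-forensic model goes further than this, of course: it must also protect the recorded digests themselves and detect interference with the collection process.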
Toward Refactoring of DMARF and GIPSY Case Studies -- a Team 12 SOEN6471-S14 Project Report
This document covers two source systems: the General Intensional Programming
System (GIPSY) and the Distributed Modular Audio Recognition Framework (DMARF).
GIPSY supports intensional programming languages and enables further practical
investigation of the subject. DMARF focuses on the software architectural
design and implementation of distributed audio recognition and its
applications, such as speaker identification, which can run distributively on a
web services architecture. The report highlights security aspects in a
distributed system, in particular the Java Data Security Framework (JDSF) in
DMARF. The Autonomic System Specification Language (ASSL) framework is used to
integrate a self-optimizing property into DMARF. GIPSY is based on Higher-Order
Intensional Logic (HOIL) and reflects three main goals: generality,
adaptability, and efficiency.

Comment: 35 pages
An overview of decision table literature 1982-1995.
This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords, and a classification are provided. In some cases our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.
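For readers unfamiliar with the construct being surveyed: a decision table maps each combination of condition outcomes to an action, making a branching policy explicit and checkable for completeness. A minimal sketch, with an entirely hypothetical discount policy as the example:

```python
# A decision table: every combination of condition outcomes maps to an action.
# Hypothetical policy driven by two conditions (membership, order size).
decision_table = {
    # (is_member, large_order) -> discount rate
    (True,  True):  0.15,
    (True,  False): 0.10,
    (False, True):  0.05,
    (False, False): 0.00,
}

def discount(is_member: bool, order_total: float) -> float:
    """Evaluate the conditions and look up the corresponding action."""
    return decision_table[(is_member, order_total > 100)]

assert discount(True, 250.0) == 0.15
assert discount(False, 40.0) == 0.00
```

Because the table enumerates all four condition combinations, completeness and consistency of the policy can be verified by inspection, which is one reason decision tables appear across so many application areas.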
A platform for discovering and sharing confidential ballistic crime data.
Criminal investigations generate large volumes of complex data that detectives have to analyse and understand. This data tends to be "siloed" within individual jurisdictions, and re-using it in other investigations can be difficult. Investigations into trans-national crimes are hampered by the problem of discovering relevant data held by agencies in other countries and of sharing those data. Gun crimes are one major type of incident that showcases this: guns are easily moved across borders and used in multiple crimes, but finding that a weapon was used elsewhere in Europe is difficult. In this paper we report on the Odyssey Project, an EU-funded initiative to mine, manipulate and share data about weapons and crimes. The project demonstrates the automatic combining of data from disparate repositories for cross-correlation and automated analysis. The data arrive from different cultures and domains with multiple reference models, using real-time data feeds and historical databases.
Automatic IQ estimation using stylometry methods.
Stylometry is the study of the linguistic properties of text that brings together various fields of research such as statistics, linguistics, computer science and more. Stylometry methods have been used for historical investigation, as forensic evidence, and as an educational tool. This thesis presents a method to automatically estimate an individual's IQ based on quality of writing and discusses the challenges associated with it. The method utilizes various text features and NLP techniques to calculate metrics which are used to estimate the individual's IQ. The results show a high degree of correlation between expected and estimated IQs in cases where the IQ is within the average range. Obtaining good estimates for IQs on the high and low ends of the spectrum proves to be more challenging, and this work offers several reasons for that. Over the years stylometry has benefitted from wide exposure and interest among researchers; however, it appears that there are not many studies that focus on using stylometry methods to estimate an individual's intelligence. Perhaps this work presents the first in-depth attempt to do so.
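The pipeline described above starts by extracting quantitative features from the text. As a rough illustration of the kind of metrics involved, the sketch below computes three classic stylometric features (the specific features and the sample text are illustrative assumptions, not the thesis's actual feature set):

```python
import re

def stylometric_features(text: str) -> dict:
    """A few simple stylometric metrics; real systems use many more features."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "avg_sentence_length": len(words) / len(sentences),
        "type_token_ratio": len({w.lower() for w in words}) / len(words),
    }

sample = "Stylometry studies linguistic style. It quantifies how a text is written."
feats = stylometric_features(sample)
print(feats)
```

A regression model fitted on such feature vectors could then map a writing sample to an estimated score, which is the general shape of the estimation approach the abstract describes.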
SoK: Attacks on Industrial Control Logic and Formal Verification-Based Defenses
Programmable Logic Controllers (PLCs) play a critical role in industrial
control systems. Vulnerabilities in PLC programs might lead to attacks with
devastating consequences for critical infrastructure, as shown by Stuxnet and
similar attacks. In recent years, we have seen an exponential increase in
vulnerabilities reported for PLC control logic. Looking back on past research,
we found extensive studies exploring control logic modification attacks, as
well as formal verification-based security solutions. We systematized these
studies and found attacks that can compromise a full chain of control and
evade detection. However, the majority of the formal verification research
investigated ad-hoc techniques targeting PLC programs. We identified challenges
in every aspect of formal verification, arising from (1) the ever-expanding
attack surface of evolved system designs, (2) the real-time constraints on
program execution, and (3) the barrier to security evaluation given
proprietary and vendor-specific dependencies of different techniques. Based on
this knowledge systematization, we provide a set of recommendations for future
research directions, and we highlight the need to defend against security
issues in addition to safety issues.

Comment: 18 pages w/ ref; SoK, PLC, ICS, CPS, attack, formal verification
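Formal verification of control logic typically means checking that a safety invariant holds in every reachable state of the controller-plus-plant model. The sketch below illustrates that idea on a toy tank controller with exhaustive state exploration (the model and names are hypothetical illustrations, not any particular tool from the surveyed literature):

```python
# Toy model of PLC control logic: a tank with a pump and a high-level sensor.

def step(level: int, pump_on: bool) -> tuple:
    """One PLC scan cycle: read sensor, run control logic, update the plant."""
    sensor_high = level >= 8          # sensor trips near the top of the tank
    pump_on = not sensor_high         # control logic: pump off when level high
    level = level + 1 if pump_on else max(0, level - 1)
    return level, pump_on

def violates_safety(level: int) -> bool:
    return level > 9                  # overflow threshold

# Exhaustively explore states reachable from an empty tank and check the
# safety invariant in every one of them.
frontier, seen = [(0, False)], set()
while frontier:
    state = frontier.pop()
    if state in seen:
        continue
    seen.add(state)
    nxt = step(*state)
    assert not violates_safety(nxt[0]), f"overflow reachable in state {nxt}"
    frontier.append(nxt)

print(f"{len(seen)} reachable states; safety invariant holds")
```

Real PLC verification must additionally model timers, scan-cycle deadlines, and vendor-specific runtime behaviour, which is precisely where the challenges catalogued in this SoK arise.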