Conformance checking of a longwall shearer operation based on low-level events
Conformance checking is a process mining technique that compares a process model with an event log of the same process to check whether the current execution stored in the log conforms to the model and vice versa. This paper deals with the conformance checking of a longwall shearer process. The approach uses place-transition Petri nets with inhibitor arcs for modeling purposes. We use event log files collected from a few coal mines located in Poland by Famur S.A., one of the global suppliers of coal mining machines. One of the main advantages of the approach is the possibility for both offline and online analysis of the log data. The paper presents a detailed description of the longwall process, an original formal model we developed, selected elements of the approach's implementation and the results of experiments
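The replay idea behind conformance checking can be sketched on a toy place-transition net. This is a minimal illustration only: the net, the event names, and the marking are invented for the example and bear no relation to the authors' longwall model.

```python
# Minimal token-replay conformance check on a place-transition net with
# inhibitor arcs. Toy model only, not the paper's Famur longwall net.
from collections import Counter

class PetriNet:
    def __init__(self, transitions):
        # transitions: name -> (consume: place->tokens,
        #                       produce: place->tokens,
        #                       inhibit: set of places that must be empty)
        self.transitions = transitions

    def fire(self, marking, t):
        consume, produce, inhibit = self.transitions[t]
        if any(marking[p] for p in inhibit):
            return None                      # inhibitor arc blocks firing
        if any(marking[p] < n for p, n in consume.items()):
            return None                      # not enough tokens to fire
        m = Counter(marking)
        for p, n in consume.items():
            m[p] -= n
        for p, n in produce.items():
            m[p] += n
        return m

def conforms(net, initial_marking, trace):
    """Replay an event-log trace; True iff every event fires in order."""
    m = Counter(initial_marking)
    for event in trace:
        if event not in net.transitions:
            return False
        m = net.fire(m, event)
        if m is None:
            return False
    return True

# Toy process: the shearer must start before cutting; stop ends the cycle.
net = PetriNet({
    "start": ({"idle": 1}, {"running": 1}, set()),
    "cut":   ({"running": 1}, {"running": 1}, set()),
    "stop":  ({"running": 1}, {"idle": 1}, set()),
})
print(conforms(net, {"idle": 1}, ["start", "cut", "stop"]))  # True
print(conforms(net, {"idle": 1}, ["cut", "stop"]))           # False
```

Because each log event is checked as it is replayed, the same loop works offline over a complete log file or online over a stream of low-level events.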
Estimating time between creation and achievement of knowledge objects in learning groups through social network analysis
Networked collaboration performed over specific platforms designed for such purposes can provide knowledge about roles, intentions and effects regarding participants, their interaction among themselves and the interaction with the available knowledge objects. This study aims to propose a mechanism for discovering temporal behaviour underlying the raw data collected in log files from e-learning activity in specific platforms. The proposal is based on measuring and, subsequently, estimating time spans through social network analysis (SNA). The main focus of this work is to match different temporal behaviours, shown during collaborative learning, with formal profiles identified inside a complex network of interactions. The final goal is to define a concrete mechanism to measure the response of participants, from the perspective that knowledge objects have been created by the partners in the same learning group
The challenge of supporting networked personal inquiry learning across contexts
Supporting learning across different contexts can be challenging. Defining formal, informal and nonformal learning is the subject of continuing debate as each can be difficult to describe. We report on a study that evaluated the effectiveness of a Personal Inquiry toolkit on supporting personal inquiries into the sustainability of the food cycle, carried out across the contexts of home and an after school club in a UK secondary school. The toolkit consisted of a web-based Sustainability Investigator that could be accessed from any location, together with a selection of data-gathering tools such as environmental sensors (e.g. temperature probes) and cameras. It was designed to support students through the process of carrying out inquiries within the club and between the club and their home. Our main focus here is on describing how the Sustainability Investigator supported students' inquiries that were conceived and designed within the club and conducted at home. The 30 students (aged 12-14 years) chose to investigate home food storage, packaging and preservation. Our focus is on exploring the nature of the semi-formal club context and how this mediated students' use of the Sustainability Investigator. Analysis of our field notes, log files of students' use of the Sustainability Investigator, together with video and audio recordings of club sessions and interviews with teachers and pupils, suggests that while the pupils' use of the toolkit across contexts was sporadic and varied between students, they successfully completed personally relevant inquiries and developed positive attitudes to the process. This was different to the predictable, sustained and consistent use of the toolkit identified in our previous studies when the students used it (again successfully) to support their inquiries in a formal classroom setting (see e.g. Scanlon et al. 2009). Three main features of the
school club context that mediated the ways in which the Sustainability Investigator was used by the students across contexts were: 1) the students' aims and priorities, 2) affordances and constraints of the technology, and 3) institutional priorities. We use this example of a study of learning across contexts to suggest implications of the work for the potential of a Personal Inquiry toolkit to support learning across the life course
Advanced Cryptographic Techniques for Protecting Log Data
This thesis examines cryptographic techniques providing security for computer log files.
It focuses on ensuring authenticity and integrity, i.e. the properties of having been created by a specific entity and being unmodified.
Confidentiality, the property of being unknown to unauthorized entities, will be considered, too, but with less emphasis.
Computer log files are recordings of actions performed and events encountered in
computer systems. While the complexity of computer systems is steadily growing, it is increasingly difficult to predict how a given system will behave under certain
conditions, or to retrospectively reconstruct and explain which events and conditions led to a specific behavior.
Computer log files help to mitigate the problem of retracing a system's behavior retrospectively by providing a (usually chronological) view of events
and actions encountered in a system.
Authenticity and integrity of computer log files are widely recognized security requirements, see e.g. [Latham, ed., "Department of Defense Trusted Computer System Evaluation Criteria", 1985, p. 10], [Kent and Souppaya, "Guide to Computer Security Log Management", NIST Special Publication 800-92, 2006, Section 2.3.2], [Guttman and Roback, "An Introduction to Computer Security: The NIST Handbook", superseded NIST Special Publication 800-12, 1995, Section 18.3.1],
[Nieles et al., "An Introduction to Information Security" , NIST Special Publication 800-12, 2017, Section 9.3], [Common Criteria Editorial Board, ed., "Common Criteria for Information Technology Security Evaluation", Part 2, Section 8.6].
Two commonly cited ways to ensure integrity of log files are to store log data on so-called write-once-read-many-times (WORM) drives and to immediately print log records on a continuous-feed printer.
This guarantees that log data cannot be retroactively modified by an attacker without physical access to the storage medium.
However, such special-purpose hardware may not always be a viable option for the application at hand, for example because it may be too costly.
In such cases, the integrity and authenticity of log records must be ensured via other means, e.g. with cryptographic techniques. Although these techniques cannot prevent the modification of log data, they can offer strong guarantees that modifications will be detectable, while being implementable in software.
Furthermore, cryptography can be used to achieve public verifiability of log files, which may be needed in applications that have strong transparency requirements. Cryptographic techniques can even be used in addition to hardware solutions, providing protection against attackers who do have physical access
to the logging hardware, such as insiders.
Cryptographic schemes for protecting stored log data need to be resilient against attackers who obtain control over the computer storing the log data.
If this computer operates in a standalone fashion, it is an absolute requirement for the cryptographic schemes to offer security even in the event of a key compromise.
As this is impossible with standard cryptographic tools, cryptographic solutions for protecting log data typically make use of forward-secure schemes, guaranteeing that changes to log data recorded in the past can be detected. Such schemes use a sequence of authentication keys instead of a single one, where previous keys cannot be computed efficiently from later ones.
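The key-evolution idea can be sketched with a simple hash-chain construction. This is an illustrative simplification, not the thesis's construction: each record is authenticated with the current key, which is then replaced by its one-way hash and erased, so an attacker who compromises the current key cannot recompute earlier keys and forge past records undetectably.

```python
# Sketch of forward-secure log authentication via an evolving key chain.
# Simplified illustration; key names and derivation are assumptions.
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    # One-way step: the previous key cannot feasibly be recovered from this.
    return hashlib.sha256(b"evolve" + key).digest()

def append(key: bytes, log: list, record: bytes) -> bytes:
    tag = hmac.new(key, record, hashlib.sha256).digest()
    log.append((record, tag))
    return evolve(key)            # caller must discard the old key

def verify(initial_key: bytes, log: list) -> bool:
    key = initial_key
    for record, tag in log:
        expected = hmac.new(key, record, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        key = evolve(key)
    return True

k0 = b"\x00" * 32                 # in practice: fresh random key known to the verifier
log, key = [], k0
for rec in [b"boot", b"login alice", b"shutdown"]:
    key = append(key, log, rec)

print(verify(k0, log))            # True
log[1] = (b"login mallory", log[1][1])
print(verify(k0, log))            # False: a past record was modified
```

Note that this toy version still needs additional machinery (as the thesis discusses) to detect truncation, since deleting the tail of the log and the corresponding key states leaves a consistent prefix.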
This thesis considers the following requirements for, and desirable features of, cryptographic logging schemes:
1) security, i.e. the ability to reliably detect violations of integrity and authenticity, including detection of log truncations,
2) efficiency regarding both computational and storage overhead,
3) robustness, i.e. the ability to verify unmodified log entries even if others have been illicitly changed, and
4) verifiability of excerpts, including checking an excerpt for omissions.
The goals of this thesis are to devise new techniques for the construction of cryptographic schemes that provide security for computer log files, to give concrete constructions of such schemes, to develop new models that can accurately capture the security guarantees offered by the new schemes, as well as to examine the security of previously published schemes.
This thesis demands that cryptographic schemes for securely storing log data must be able to detect if log entries have been deleted from a log file. A special case of deletion is log truncation, where a continuous subsequence of log records from the end of the log file is deleted.
Obtaining truncation resistance, i.e. the ability to detect truncations, is one of the major difficulties when designing cryptographic logging schemes.
This thesis alleviates this problem by introducing a novel technique to detect log truncations without the help of third parties or designated logging hardware.
Moreover, this work presents new formal security notions capturing truncation resistance.
The technique mentioned above is applied to obtain cryptographic logging schemes which can be shown to satisfy these notions under mild assumptions, making them the first schemes with formally proven truncation security.
Furthermore, this thesis develops a cryptographic scheme for the protection of log files which can support the creation of excerpts.
For this thesis, an excerpt is a (not necessarily contiguous) subsequence of records from a log file.
Excerpts created with the scheme presented in this thesis can be publicly checked for integrity and authenticity (as explained above) as well as for completeness, i.e. the property that no relevant log entry has been omitted from the excerpt.
Excerpts provide a natural way to preserve the confidentiality of information that is contained in a log file, but not of interest for a specific public analysis of the log file, enabling the owner of the log file to meet confidentiality and transparency requirements at the same time.
The scheme demonstrates and exemplifies the technique for obtaining truncation security mentioned above.
Since cryptographic techniques to safeguard log files usually require authenticating log entries individually, some researchers [Ma and Tsudik, "A New Approach to Secure Logging", LNCS 5094, 2008; Ma and Tsudik, "A New Approach to Secure Logging", ACM TOS 2009; Yavuz and Peng, "BAF: An Efficient Publicly Verifiable Secure Audit Logging Scheme for Distributed Systems", ACSAC 2009] have proposed using aggregatable signatures [Boneh et al., "Aggregate and Verifiably Encrypted Signatures from Bilinear Maps", EUROCRYPT 2003] in order to reduce the overhead in storage space incurred by using such a cryptographic scheme.
Aggregation of signatures refers to some "combination" of any number of signatures (for distinct or equal messages, by distinct or identical signers) into an "aggregate" signature. The size of the aggregate signature should be less than the total size of the original signatures, ideally the size of a single original signature.
Using aggregation of signatures in applications that require storing or transmitting a large number of signatures (such as the storage of log
records) can lead to significant reductions in the use of storage space and bandwidth.
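The storage saving can be illustrated with a toy MAC-based scheme that folds per-record tags into a single constant-size value, in the spirit of the hash-chained FssAgg MACs cited above. This is a deliberately simplified sketch; real aggregate signatures (e.g. the bilinear-map scheme of Boneh et al.) work very differently.

```python
# Toy illustration of aggregation's storage saving: fold per-record MAC
# tags into one 32-byte running hash. Key and record format are invented.
import hashlib
import hmac

KEY = b"demo-key"

def mac(record: bytes) -> bytes:
    return hmac.new(KEY, record, hashlib.sha256).digest()

def aggregate(records) -> bytes:
    agg = b"\x00" * 32
    for r in records:
        agg = hashlib.sha256(agg + mac(r)).digest()   # fold in each tag
    return agg

records = [f"event {i}".encode() for i in range(10_000)]
agg = aggregate(records)

# 10,000 individual 32-byte tags would occupy 320,000 bytes;
# the aggregate occupies 32, regardless of the number of records.
print(len(agg))                      # 32

tampered = list(records)
tampered[5] = b"forged"
print(aggregate(tampered) == agg)    # False: any modification is detected
```

A verifier holding the key recomputes the fold over the stored records and compares it against the stored aggregate.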
However, aggregating the signatures for all log records into a single signature causes some fragility: the modification of a single log entry renders the aggregate signature invalid, preventing the cryptographic verification of any part of the log file.
Yet being able to distinguish manipulated log entries from non-manipulated ones may be of importance for after-the-fact investigations.
This thesis addresses this issue by presenting a new technique providing a trade-off between storage overhead and robustness, i.e. the ability to tolerate some modifications to the log file while preserving the cryptographic verifiability of unmodified log entries.
This robustness is achieved by the use of a special kind of aggregate signatures (called fault-tolerant aggregate signatures), which contain some redundancy.
The construction makes use of combinatorial methods guaranteeing that if the number of errors is below a certain threshold, then there will be enough redundancy to identify and verify the non-modified log entries.
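The redundancy idea can be sketched with a much simpler combinatorial layout than the thesis actually uses (which is based on cover-free families): arrange the tags in a grid and keep one aggregate per row and per column. A single corrupted record then invalidates only its own row and column aggregates, and every other record is still covered by at least one valid aggregate. Grid shape, key, and tag construction here are all invented for illustration.

```python
# Sketch of fault tolerance via redundant aggregates: one aggregate per
# grid row and per grid column. Toy scheme, not the thesis's construction.
import hashlib
import hmac

KEY = b"demo-key"
COLS = 4

def mac(r: bytes) -> bytes:
    return hmac.new(KEY, r, hashlib.sha256).digest()

def fold(tags) -> bytes:
    agg = b"\x00" * 32
    for t in tags:
        agg = hashlib.sha256(agg + t).digest()
    return agg

def make_aggregates(records):
    rows, cols = {}, {}
    for i, r in enumerate(records):
        rows.setdefault(i // COLS, []).append(mac(r))
        cols.setdefault(i % COLS, []).append(mac(r))
    return ({k: fold(v) for k, v in rows.items()},
            {k: fold(v) for k, v in cols.items()})

def verified_indices(records, row_aggs, col_aggs):
    """A record counts as verified if at least one covering aggregate matches."""
    new_rows, new_cols = make_aggregates(records)
    ok = set()
    for i in range(len(records)):
        if (new_rows[i // COLS] == row_aggs[i // COLS]
                or new_cols[i % COLS] == col_aggs[i % COLS]):
            ok.add(i)
    return ok

records = [f"event {i}".encode() for i in range(12)]
row_aggs, col_aggs = make_aggregates(records)
records[5] = b"forged"                      # tamper with one entry
good = verified_indices(records, row_aggs, col_aggs)
print(sorted(set(range(12)) - good))        # [5]: only the modified entry fails
```

The trade-off is visible in the sketch: storing row and column aggregates costs more space than one aggregate, but far less than one tag per record, while tolerating the single fault. Cover-free families generalize this to larger fault thresholds.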
Finally, this thesis presents a total of four attacks on three different schemes intended for securely storing log files presented in the literature [Yavuz et al., "Efficient, Compromise Resilient and Append-Only Cryptographic Schemes for Secure Audit Logging", Financial Cryptography 2012; Ma, "Practical Forward Secure Sequential Aggregate Signatures", ASIACCS 2008].
The attacks allow for virtually arbitrary log file forgeries or even recovery of the secret key used for authenticating the log file, which could then be used for mostly arbitrary log file forgeries, too.
All of these attacks exploit weaknesses of the specific schemes. Three of the attacks presented here contradict the security properties claimed, and supposedly proven, by the schemes' respective authors. This thesis briefly discusses these proofs and points out their flaws.
The fourth attack presented here is outside of the security model considered by the scheme's authors, but nonetheless represents a realistic threat.
In summary, this thesis advances the scientific state-of-the-art with regard to providing security for computer log files in a number of ways:
1) by introducing a new technique for obtaining security against log truncations,
2) by providing the first scheme where excerpts from log files can be verified for completeness,
3) by describing the first scheme that can achieve some notion of robustness while being able to aggregate log record signatures, and
4) by analyzing the security of previously proposed schemes
Metamorphic Code Generation from LLVM IR Bytecode
Metamorphic software changes its internal structure across generations with its functionality remaining unchanged. Metamorphism has been employed by malware writers as a means of evading signature detection and other advanced detection strategies. However, code morphing also has potential security benefits, since it increases the "genetic diversity" of software. In this research, we have created a metamorphic code generator within the LLVM compiler framework. LLVM is a three-phase compiler that supports multiple source languages and target architectures. It uses a common intermediate representation (IR) bytecode in its optimizer. Consequently, any supported high-level programming language can be transformed to this IR bytecode as part of the LLVM compilation process. Our metamorphic generator functions at the IR bytecode level, which provides many advantages over previously developed metamorphic generators. The morphing techniques that we employ include dead code insertion, where the dead code is actually executed within the morphed code, and subroutine permutation. We have tested the effectiveness of our code morphing using hidden Markov model analysis
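Dead code insertion, one of the two morphing techniques named in the abstract, can be sketched as a toy pass over a made-up three-address IR. This sketch does not use the actual LLVM API or real LLVM IR semantics; the instruction strings and probabilities are invented for illustration.

```python
# Toy morphing pass: insert executable-but-dead computations on fresh
# temporaries between instructions. Pseudo-IR only, not real LLVM bytecode.
import random

def insert_dead_code(instructions, rng):
    """Insert arithmetic on fresh, otherwise-unused temporaries.
    The dead code really executes but never affects program outputs."""
    out = []
    for ins in instructions:
        if rng.random() < 0.5:
            t = f"%dead{rng.randrange(1000)}"
            out.append(f"{t} = add i32 {rng.randrange(100)}, {rng.randrange(100)}")
        out.append(ins)
    return out

prog = ["%1 = add i32 %a, %b", "%2 = mul i32 %1, %1", "ret i32 %2"]
morphed = insert_dead_code(prog, random.Random(0))

# The functional instructions survive unchanged and in order;
# only new dead temporaries are interleaved between them.
kept = [i for i in morphed if not i.split(" = ")[0].startswith("%dead")]
print(kept == prog)   # True
```

Each generation seeded with a different random state yields a structurally different but functionally identical program, which is the property signature-based detectors struggle with.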
An Entry Point for Formal Methods: Specification and Analysis of Event Logs
Formal specification languages have long languished, due to the grave
scalability problems faced by complete verification methods. Runtime
verification promises to use formal specifications to automate part of the more
scalable art of testing, but has not been widely applied to real systems, and
often falters due to the cost and complexity of instrumentation for online
monitoring. In this paper we discuss work in progress to apply an event-based
specification system to the logging mechanism of the Mars Science Laboratory
mission at JPL. By focusing on log analysis, we exploit the "instrumentation"
already implemented and required for communicating with the spacecraft. We
argue that this work both shows a practical method for using formal
specifications in testing and opens interesting research avenues, including a
challenging specification learning problem
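The flavour of such log-based checks can be sketched as a response property evaluated offline over an already-recorded event log, which is the paper's point: no new instrumentation is needed. The event kinds and log format below are invented for illustration, not MSL telemetry.

```python
# Toy offline log check in the runtime-verification spirit: the response
# property "every CMD is eventually followed by a matching ACK".
def check_response(log):
    """Return the set of command identifiers issued but never acknowledged."""
    pending = set()
    for kind, ident in log:
        if kind == "CMD":
            pending.add(ident)
        elif kind == "ACK":
            pending.discard(ident)
    return pending        # empty set means the property holds on this log

log = [("CMD", "deploy-arm"), ("ACK", "deploy-arm"),
       ("CMD", "take-image"), ("CMD", "drive"), ("ACK", "drive")]
print(check_response(log))   # {'take-image'}: the property is violated
```

A specification language over events, as described in the paper, generalizes this hand-written loop to arbitrary temporal patterns compiled into such monitors.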
Data conversion and interoperability for FCA
This paper proposes a tool that converts non-FCA format data files into an FCA format, thereby making a wide range of public data sets and data produced by non-FCA tools interoperable with FCA tools. This will also offer the power of FCA to a wider community of data analysts. A repository of converted data is also proposed, as a consistent resource of public data for analysis and for the testing, evaluation and comparison of FCA tools and algorithms.
Supporting ethnographic studies of ubiquitous computing in the wild
Ethnography has become a staple feature of IT research over the last twenty years, shaping our understanding of the social character of computing systems and informing their design in a wide variety of settings. The emergence of ubiquitous computing raises new challenges for ethnography however, distributing interaction across a burgeoning array of small, mobile devices and online environments which exploit invisible sensing systems. Understanding interaction requires ethnographers to reconcile interactions that are, for example, distributed across devices on the street with online interactions in order to assemble coherent understandings of the social character and purchase of ubiquitous computing systems. We draw upon four recent studies to show how ethnographers are replaying system recordings of interaction alongside existing resources such as video recordings to do this and identify key challenges that need to be met to support ethnographic study of ubiquitous computing in the wild
Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality
We survey diverse approaches to the notion of information: from Shannon
entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov
complexity are presented: randomness and classification. The survey is divided
into two parts published in the same volume. Part II is dedicated to the relation
between logic and information system, within the scope of Kolmogorov
algorithmic information theory. We present a recent application of Kolmogorov
complexity: classification using compression, an idea with provocative
implementation by authors such as Bennett, Vitanyi and Cilibrasi. This stresses
how Kolmogorov complexity, besides being a foundation to randomness, is also
related to classification. Another approach to classification is also
considered: the so-called "Google classification". It uses another original and
attractive idea which is connected to the classification using compression and
to Kolmogorov complexity from a conceptual point of view. We present and unify
these different approaches to classification in terms of Bottom-Up versus
Top-Down operational modes, whose fundamental principles and underlying
duality we point out. We look at the way these two dual modes are used in
different approaches to information systems, particularly the relational model
for databases introduced by Codd in the 1970s. This allows us to point out diverse
forms of a fundamental duality. These operational modes are also reinterpreted
in the context of the comprehension schema of axiomatic set theory ZF. This
leads us to develop how Kolmogorov complexity is linked to intensionality,
abstraction, classification and information systems.
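The classification-by-compression idea surveyed here is concretely realized by the Normalized Compression Distance of Cilibrasi and Vitányi, which replaces the uncomputable Kolmogorov complexity with a real compressor. A minimal sketch using zlib (the choice of compressor and sample strings are just for illustration):

```python
# Normalized Compression Distance: NCD(x, y) =
#   (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# where C(.) is the compressed length under a real compressor.
import zlib

def C(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = b"the quick brown fox jumps over the lazy dog " * 20
s2 = b"the quick brown fox jumps over the lazy cat " * 20
s3 = b"import numpy as np; x = np.arange(10) ** 2 " * 20

# Similar strings compress well together, giving a smaller distance.
print(ncd(s1, s2) < ncd(s1, s3))   # True
```

Clustering objects by pairwise NCD values is the compression-based classification method attributed to Bennett, Vitányi and Cilibrasi in the survey; the "Google classification" replaces compressed lengths with search-engine hit counts in an analogous formula.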