3,362 research outputs found
Using a Goal-Driven Approach in the Investigation of a Questioned Contract
Part 3: Forensic Techniques
This paper presents a systematic process for describing digital forensic investigations. It focuses on forensic goals and anti-forensic obstacles, and on their operationalization in terms of human and software actions. The paper also demonstrates how the process can be used to capture the various forensic and anti-forensic aspects of a real-world case involving document forgery.
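The abstract's structure of forensic goals, anti-forensic obstacles, and operationalizing actions can be sketched as a small data model; the class names and the sample case data below are hypothetical illustrations, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A human or software action that advances a forensic goal."""
    name: str
    actor: str  # "human" or "software"

@dataclass
class Goal:
    """A forensic goal, the anti-forensic obstacles blocking it,
    and the actions that operationalize it."""
    description: str
    obstacles: list = field(default_factory=list)
    actions: list = field(default_factory=list)

# Hypothetical fragment of a questioned-contract case
goal = Goal(
    description="Establish authorship of the disputed contract file",
    obstacles=["Document metadata wiped"],
    actions=[Action("Recover shadow copies", "software"),
             Action("Interview document custodian", "human")],
)
print(len(goal.actions))  # 2
```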
Exploring Text Mining and Analytics for Applications in Public Security: An in-depth dive into a systematic literature review
Text mining and related analytics emerge as a technological approach to support human activities in extracting useful knowledge from texts in several formats. From a managerial point of view, it can help organizations in planning and decision-making processes, providing information that was not previously evident in textual materials produced internally or even externally. In this context, within the public/governmental scope, public security agencies are great beneficiaries of the tools associated with text mining, in several respects, from applications in the criminal area to the collection of people's opinions and sentiments about the actions taken to promote their welfare. This article reports details of a systematic literature review focused on identifying the main areas of text mining application in public security, the most recurrent technological tools, and future research directions. The searches covered four major article bases (Scopus, Web of Science, IEEE Xplore, and ACM Digital Library), selecting 194 publications from 2014 through the first half of 2021, spanning journals, conferences, and book chapters. The findings concerning each of these targets are presented in the results of this article.
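As an illustration of the kind of tooling such reviews survey, a minimal TF-IDF scoring sketch is shown below; the toy "incident report" corpus and the whitespace tokenization are invented for the example and do not come from the reviewed literature:

```python
import math

# Toy corpus standing in for public-security texts (hypothetical examples)
docs = [
    "vehicle theft reported near the station",
    "online fraud complaint about stolen card",
    "vehicle break-in reported downtown",
]

def tf_idf(term, doc_tokens, all_docs):
    """Term frequency in one document, weighted by inverse document
    frequency across the corpus: rarer terms score higher."""
    tf = doc_tokens.count(term) / len(doc_tokens)
    df = sum(1 for d in all_docs if term in d.split())
    return tf * math.log(len(all_docs) / df)

tokens = docs[0].split()
scores = {t: tf_idf(t, tokens, docs) for t in set(tokens)}
# "theft" (unique to one report) outranks "vehicle" (appears in two)
```

In practice a review like this one would catalogue far richer techniques (topic models, embeddings, sentiment classifiers), but they share this core idea of weighting terms by distinctiveness.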
A Cloud Computing Capability Model for Large-Scale Semantic Annotation
Semantic technologies are designed to facilitate context-awareness for web content, enabling machines to understand and process it. However, this effort faces several challenges, such as the disparate nature of existing solutions and a lack of scalability to web scale. Taking a holistic perspective on web content semantic annotation, this paper focuses on leveraging cloud computing to address these challenges. To achieve this, a set of requirements for holistic semantic annotation on the web is defined and mapped to the cloud computing mechanisms that can facilitate them. The technical specification of each requirement is critically reviewed and examined against each cloud computing mechanism's functionality, and a mapping is established wherever a mechanism's functionality offers a way to implement a requirement's technical specification. The result is a cloud computing capability model for holistic semantic annotation, which presents an approach to delivering large-scale semantic annotation on the web via a cloud platform.
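The requirement-to-mechanism mapping the abstract describes can be sketched as a simple coverage check; the requirement and mechanism names below are hypothetical placeholders, not the model's actual entries:

```python
# Hypothetical annotation requirements and cloud mechanisms for illustration
requirements = ["scalable storage", "parallel annotation", "low-latency lookup"]

# Each mechanism maps to the requirements whose technical specification
# its functionality can implement (the paper's mapping criterion).
mechanisms = {
    "object storage": ["scalable storage"],
    "elastic compute cluster": ["parallel annotation"],
}

# A capability model is complete when every requirement is covered.
covered = {req for reqs in mechanisms.values() for req in reqs}
unmet = [r for r in requirements if r not in covered]
print(unmet)  # ['low-latency lookup']
```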
Digital Preservation, Archival Science and Methodological Foundations for Digital Libraries
Digital libraries, whether commercial, public or personal, lie at the heart of the information society. Yet research into their long-term viability and the meaningful accessibility of their contents remains in its infancy. In general, as we have pointed out elsewhere, "after more than twenty years of research in digital curation and preservation the actual theories, methods and technologies that can either foster or ensure digital longevity remain startlingly limited." Research led by DigitalPreservationEurope (DPE) and the Digital Preservation Cluster of DELOS has allowed us to refine the key research challenges (theoretical, methodological and technological) that need attention from researchers in digital libraries during the coming five to ten years, if we are to ensure that the materials held in our emerging digital libraries remain sustainable, authentic, accessible and understandable over time. Building on this work, and taking the theoretical framework of archival science as bedrock, this paper investigates digital preservation and its foundational role if digital libraries are to have long-term viability at the centre of the global information society.
Intensional Cyberforensics
This work focuses on the application of intensional logic to cyberforensic analysis; its benefits and difficulties are compared with those of the finite-state-automata (FSA) approach. It extends the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the FSA approach seen in previous work. As a base implementation language model, we use a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not been previously modeled. The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, and JOOIP, bound by the underlying intensional programming paradigm.
Comment: 412 pages, 94 figures, 18 tables, 19 algorithms and listings; PhD thesis; v2 corrects some typos and refs; also available on Spectrum at http://spectrum.library.concordia.ca/977460
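A loose illustration of the ideas in this abstract (evidence tagged with multidimensional contexts, demand-driven lookup, and credibility factors) is sketched below; the dimension names, sample data, and the multiplicative combination rule are assumptions for illustration only and do not reproduce Forensic Lucid's actual semantics:

```python
# Each observation is a claim tagged with a multidimensional context
# and a credibility factor (hypothetical case data).
observations = [
    {"context": {"machine": "ws1", "time": 3},
     "claim": "file_created", "credibility": 0.9},
    {"context": {"machine": "ws1", "time": 5},
     "claim": "file_deleted", "credibility": 0.4},
]

def evaluate(claim, at, obs):
    """Demand-driven (eductive) lookup: find observations whose context
    matches the demanded dimensions, combining credibility
    multiplicatively (an illustrative choice, not the thesis's rule)."""
    matches = [o for o in obs
               if o["claim"] == claim
               and all(o["context"].get(d) == v for d, v in at.items())]
    cred = 1.0
    for o in matches:
        cred *= o["credibility"]
    return bool(matches), cred

# Demand the claim only along the "machine" dimension
found, cred = evaluate("file_created", {"machine": "ws1"}, observations)
print(found, cred)  # True 0.9
```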
Cybersecurity knowledge graphs
Cybersecurity knowledge graphs, which represent cyber-knowledge with a graph-based data model, provide holistic approaches for processing massive volumes of complex cybersecurity data derived from diverse sources. They can assist security analysts in obtaining cyberthreat intelligence, achieving a high level of cyber-situational awareness, discovering new cyber-knowledge, visualizing networks, data flows, and attack paths, and understanding data correlations by aggregating and fusing data. This paper reviews the most prominent graph-based data models used in this domain, along with knowledge organization systems that define the concepts and properties used in formal cyber-knowledge representation, covering both background knowledge and specific expert knowledge about an actual system or attack. It also discusses how cybersecurity knowledge graphs enable machine learning and facilitate automated reasoning over cyber-knowledge.
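The graph-based representation and reasoning the abstract describes can be illustrated with a toy triple store; the entity and relation names are illustrative rather than drawn from any real cybersecurity ontology (though CVE-2014-0160 is the real Heartbleed identifier):

```python
# A tiny subject-relation-object triple store standing in for a
# cybersecurity knowledge graph (hypothetical host and relations).
triples = [
    ("host-17", "runs", "openssl-1.0.1"),
    ("openssl-1.0.1", "affected_by", "CVE-2014-0160"),
    ("CVE-2014-0160", "enables", "memory-disclosure"),
]

def neighbors(node, rel):
    """Objects reachable from `node` via relation `rel`."""
    return [o for s, r, o in triples if s == node and r == rel]

# Two-hop reasoning: which CVEs does host-17 inherit via its software?
cves = [cve for sw in neighbors("host-17", "runs")
            for cve in neighbors(sw, "affected_by")]
print(cves)  # ['CVE-2014-0160']
```

Real systems express this with RDF/OWL ontologies and query languages such as SPARQL, but the underlying pattern of chaining typed edges is the same.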
Desire Lines: Open Educational Collections, Memory and the Social Machine
This paper delineates the initial ideas around the development of the Co-Curate North East project. The idea of computerised machines which have a social use and impact was central to the development of the project. The project was designed with and for schools and communities as a digital platform which would collect and aggregate "memory" resources and collections around local area studies and social identity. It was a co-curation process supported by museums and curators, concerned with the "meshwork" between "official" and "unofficial" archives and collections, and the ways in which materials generated from within the schools and community groups could themselves be re-narrated and exhibited online as part of self-organised learning experiences. This paper looks at initial ideas of social machines and the ways in which machines can be used in identity and memory studies. It examines ideas of navigation and visualisation of data and concludes with some initial findings from the early stages of the project about the potential for machines in educational work.
e-Business challenges and directions: important themes from the first ICE-B workshop
A three-day asynchronous, interactive workshop was held at ICE-B '10 in Piraeus, Greece in July of 2010. This event captured conference themes for e-Business challenges and directions across five subject areas: a) e-Business applications and models, b) enterprise engineering, c) mobility, d) business collaboration and e-Services, and e) technology platforms. Quality Function Deployment (QFD) methods were used to gather, organize and evaluate themes and their ratings. This paper summarizes the most important themes rated by participants: a) since technology is becoming more economic and social in nature, more agile and context-based application development methods are needed; b) enterprise engineering approaches are needed to support the design of systems that can evolve with changing stakeholder needs; c) the digital native groundswell requires changes to business models, operations, and systems to support Prosumers; d) intelligence and interoperability are needed to address Prosumer activity and their highly customized product purchases; and e) technology platforms must rapidly and correctly adapt, provide widespread offerings, and scale appropriately as situational contexts change.
Computer Vision for Multimedia Geolocation in Human Trafficking Investigation: A Systematic Literature Review
The task of multimedia geolocation is becoming an increasingly essential component of the digital forensics toolkit for effectively combating human trafficking, child sexual exploitation, and other illegal acts. Typically, metadata-based geolocation information is stripped when multimedia content is shared via instant messaging and social media. The intricacy of geolocating, geotagging, or finding geographical clues in this content is often overly burdensome for investigators. Recent research has shown that contemporary advancements in artificial intelligence, specifically computer vision and deep learning, show significant promise towards expediting the multimedia geolocation task. This systematic literature review thoroughly examines the state of the art in leveraging computer vision techniques for multimedia geolocation and assesses their potential to expedite human trafficking investigation. It provides a comprehensive overview of the application of computer vision-based approaches to multimedia geolocation, identifies their applicability to combating human trafficking, and highlights the potential implications of enhanced multimedia geolocation for prosecuting human trafficking. 123 articles inform this systematic literature review. The findings suggest numerous potential paths for future impactful research on the subject.
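Geolocation systems of the kind surveyed here are commonly evaluated by the great-circle distance between predicted and ground-truth coordinates. The sketch below implements the standard haversine formula; the 25 km "city-level" threshold is an assumption mirroring common geolocation benchmarks, not a figure from this review:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points,
    via the haversine formula on a spherical Earth."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Count a prediction as "city-level correct" if within 25 km of truth
err = haversine_km(51.5074, -0.1278, 51.5155, -0.0922)  # two London points
print(err < 25)  # True
```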
- …