167 research outputs found
The Advanced Data Acquisition Model (Adam): A Process Model for Digital Forensic Practice
As with other types of evidence, the courts make no presumption that digital evidence is reliable without some evidence of empirical testing in relation to the theories and techniques associated with its production. The issue of reliability means that courts pay close attention to the manner in which electronic evidence has been obtained, and in particular to the process by which the data is captured and stored. Previous process models have tended to focus on one particular area of digital forensic practice, such as law enforcement, and have not incorporated a formal description. We contend that this approach has prevented the establishment of generally-accepted standards and processes that are urgently needed in the domain of digital forensics. This paper presents a generic process model as a step towards developing such a generally-accepted standard for a fundamental digital forensic activity: the acquisition of digital evidence
Pattern for malware remediation – A last line of defence tool against Malware in the global communication platform
Malware is becoming a major problem for every organization that operates on the global communication platform. Malicious software programs are advancing in sophistication in many ways in order to defeat hardened, deployed defenses. When an organization's defenses fail to keep this malicious invasion out, the organization incurs significant risks and damages. Risks include data leakage, inability to operate and a tarnished corporate image. Damages include compensation costs to customers and partners, service unavailability and loss of customers' and partners' confidence in the organization. This in turn affects the organization's business continuity. To manage the risks and damages induced by malware incidents, incident responders are called upon as the last line of defense against the digital onslaught. However, incident responders are themselves challenged by the deep levels of knowledge, skill and experience required to contain ever-advancing and persistent malware. This paper proposes the establishment of a pattern template for malware remediation to aid incident responders in overcoming these competency limitations, providing organizations with a tool to repel malware and reduce the associated risks. Examples and details of the proposed patterns are provided, with a discussion of future directions for the research work
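The abstract does not show the paper's actual template fields; purely as an illustration, a remediation pattern entry might be captured as a simple record, with field names borrowed from the classic design-pattern template (all values below are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class RemediationPattern:
    """One entry in a malware-remediation pattern catalogue.

    Field names follow the classic design-pattern template; the
    paper's actual template may differ.
    """
    name: str
    context: str        # situation in which the pattern applies
    problem: str        # recurring incident-response problem
    solution: str       # remediation steps that resolve the problem
    related: list = field(default_factory=list)  # names of related patterns

# A hypothetical catalogue entry an incident responder might consult.
ransomware_isolation = RemediationPattern(
    name="Isolate-Then-Eradicate",
    context="Ransomware detected on a workstation inside the corporate LAN",
    problem="Malware spreads laterally faster than responders can clean hosts",
    solution="Quarantine the host at the network layer before any disk-level "
             "eradication, then restore from known-good backups",
    related=["Network-Segmentation", "Backup-Restore"],
)

print(ransomware_isolation.name)
```

A catalogue of such entries would let a less experienced responder look up a vetted remediation sequence by matching the incident at hand against each pattern's context and problem fields.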
Selecting Keyword Search Terms in Computer Forensics Examinations Using Domain Analysis and Modeling
The motivation for computer forensics research includes the increase in crimes that involve the use of computers, the increasing capacity of digital storage media, a shortage of trained computer forensics technicians, and a lack of computer forensics standard practices. The hypothesis of this dissertation is that domain modeling of the computer forensics case environment can serve as a methodology for selecting keyword search terms and planning forensics examinations. This methodology can increase the quality of forensics examinations without significantly increasing the combined effort of planning and executing keyword searches. The contributions of this dissertation include: (1) a computer forensics examination planning method that utilizes the analytical strengths and knowledge-sharing abilities of domain modeling in artificial intelligence and software engineering; (2) a computer forensics examination planning method that provides investigators and analysts with a tool for deriving keyword search terms from a case domain model; and (3) the design and execution of experiments that illustrate the utility of the case domain modeling method. Three experiment trials were conducted to evaluate the effectiveness of case domain modeling, and each experiment trial used a distinct computer forensics case scenario: an identity theft case, a burglary and money laundering case, and a threatening email case. Analysis of the experiments supports the hypothesis that case domain modeling results in more evidence found during an examination with more effective keyword searching. Additionally, experimental data indicates that case domain modeling is most useful when the evidence disk has a relatively high occurrence of text-based documents and when vivid case background details are available. A pilot study and a case study were also performed to evaluate the utility of case domain modeling for typical law enforcement investigators.
In these studies the subjects used case domain models in a computer forensics service solicitation activity. The results of these studies indicate that typical law enforcement officers have a moderate comprehension of the case domain modeling method and that they recognize a moderate amount of utility in the method. Case study subjects also indicated that the method would be more useful if supported by a semi-automated tool
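The dissertation's actual modeling notation is not shown in the abstract; as a hypothetical sketch of the idea, a case domain model can be reduced to entities and their attributes, with candidate keyword search terms derived by flattening the model's string values (all case details below are invented for illustration):

```python
# A toy case domain model for an identity-theft scenario: entities
# (suspect, victim, offence) with attributes gathered from case background.
case_model = {
    "suspect": {"name": "J. Doe", "aliases": ["jdoe99"]},
    "victim":  {"bank": "First National", "account": "acct-4417"},
    "offence": {"type": "identity theft",
                "documents": ["passport", "SSN card"]},
}

def derive_keywords(model):
    """Collect every string value in the model as a candidate search term."""
    terms = set()

    def walk(value):
        if isinstance(value, str):
            terms.add(value.lower())
        elif isinstance(value, (list, tuple)):
            for v in value:
                walk(v)
        elif isinstance(value, dict):
            for v in value.values():
                walk(v)

    walk(model)
    return sorted(terms)

print(derive_keywords(case_model))
```

The appeal of the approach is that the examiner's planning effort goes into building the model from case background, and the keyword list falls out mechanically, which is consistent with the abstract's finding that the method works best when vivid case details are available.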
API Programming Implementation in Building a Joint-Account (Escrow) Application for a Facebook Community with Virtual Accounts
Not all buying and selling activity within Facebook communities can be trusted; some actors commit fraud that can harm the parties involved in a transaction. "Rekber", short for rekening bersama (a shared escrow account), is an online payment method regarded as safer than transferring money directly. This research therefore aims to build the Rekberkuy application for a Facebook community using API programming. Data for this research were collected through interviews with and observation of members of the XYZ group on Facebook. For software development, a prototyping approach with several stages was used, carried out over a period of five months. The result of this research is that the implementation of the Midtrans payment gateway in the Rekberkuy application works well. Testing transactions from several customers using Rekberkuy showed that the Midtrans payment gateway integrates well with Rekberkuy, and positive comments were received from both buyers and sellers transacting in the XYZ group on Facebook
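To illustrate the escrow ("rekening bersama") flow described above, in which funds are held by the service until the buyer confirms delivery, here is a minimal state-machine sketch. The real application delegates payment capture to the Midtrans gateway; that interaction is stubbed out here, and all names are illustrative assumptions:

```python
class EscrowTransaction:
    """Minimal escrow flow: funds are held until the buyer confirms
    delivery, only then are they released to the seller. Payment-gateway
    interaction (Midtrans in the actual application) is stubbed out."""

    def __init__(self, amount):
        self.amount = amount
        self.state = "created"

    def buyer_pays(self):
        # In practice, triggered by a successful-charge notification
        # from the payment gateway.
        assert self.state == "created"
        self.state = "funds_held"

    def seller_ships(self):
        assert self.state == "funds_held"
        self.state = "shipped"

    def buyer_confirms(self):
        # Only now does the service release the held funds to the seller.
        assert self.state == "shipped"
        self.state = "released"

t = EscrowTransaction(150_000)
t.buyer_pays()
t.seller_ships()
t.buyer_confirms()
print(t.state)
```

The state assertions are what make the scheme safer than a direct transfer: the seller cannot receive funds before shipping, and the buyer cannot reclaim funds after confirming.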
A generic smartphone forensic investigation process model
Smartphones are sources of digital evidence and repositories for a considerable
amount of personal and work-related information about the phone users, their
network of contacts and their activities. Investigations involving such devices
have been identified as a growing challenge for digital forensic researchers and
practitioners. As in other areas of digital forensic practice, the process models
developed for smartphones do not satisfy the scientific requirements of a digital
investigation process model that would make them reliable and admissible in
court. They have also been criticized for their tendency to focus on one
particular type of device and for failing to achieve the level of practicality
and generality needed to apply to the investigation of all smartphones,
independent of their platforms. A further common shortcoming is that these
models try to encompass all aspects of digital forensic activity in a
single-tier, high-level process model, which makes them unwieldy, impractical
and unlikely to be adopted.
This research proposes a new forensic process model for digital investigation of
smartphones, called Generic Smartphone Forensic Investigation Process Model
(GSFIPM), which addresses both the practical needs of practitioners and the
expectations of legal domain for a reliable and structured process model to be
followed. The proposed model is a multi-tier, objective-based, iterative process model
that is generically applicable to the investigation of any type of smartphone. GSFIPM is
integrated with Encompassing Proceedings as principles that have a wider scope than
a single process in the course of an investigation. The second tier of the GSFIPM
focuses on the evidence collection and preservation process since this process is
arguably the most critical process in the course of a digital investigation. Any doubt
cast upon this process renders the outputs of the other processes moot. A
two-stage formal model called the Formal Evidence Collection Model for
Smartphones (FECMS) is designed, comprising two UML Activity Diagrams, two
Implementation Guidelines and the Overarching Principles.
This research employed the Design Science Research Process (DSRP) methodology
on the basis that it is an 'ideal approach' in the problem domain of digital
forensics and especially appropriate for creating a new process model. The
effectiveness of GSFIPM and FECMS in satisfying the intended requirements was
independently evaluated by a group of digital forensic experts. Feedback from
these experts was taken into account and amendments applied where appropriate.
The feedback on GSFIPM was generally positive with regard to fulfilling the
scientific requirements; GSFIPM is also considered to offer new design
features, namely being multi-tier and iterative, and containing overarching
principles and a stratification of roles and responsibilities. The feedback on
FECMS was likewise positive in terms of utility and usability. This research
demonstrates how GSFIPM and FECMS can be practically applied in smartphone
investigations and benefit digital forensic practitioners in various environments
Data trading based on seller preferences within blockchain smart contract
This thesis was submitted for the award of Master of Philosophy and was awarded by Brunel University London. Online data trading has not yet addressed the control of data selling
according to data seller preferences (DSP) using blockchain technology. This
research aims to explore DSP using a smart contract over blockchain
within the domain of online data trading. Data trading has been carried out
within the domain of online data trading. Data trading has been carried out
for several decades, but cutting-edge technologies and cloud services have
grown dramatically worldwide. Industries benefit from access to data that
enables them to perform mission-critical tasks, analysing the massive volumes
of available data to obtain a higher return on investment (ROI).
This research aims to make online data trading possible only if the buyer
can satisfy the conditions predefined by the seller. For example, DSP can
restrict the data purchase if the participating buyer is doing business from
a specific geographic location, or it can further restrict a particular type and
size of business. Data trading is thus controlled by smart-contract
validation based on DSP; the novel DSP artefact has been built and
evaluated on Ganache, a personal blockchain set to automatic mining.
Although the DSP Dapp artefact has been explored with a limited scope of
seller preferences and data volume, future researchers may evolve the
DSP Dapp artefact framework to handle more complex seller preferences,
such as ethical selling (e.g., green credentials). The smart contract
serves as an automated contract between seller and buyer, governed by
DSP, without the involvement of any broker or third party.
Chapter one's introduction sets the context; chapter two reviews the
literature, presents the research question, and sets the aims and objectives.
Chapter three selects the DSR methodology for this research and analyses the
requirements that form the building blocks for chapters four and five.
Chapters four and five fulfil objective two by designing and developing the
DSP artefact, using a smart contract to control data trading. Chapter six
validates the DSP trading system to confirm the novelty of this research, and
finally chapter seven summarises the contribution and future research.
The research proposes a new approach to online data trading that controls
data selling according to DSP within a smart contract over blockchain, and
opens new doors for researchers to pursue future work in this area
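To illustrate the kind of validation such a smart contract performs, the seller-preference checks can be modelled as a predicate over buyer attributes. This is a sketch only; the thesis's actual contract code and preference schema are not shown in the abstract, and all names below are assumptions:

```python
# Seller preferences expressed as constraints on buyer attributes,
# mimicking the checks a DSP smart contract would run before a sale.
seller_prefs = {
    "allowed_regions": {"EU", "UK"},
    "allowed_business_types": {"research", "healthcare"},
    "max_business_size": 500,   # employees
}

def contract_permits(buyer, prefs):
    """Return True only if the buyer satisfies every seller preference,
    i.e. the sale would be allowed to proceed."""
    return (buyer["region"] in prefs["allowed_regions"]
            and buyer["business_type"] in prefs["allowed_business_types"]
            and buyer["employees"] <= prefs["max_business_size"])

buyer = {"region": "EU", "business_type": "research", "employees": 120}
print(contract_permits(buyer, seller_prefs))
```

On a real blockchain the same predicate would sit inside the contract, so that neither party nor any broker can bypass it: a transaction from a non-conforming buyer simply reverts.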
Enhanced forensic process model in cloud environment
Digital forensics practitioners have used conventional digital forensics process models to investigate cloud security incidents. Presently, there is no agreed-upon or standard process model in cloud forensics. Moreover, the literature has shown that there is an explicit need for consumers to collect evidence for due-diligence or legal reasons, yet a consumer-oriented cloud forensics process model is still missing from the literature. This has left consumers unprepared for cloud incident investigations and dependent on providers for evidence collection. This research addressed these limitations by developing a cloud forensic process model. A design science research methodology was employed to develop the model. A set of requirements, believed to address the challenges reported in three survey papers, was applied in this research. These requirements were mapped to existing cloud forensic process models to further expose their weaknesses. A set of process models suitable for the extraction of necessary processes was selected based on the requirements, and these selected models formed the basis of the cloud forensic process model. The processes were consolidated and the model was proposed to alleviate the problem of dependency on the provider. The model covers three types of digital forensics: forensic readiness, live forensics and post-mortem forensic investigation. In addition, a Cloud-Forensic-as-a-Service model that produces evidence trusted by both consumers and providers through a conflict-resolution protocol was designed. To evaluate the utility and usability of the model, a plausible case scenario was investigated. For validation purposes, the cloud forensic process model, together with its implementation in the case scenario and the set of requirements, was presented to a group of experts for evaluation. The effectiveness of the requirements was rated positively by the experts.
The findings of the research indicate that the model can be used for cloud investigations and was rated easy to use and adopt by consumers
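One way evidence can be made trustworthy to both parties, sketched here as an assumption rather than the thesis's actual conflict-resolution protocol, is for consumer and provider to each hash their copy of the collected evidence and accept it only when the digests match:

```python
import hashlib

def digest(evidence: bytes) -> str:
    """Cryptographic fingerprint of an evidence item."""
    return hashlib.sha256(evidence).hexdigest()

def evidence_trusted(consumer_copy: bytes, provider_copy: bytes) -> bool:
    """Accept evidence only when both parties' copies hash identically;
    a mismatch would instead trigger the conflict-resolution step."""
    return digest(consumer_copy) == digest(provider_copy)

# Hypothetical evidence item (203.0.113.0/24 is a documentation range).
log = b"2024-01-05T10:00Z login from 203.0.113.7"
print(evidence_trusted(log, log))
print(evidence_trusted(log, log + b"tampered"))
```

Because neither side can forge a SHA-256 collision in practice, matching digests give both consumer and provider the same assurance about the evidence without either having to trust the other's collection process.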
1st doctoral symposium of the international conference on software language engineering (SLE) : collected research abstracts, October 11, 2010, Eindhoven, The Netherlands
The first Doctoral Symposium to be organised by the series of International Conferences on Software Language Engineering (SLE) will be held on October 11, 2010 in Eindhoven, as part of the 3rd instance of SLE. This conference series aims to integrate the different sub-communities of the software-language engineering community to foster cross-fertilisation and strengthen research overall. The Doctoral Symposium at SLE 2010 aims to contribute towards these goals by providing a forum for both early and late-stage Ph.D. students to present their research and get detailed feedback and advice from researchers both in and out of their particular research area. Consequently, the main objectives of this event are: – to give Ph.D. students an opportunity to write about and present their research; – to provide Ph.D. students with constructive feedback from their peers and from established researchers in their own and in different SLE sub-communities; – to build bridges for potential research collaboration; and – to foster integrated thinking about SLE challenges across sub-communities. All Ph.D. students participating in the Doctoral Symposium submitted an extended abstract describing their doctoral research. Based on a good set of submissions we were able to accept 13 for participation in the Doctoral Symposium. These proceedings present final revised versions of these accepted research abstracts. We are particularly happy to note that submissions to the Doctoral Symposium covered a wide range of SLE topics drawn from all SLE sub-communities. In selecting submissions for the Doctoral Symposium, we were supported by the members of the Doctoral-Symposium Selection Committee (SC), representing senior researchers from all areas of the SLE community. We would like to thank them for their substantial effort, without which this Doctoral Symposium would not have been possible.
Throughout, they have provided reviews that go beyond the normal format of a review, taking extra care to point out potential areas of improvement in the research or its presentation. Hopefully, these reviews will themselves already contribute substantially towards the goals of the symposium and help students improve and advance their work. Furthermore, all submitting students were also asked to provide two reviews for other submissions. The members of the SC went out of their way to comment on the quality of these reviews, helping students improve their reviewing skills
Reasoning on the usage control security policies over data artifact business process models
The inclusion of security aspects in organizations is crucial to ensure compliance with both internal and external regulations. Business process models are a well-known mechanism to describe and automate the activities of organizations, and they should include security policies to ensure the correct performance of daily activities. Frequently, these security policies involve complex data which cannot be represented using the standard Business Process Model and Notation (BPMN). In this paper, we propose enriching BPMN with a UML class diagram to describe the data model, which is then combined with security policies, defined using the UCON_ABC framework, annotated within the business process model. The integration of the business process model, the data model and the security policies provides a context in which more complex reasoning can be applied about the satisfiability of the security policies with respect to the business process and data models. To do so, we transform the original models, including the security policies, into the BAUML framework (an artifact-centric approach to business process modelling). Once this is done, it is possible to ensure that there are no inherent errors in the model (verification) and that it fulfils the business requirements (validation), thus ensuring that the business process and the security policies are compatible and aligned with the business security requirements. This work has been supported by Project PID2020-112540RB-C44 funded by MCIN/AEI/10.13039/501100011033, Project TIN2017-87610-R funded by MCIN/AEI/10.13039/501100011033 and FEDER "Una manera de hacer Europa", Project 2017-SGR-1749 by the Generalitat de Catalunya, and Projects COPERNICA (P20 01224) and METAMORFOSIS by the Junta de Andalucía
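As a rough sketch of a usage-control decision in the spirit of UCON_ABC (the paper's actual policy encoding over the BAUML framework is far richer; the roles and attribute names below are illustrative assumptions), access is granted only when authorizations over subject attributes, obligations and environmental conditions all hold at request time:

```python
# A pre-decision in UCON_ABC style: Authorizations (subject attributes),
# oBligations (actions the subject must have performed) and Conditions
# (environment state) must all be satisfied before usage is permitted.
def pre_authorize(subject, obligations_met, conditions):
    """Return True only if every UCON component permits the usage."""
    authorized = subject["role"] in {"clerk", "manager"}   # Authorization
    obligated = obligations_met                            # oBligation
    permitted = conditions["within_business_hours"]        # Condition
    return authorized and obligated and permitted

print(pre_authorize({"role": "clerk"}, True,
                    {"within_business_hours": True}))
print(pre_authorize({"role": "guest"}, True,
                    {"within_business_hours": True}))
```

Unlike classic access control, UCON also re-evaluates such decisions while usage is ongoing; the single pre-decision shown here is only the first of the framework's decision points.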