Enhancing Cloud Security and Privacy: The Power and the Weakness of the Audit Trail
Winner of best paper award.
Digital forensic model for computer networks
The Internet has become important since information is now stored in digital form and transported in large volumes within and between organisations through computer networks. Nevertheless, some individuals and groups use the Internet to harm businesses because they can remain relatively anonymous. To prosecute such criminals, forensic practitioners have to follow a well-defined procedure to convict responsible cyber-criminals in a court of law. Log files provide significant digital evidence in computer networks when tracing cyber-criminals. Network log mining is an evolution of typical digital forensics utilising evidence from network devices such as firewalls, switches and routers. Network log mining is a process supported by prevailing South African laws such as the Computer Evidence Act, 57 of 1983; the Electronic Communications and Transactions (ECT) Act, 25 of 2002; and the Electronic Communications Act, 36 of 2005. International laws and regulations supporting network log mining include the US Sarbanes-Oxley Act and Foreign Corrupt Practices Act (FCPA), and the UK Bribery Act. A digital forensic model for computer networks focusing on network log mining has been developed based on the literature reviewed and critical thought. The development of the model followed the Design Science methodology. However, this research project argues that some important aspects are not fully addressed by the prevailing South African legislation supporting digital forensic investigations. With that in mind, this research project proposes some Forensic Investigation Precautions, developed as part of the proposed model. The Diffusion of Innovations (DOI) Theory is the framework underpinning the development of the model and how it can be assimilated into the community.
The model was sent to IT experts for validation; this provided the qualitative element and the primary data of this research project. These experts found the proposed model unique and comprehensive, and judged that it adds new knowledge to the field of Information Technology. A paper was also produced from this research project
Framework for Security Transparency in Cloud Computing
The migration of sensitive data and applications from the on-premise data centre to a cloud environment increases cyber risks to users, mainly because the cloud environment is managed and maintained by a third party. In particular, the partial surrender of sensitive data and applications to a cloud environment creates numerous concerns related to a lack of security transparency. Security transparency involves the disclosure of information by cloud service providers about the security measures being put in place to protect assets and meet the expectations of customers. It establishes trust in the service relationship between cloud service providers and customers; without evidence of continuous transparency, trust and confidence suffer, which is likely to hinder extensive usage of cloud services. Also, insufficient security transparency is considered an added level of risk and increases the difficulty of demonstrating conformance to customer requirements and ensuring that cloud service providers adequately implement their security obligations.
The research community has acknowledged the pressing need to address security transparency concerns, and although technical aspects of ensuring security and privacy have been researched widely, the focus on security transparency is still scarce. The relatively few existing studies mostly approach the issue of security transparency from the cloud providers' perspective, while other works have contributed feasible techniques for comparison and selection of cloud service providers using metrics such as transparency and trustworthiness. However, there is still a shortage of research that focuses on improving security transparency from the cloud users' point of view. In particular, there is still a gap in the literature that (i) dissects security transparency from the lens of conceptual knowledge up to implementation from organizational and technical perspectives and (ii) supports continuous transparency by enabling the vetting and probing of cloud service providers' conformity to specific customer requirements. The significant growth in moving business to the cloud, due to its scalability and perceived effectiveness, underlines the dire need for research in this area.
This thesis presents a framework that comprises the core conceptual elements that constitute security transparency in cloud computing. It contributes to the knowledge domain of security transparency in cloud computing by proposing the following. Firstly, the research analyses the basics of cloud security transparency by exploring the notion and foundational concepts that constitute security transparency. Secondly, it proposes a framework which integrates various concepts from the requirements engineering domain, together with an accompanying process that can be followed to implement the framework. The framework and its process provide an essential set of conceptual ideas, activities and steps that can be followed at an organizational level to attain security transparency, based on the principles of industry standards and best practices. Thirdly, for ensuring continuous transparency, the thesis proposes an essential tool that supports the collection and assessment of evidence from cloud providers, including the establishment of remedial actions for redressing deficiencies in cloud provider practices. The tool serves as a supplementary component of the proposed framework that enables continuous inspection of how predefined customer requirements are being satisfied.
The thesis also validates the proposed security transparency framework and tool in terms of validity, applicability, adaptability, and acceptability using two different case studies. Feedback is collected from stakeholders and analysed against criteria such as ease of use, relevance, and usability. The analysis illustrates the validity and acceptability of both the framework and tool in enhancing security transparency in a real-world environment
The importance to manage data protection in the right way: Problems and solutions
Information and communication technology (ICT) has made a remarkable impact on society, especially on companies and organizations. The use of computers, databases, servers, and other technologies has transformed the way data is stored, processed, and transferred. However, companies access and share their data over the internet or an intranet, so there is a critical need to protect this data from destructive forces and from the unwanted actions of unauthorized users. This thesis groups a set of solutions proposed, from a company's point of view, to reach the goal of "Managing data protection". The work presented in this thesis represents a set of security solutions focused on the management of data protection, taking into account both the organizational and the technological side. The work can be divided into a set of goals derived from the needs of the research community. This thesis handles the issue of managing data protection in a systematic way by proposing a data protection management approach, aiming to protect data from both the organizational and the technological side, inspired by the ISO 27001 requirements. An Information Security Management System (ISMS) is then presented implementing this approach. An ISMS consists of the policies, procedures, guidelines, and associated resources and activities, collectively managed by an organization in the pursuit of protecting its information assets. An ISMS is a systematic approach for establishing, implementing, operating, monitoring, reviewing, maintaining and improving an organization's information security to achieve business objectives. The goal of an ISMS is to minimize risk and ensure continuity by pro-actively limiting the impact of a security breach.
To be well prepared for the potential threats that could affect an organization, it is important to adopt an ISMS that helps in managing the data protection process, saving time and effort, and minimizing the cost of any loss. After that, a comprehensive framework is designed for the security risk management of Cyber-Physical Systems (CPSs); this framework represents the strategy used to manage security risk, and it sits inside the ISMS as a security strategy. Traditional IT risk assessment methods can do the job (security risk management for a CPS); however, because of the characteristics of a CPS, it is more efficient to adopt a solution that is wider than a single method and that addresses the type, functionalities and complexity of a CPS. Therefore, there is a critical need for a solution that breaks the restriction to a traditional risk assessment method, and so a high-level framework is proposed; it encompasses a wider set of procedures and pays close attention to the cybersecurity of these systems, which consequently leads to the safety of the physical world. In addition, inside the ISMS, another part of the work suggests guidelines for selecting an applicable Security Information and Event Management (SIEM) solution. It also proposes an approach that aims to support companies seeking to adopt SIEM systems into their environments, suggesting suitable answers to preferred requirements that are believed to be valuable prerequisites a SIEM system should have, and criteria for judging SIEM systems using an evaluation process composed of quantitative and qualitative methods. This approach, unlike others, is customer-driven: customer needs are taken into account throughout, specifically when defining the requirements and then evaluating the suppliers' solutions.
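The quantitative side of such a SIEM evaluation can be illustrated with a simple weighted-scoring sketch. The criteria, weights, and ratings below are invented for illustration and are not taken from the thesis:

```python
# Hypothetical weighted-scoring sketch for comparing SIEM candidates.
# Criterion names, weights, and ratings are illustrative only.
def score_siem(weights, ratings):
    """weights: criterion -> importance (sums to 1.0), set by the customer.
    ratings: criterion -> 0-10 rating of how well a candidate meets it."""
    return sum(weights[c] * ratings[c] for c in weights)

# Customer-driven weights: the customer decides what matters most.
weights = {"log_sources": 0.3, "correlation_rules": 0.25,
           "scalability": 0.25, "cost": 0.2}

candidate_a = {"log_sources": 8, "correlation_rules": 7, "scalability": 9, "cost": 5}
candidate_b = {"log_sources": 6, "correlation_rules": 9, "scalability": 7, "cost": 9}

candidates = {"A": candidate_a, "B": candidate_b}
best = max(candidates, key=lambda n: score_siem(weights, candidates[n]))
```

The qualitative part of the evaluation (interviews, requirement fit) would complement rather than replace such a score.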
Finally, a research activity was carried out aiming to classify web attacks at the network level, since any information about the attackers might be helpful and valuable to cyber security analysts. Using network statistical fingerprints and machine learning techniques, a two-layer classification system is designed to detect the type of web attack and the type of software used by the attackers
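As a rough illustration of a two-layer design of this kind, the sketch below chains two nearest-centroid classifiers over toy "statistical fingerprint" vectors. All feature values, attack labels, and tool labels are invented and bear no relation to the thesis's actual features or models:

```python
import math

def centroid(rows):
    # Mean vector of a list of equal-length feature vectors.
    return [sum(col) / len(col) for col in zip(*rows)]

def nearest(x, centroids):
    # Label whose centroid is closest to x in Euclidean distance.
    return min(centroids, key=lambda lbl: math.dist(x, centroids[lbl]))

# Toy fingerprints: [mean packet size, mean inter-arrival time (ms)].
train = {
    "sqli": [[300, 40], [320, 35]],
    "xss":  [[120, 90], [140, 95]],
}
# Second layer: per-attack-type examples of the tool used.
tools = {
    "sqli": {"sqlmap": [[310, 30]], "manual": [[300, 60]]},
    "xss":  {"browser": [[130, 92]]},
}
layer1 = {lbl: centroid(rows) for lbl, rows in train.items()}

def classify(fingerprint):
    # Layer 1 picks the attack type; layer 2 picks the tool within it.
    attack = nearest(fingerprint, layer1)
    layer2 = {t: centroid(rows) for t, rows in tools[attack].items()}
    return attack, nearest(fingerprint, layer2)
```

The same two-stage shape applies with real classifiers (e.g. random forests) trained on real network statistics.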
Chapter 3 - Investigating digital crime
This chapter, bearing the name of the book, explores the relationship between digital evidence and cyber crime. It introduces the reader to protocols that apply to the practice of digital forensic examinations and the associated accreditation opportunities. The challenges of presenting digital evidence to lay juries in court are also explored
Nonintrusive tracing in the Internet
Intruders that log in through a series of machines when conducting an attack are hard to trace because of the complex architecture of the Internet. The thumbprinting method provides an efficient way of tracing such intruders by determining whether two connections are part of the same connection chain. Because many connections are transient and therefore short in length, choosing the best time interval to thumbprint over can be an issue. In this paper, we provide a way to shorten the time interval used for thumbprinting. We then study some special properties of the thumbprinting function. We also study another mechanism for tracing intruders in the Internet based on a timestamping approach, which passively monitors flows between source and destination pairs. Given a potentially suspicious source, we identify its true destination. We compute the error probability of our algorithm and show that its value decreases exponentially as the observation time increases. Our simulation results show that our approach performs well
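A minimal sketch of the thumbprinting idea, assuming per-interval byte counts as the connection summary (the paper's actual thumbprinting function is more refined than this):

```python
import statistics

def thumbprint(byte_counts):
    # Normalize per-interval byte counts so thumbprints taken at
    # different links are comparable despite differing absolute volumes.
    mu = statistics.mean(byte_counts)
    sd = statistics.pstdev(byte_counts)
    return [(b - mu) / sd if sd else 0.0 for b in byte_counts]

def similarity(tp_a, tp_b):
    # Mean inner product of normalized thumbprints ~ Pearson correlation;
    # near 1.0 suggests the same connection chain.
    return sum(a * b for a, b in zip(tp_a, tp_b)) / len(tp_a)

# Two observation points on the same chain see correlated traffic shapes;
# an unrelated connection does not. Values are illustrative.
hop1  = [120, 0, 340, 15, 0, 200]
hop2  = [118, 2, 335, 20, 1, 190]   # same chain, slight jitter
other = [5, 400, 0, 380, 10, 0]

same = similarity(thumbprint(hop1), thumbprint(hop2))
diff = similarity(thumbprint(hop1), thumbprint(other))
```

Shortening the interval, as the paper studies, trades thumbprint stability against the ability to catch short-lived connections.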
A Distributed Network Logging Topology
Network logging is used by network administrators to monitor computer systems for potential problems and threats. Research has found that the more logging is enabled, the more potential threats can be detected in the logs (Levoy, 2006). However, it is generally considered too costly to dedicate the manpower required to analyze the amount of logging data that can be generated. Current research is working on different correlation and parsing techniques to help filter the data, but these methods function by having all of the data dumped into a central repository. Central repositories are limited in the amount of data they are able to receive without losing some of it (SolarWinds, 2009). In large networks, the data limit is a problem, and industry-standard syslog protocols can lose data without any indication of the loss, potentially handicapping network administrators in their ability to analyze network problems and discover security risks. This research provides a scalable, accessible and fault-tolerant logging infrastructure that resolves the centralized-server bottleneck and data loss problem while still maintaining a searchable and efficient storage system
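One way to avoid a single central repository is deterministic, replicated placement of log records across several collector nodes. The sketch below assumes simple hash-based placement; the node names and replication factor are illustrative, not the topology proposed in the paper:

```python
import hashlib

# Illustrative collector nodes and replication factor.
NODES = ["collector-a", "collector-b", "collector-c"]
REPLICAS = 2  # write each record to two nodes so one failure loses nothing

def owners(record_key):
    # Deterministic placement: hash the key, then take consecutive nodes,
    # so any node can recompute where a record lives when searching.
    h = int(hashlib.sha256(record_key.encode()).hexdigest(), 16)
    start = h % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

stores = {n: [] for n in NODES}  # stand-in for per-node storage

def append(source_host, line):
    # Shard by originating host; replicas make loss of one node tolerable.
    for node in owners(source_host):
        stores[node].append(line)
```

Sharding by source host also keeps each host's records co-located, which helps keep searches efficient.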
TOWARDS A BRIGHT FUTURE: ENHANCING DIFFUSION OF CONTINUOUS CLOUD SERVICE AUDITING BY THIRD PARTIES
Using cloud services empowers organizations to achieve various financial and technical benefits. Nonetheless, customers face a lack of control since they cede control over their IT resources to the cloud providers. Independent third-party assessments have been recommended as a good means to counteract this lack of control. However, current third-party assessments fail to cope with an ever-changing cloud computing environment. We argue that continuous auditing by third parties (CATP) is required to assure continuously reliable and secure cloud services. Yet continuous auditing has been applied mostly for internal purposes, and adoption of CATP remains lagging. Therefore, we examine the adoption process of CATP through the lens of diffusion of innovations theory, a scientific database search, and interviews with cloud service experts. Our findings reveal that relative advantages and a high degree of compatibility and observability of CATP would strongly enhance adoption, while high complexity and limited trialability might hamper diffusion. We contribute to practice and research by advancing the understanding of the CATP adoption process and providing a synthesis of relevant attributes that influence adoption rate. More importantly, we provide recommendations on how to enhance the adoption process
FORENSIC ANALYSIS OF THE GARMIN CONNECT ANDROID APPLICATION
Wearable smart devices are becoming more prevalent in our lives. These tiny devices read various health signals such as heart rate and pulse and also serve as companion devices that store sports activities and even their coordinates. This data is typically sent to the smartphone via an installed companion application. These applications hold high forensic value because of the users' private information they store. They can be crucial in a criminal investigation to understand what happened or where a person was during a given period. They also need to guarantee that the data is secure and that the application is not vulnerable to any attack that could lead to data leaks.
The present work aims to perform a complete forensic analysis of the Garmin Connect companion application for Android devices. We used a Garmin smartband to generate data and tested the application with a rooted Android device. This analysis is split into two parts. The first part is a traditional post-mortem analysis in which we present the application, the data generation process, the acquisition process, tools, and methodologies, and then analyze the extracted data and study what can be considered a forensic artifact. In the second part, we performed a dynamic analysis, using various offensive security techniques and methods to find vulnerabilities in the application code and network protocol and to obtain data in transit.
Besides completing the Garmin Connect application analysis, we contributed various modules and new features to the tool Android Logs Events And Protobuf Parser (ALEAPP) to help forensic practitioners analyze the application and to improve the open-source digital forensics landscape. We also used this analysis as a blueprint to explore six other fitness applications that can receive data from Garmin Connect.
With this work, we conclude that Garmin Connect stores a large quantity of private data on the device, making it of great importance in a forensic investigation. We also studied its robustness and conclude that the application is not vulnerable to the tested scenarios. Nevertheless, we found a weakness in its communication methods that let us obtain any data from the user even if it was not stored on the device. This fact increases its forensic importance even further
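Post-mortem analysis of a companion application largely comes down to parsing its on-device databases for artifacts, which is also what ALEAPP modules automate. The sketch below shows the general shape of such an extraction; the database table and column names are hypothetical and are NOT the real Garmin Connect schema:

```python
import sqlite3

def extract_activities(db_path):
    # Pull activity rows out of a (hypothetical) companion-app database.
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT start_time, sport, lat, lon FROM activities"
        ).fetchall()
    finally:
        con.close()
    # Each row is a potential forensic artifact: it can place the user
    # at (lat, lon) at start_time.
    return [{"start_time": t, "sport": s, "lat": la, "lon": lo}
            for t, s, la, lo in rows]
```

In practice the database would first be copied off the (rooted) device and hashed, and the extraction run on the copy.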
IPCFA: A Methodology for Acquiring Forensically-Sound Digital Evidence in the Realm of IaaS Public Cloud Deployments
Cybercrimes and digital security breaches are on the rise: savvy businesses and organizations of all sizes must ready themselves for the worst. Cloud computing has become the new normal, opening even more doors for cybercriminals to commit crimes that are not easily traceable. The fast pace of technology adoption exceeds the speed at which the cybersecurity community and law enforcement agencies (LEAs) can invent countermeasures to investigate and prosecute such criminals. While presenting defensible digital evidence in courts of law is already complex, it gets more complicated when the crime is tied to public cloud computing, where storage, network, and computing resources are shared and dispersed over multiple geographical areas. Investigating such crimes involves collecting evidence from the public cloud in a manner that is sound for court. Digital evidence admissibility in U.S. courts is governed predominantly by the Federal Rules of Evidence and the Federal Rules of Civil Procedure. Evidence authenticity can be challenged under the Daubert test, which evaluates the forensic process that took place to generate the presented evidence.
Existing digital forensics models, methodologies, and processes have not adequately addressed crimes that take place in the public cloud. It was only in late 2020 that the Scientific Working Group on Digital Evidence (SWGDE) published a document that shed light on best practices for collecting evidence from cloud providers. Yet SWGDE's publication does not address the gap between the technology and the legal system when it comes to evidence admissibility. The document is high level, with more focus on law enforcement processes such as issuing a subpoena and preservation orders to the cloud provider.
This research proposes IaaS Public Cloud Forensic Acquisition (IPCFA), a methodology to acquire forensically sound evidence from public cloud IaaS deployments. IPCFA focuses on bridging the gap between the legal and technical sides of evidence authenticity to help produce admissible evidence that can withstand scrutiny in U.S. courts. Grounded in design science research (DSR), the methodology is rigorously evaluated using two hypothetical scenarios for crimes that take place in the public cloud. The first scenario takes place in AWS and is walked through hypothetically. The second scenario demonstrates IPCFA's applicability and effectiveness on Azure Cloud. Both cases are evaluated using a rubric built from the federal and civil digital evidence requirements and the international best practices for digital evidence to show the effectiveness of IPCFA in generating cloud evidence sound enough to be considered admissible in court
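A core ingredient of any forensically sound acquisition, whatever the methodology, is cryptographic hashing of the acquired evidence so its integrity can later be demonstrated in court. A minimal sketch follows; the chain-of-custody fields are illustrative, not IPCFA's actual record format:

```python
import datetime
import hashlib
import json

def acquire_hash(path, chunk=1 << 20):
    # Stream the acquired disk/volume image and compute a SHA-256 digest,
    # so the evidence can later be shown to be unaltered by re-hashing.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def custody_record(path, examiner):
    # Minimal chain-of-custody entry; field names are hypothetical.
    return json.dumps({
        "evidence": path,
        "sha256": acquire_hash(path),
        "examiner": examiner,
        "acquired_utc": datetime.datetime.utcnow().isoformat() + "Z",
    })
```

Re-running `acquire_hash` at trial and matching it against the recorded digest is one way the forensic process can answer a Daubert-style challenge to authenticity.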