
    NoSQL databases : forensic attribution implications

    NoSQL databases have gained a lot of popularity over the last few years. They are now used in many new system implementations that work with vast amounts of data, which will typically also include sensitive information that needs to be secured. NoSQL databases also underlie a number of cloud implementations that are increasingly being used by various organisations to store sensitive information. This has made NoSQL databases a new target for hackers and state-sponsored actors. Forensic examinations of compromised systems will need to be conducted to determine what exactly transpired and who was responsible. This paper examines whether NoSQL databases have security features that leave relevant traces, so that accurate forensic attribution can be conducted. The seeming lack of default security measures, such as access control and logging, has prompted this examination. A survey of the top-ranked NoSQL databases was conducted to establish what authentication and authorisation features are available. The provided logging mechanisms were also examined, since access control without any auditing would be of little aid to forensic attribution. Some of the surveyed NoSQL databases do not provide adequate access control mechanisms and logging features that leave the traces needed for forensic attribution. The other surveyed NoSQL databases do provide adequate mechanisms and logging traces, but these are not enabled or configured by default. This means that in many cases they might not be available, leading to insufficient information to perform accurate forensic attribution even on those databases.
    http://www.saiee.org.za/DirectoryDisplay/DirectoryCMSPages.aspx?name=Publications#id=1588&dirname=ARJ&dirid=337
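
    As a rough illustration of the survey's central question (not code from the paper), the decision it applies to each database can be sketched as a check over an effective configuration. The field names below are hypothetical, not any vendor's real config schema:

```python
# Hypothetical sketch: classify a database deployment's attribution readiness
# from its effective configuration. The keys used here are illustrative only.

def attribution_readiness(config):
    """Return a coarse verdict on whether forensic attribution is feasible."""
    has_access_control = config.get("authentication_enabled", False)
    has_audit_log = config.get("audit_logging_enabled", False)
    if has_access_control and has_audit_log:
        return "attributable"
    if has_access_control:
        return "partial: actions gated but not recorded"
    return "not attributable: no identity to tie actions to"

# Defaults-off, as the survey found for several NoSQL systems:
print(attribution_readiness({}))
# not attributable: no identity to tie actions to
```

    The point of the sketch is the survey's finding: without both authentication (an identity) and audit logging (a record), there is nothing to attribute actions to.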

    Forensic attribution challenges during forensic examinations of databases

    An aspect of database forensics that has not yet received much attention in the academic research community is the attribution of actions performed in a database. When forensic attribution is performed for actions executed in computer systems, it is necessary to avoid incorrectly attributing actions to processes or actors, because the outcome of forensic attribution may be used to determine civil or criminal liability. Correctness is therefore extremely important when attributing actions in computer systems, including when performing forensic attribution in databases. Any circumstances that can compromise the correctness of the attribution results need to be identified and addressed. This dissertation explores possible challenges when performing forensic attribution in databases: what can prevent the correct attribution of actions performed in a database? The first identified challenge is the database trigger, which has not yet been studied in the context of forensic examinations. The dissertation therefore investigates the impact of database triggers on forensic examinations by examining two sub-questions. Firstly, could triggers, due to their nature combined with the way databases are forensically acquired and analysed, lead to the contamination of the data that is being analysed? Secondly, can the current attribution process correctly identify which party is responsible for which changes in a database where triggers are used to create and maintain data? The second identified challenge is the lack of access and audit information in NoSQL databases. The dissertation thus investigates how the availability of access control and logging features in databases impacts forensic attribution. Database triggers, as defined in the SQL standard, are studied together with a number of database trigger implementations, in order to establish which aspects of a database trigger may have an impact on digital forensic acquisition, analysis and interpretation.
Forensic examinations of relational and NoSQL databases are evaluated to determine what challenges the presence of database triggers poses. A number of NoSQL databases are then studied to determine the availability of access control and logging features, because these features leave valuable traces for the forensic attribution process. An algorithm is devised which provides a simple test to determine whether database triggers played any part in the generation or manipulation of data in a specific database object. If the test result is positive, the actions performed by the implicated triggers will have to be considered in a forensic examination. This dissertation identified a group of database triggers, classified as non-data triggers, which have the potential to contaminate the data in popular relational databases through inconspicuous operations such as connection or shutdown. It also established that database triggers can influence the normal flow of data operations. This means that what the original operation intended to do and what actually happened are not necessarily the same; the attribution of these operations therefore becomes problematic and incorrect deductions can be made. Accordingly, forensic processes need to be extended to include the handling and analysis of all database triggers. This enables safer acquisition and analysis of databases and more accurate attribution of actions performed in databases. The dissertation also established that popular NoSQL databases either lack sufficient access control and logging capabilities or do not enable them by default to support attribution to the same level as in relational databases. Dissertation (MSc), University of Pretoria, 2018.
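
    The dissertation's "simple test" for trigger involvement can be pictured with a small hypothetical sketch (this is an illustration of the idea, not the actual published algorithm): given trigger metadata, flag any trigger whose body writes to the database object under examination. The metadata layout here is assumed.

```python
# Illustrative sketch of a trigger-involvement test: which triggers could have
# created or modified rows in a target object? Trigger metadata is assumed to
# be a list of dicts with "name" and "body" keys (hypothetical layout).

def triggers_implicated(triggers, target_table):
    """Return the names of triggers whose body writes to target_table."""
    writes = ("INSERT INTO", "UPDATE", "DELETE FROM")
    implicated = []
    for trig in triggers:
        body = trig["body"].upper()
        if any(f"{kw} {target_table.upper()}" in body for kw in writes):
            implicated.append(trig["name"])
    return implicated

triggers = [
    {"name": "audit_trg", "body": "INSERT INTO audit_log VALUES (...)"},
    {"name": "sync_trg",  "body": "UPDATE accounts SET balance = 0"},
]
print(triggers_implicated(triggers, "accounts"))  # ['sync_trg']
```

    A positive result means the implicated triggers, not only the logged-in user, must be considered as the possible origin of the data.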

    IoT database forensics : an investigation on HarperDB Security

    The data that are generated by several devices in the IoT realm require careful and real-time processing. Recently, researchers have concentrated on the usage of cloud databases for storing such data to improve efficiency. HarperDB aims at producing a DBMS that is relational and non-relational simultaneously, to help journeyman developers creating products and servers in the IoT space. Much of what the HarperDB team has talked about has been achieved, but from a security perspective, a lot of improvements need to be made. The team has clearly focused on the problems that exist from a database and data point of view, creating a structure that is unique, fast, easy to use and has great potential to grow with a startup. The functionality and ease of use of this DBMS is not in question; however, this trade-off does entail an impact on security. In this paper, using multiple forensic methodologies, we performed an in-depth forensic analysis of HarperDB and found several areas of extreme concern, such as a lack of logging functionality, a basic level of authorisation, and exposure of users' access rights to any party using the database. Due to the nature of the flaws found in HarperDB, the focus had to be on preventative advice instead of reactive workarounds. As such, we provide a number of recommendations for users and developers.

    Querying Data in NoSQL Databases (IZVOĐENJE UPITA NAD PODACIMA U NOSQL BAZAMA PODATAKA)

    The goal of this paper is to give an overview of the fundamental concepts and types of NoSQL databases, to show some examples of database queries and some related research, and to demonstrate the implementation of those queries in an original practical example. The introduction briefly presents and describes NoSQL databases, together with several comparisons with relational databases. The next chapter contains a review of the basic NoSQL databases and their prototypes. In each of the following subchapters, the types of NoSQL databases are described in more detail and various queries which can be performed over them are presented. The last chapter contains a practical example of querying one of these databases.
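
    The find-by-criteria style of query common to document stores (one of the NoSQL types the paper covers) can be illustrated with a minimal in-memory sketch. This is an illustrative analogy in pure Python, not a query from the paper's practical example:

```python
# Minimal in-memory document-store query: select documents whose fields match
# all of the given criteria, in the style of a document database's find().

documents = [
    {"_id": 1, "name": "Ana",  "city": "Zagreb", "age": 27},
    {"_id": 2, "name": "Ivan", "city": "Split",  "age": 34},
    {"_id": 3, "name": "Mara", "city": "Zagreb", "age": 41},
]

def find(collection, criteria):
    """Return documents whose fields match every key/value in criteria."""
    return [d for d in collection
            if all(d.get(k) == v for k, v in criteria.items())]

print([d["name"] for d in find(documents, {"city": "Zagreb"})])
# ['Ana', 'Mara']
```

    The same schemaless matching idea underlies the query syntax of document stores, whereas key-value, column-family and graph stores each expose their own query styles, as the paper's subchapters describe.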

    Actionable Threat Intelligence for Digital Forensics Readiness

    The purpose of this paper is to formulate a novel model for enhancing the effectiveness of existing Digital Forensic Readiness (DFR) schemes by leveraging the benefits of cyber threat information sharing. This paper employs a quantitative methodology to identify the most popular Threat Intelligence elements and introduces a formalized procedure to correlate these elements with potential digital evidence, resulting in the quick and accurate identification of patterns of malware activity. While threat intelligence exchange is steadily becoming a common practice for the prevention or detection of security incidents, the proposed approach highlights its usefulness for the digital forensics domain. The proposed model can help organizations to improve their digital forensic readiness posture and thus minimize the time and cost of cybercrime incidents.
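
    The correlation step described above can be pictured with a hedged sketch: match shared threat-intelligence indicators (here just IPv4 addresses) against candidate evidence such as log entries. The field names are illustrative, not a specific CTI schema such as STIX:

```python
# Hypothetical sketch of indicator-to-evidence correlation: map each matched
# indicator value to the log entries that contain it.

def correlate(indicators, log_entries):
    """Return {indicator_ip: [matching log line numbers]}."""
    ips = {i["value"] for i in indicators if i["type"] == "ipv4"}
    hits = {}
    for entry in log_entries:
        if entry["src_ip"] in ips:
            hits.setdefault(entry["src_ip"], []).append(entry["line"])
    return hits

indicators = [{"type": "ipv4", "value": "203.0.113.7"}]
logs = [{"src_ip": "203.0.113.7", "line": 14},
        {"src_ip": "198.51.100.2", "line": 15}]
print(correlate(indicators, logs))  # {'203.0.113.7': [14]}
```

    In the paper's model the correlated elements are richer than bare IPs, but the principle is the same: shared intelligence narrows the evidence an investigator must examine.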

    A framework for mobile handset detection

    Due to the increase in social media and mobile media in general, access to these platforms from a number of different mobile devices must be catered for. Mobile Device Detection (MDD) refers to software that identifies the type of mobile device visiting the mobile web and either redirects the end user to a dedicated mobile web site or adapts the rendered output from a web server to best suit the capabilities of the end user's mobile device. Furthermore, handset credentials are, in some cases, used to correctly identify the mobile operating system, which in turn is used to redirect to a mobile application download. In any web server request, identification of the user is transmitted in a header field known as the User-Agent (UA). Identifiable information present in the request header allows for unique identification of the browser and the device used in making the request for a web page. A lookup table, comprising all known handset capabilities, is the core functionality of an MDD. Our aim in this paper is to survey the distribution of mobile User-Agents so as to establish an attribution of mobile browsing detections. In particular, assessing details of mobile User-Agents, proxy requests (an intermediary for requests from users seeking resources from other servers) and emulated requests (a software program that imitates a real handset) furthers our ability to census mobile traffic with some degree of accuracy. The approach is to make use of a sample set of real-world aggregated mobile requests. We analyze and filter these requests by origin, User-Agent, geo-location and sometimes non-industry-standard headers (inserted by mobile network operators, which might contain personally identifiable information) to build our mobile browsing framework. In doing so, we encompass the description, identification, nomenclature and classification of end user mobile handset detections. Finally, we investigate the significance of our framework to see how unique requests are, given nothing other than identifiable browser information.
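
    The lookup-table approach at the core of an MDD can be sketched as pattern matching over the User-Agent header. The patterns below are a tiny, made-up subset for illustration; real detection relies on large device-capability databases:

```python
# Rough sketch of User-Agent classification via an ordered pattern table.
# The three patterns are illustrative only; production MDDs match against
# thousands of device signatures.

import re

UA_PATTERNS = [
    (re.compile(r"iPhone"),          "mobile:iOS"),
    (re.compile(r"Android.*Mobile"), "mobile:Android"),
    (re.compile(r"Windows NT"),      "desktop:Windows"),
]

def classify_ua(user_agent):
    """Return the label of the first matching pattern, or 'unknown'."""
    for pattern, label in UA_PATTERNS:
        if pattern.search(user_agent):
            return label
    return "unknown"

ua = "Mozilla/5.0 (Linux; Android 13; Pixel 7) AppleWebKit/537.36 Mobile Safari/537.36"
print(classify_ua(ua))  # mobile:Android
```

    Proxy and emulated requests complicate this picture precisely because they can present any User-Agent string, which is why the paper also filters by origin and other header fields.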

    Data management in cloud environments: NoSQL and NewSQL data stores

    Advances in Web technology and the proliferation of mobile devices and sensors connected to the Internet have resulted in immense processing and storage requirements. Cloud computing has emerged as a paradigm that promises to meet these requirements. This work focuses on the storage aspect of cloud computing, specifically on data management in cloud environments. Traditional relational databases were designed in a different hardware and software era and are facing challenges in meeting the performance and scale requirements of Big Data. NoSQL and NewSQL data stores present themselves as alternatives that can handle huge volumes of data. Because of the large number and diversity of existing NoSQL and NewSQL solutions, it is difficult to comprehend the domain and even more challenging to choose an appropriate solution for a specific task. Therefore, this paper reviews NoSQL and NewSQL solutions with the objectives of: (1) providing a perspective on the field, (2) providing guidance to practitioners and researchers in choosing the appropriate data store, and (3) identifying challenges and opportunities in the field. Specifically, the most prominent solutions are compared with a focus on data models, querying, scaling and security-related capabilities. Features driving the ability to scale read requests and write requests, or to scale data storage, are investigated, in particular partitioning, replication, consistency and concurrency control. Furthermore, use cases and scenarios in which NoSQL and NewSQL data stores have been used are discussed, and the suitability of various solutions for different sets of applications is examined. Consequently, this study has identified challenges in the field, including the immense diversity and inconsistency of terminologies, limited documentation, sparse comparison and benchmarking criteria, and the nonexistence of standardized query languages.
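
    One of the partitioning mechanisms such surveys compare is consistent hashing, which many NoSQL stores use to route keys to nodes so that adding or removing a node relocates only a fraction of the keys. A simplified sketch (no virtual nodes, not any particular store's implementation):

```python
# Simplified consistent-hashing ring: each key is routed to the first node
# whose position on the ring follows the key's hash.

import bisect
import hashlib

class HashRing:
    def __init__(self, nodes):
        # Place each node on the ring at the position given by its hash.
        self.ring = sorted((self._h(n), n) for n in nodes)
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _h(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Return the node responsible for key (wrapping around the ring)."""
        idx = bisect.bisect(self.keys, self._h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))  # one of node-a / node-b / node-c, deterministic
```

    Replication and the consistency model then determine how the copies of each partition are kept in agreement, which is where the surveyed stores differ most.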

    Improving Forensic Triage Efficiency through Cyber Threat Intelligence

    The complexity of information technology and the proliferation of heterogeneous security devices that produce increased volumes of data, coupled with the ever-changing threat landscape, have an adverse impact on the efficiency of information security controls and digital forensics, as well as incident response approaches. Cyber Threat Intelligence (CTI) and forensic preparedness are the two parts of the so-called managed security services that defenders can employ to repel, mitigate or investigate security incidents. Despite their success, there is no known effort that has combined these two approaches to enhance Digital Forensic Readiness (DFR) and thus decrease the time and cost of incident response and investigation. This paper builds upon and extends a DFR model that utilises actionable CTI to improve the maturity levels of DFR. The effectiveness and applicability of this model are evaluated through a series of experiments that employ malware-related network data simulating real-world attack scenarios. To this end, the model manages to identify the root causes of information security incidents with high accuracy (90.73%), precision (96.17%) and recall (93.61%), while significantly decreasing the volume of data digital forensic investigators need to examine. The contribution of this paper is twofold. First, it indicates that CTI can be employed by digital forensics processes. Second, it demonstrates and evaluates an efficient mechanism that enhances operational DFR.
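
    For reference, the three evaluation metrics quoted above derive from a confusion matrix in the standard way. The counts below are made up purely for illustration; only the formulas reflect how accuracy, precision and recall are computed:

```python
# Standard classification metrics from confusion-matrix counts.
# tp/fp/fn/tn values here are illustrative, not the paper's data.

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)          # of flagged items, how many were right
    recall = tp / (tp + fn)             # of true incidents, how many were found
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

p, r, a = metrics(tp=94, fp=4, fn=6, tn=96)
print(f"precision={p:.2%} recall={r:.2%} accuracy={a:.2%}")
```

    High precision matters most for triage: every false positive is evidence an investigator examines for nothing.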

    Database forensic investigation process models: a review

    Database Forensic Investigation (DBFI) involves the identification, collection, preservation, reconstruction, analysis, and reporting of database incidents. However, it is a heterogeneous, complex, and ambiguous field due to the variety and multidimensional nature of database systems. A small number of DBFI process models have been proposed to solve specific database scenarios using different investigation processes, concepts, activities, and tasks, as surveyed in this paper. Specifically, we reviewed 40 proposed DBFI process models for RDBMSs in the literature to offer up-to-date and comprehensive background knowledge on existing DBFI process model research, its associated challenges, issues for newcomers, and potential solutions for addressing such issues. This paper highlights three common limitations of the DBFI domain: 1) redundant and irrelevant investigation processes; 2) redundant and irrelevant investigation concepts and terminologies; and 3) a lack of unified models to manage, share, and reuse DBFI knowledge. It also suggests three solutions for the discovered limitations: 1) propose a generic DBFI process/model for the DBFI field; 2) develop a semantic metamodelling language to structure, manage, organize, share, and reuse DBFI knowledge; and 3) develop a repository to store and retrieve DBFI field knowledge.