Searchable Privacy-Enabled Information and Event Management Solution
Master's thesis in Information and Communication Technology, University of Agder, 2015. With network traffic proliferating over the last couple of decades, there is an increasing need to monitor security information in order to prevent and resolve network security threats. A Security Information and Event Management (SIEM) solution collects the alerts that the various Intrusion Detection and Prevention Systems (IDS/IDP or IDPS) generate, as well as security logs from various other systems, into one database, so that the security analyst (SA) can more easily get an overview of threat activity. A privacy-enhanced anonymization and deanonymization protocol (Anonymiser/Reversible Anonymiser) has been used to prevent a first-line security analyst without proper clearance from gaining access to personally identifiable information (PII) and/or other types of confidential information that are not allowed to leave the network perimeter. Examples include PII sampled in IP packets, critical address information, and network architecture. This thesis proposes an architectural design for a new SIEM solution which utilises a reversible anonymizer (RA) to enable privacy-enhanced data collection and on-demand deanonymization of anonymized alarms.
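The abstract does not give the protocol's internals, so the following is only a minimal sketch of the reversible-anonymization idea it describes: sensitive fields are replaced by deterministic keyed pseudonyms (so a first-line analyst can still correlate alerts about the same host), while the original values can be recovered on demand only by an actor with clearance. The class and method names are illustrative, not the thesis' actual design.

```python
import hashlib
import hmac
import secrets

class ReversibleAnonymiser:
    """Sketch of a reversible anonymiser: keyed HMAC pseudonyms plus a
    protected reverse map for on-demand deanonymization."""

    def __init__(self, key: bytes):
        self._key = key
        self._reverse = {}  # pseudonym -> original value (clearance-protected)

    def anonymise(self, value: str) -> str:
        # Same input + key always yields the same pseudonym, so alerts
        # about one host remain correlatable after masking.
        pseudo = hmac.new(self._key, value.encode(), hashlib.sha256).hexdigest()[:16]
        self._reverse[pseudo] = value
        return pseudo

    def deanonymise(self, pseudo: str, cleared: bool) -> str:
        # Only a second-line analyst with proper clearance may reverse.
        if not cleared:
            raise PermissionError("second-line clearance required")
        return self._reverse[pseudo]

# A collected alert as the first-line analyst would see it:
ra = ReversibleAnonymiser(secrets.token_bytes(32))
alert = {"src_ip": ra.anonymise("10.0.0.7"), "signature": "port-scan"}
```

In a real deployment the reverse mapping would itself be encrypted and access-audited rather than kept in process memory; the sketch only shows the anonymise/deanonymise split the thesis builds its SIEM architecture around.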
Indexing collections of XML documents with arbitrary links
In recent years, the popularity of XML has increased significantly. XML is the extensible markup language of the World Wide Web Consortium (W3C). It is used to represent data in many areas, such as traditional database management systems, e-business environments, and the World Wide Web. XML data, unlike relational and object-oriented data, has no fixed schema known in advance; the schema is stored separately from the data. XML data is self-describing and can model heterogeneity more naturally than relational or object-oriented data models. Moreover, XML data usually has XLinks or XPointers to data in other documents (e.g., global links). In addition to XLink or XPointer links, the XML standard allows adding internal links between different elements in the same XML document using the ID/IDREF attributes.

The rise in popularity of XML has generated much interest in query processing over graph-structured data. In order to facilitate efficient evaluation of path expressions, structured indexes have been proposed. However, most variants of structured indexes ignore global or internal document references; they assume a tree-like structure of XML documents that does not contain such global and internal links. Extending these indexes to work with large XML graphs containing global or internal document links firstly requires a lot of computing power for the creation process, and secondly would require a great deal of space in which to store the indexes. As the latter demonstrates, the efficient evaluation of ancestor-descendant queries over arbitrary graphs with long paths is indeed a complex issue.

This thesis proposes the HID index (2-Hop cover path Index based on DAG), which is based on the concept of a two-hop cover for a directed graph. The algorithms proposed for the HID index creation in effect scale down the original graph size substantially; as a result, a directed acyclic graph (DAG) with a smaller number of nodes and edges emerges. This reduces the number of computing steps required for building the index, and computing time and space are reduced as well. The index also permits efficient evaluation of ancestor-descendant relationships. Moreover, the proposed index has an advantage over other comparable indexes: it is optimized for descendants-or-self queries on arbitrary graphs with link relationships, a task that would stress any index structure. Our experiments with real-life XML data show that the HID index provides better performance than other indexes.
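To make the two-hop cover idea concrete, here is a small, deliberately naive sketch (not the thesis' HID construction, which is far more space-efficient): every node u stores a label L_out(u) of hubs it can reach and a label L_in(u) of hubs that reach it, and u reaches v exactly when the two labels intersect. Hubs are processed in order of descendant-set size so central nodes cover many ancestor-descendant pairs early.

```python
from collections import defaultdict

def reachable_from(graph, src):
    """All nodes reachable from src (including src) via iterative DFS."""
    seen, stack = set(), [src]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

def build_two_hop(graph):
    """Naive 2-hop cover over a DAG given as {node: [successors]}.
    L_out[u] = hubs u can reach, L_in[v] = hubs that reach v."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    desc = {n: reachable_from(graph, n) for n in nodes}
    L_out, L_in = defaultdict(set), defaultdict(set)
    uncovered = {(u, v) for u in nodes for v in desc[u]}
    # Larger descendant sets first: "central" hubs cover many pairs early,
    # and later hubs only add labels for still-uncovered pairs.
    for h in sorted(nodes, key=lambda n: -len(desc[n])):
        anc = {u for u in nodes if h in desc[u]}
        for u in anc:
            if any((u, v) in uncovered for v in desc[h]):
                L_out[u].add(h)
        for v in desc[h]:
            if any((u, v) in uncovered for u in anc):
                L_in[v].add(h)
        uncovered -= {(u, v) for u in anc for v in desc[h]}
    return L_out, L_in

def reaches(L_out, L_in, u, v):
    # u reaches v iff some hub lies on a path from u to v.
    return u == v or bool(L_out[u] & L_in[v])
```

A cross edge such as e→c below plays the role of an ID/IDREF or XLink reference that turns the document tree into a graph; the labels still answer ancestor-descendant queries with one set intersection instead of a traversal.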
Forensic attribution challenges during forensic examinations of databases
An aspect of database forensics that has not yet received much attention in the academic research community is the attribution of actions performed in a database. When forensic attribution is performed for actions executed in computer systems, it is necessary to avoid incorrectly attributing actions to processes or actors, because the outcome of forensic attribution may be used to determine civil or criminal liability. Correctness is therefore extremely important when attributing actions in computer systems, including when performing forensic attribution in databases. Any circumstances that can compromise the correctness of the attribution results need to be identified and addressed. This dissertation explores possible challenges when performing forensic attribution in databases: what can prevent the correct attribution of actions performed in a database? The first identified challenge is the database trigger, which has not yet been studied in the context of forensic examinations. The dissertation therefore investigates the impact of database triggers on forensic examinations by examining two sub-questions. Firstly, could triggers, due to their nature combined with the way databases are forensically acquired and analysed, lead to the contamination of the data being analysed? Secondly, can the current attribution process correctly identify which party is responsible for which changes in a database where triggers are used to create and maintain data? The second identified challenge is the lack of access and audit information in NoSQL databases. The dissertation thus investigates how the availability of access control and logging features in databases impacts forensic attribution. Database triggers, as defined in the SQL standard, are studied together with a number of database trigger implementations, in order to establish which aspects of a database trigger may have an impact on digital forensic acquisition, analysis and interpretation.

Forensic examinations of relational and NoSQL databases are evaluated to determine what challenges the presence of database triggers poses. A number of NoSQL databases are then studied to determine the availability of access control and logging features, because these features leave valuable traces for the forensic attribution process. An algorithm is devised which provides a simple test to determine whether database triggers played any part in the generation or manipulation of data in a specific database object. If the test result is positive, the actions performed by the implicated triggers have to be considered in a forensic examination. This dissertation identified a group of database triggers, classified as non-data triggers, which have the potential to contaminate the data in popular relational databases through inconspicuous operations such as connection or shutdown. It also established that database triggers can influence the normal flow of data operations, which means that what the original operation intended to do and what actually happened are not necessarily the same. The attribution of these operations therefore becomes problematic, and incorrect deductions can be made. Accordingly, forensic processes need to be extended to include the handling and analysis of all database triggers. This enables safer acquisition and analysis of databases and more accurate attribution of actions performed in databases. This dissertation also established that popular NoSQL databases either lack sufficient access control and logging capabilities or do not enable them by default to support attribution to the same level as in relational databases.

Dissertation (MSc), University of Pretoria, 2018.
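The dissertation's actual test is not reproduced in the abstract, so the following is only a hypothetical illustration of the core step, using SQLite (whose catalog records trigger definitions): enumerate the triggers attached to a database object so an examiner knows that rows in it may have been written by a trigger rather than directly by the apparent actor.

```python
import sqlite3

def triggers_on(conn, table):
    """Illustrative check: list triggers defined on the given table by
    reading SQLite's catalog. A positive result means trigger actions
    must be considered before attributing changes to an actor."""
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master "
        "WHERE type = 'trigger' AND tbl_name = ?", (table,))
    return [name for name, _ in rows]

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts(id INTEGER PRIMARY KEY, balance INT);
    CREATE TABLE audit(note TEXT);
    -- A trigger silently writing extra rows: the actor who inserted
    -- into accounts did not directly cause the audit row.
    CREATE TRIGGER log_insert AFTER INSERT ON accounts
    BEGIN INSERT INTO audit VALUES ('row added'); END;
""")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
```

Here the row in `audit` exists only because of `log_insert`; attributing it to the connected user without checking the catalog would be exactly the kind of incorrect deduction the dissertation warns about. Note that SQLite has no non-data (connection/shutdown) triggers, so this sketch covers only the data-trigger case.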
A Systematic Performance Study of Object Database Management Systems
Many previous performance benchmarks for Object Database Management Systems (ODBMSs) have typically used arbitrary sets of tests based on what their designers felt were the characteristics of Engineering applications. Increasingly, however, ODBMSs are being used in non-engineering domains, such as Financial Trading, Clinical Healthcare, Telecommunications Network Management, etc. Part of the reason for this is that the technology has matured over the past few years and has become a less risky choice for organisations looking for better ways to manage complex data. However, the development of suitable application- or industry-specific benchmarks, based on actual performance studies, has not paralleled this growth.
The research reported here approaches performance evaluation of ODBMSs pragmatically. It uses a combination of case studies and benchmark experiments to investigate the performance characteristics of ODBMSs for particular applications, following the successful use of this approach by Youssef [Youss93] for studying the performance of On-Line Transaction Processing (OLTP) applications for Relational Database Management Systems (RDBMSs).
Six case studies at five organisations show that organisations consider a wide range of factors when undertaking their own performance studies or benchmarks. Furthermore, none of the studied organisations considered using any public benchmarks. Six current and derived benchmarks also highlight statistically significant performance differences between three major commercial products: Objectivity/DB, ObjectStore and UniSQL. These benchmarks indicate the suitability of the products tested for particular application domains.
The research could not find any evidence at this time to support the concept of a generic or canonical performance workload for ODBMSs. This is demonstrated by the case studies and supported by the benchmark experiments. However, the research shows that performance benchmarks serve a very useful role in ODBMS evaluations and can help identify architectural and quality problems with products that would not otherwise be observed until significant application or system development was already in progress.
Extending an open source enterprise service bus for SQL statement transformation to enable cloud data access
Cloud computing has gained tremendous popularity in the past decade in the IT industry for its resource-sharing and cost-reducing nature. To move existing applications to the Cloud, they can be redesigned to fit into the Cloud paradigm, or their existing components can be migrated partially or totally to the Cloud. In application design, a three-tier architecture is often used, consisting of a presentation layer, a business logic layer, and a data layer. The presentation layer describes the interaction between application and user; the business layer provides the business logic; and the data layer deals with data storage. The data layer is further divided into the Data Access Layer, which abstracts the data access functionality, and the Database Layer, which handles data persistence and data manipulation.
On various occasions, corporations decide to move their application's database layer to the Cloud, due to the high resource consumption and maintenance cost. However, there is currently little support and guidance on how to enable appropriate data access to the Cloud. Moreover, the diversity and heterogeneity of database systems increase the difficulty of adapting the existing presentation and business layers to the migrated database layer. In this thesis, we focus on the heterogeneity of the SQL language across different database systems. We extend an existing open source Enterprise Service Bus with Cloud data access capability for the transformation of SQL statements used in the presentation and business layers into the SQL dialect used in the Cloud database system back end. We validate the prototype we develop against real-world scenarios with Cloud services such as FlexiScale and Amazon RDS. In addition, we analyze the complexity of the algorithm we realized for parsing and transforming the SQL statements and verify the complexity through performance measurements.
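The thesis builds a full SQL parser into the ESB; as a purely illustrative stand-in, the sketch below shows the kind of dialect rewrite such a transformation layer performs, mapping a MySQL-style `LIMIT` clause to the SQL-standard `FETCH FIRST ... ROWS ONLY` form that some back-end systems expect. A regex is far too weak for real SQL, so treat this only as a shape of the problem.

```python
import re

def mysql_limit_to_standard(sql: str) -> str:
    """Hypothetical single-rule dialect rewrite: replace a trailing
    MySQL 'LIMIT n' with the standard 'FETCH FIRST n ROWS ONLY'."""
    return re.sub(
        r"\bLIMIT\s+(\d+)\s*$",        # only matches a LIMIT at the end
        r"FETCH FIRST \1 ROWS ONLY",
        sql.strip(),
        flags=re.IGNORECASE,
    )

print(mysql_limit_to_standard("SELECT id FROM orders LIMIT 10"))
```

An ESB-based transformer would instead parse the statement into an AST, so that the same source query can be re-emitted in whichever dialect the configured Cloud back end (e.g. Amazon RDS) requires.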
Cloud-Based Access Portal for Designer Documentation in SoC Development
A Document Management System plays an important role in any organization, especially when it comes to retrieving hundreds of documents. Various companies use Document Management Systems for specific purposes, such as accessing and managing documents, online editing, etc. The functionality of each Document Management System varies according to the required features. Keeping the above-mentioned criteria in mind, the Documentation Technology team in our department came up with the idea of creating an access portal where the Designer Team can access the required documents through one access point.
Designer documentation in our department is scattered across various platforms and tools, and it is hard to keep track of each document, whether it is HW, SW, Modelling, Design, or Verification related. The Access Portal front end is based on HTML, CSS, and Bootstrap, while the Django framework and Python are used to build the back end. PostgreSQL serves as the database to store the documents. Docker is used as a cloud container to run the website on our company's internal cloud UNIX server.
Tipping offers e-commerce project
Rewardli is a company that lets small and medium businesses obtain prices otherwise only available to big companies by providing a listing of deals and perks. Until this project started, the main type of offer in Rewardli was the cash-back offer. A Rewardli user can get a discount from many different merchants depending on the business's buying power, which is calculated with a formula that takes into account previous purchases and the connections the user has with other users in Rewardli.
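The text does not disclose Rewardli's actual formula, so the following is a purely hypothetical sketch of a buying-power calculation of the kind described: a business's own purchase volume counts in full, its connections' volumes contribute with an assumed damping weight, and the result is log-scaled so buying power grows sub-linearly with spend. Every name and parameter here is invented for illustration.

```python
import math

def buying_power(own_purchases, connection_purchases, alpha=0.5):
    """Hypothetical buying-power formula.
    own_purchases: total spend of the business itself.
    connection_purchases: spends of directly connected businesses.
    alpha: assumed damping weight for network contributions."""
    network = alpha * sum(connection_purchases)
    # log1p keeps the score sub-linear, so very large buyers do not
    # dwarf everyone else on the discount scale.
    return math.log1p(own_purchases + network)
```

Under this sketch, a business with connections always has at least as much buying power as the same business alone, matching the text's claim that both previous purchases and connections feed into the discount a merchant grants.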