Analysis of Windows memory for forensic investigations
Containing the most recently accessed data and information about the status of a computer system, physical memory is one of the best sources of digital evidence. This thesis presents new methods for analyzing the Windows physical memory of compromised computers in cyber forensics. The thesis makes three distinct contributions to cyber forensics investigation. First, by digging into the details of Windows memory management, forensically important information and data structures are identified. Second, we propose different methods to find files in memory and extract them in order to rebuild executable and data files. This helps investigators obtain valuable information from executable or data files that were in use at incident time. Third, we present two methods for extracting forensically sensitive information, such as usernames or passwords, from memory. The first method is based on fingerprints of applications in memory. In the second method, we locate and extract the arguments used in function calls; this leads to the acquisition of important and forensically sensitive information from the memory stack. Finally, to bring these contributions to the application level, a framework for cyber forensics investigations has been developed that helps investigators find sensitive information.
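The fingerprint-based extraction described above can be sketched in miniature: scan a raw memory image for a known application signature, then pull out printable strings near each hit as candidate credentials. The signature bytes, field layout, and toy image below are invented for illustration; they are not the actual fingerprints identified in the thesis.

```python
# Minimal sketch of fingerprint-based carving from a raw memory image.
# The signature and the toy image are hypothetical stand-ins.

def find_fingerprints(image: bytes, signature: bytes):
    """Return offsets of every occurrence of `signature` in the image."""
    offsets, start = [], 0
    while (pos := image.find(signature, start)) != -1:
        offsets.append(pos)
        start = pos + 1
    return offsets

def nearby_strings(image: bytes, offset: int, window: int = 64, min_len: int = 4):
    """Extract printable ASCII runs in a window after a fingerprint hit."""
    chunk = image[offset:offset + window]
    runs, current = [], bytearray()
    for b in chunk:
        if 0x20 <= b < 0x7F:
            current.append(b)
        else:
            if len(current) >= min_len:
                runs.append(current.decode("ascii"))
            current = bytearray()
    if len(current) >= min_len:
        runs.append(current.decode("ascii"))
    return runs

# Toy "memory image": an application fingerprint followed by a
# credential-like string, surrounded by non-printable bytes.
image = b"\x00\x01MYAPP-LOGIN\x00user=alice\x00\xff\xfe" + b"\x00" * 16
hits = find_fingerprints(image, b"MYAPP-LOGIN")
print(hits)
print(nearby_strings(image, hits[0]))
```

A real tool would of course work on multi-gigabyte dumps and per-application fingerprint databases; the mechanics of locate-then-carve are the same.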
A framework for developing finite element codes for multi- disciplinary applications
The world of computing simulation has experienced great progress in recent years and now faces more demanding multi-disciplinary challenges to satisfy new upcoming demands. The growing importance of solving multi-disciplinary problems obliges developers to pay more attention to these problems and to deal with the difficulties involved in developing software in this area.
Conventional finite element codes have several difficulties in dealing with multi-disciplinary problems. Many of these codes are designed and implemented for solving a certain type of problem, generally involving a single field. Extending these codes to deal with another field of analysis usually runs into several problems and requires large amounts of modification and implementation. Some typical difficulties are: a predefined set of degrees of freedom per node, a data structure with a fixed set of defined variables, a global list of variables for all entities, domain-based interfaces, IO restricted in reading new data and writing new results, and algorithm definitions hard-coded inside the program.
A common approach is to connect different solvers via a master program which implements the interaction algorithms and transfers data from one solver to another. This approach has been used successfully in practice, but it results in duplicated implementation and in the redundant overhead of storing and transferring data, which may be significant depending on the solvers' data structures.
The objective of this work is to design and implement a framework for building multi-disciplinary finite element programs. Generality, reusability, extendibility, good performance and memory efficiency are considered to be the main points in the design and implementation of this framework. Preparing the structure for team development is another objective, because usually a team of experts in different fields is involved in the development of a multi-disciplinary code.
Kratos, the framework created in this work, provides several tools for the easy implementation of finite element applications and also provides a common platform for the natural interaction of its applications in different ways. This is achieved not only through a number of innovations but also by collecting and reusing several existing works.
In this work an innovative variable-based interface is designed and implemented, which is used at different levels of abstraction and has proved to be very clear and extendible. Another innovation is a very efficient and flexible data structure which can store any type of data in a type-safe manner. An extendible IO is also created to overcome another bottleneck in dealing with multi-disciplinary problems. Collecting different concepts from existing works and adapting them to coupled problems is considered to be another innovation of this work; examples are the use of an interpreter, different data organizations, and a variable number of dofs per node. The kernel-and-application approach is used to reduce the possible conflicts arising between developers in different fields, and layers are designed to reflect the working space of different developers, also taking into account their programming knowledge. Finally, several technical details are addressed in order to increase the performance and efficiency of Kratos, making it practically usable.
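The variable-based, type-safe data-structure idea above can be sketched as follows. Kratos itself is a C++ framework; this Python sketch with invented names (`Variable`, `DataContainer`, `TEMPERATURE`) only illustrates the principle of storing heterogeneous per-entity data under typed variable keys, so that each application can add its own variables without a fixed global list.

```python
# Sketch of a variable-based, type-checked container (names are illustrative,
# not the actual Kratos API).

class Variable:
    """A typed key under which entity data is stored."""
    def __init__(self, name, value_type):
        self.name = name
        self.value_type = value_type

class DataContainer:
    """Heterogeneous container that checks each value against its Variable's type."""
    def __init__(self):
        self._data = {}

    def set(self, variable, value):
        if not isinstance(value, variable.value_type):
            raise TypeError(f"{variable.name} expects {variable.value_type.__name__}")
        self._data[variable] = value

    def get(self, variable):
        return self._data[variable]

# Each application defines its own variables; no global fixed list is needed.
TEMPERATURE = Variable("TEMPERATURE", float)
DISPLACEMENT = Variable("DISPLACEMENT", tuple)   # e.g. a 3-component vector

node = DataContainer()
node.set(TEMPERATURE, 293.15)
node.set(DISPLACEMENT, (0.0, 0.1, 0.0))
print(node.get(TEMPERATURE))
```

In C++ the same idea can be made statically type-safe via templates, which is closer to what a framework like this would actually do.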
This work is completed by demonstrating the framework's functionality in practice. First, some classical single-field applications, such as thermal, fluid and structural applications, are implemented and used as benchmarks to prove its performance. These applications are then used to solve coupled problems in order to demonstrate the natural interaction facility provided by the framework. Finally, some less classical coupled finite element algorithms are implemented to show its high flexibility and extendibility.
Proceedings of the 1994 Monterey Workshop, Increasing the Practical Impact of Formal Methods for Computer-Aided Software Development: Evolution Control for Large Software Systems Techniques for Integrating Software Development Environments
Office of Naval Research, Advanced Research Projects Agency, Air Force Office of Scientific Research, Army Research Office, Naval Postgraduate School, National Science Foundation
3D RECONSTRUCTION FROM STEREO/RANGE IMAGES
3D reconstruction from stereo/range images is one of the most fundamental and extensively researched topics in computer vision. Stereo research has recently entered something of a new era as a result of publicly available performance testing such as the Middlebury data set, which has allowed researchers to compare their algorithms against the state-of-the-art algorithms. This thesis investigates general stereo problems in both the two-view and multi-view stereo settings. In the two-view setting, we formulate an algorithm for the stereo matching problem with careful handling of disparity, discontinuity and occlusion. The algorithm works with a global matching stereo model based on an energy minimization framework. The experimental results are evaluated on the Middlebury data set, showing that our algorithm is the top performer. A GPU approach to the hierarchical BP algorithm is then proposed, which provides stereo quality similar to CPU hierarchical BP while running at real-time speed. A fast-converging BP is also proposed to solve the slow-convergence problem of general BP algorithms. Besides two-view stereo, efficient multi-view stereo for large-scale urban reconstruction is carefully studied in this thesis. A novel approach is presented for computing depth maps from urban imagery in which large parts of the surfaces are often weakly textured. Finally, a new post-processing step is proposed to enhance range images in both spatial resolution and depth precision.
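The energy-minimization view of stereo matching mentioned above can be made concrete on a single scanline: a labeling of disparities is scored by a data term (how well each left pixel matches its disparity-shifted right pixel) plus a smoothness term penalizing disparity jumps between neighbors. The weights, the truncated-linear penalty, and the toy scanline below are illustrative choices, not the thesis's actual model; algorithms like BP search for the labeling that minimizes exactly this kind of energy.

```python
# Minimal sketch of a stereo matching energy on one scanline.
# Data term: |left[x] - right[x - d]|; smoothness: truncated linear penalty.

def energy(left, right, disparities, lam=2.0, trunc=2):
    data = 0.0
    for x, d in enumerate(disparities):
        xr = x - d
        # Out-of-range matches get a fixed occlusion-like cost (arbitrary value).
        data += abs(left[x] - right[xr]) if 0 <= xr < len(right) else 10.0
    smooth = sum(min(abs(d1 - d2), trunc)
                 for d1, d2 in zip(disparities, disparities[1:]))
    return data + lam * smooth

# Toy scanline: the right image is the left image shifted by one pixel.
left  = [10, 10, 50, 50, 10, 10]
right = [10, 50, 50, 10, 10, 10]

flat  = energy(left, right, [0, 0, 0, 0, 0, 0])   # wrong labeling
truth = energy(left, right, [1, 1, 1, 1, 1, 1])   # true 1-pixel shift
print(flat, truth)
```

The true disparity labeling scores a much lower energy than the flat one, which is what makes minimizing this functional a sensible matching criterion.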
Hierarchical multi-label classification for protein function prediction: going beyond traditional approaches
Hierarchical multi-label classification is a variant of traditional classification in which instances can belong to several labels that are in turn organized in a hierarchy. Functional classification of genes is a challenging problem in functional genomics for several reasons. First, each gene participates in multiple biological activities, so prediction models should support multi-label classification. Second, genes are organized and classified according to a hierarchical classification scheme that represents the relationships between the functions of the genes, and these relationships should be maintained by the prediction models. In addition, various biomolecular data sources, such as gene expression data and protein-protein interaction data, can be used to assign biological functions to genes. Therefore, the integration of multiple data sources is required to acquire a precise picture of the roles of genes in living organisms, uncovering novel biology in the form of previously unknown functional annotations. To address these issues, the presented work deals with hierarchical multi-label classification.
The purpose of this thesis is threefold. First, a Hierarchical Multi-Label classification algorithm using Boosting classifiers, HML-Boosting, is proposed for the hierarchical multi-label classification problem in the context of gene function prediction. HML-Boosting exploits the predefined hierarchical dependencies among the classes. Using two approaches for class-membership inconsistency correction during the testing phase, the top-down approach and the bottom-up approach, we demonstrate that HML-Boosting outperforms the flat-classifier approach. Moreover, the author proposes the HiBLADE algorithm (Hierarchical multi-label Boosting with LAbel DEpendency), a novel algorithm that takes advantage not only of the pre-established hierarchical taxonomy of the classes but also effectively exploits the hidden correlation among the classes that is not shown through the class hierarchy, thereby improving the quality of the predictions. In the proposed approach, the pre-defined hierarchical taxonomy of the labels is first used to decide upon the training set for each classifier. Second, the dependencies of the children for each label in the hierarchy are captured and analyzed using a Bayes method and instance-based similarity. The primary objective of the proposed algorithm is to find and share a number of base models across the correlated labels. HiBLADE differs from conventional algorithms in two ways. First, it allows the prediction of multiple functions for genes at the same time while maintaining the hierarchy constraint. Second, the classifiers are built based on the label under study and its most similar sibling. Experimental results on several real-world biomolecular datasets show that the proposed method can improve the performance of hierarchical multi-label classification.
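The top-down class-membership correction mentioned above has a simple core: a child label's score must never exceed its parent's, so scores are clamped walking down the hierarchy. The tiny hierarchy and scores below are invented for illustration.

```python
# Sketch of top-down inconsistency correction for hierarchical labels:
# clamp each label's score to its parent's corrected score.

def top_down_correct(scores, parent):
    """Return scores clamped so child <= parent (roots have parent None)."""
    corrected = {}
    def resolve(label):
        if label in corrected:
            return corrected[label]
        s = scores[label]
        if parent[label] is not None:
            s = min(s, resolve(parent[label]))
        corrected[label] = s
        return s
    for label in scores:
        resolve(label)
    return corrected

# Hypothetical three-level function hierarchy with an inconsistent prediction:
# "glycolysis" scores higher than its parent "catabolism".
parent = {"metabolism": None, "catabolism": "metabolism", "glycolysis": "catabolism"}
scores = {"metabolism": 0.9, "catabolism": 0.4, "glycolysis": 0.8}
print(top_down_correct(scores, parent))
```

Thresholding the corrected scores then yields predictions that automatically satisfy the hierarchy constraint: no function is predicted unless its ancestors are too.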
More important, however, is the third part, which focuses on the integration of multiple heterogeneous data sources for improving hierarchical multi-label classification. Unlike most previous works, which mostly consider a single data source for gene function prediction, the author explores the integration of heterogeneous data sources for genome-wide gene function prediction. This integration is addressed with a novel Hierarchical Bayesian iNtegration algorithm, HiBiN, a general framework that uses Bayesian reasoning to integrate heterogeneous data sources for accurate gene function prediction. The system formally uses posterior probabilities to assign class memberships to samples from multiple data sources while maintaining the hierarchical constraint that governs the annotation of the genes. The author demonstrates, through HiBiN, that integrating the diverse datasets significantly improves classification quality for hierarchical gene function prediction in terms of several measures, compared to the single-source prediction models and the fused-flat model used as baselines. Moreover, the system has been extended with a weighting scheme that controls the contribution of each data source according to its relevance to the label under study. The results show that the new weighting scheme compares favorably with the other approach along various performance criteria.
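A minimal sketch of Bayesian data-source fusion with per-source weights, in the spirit of the weighting scheme above: assuming the sources are conditionally independent, per-source class probabilities combine as a weighted sum of log-odds. This is a generic simplification for illustration, not the actual HiBiN formulation.

```python
# Weighted log-odds fusion of per-source probabilities under a
# conditional-independence assumption (illustrative simplification).

import math

def fuse(probabilities, weights=None):
    """Combine P(label | source_i) values into one posterior."""
    if weights is None:
        weights = [1.0] * len(probabilities)
    log_odds = sum(w * math.log(p / (1.0 - p))
                   for p, w in zip(probabilities, weights))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Two sources support the annotation, one is uninformative.
per_source = [0.8, 0.7, 0.5]
print(round(fuse(per_source), 3))
# Down-weighting the first source (e.g. judged less relevant to this
# label) pulls the posterior back toward the remaining evidence.
print(round(fuse(per_source, weights=[0.2, 1.0, 1.0]), 3))
```

The weight acts exactly as described in the abstract: a low-relevance source contributes less to the fused posterior for that label.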
Forensic identification and detection of hidden and obfuscated malware
The revolution in online criminal activities and malicious software (malware) has posed a serious challenge in malware forensics. Malicious attacks have become more organized and purposefully directed. With cybercrimes escalating to great heights in quantity as well as in sophistication and stealth, the main challenge is to detect hidden and obfuscated malware. Malware authors use a variety of obfuscation methods and specialized stealth techniques of information hiding to embed malicious code, to infect systems and to thwart any attempt to detect them, specifically with the use of commercially available anti-malware engines. This has led to the situation of zero-day attacks, where malware infect systems even with existing security measures in place. The aim of this thesis is to address this situation by proposing a variety of novel digital forensic and data mining techniques to automatically detect hidden and obfuscated malware. Anti-malware engines use signature matching to detect malware, where signatures are generated by human experts by disassembling the file and selecting pieces of unique code. Such signature-based detection works effectively with known malware but performs poorly with hidden or unknown malware. Code obfuscation techniques, such as packers, polymorphism and metamorphism, are able to fool current detection techniques by modifying the parent code to produce offspring copies, resulting in malware that has the same functionality but a different structure. These evasion techniques exploit the drawbacks of traditional malware detection methods, which take the current malware structure and create a signature for detecting this malware in the future. However, obfuscation techniques aim to reduce vulnerability to any kind of static analysis, to the detriment of any reverse engineering process.
Furthermore, malware can be hidden in file system slack space, inherent in NTFS-based partitions, making malware detection even more difficult.
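The weakness of signature matching against polymorphism can be shown in a few lines: re-encoding the same payload with a fresh XOR key preserves behaviour (a decoder stub recovers identical bytes) while changing every byte a signature could match. The payload bytes here are arbitrary stand-ins, not real malicious code.

```python
# Why byte-signature matching fails against polymorphic code:
# the XOR re-encoded variant hashes differently, yet decodes to the
# identical payload (so its behaviour is unchanged).

import hashlib

payload = bytes.fromhex("deadbeefcafebabe")        # stand-in for malicious code
signature = hashlib.sha256(payload).hexdigest()    # "signature" of the known sample

def polymorph(body: bytes, key: int) -> bytes:
    """Re-encode the body with a per-generation XOR key (decoder stub omitted)."""
    return bytes(b ^ key for b in body)

variant = polymorph(payload, key=0x5A)
print(signature == hashlib.sha256(variant).hexdigest())   # False: signature misses
print(polymorph(variant, key=0x5A) == payload)            # True: same behaviour
```

This is exactly the gap the thesis's behaviour- and data-mining-based techniques aim to close: the structure changes with every generation, so detection has to key on something other than static bytes.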
Congestion Avoidance Testbed Experiments
DARTnet provides an excellent environment for executing networking experiments. Since the network is private and spans the continental United States, it gives researchers a great opportunity to test network behavior under controlled conditions. However, such an opportunity is not available very often, and a support environment for this kind of testing is therefore lacking. To help remedy this situation, part of SRI's effort in this project was devoted to advancing the state of the art in the techniques used for benchmarking network performance. The second objective of SRI's effort was to advance networking technology in the area of traffic control and to test our ideas on DARTnet, using the tools we developed for network benchmarking. Networks are becoming more common and are being used by more and more people. Applications such as multimedia conferencing and distributed simulation are also placing greater demands on the resources networks provide. Hence, new mechanisms for traffic control must be created to enable networks to serve the needs of their users. SRI's objective, therefore, was to investigate a new queueing and scheduling approach that will help to meet the needs of a large, diverse user population in a "fair" way.
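The "fair" scheduling idea can be illustrated with the simplest fair discipline, per-flow queues served round-robin, so a heavy flow cannot starve a light one. This is plain round-robin for illustration; the report's actual queueing approach is not specified here.

```python
# Per-flow FIFO queues served round-robin: one packet per flow per round.

from collections import deque

def fair_schedule(flows):
    """Interleave packets from per-flow FIFO queues, one at a time."""
    queues = [deque(pkts) for pkts in flows]
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(q.popleft())
    return order

heavy = ["H1", "H2", "H3", "H4", "H5"]   # a flow with a deep backlog
light = ["L1"]                           # a flow with a single packet
print(fair_schedule([heavy, light]))
```

With a single shared FIFO, `L1` would wait behind the entire heavy backlog; round-robin service sends it in the first round, which is the fairness property such schedulers aim for (weighted variants generalize this to unequal shares).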
Efficient architectures and power modelling of multiresolution analysis algorithms on FPGA
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. In the past two decades, there has been a huge amount of interest in Multiresolution Analysis Algorithms (MAAs) and their applications. Some of their applications, such as medical imaging, are computationally intensive, power hungry and require large amounts of memory, which creates a high demand for efficient algorithm implementation, low-power architectures and acceleration. Recently, some MAAs, such as the Finite Ridgelet Transform (FRIT) and the Haar Wavelet Transform (HWT), have become very popular, and they are suitable for a number of image processing applications such as the detection of line singularities and contiguous edges, edge detection (useful for compression and feature detection), and medical image denoising and segmentation. Efficient hardware implementation and acceleration of these algorithms, particularly when addressing large problems, are becoming very challenging and consume a lot of power, which leads to a number of issues, including mobility and reliability concerns. To overcome the computation problems, Field Programmable Gate Arrays (FPGAs) are the technology of choice for accelerating computationally intensive applications due to their high performance. Addressing the power issue requires optimisation and awareness at all levels of abstraction in the design flow.
The most important achievements of the work presented in this thesis are summarised
here.
Two factorisation methodologies for the HWT, called HWT Factorisation Method 1 (HWTFM1) and HWT Factorisation Method 2 (HWTFM2), have been explored to increase the number of zeros and reduce hardware resources. In addition, two novel, efficient and optimised architectures for the proposed methodologies, based on Distributed Arithmetic (DA) principles, have been proposed. The evaluation of the architectural results has shown that the proposed architectures reduce the arithmetic operations (additions/subtractions) by 33% and 25% respectively, compared to a direct implementation of the HWT, and outperform existing results. The proposed HWTFM2 has been implemented on advanced, low-power FPGA devices using the Handel-C language. The FPGA implementation results outperform other existing results in terms of area and maximum frequency. In addition, a novel, efficient architecture for the Finite Radon Transform (FRAT) has also been proposed. This architecture is integrated with the developed HWT architecture to build an optimised architecture for the FRIT. Strategies such as parallelism and pipelining have been deployed at the architectural level for efficient implementation on different FPGA devices. The performance of the proposed FRIT architecture has been evaluated, and the results outperform other existing architectures. Both the FRAT and FRIT architectures have been implemented on FPGAs using the Handel-C language. The evaluation of both architectures has shown that the obtained results outperform existing results by almost 10% in terms of frequency and area. The proposed architectures have also been applied to image data (256 × 256), and their Peak Signal to Noise Ratio (PSNR) has been evaluated for quality purposes.
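For reference, one level of the 1D Haar wavelet transform that the HWT architectures above implement in hardware: pairwise averages give the coarse approximation and pairwise half-differences the detail coefficients. This is the textbook transform (averaging convention), not the factorised hardware formulation proposed in the thesis.

```python
# One decomposition level of the 1D Haar wavelet transform
# (averaging convention), plus its exact inverse.

def haar_step(signal):
    """One level of Haar decomposition; input length must be even."""
    assert len(signal) % 2 == 0
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the original signal from one decomposition level."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

approx, detail = haar_step([9, 7, 3, 5])
print(approx, detail)
print(haar_inverse(approx, detail))
```

Applying `haar_step` recursively to the approximation gives the multi-level transform; the many zeros in smooth regions' detail coefficients are what the factorisation methodologies exploit to cut adders in hardware.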
Two architectures for cyclic convolution based on systolic arrays, using parallelism and pipelining, which can be used as the main building block for the proposed FRIT architecture, have been proposed. The first proposed architecture is a linear systolic array with a pipelined process, and the second is a systolic array with a parallel process. The second architecture reduces the number of registers by 42% compared to the first, and both architectures outperform other existing results. The proposed pipelined architecture has been implemented on different FPGA devices with vector sizes (N) of 4, 8, 16 and 32 and a word length (W) of 8. The implementation results show a significant improvement and outperform other existing results.
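A software reference model of the cyclic convolution those systolic arrays compute is useful context (and is typically how hardware results would be cross-checked): each output sample is a sum of input samples against a circularly shifted kernel.

```python
# Reference model of cyclic (circular) convolution of equal-length sequences:
# y[i] = sum_k x[k] * h[(i - k) mod N]

def cyclic_convolve(x, h):
    n = len(x)
    assert len(h) == n
    return [sum(x[k] * h[(i - k) % n] for k in range(n)) for i in range(n)]

print(cyclic_convolve([1, 2, 3, 4], [1, 0, 0, 0]))   # identity kernel
print(cyclic_convolve([1, 2, 3, 4], [0, 1, 0, 0]))   # circular shift by one
```

The regular, shift-and-accumulate structure of this double loop is what maps so naturally onto a systolic array: each processing element holds one kernel coefficient and passes partial sums to its neighbour.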
Ultimately, an in-depth evaluation of a high-level power macromodelling technique for design-space exploration and characterisation of custom IP cores for FPGAs, called the functional-level power modelling approach, has been presented. The mathematical techniques that form the basis of the proposed power modelling have been validated on a range of custom IP cores. The proposed power modelling is scalable, platform independent and compares favourably with existing approaches. A hybrid, top-down design-flow paradigm integrating functional-level power modelling with commercially available design tools for the systematic optimisation of IP cores has also been developed. The in-depth evaluation of this tool enables us to observe the behaviour of different custom IP cores in terms of power consumption and accuracy, using different design methodologies and arithmetic techniques on various FPGA platforms. Based on the results achieved, the proposed model's accuracy is almost 99% for all IP cores' Dynamic Power (DP) components.
Thomas Gerald Gray Charitable Trust