Multi-aspect, robust, and memory exclusive guest os fingerprinting
Precise fingerprinting of an operating system (OS) is critical to many security and forensics applications in the cloud, such as virtual machine (VM) introspection, penetration testing, guest OS administration, kernel dump analysis, and memory forensics. Existing OS fingerprinting techniques primarily inspect network packets or CPU states, and all fall short in precision and usability. Since the physical memory of a VM is available in all of these applications, in this article we present OS-Sommelier+, a multi-aspect, memory-exclusive approach for precise and robust guest OS fingerprinting in the cloud. It works as follows: given a physical memory dump of a guest OS, OS-Sommelier+ first uses a code-hash-based approach, from the kernel code aspect, to determine the guest OS version. If the code hash approach fails, OS-Sommelier+ then uses a signature-based approach, from the kernel data aspect, to determine the version. We have implemented a prototype system and tested it with a number of Linux kernels. Our evaluation results show that the code hash approach is faster but can fingerprint only known kernels, whereas the data signature approach complements it and can fingerprint even unknown kernels.
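The two-stage pipeline described in this abstract can be sketched roughly as follows. This is an illustrative sketch only, not OS-Sommelier+'s actual interface: the hash database, function names, and the fallback callback are all assumptions.

```python
import hashlib

# Hypothetical database mapping kernel-code hashes to known OS versions
# (placeholder entries; a real system would be populated from reference kernels).
KNOWN_KERNEL_HASHES: dict[str, str] = {}

def code_hash(kernel_code_pages: list[bytes]) -> str:
    """Hash the kernel code region extracted from a physical memory dump."""
    h = hashlib.sha1()
    for page in kernel_code_pages:
        h.update(page)
    return h.hexdigest()

def fingerprint(kernel_code_pages, data_signature_match):
    """Two-stage fingerprinting: try the fast code-hash lookup first,
    then fall back to data-signature matching for unknown kernels."""
    version = KNOWN_KERNEL_HASHES.get(code_hash(kernel_code_pages))
    if version is not None:
        return version              # known kernel: fast code-aspect path
    return data_signature_match()   # unknown kernel: slower data-aspect path
```

The structure mirrors the trade-off reported in the abstract: the hash lookup is cheap but only matches kernels seen before, while the data-signature stage handles the rest.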
Developing a distributed electronic health-record store for India
The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of India's more than one billion citizens.
Report of the Attitude Control and Attitude Determination Panel
Failures and deficiencies in flight programs are reviewed and suggestions are made for avoiding them. The technology development problem areas considered are control-configured vehicle design, gyros, solid-state star sensors, control instrumentation, tolerant/accommodating control systems, large momentum exchange devices, and autonomous rendezvous and docking.
Structure identification in relational data
This paper presents several investigations into the prospects for identifying meaningful structures in empirical data, namely, structures permitting effective organization of the data to meet the requirements of future queries. We propose a general framework whereby the notion of identifiability is given a precise formal definition similar to that of learnability. Using this framework, we then explore whether a tractable procedure exists for deciding whether a given relation is decomposable into a constraint network or a CNF theory with desirable topology and, if the answer is positive, for identifying the desired decomposition. Finally, we address the problem of expressing a given relation as a Horn theory and, if this is impossible, finding the best k-Horn approximation to the given relation. We show that both problems can be solved in time polynomial in the length of the data.
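The Horn-expressibility question in the last step has a classical model-theoretic characterization that can be checked directly on small relations: a Boolean relation is the model set of some Horn theory exactly when it is closed under componentwise AND of its models. The sketch below illustrates that characterization only; it is a brute-force check, not the paper's polynomial-time algorithm.

```python
from itertools import combinations

def is_horn_expressible(models: set[tuple[int, ...]]) -> bool:
    """A Boolean relation is expressible as a Horn theory iff its set of
    models is closed under componentwise AND (intersection of models)."""
    for m1, m2 in combinations(models, 2):
        meet = tuple(a & b for a, b in zip(m1, m2))
        if meet not in models:
            return False
    return True

# Example: {(1,1), (1,0), (0,1)} is not Horn-expressible, because
# (1,0) AND (0,1) = (0,0) is not among the models.
```

When the check fails, the relation at best admits an approximation, which is where the k-Horn approximation problem in the abstract comes in.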
A comparative evaluation of dynamic visualisation tools
Despite their potential applications in software comprehension, it appears that dynamic visualisation tools are seldom used outside the research laboratory. This paper presents an empirical evaluation of five dynamic visualisation tools - AVID, Jinsight, jRMTool, Together ControlCenter diagrams and Together ControlCenter debugger. The tools were evaluated on a number of general software comprehension and specific reverse engineering tasks using the HotDraw object-oriented framework. The tasks considered typical comprehension issues, including identification of software structure and behaviour, design pattern extraction, extensibility potential, maintenance issues, functionality location, and runtime load. The results revealed that the level of abstraction employed by a tool affects its success in different tasks, and that the tools were more successful in addressing specific reverse engineering tasks than general software comprehension activities. It was found that no one tool performs well in all tasks, and some tasks were beyond the capabilities of all five tools. The paper concludes with suggestions for improving the efficacy of such tools.
Beyond simulation: designing for uncertainty and robust solutions
Simulation is an increasingly essential tool in the design of our environment, but any model is only as good as the initial assumptions on which it is built. This paper aims to outline some of the limits and potential dangers of reliance on simulation, and suggests how to make our models, and our buildings, more robust with respect to the uncertainty we face in design. It argues that the single analyses provided by most simulations yield too precise and too narrow a result to be maximally useful in design; instead, a broader description is required, as might be provided by many differing simulations. Increased computing power now allows this in many areas. Suggestions are made for the further development of simulation tools for design: these increased resources should be dedicated not simply to the accuracy of single solutions, but to a bigger picture that takes account of a design’s robustness to change, multiple phenomena that cannot be predicted, and the wider range of possible solutions. Methods for doing so, including statistical methods, adaptive modelling, machine learning, and pattern recognition algorithms for identifying persistent structures in models, are identified. We propose a number of avenues for future research and discuss how these fit into the design process, particularly in the case of the design of very large buildings.
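The shift the abstract advocates, from one precise answer to a distribution over many differing runs, can be sketched as a simple Monte Carlo wrapper around any single-run model. The toy energy model, its coefficients, and the input ranges below are purely illustrative assumptions, standing in for whatever simulation a designer actually uses.

```python
import random
import statistics

def simulate(insulation_u_value: float, occupancy: float) -> float:
    """Placeholder single-run building model (illustrative coefficients):
    a stand-in for any simulation whose inputs are uncertain at design time."""
    return 120.0 * insulation_u_value + 15.0 * occupancy

def robust_evaluate(n_runs: int = 1000, seed: int = 0):
    """Sample the uncertain inputs many times and report the distribution
    of outcomes instead of one point estimate."""
    rng = random.Random(seed)
    results = [
        simulate(rng.uniform(0.2, 0.5), rng.uniform(5.0, 50.0))
        for _ in range(n_runs)
    ]
    return statistics.mean(results), statistics.pstdev(results)

# A design is more robust when the spread of outcomes (the std. dev.) stays
# small across the plausible input range, not merely when the mean looks good.
```

The same wrapper extends naturally to the statistical and pattern-recognition methods the abstract mentions, since those operate on exactly this kind of ensemble of runs.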