
    Replication in mirrored disk systems


    Gravitating discs around black holes

    Fluid discs and tori around black holes are discussed within different approaches, with emphasis on the role of disc gravity. First, the prospects of investigating the gravitational field of a black hole--disc system by analytical solutions of the stationary, axially symmetric Einstein equations are reviewed. Then, more detailed considerations focus on the middle and outer parts of extended disc-like configurations, where relativistic effects are small and the Newtonian description is adequate. Within general relativity, only the static case has been analysed in detail. The results are often very inspiring; however, simplifying assumptions must be imposed: ad hoc profiles of the disc density are commonly assumed, and the effects of frame-dragging are completely lacking. Astrophysical discs (e.g. accretion discs in active galactic nuclei) typically extend far beyond the relativistic domain and are fairly diluted. However, self-gravity is still essential for their structure and evolution, as well as for their emitted radiation and their impact on the surrounding environment. For example, a nuclear star cluster in a galactic centre may bear various imprints of mutual star--disc interactions, which can be recognised in observational properties, such as the relation between the central mass and the stellar velocity dispersion. Comment: Accepted for publication in CQG; high-resolution figures will be available from http://www.iop.org/EJ/journal/CQ
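    For context, the relation between the central mass and the stellar velocity dispersion mentioned above is the empirical M–σ relation, usually quoted as a power law. The normalisation M_0 and the exponent α below are indicative literature values, not results of this paper:

```latex
% Empirical M--sigma relation (indicative form; exact coefficients vary by study)
M_{\mathrm{BH}} \simeq M_{0}\,
  \left(\frac{\sigma}{200\ \mathrm{km\,s^{-1}}}\right)^{\alpha},
\qquad \alpha \approx 4\text{--}5
```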

    Towards a Lacanian group psychology: the prisoner's dilemma and the trans-subjective

    Revisiting Lacan's discussion of the puzzle of the prisoner's dilemma provides a means of elaborating a theory of the trans-subjective. An illustration of this dilemma provides the basis for two important arguments. Firstly, that we need to grasp a logical succession of modes of subjectivity: from subjectivity to inter-subjectivity, and from inter-subjectivity to a form of trans-subjective social logic. The trans-subjective, thus conceptualized, enables forms of social objectivity that transcend the level of (inter)subjectivity and that play a crucial role in consolidating given societal groupings. Secondly, the paper argues that various declarative and symbolic activities are important non-psychological bases—trans-subjective foundations—for psychological identifications of an inter-subjective sort. These assertions link in interesting ways to recent developments in the contemporary social psychology of interobjectivity, which likewise emphasize a type of objectivity that plays an indispensable part in co-ordinating human relations and understanding.

    Adaptive and Power-aware Fault Tolerance for Future Extreme-scale Computing

    Two major trends in large-scale computing are the rapid growth in HPC, in particular the international exascale initiative, and the dramatic expansion of Cloud infrastructures accompanied by the rise of Big Data. To satisfy the continuous demand for increasing computing capacity, future extreme-scale systems will embrace a multi-fold increase in the number of computing, storage, and communication components in order to support an unprecedented level of parallelism. Despite the benefits in capacity and economies of scale, making the upward transformation to extreme scale poses numerous scientific and technological challenges, two of which are power consumption and fault tolerance. As system scale increases, failures become the norm rather than the exception, driving the system to significantly lower efficiency with unforeseen power consumption. This thesis aims at simultaneously addressing these two challenges by introducing a novel fault-tolerant computational model, referred to as Leaping Shadows. Based on Shadow Replication, Leaping Shadows associates with each main process a suite of coordinated shadow processes, which execute in parallel but at differential rates, in order to deal with failures and meet the QoS requirements of the underlying application under strict power/energy constraints. In failure-prone extreme-scale computing environments, this new model addresses the limitations of the basic Shadow Replication model and achieves adaptive and power-aware fault tolerance that is more time- and energy-efficient than existing techniques. In this thesis, we first present an analytical-model-based optimization framework that demonstrates Shadow Replication's adaptivity and flexibility in achieving multi-dimensional QoS requirements. Then, we introduce Leaping Shadows as a novel power-aware fault tolerance model that tolerates multiple types of failures, guarantees forward progress, and maintains a consistent level of resilience. Lastly, the details of a Leaping Shadows implementation in MPI are discussed, along with an extensive performance evaluation that includes a comparison to checkpoint/restart. Collectively, these efforts advocate an adaptive and power-aware fault tolerance alternative for future extreme-scale computing.
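    As a rough illustration of the main/shadow pairing and the differential execution rates described above, the following toy simulation shows a shadow process trailing a main process at a reduced rate and "leaping" to the main's progress when the main fails. The names, rates, and failure model are illustrative assumptions, not the thesis's MPI implementation:

```python
import random

class Process:
    """Toy model of a process advancing through abstract work units."""
    def __init__(self, name, rate):
        self.name = name
        self.rate = rate          # fraction of full execution speed
        self.progress = 0.0       # completed work units
        self.failed = False

    def step(self, dt=1.0):
        if not self.failed:
            self.progress += self.rate * dt

def run(total_work=100.0, fail_prob=0.02, seed=1):
    random.seed(seed)
    main = Process("main", rate=1.0)      # main executes at full rate
    shadow = Process("shadow", rate=0.5)  # shadow trails at reduced rate/power
    t = 0
    while main.progress < total_work and shadow.progress < total_work:
        t += 1
        main.step()
        shadow.step()
        if not main.failed and random.random() < fail_prob:
            main.failed = True
            # "Leaping": the shadow jumps to the main's last known progress
            # and is promoted to full rate to preserve forward progress.
            shadow.progress = max(shadow.progress, main.progress)
            shadow.rate = 1.0
            print(f"t={t}: main failed; shadow leaps to {shadow.progress:.1f}")
    winner = shadow if main.failed else main
    print(f"t={t}: work completed by {winner.name}")

if __name__ == "__main__":
    run()
```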

    Research on reliable distributed computing

    Issued as Quarterly funds expenditure reports [nos. 1-4], Quarterly progress reports [nos. 1-4], Final report and Appendix, Project no. G-36-62

    Prefetching and Caching Techniques in File Systems for MIMD Multiprocessors

    The increasing speed of the most powerful computers, especially multiprocessors, makes it difficult to provide sufficient I/O bandwidth to keep them running at full speed for the largest problems. Trends show that the difference in the speed of disk hardware and the speed of processors is increasing, with I/O severely limiting the performance of otherwise fast machines. This widening access-time gap is known as the “I/O bottleneck crisis.” One solution to the crisis, suggested by many researchers, is to use many disks in parallel to increase the overall bandwidth.

    This dissertation studies some of the file system issues needed to get high performance from parallel disk systems, since parallel hardware alone cannot guarantee good performance. The target systems are large MIMD multiprocessors used for scientific applications, with large files spread over multiple disks attached in parallel. The focus is on automatic caching and prefetching techniques. We show that caching and prefetching can transparently provide the power of parallel disk hardware to both sequential and parallel applications using a conventional file system interface. We also propose a new file system interface (compatible with the conventional interface) that could make it easier to use parallel disks effectively.

    Our methodology is a mixture of implementation and simulation, using a software testbed that we built to run on a BBN GP1000 multiprocessor. The testbed simulates the disks and fully implements the caching and prefetching policies. Using a synthetic workload as input, we use the testbed in an extensive set of experiments. The results show that prefetching and caching improved the performance of parallel file systems, often dramatically.
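    To make the caching and prefetching idea concrete, here is a minimal sketch of one-block-lookahead prefetching in front of an LRU block cache, with blocks striped round-robin over simulated disks. It is a toy illustration under assumed names and parameters, not the dissertation's testbed or policies:

```python
from collections import OrderedDict

class PrefetchingCache:
    """Toy LRU block cache with one-block-lookahead (OBL) prefetching."""
    def __init__(self, read_block, capacity=8):
        self.read_block = read_block      # function: block number -> data
        self.capacity = capacity
        self.cache = OrderedDict()        # block number -> data, in LRU order
        self.hits = self.misses = self.prefetches = 0

    def _install(self, blk):
        if blk not in self.cache:
            self.cache[blk] = self.read_block(blk)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used

    def read(self, blk):
        if blk in self.cache:
            self.hits += 1
            self.cache.move_to_end(blk)
        else:
            self.misses += 1
            self._install(blk)
        # Sequential heuristic: fetch the next block before it is requested.
        if blk + 1 not in self.cache:
            self.prefetches += 1
            self._install(blk + 1)
        return self.cache[blk]

def read_block(blk, num_disks=4):
    """Simulate a block read from one of several disks (round-robin striping)."""
    return f"disk{blk % num_disks}:block{blk}"

cache = PrefetchingCache(read_block)
for b in range(16):                        # a purely sequential read pattern
    cache.read(b)
print(cache.hits, cache.misses, cache.prefetches)   # 15 1 16
```

    With a sequential workload, every block after the first is found in the cache because it was prefetched on the previous access, which is the effect exploited to hide disk latency behind computation.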

    The Detectability Limit of Organic Molecules Within Mars South Polar Laboratory Analogs

    A series of laboratory experiments was carried out in order to generate a diagnostic spectrum for Polycyclic Aromatic Hydrocarbons (PAHs) of astrobiological interest in the context of the Martian South Polar Residual Cap (SPRC). The aim was to establish PAH spectral features that are more easily detectable in CO2 ice (mixed with small amounts of H2O ice) than the previously reported absorption feature at 3.29 µm, and thereby to constrain their detectability limit. There is currently no existing literature on PAH detection within SPRC features, making this work novel and impactful given the recent discovery of a possible subglacial lake beneath the Martian South Pole. Although they have been detected in Martian meteorites, PAHs have not yet been detected on Mars, possibly due to the deleterious effects of ultraviolet radiation at the surface of the planet. SPRC features may provide protection to fragile molecules, and this work seeks to provide laboratory data to improve the interpretation of orbital remote sensing spectroscopic imaging data. We also ascertain the effect of CO2 ice sublimation on organic spectra and provide PAH reference spectra in mixtures relevant to Mars. A detectability limit of ∼0.04% has been recorded for observing PAHs in CO2 ice using laboratory instrument parameters emulating those of the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), with new spectral slope features revealed between 0.7 and 1.1 µm, and absorption features at 1.14 and, most sensitively, at 1.685 µm. A Mars regolith analogue mixed with a concentration of 1.5% PAHs resulted in no discernible organic spectral features. These detectability limits measured in the laboratory are discussed and extrapolated to the conditions on the Mars South Polar Cap in terms of dust and water ice abundance and CO2 ice grain size, for both the main perennial cap and the H2O ice-dust sublimation lag deposit.

    Selective Dynamic Analysis of Virtualized Whole-System Guest Environments

    Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, a semantic gap between the analysis code and the execution under analysis, architecture/OS specificity, being user-mode only, and lacking flexibility and extendability. This dissertation describes the design of the Dynamic Executable Code Analysis Framework (DECAF), a virtual machine-based, multi-target, whole-system dynamic binary analysis framework. In short, DECAF seeks to address the shortcomings of existing whole-system dynamic analysis tools and extend the state of the art by utilizing a combination of novel techniques to provide rich analysis functionality without crippling amounts of execution overhead. DECAF extends the mature QEMU whole-system emulator, a type-2 hypervisor capable of emulating every instruction that executes within a complete guest system environment. DECAF provides a novel, hardware event-based method of just-in-time virtual machine introspection (VMI) to address the semantic gap problem. It also implements a novel instruction-level taint tracking engine at a bitwise level of granularity, ensuring that taint propagation is sound and highly precise throughout the guest environment. A formal analysis of the taint propagation rules is provided to verify that most instructions introduce neither false positives nor false negatives. DECAF’s design also provides a plugin architecture with a simple-to-use, event-driven programming interface that makes it both flexible and extendable for a variety of analysis tasks. The implementation of DECAF consists of 9550 lines of C++ code and 10270 lines of C code. Its performance is evaluated using the SPEC CPU2006 benchmarks, which show an average overhead of 605% for system-wide tainting and 12% for VMI. Three platform-neutral DECAF plugins - Instruction Tracer, Keylogger Detector, and API Tracer - are described and evaluated in this dissertation to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. This dissertation also presents the Virtual Device Fuzzer (VDF), a scalable fuzz testing framework for discovering bugs within the virtual devices implemented as part of QEMU. Such bugs could be used by malicious software executing within a guest under analysis by DECAF, so the discovery, reproduction, and diagnosis of such bugs helps to protect DECAF against attack while improving QEMU and any analysis platforms built upon QEMU. VDF uses selective instrumentation to perform targeted fuzz testing, which explores only the branches of execution belonging to virtual devices under analysis. By leveraging record and replay of memory-mapped I/O activity, VDF quickly cycles virtual devices through an arbitrarily large number of states without requiring a guest OS to be booted or present. Once a test case is discovered that triggers a bug, VDF reduces the test case to the minimum number of reads/writes required to trigger the bug and generates source code suitable for reproducing the bug during debugging and analysis. VDF is evaluated by fuzz testing eighteen QEMU virtual devices, generating 1014 crash or hang test cases that reveal bugs in six of the tested devices. Over 80% of the crashes and hangs were discovered within the first day of testing. VDF covered an average of 62.32% of virtual device branches during testing, and the average test case was minimized to a reproduction test case only 18.57% of its original size.
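    As a small illustration of taint tracking at a bitwise level of granularity, the sketch below keeps a per-bit taint mask alongside each value and propagates it through a few bitwise operations. The rules shown are a simplified, generic formulation for illustration only, not DECAF's actual propagation engine or its formally analysed rules:

```python
class TaintedValue:
    """A value carrying a per-bit taint mask (set bits mark tainted positions)."""
    def __init__(self, value, taint=0):
        self.value = value
        self.taint = taint

    def __and__(self, other):
        # A tainted input bit influences the AND output only where the other
        # operand is not a known (untainted) 0.
        taint = (self.taint & (other.value | other.taint)) | \
                (other.taint & (self.value | self.taint))
        return TaintedValue(self.value & other.value, taint)

    def __or__(self, other):
        # Dual rule: a bit forced to 1 by an untainted operand is untainted.
        taint = (self.taint & ~(other.value & ~other.taint)) | \
                (other.taint & ~(self.value & ~self.taint))
        return TaintedValue(self.value | other.value, taint)

    def __xor__(self, other):
        # XOR: any tainted input bit can flip the corresponding output bit.
        return TaintedValue(self.value ^ other.value, self.taint | other.taint)

# Example: only the low byte of x is derived from untrusted input.
x = TaintedValue(0x12345678, taint=0x000000FF)
k = TaintedValue(0x0000FF00)              # untainted constant mask
print(hex((x & k).taint))   # 0x0  - ANDing with untainted 0 bits clears taint
print(hex((x ^ k).taint))   # 0xff - XOR preserves the tainted low byte
```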