
    Design of a second life product family from the perspective of the remanufacturing agent

    This thesis presents a method of solving a newly posed Second Life Product Family Design problem. This is unique in that the architecture of the product is not specified to be identical to one of the recaptured products; rather, it is determined through optimization. The problem is framed using Conjoint Analysis and the Multinomial Logit Model, formatted with respect to components available for inclusion in the final products, and then solved using an implementation of Genetic Algorithms. The solution method is also encapsulated in a software module which can be disseminated to industrial users without a background in optimization or familiarity with Genetic Algorithms. A case study is performed to determine the effectiveness of the proposed solution method, and to analyze the influences different market conditions and component similarities can have on the optimal design. It is concluded that the proposed method converges to an optimal Second Life Product Family Design.
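    The core of the market model described above, the Multinomial Logit choice probability, can be sketched as follows (a minimal illustration, not the thesis's actual implementation; the utility values are hypothetical):

```python
import math

# Multinomial Logit (MNL): the probability that a respondent chooses design i
# is P_i = exp(U_i) / sum_j exp(U_j), where U_i is the design's total utility
# (e.g. a sum of Conjoint Analysis part-worths for its components).
def mnl_shares(utilities):
    m = max(utilities)                      # max-shift for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical candidate designs plus a no-purchase option at utility 0
shares = mnl_shares([1.2, 0.8, 0.5, 0.0])
```

A Genetic Algorithm would then search over component assignments, scoring each candidate family by the market share implied by these choice probabilities.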

    A forensically-enabled IaaS cloud computing architecture

    Current cloud architectures do not support digital forensic investigators, nor comply with today’s digital forensics procedures, largely due to the dynamic nature of the cloud. Whilst much research has focused upon identifying the problems that are introduced with a cloud-based system, to date there is a significant lack of research on adapting current digital forensic tools and techniques to a cloud environment. Data acquisition is the first and most important process within digital forensics – to ensure data integrity and admissibility. However, access to data and the control of resources in the cloud is still very much provider-dependent and complicated by the very nature of the multi-tenanted operating environment. Thus, investigators have no option but to rely on cloud providers to acquire evidence, assuming they would be willing or are required to by law. Furthermore, the evidence collected by Cloud Service Providers (CSPs) is still questionable, as there is no way to verify the validity of this evidence and whether evidence has already been lost. This paper proposes a forensic acquisition and analysis model that fundamentally shifts responsibility for the data back to the data owner rather than relying upon a third party. In this manner, organisations are free to undertake investigations at will, requiring no intervention or cooperation from the cloud provider. The model aims to provide a richer and more complete set of admissible evidence than current CSPs are able to provide.

    Tools for modelling and simulating migration-based preservation

    This report describes two tools for modelling and simulating the costs and risks of using IT storage systems for the long-term archiving of file-based AV assets. The tools include a model of storage costs, the ingest and access of files, the possibility of data corruption and loss from a range of mechanisms, and the impact of having limited resources with which to fulfill access requests and preservation actions. Applications include archive planning, development of a technology strategy, cost estimation for business planning, operational decision support, staff training, and generally promoting awareness of the issues and challenges archives face in digital preservation.
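    As an illustration of the kind of risk modelling such tools perform, here is a minimal Monte Carlo sketch (an assumed structure, not the report's actual model): each stored replica of a file is independently corrupted with some annual probability, and we estimate the chance that at least one copy survives the planning horizon.

```python
import random

def survival_probability(p_annual, years, replicas, trials=10000, seed=42):
    """Estimate P(at least one replica survives) by simulation.

    p_annual is a hypothetical per-replica, per-year corruption probability.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        copies = replicas
        for _ in range(years):
            # each remaining copy corrupts independently this year
            copies -= sum(rng.random() < p_annual for _ in range(copies))
            if copies == 0:
                break
        if copies > 0:
            survived += 1
    return survived / trials
```

A real planning tool would add repair (periodic scrubbing and re-replication), media migration events, and a cost model per stored terabyte.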

    A Laboratory Investigation of Supersonic Clumpy Flows: Experimental Design and Theoretical Analysis

    We present a design for high energy density laboratory experiments studying the interaction of hypersonic shocks with a large number of inhomogeneities. These "clumpy" flows are relevant to a wide variety of astrophysical environments, including the evolution of molecular clouds, outflows from young stars, Planetary Nebulae, and Active Galactic Nuclei. The experiment consists of a strong shock (driven by a pulsed power machine or a high intensity laser) impinging on a region of randomly placed plastic rods. We discuss the goals of the specific design and how they are met by specific choices of target components. An adaptive mesh refinement hydrodynamic code is used to analyze the design and establish a predictive baseline for the experiments. The simulations confirm the effectiveness of the design in terms of articulating the differences between shocks propagating through smooth and clumpy environments. In particular, we find significant differences between the shock propagation speeds in a clumpy medium compared to a smooth one with the same average density. The simulation results are of general interest for foams in both inertial confinement fusion and laboratory astrophysics studies. Our results highlight the danger of using average properties of inhomogeneous astrophysical environments when comparing timescales for critical processes such as shock crossing and gravitational collapse times.

    Comment: 7 pages, 6 figures. Submitted to the Astrophysical Journal. For additional information, including simulation animations and the pdf and ps files of the paper with embedded high-quality images, see http://pas.rochester.edu/~wm
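    The sensitivity of shock speed to density can be illustrated with a back-of-the-envelope estimate (hypothetical numbers, not the paper's simulation values): a shock driven at fixed pressure P moves roughly at v ~ sqrt(P/rho), so it races through low-density interclump gas much faster than the volume-averaged density would suggest.

```python
import math

P = 1.0                              # driving pressure (arbitrary units)
rho_clump, rho_inter = 100.0, 1.0    # clump vs interclump densities
f = 0.1                              # volume filling factor of clumps

# smooth medium with the same average density as the clumpy one
rho_avg = f * rho_clump + (1 - f) * rho_inter

v_smooth = math.sqrt(P / rho_avg)    # speed predicted from the average
v_inter = math.sqrt(P / rho_inter)   # speed in the interclump channels
ratio = v_inter / v_smooth           # more than 3x faster for these numbers
```

This is exactly why timescale estimates based on average properties (e.g. shock crossing vs. gravitational collapse times) can mislead.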

    Characterization of chromatin accessibility with a transposome hypersensitive sites sequencing (THS-seq) assay.

    Chromatin accessibility captures in vivo protein-chromosome binding status, and is considered an informative proxy for protein-DNA interactions. DNase I and Tn5 transposase assays require thousands to millions of fresh cells for comprehensive chromatin mapping. Applying Tn5 tagmentation to hundreds of cells results in sparse chromatin maps. We present a transposome hypersensitive sites sequencing assay for highly sensitive characterization of chromatin accessibility. Linear amplification of accessible DNA ends with in vitro transcription, coupled with an engineered Tn5 super-mutant, demonstrates improved sensitivity on limited input materials, and accessibility of small regions near distal enhancers, compared with ATAC-seq.

    Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are on the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.

    SEAPAK user's guide, version 2.0. Volume 1: System description

    SEAPAK is a user-interactive satellite data analysis package that was developed for the processing and interpretation of Nimbus-7 Coastal Zone Color Scanner (CZCS) and NOAA Advanced Very High Resolution Radiometer (AVHRR) data. Significant revisions were made to version 1.0 of the guide, and the ancillary environmental data analysis module was expanded. The package continues to emphasize user friendliness and user-interactive data analyses. Additionally, because the scientific goals of the ocean color research being conducted have shifted to large space and time scales, batch processing capabilities for both satellite and ancillary environmental data analyses were enhanced, thus allowing large quantities of data to be ingested and analyzed in the background.

    Distributed Finite Element Analysis Using a Transputer Network

    The principal objective of this research effort was to demonstrate the extraordinarily cost-effective acceleration of finite element structural analysis problems using a transputer-based parallel processing network. This objective was accomplished in the form of a commercially viable parallel processing workstation. The workstation is a desktop-size, low-maintenance computing unit capable of supercomputer performance yet costs two orders of magnitude less. To achieve the principal research objective, a transputer-based structural analysis workstation termed XPFEM was implemented with linear static structural analysis capabilities resembling commercially available NASTRAN. Finite element model files, generated using the on-line preprocessing module or external preprocessing packages, are downloaded to a network of 32 transputers for accelerated solution. The system currently executes at about one third Cray X-MP24 speed, but additional acceleration appears likely. For the NASA-selected demonstration problem of a Space Shuttle main engine turbine blade model with about 1500 nodes and 4500 independent degrees of freedom, the Cray X-MP24 required 23.9 seconds to obtain a solution while the transputer network, operated from an IBM PC-AT compatible host computer, required 71.7 seconds. Consequently, the $80,000 transputer network demonstrated a cost-performance ratio about 60 times better than the $15,000,000 Cray X-MP24 system.
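    The quoted cost-performance figure can be checked directly from the numbers given in the abstract:

```python
# Hardware cost and solution time for the 1500-node Space Shuttle
# turbine blade demonstration problem, as quoted in the abstract.
cray_cost, xpfem_cost = 15_000_000, 80_000   # dollars
cray_time, xpfem_time = 23.9, 71.7           # seconds to solution

speed_ratio = xpfem_time / cray_time         # Cray is ~3x faster
cost_ratio = cray_cost / xpfem_cost          # Cray costs 187.5x more
advantage = cost_ratio / speed_ratio         # ~62.5, i.e. "about 60 times"
```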