
    Earth observations from space: Outlook for the geological sciences

    Remote sensing from space platforms is discussed as another tool available to geologists. The results of Nimbus observations, the ERTS program, and Skylab EREP are reviewed, and a multidisciplinary approach is recommended for meeting the challenges of remote sensing.

    The 2010 MW 6.8 Yushu (Qinghai, China) earthquake: constraints provided by InSAR and body wave seismology

    By combining observations from satellite radar, body wave seismology and optical imagery, we have determined the fault segmentation and sequence of ruptures for the 2010 Mw 6.8 Yushu (China) earthquake. We have mapped the fault trace using displacements from SAR image matching, interferometric phase and coherence, and 2.5 m SPOT-5 satellite images. Modeling the event as an elastic dislocation with three segments fitted to the fault trace suggests that the southeast and northwest segments are near vertical, with the central segment dipping 70° to the southwest; slip occurs mainly in the upper 10 km, with a maximum slip of 1.5 m at a depth of 4 km on the southeastern segment. The maximum slip in the top 1 km (i.e., near surface) is up to 1.2 m, and inferred locations of significant surface rupture are consistent with displacements from SAR image matching and field observations. The radar interferograms show rupture over a distance of almost 80 km, much larger than initial seismological and field estimates of the length of the fault. Part of this difference can be attributed to the slip on the northwestern segment of the fault, which was due to an Mw 6.1 aftershock two hours after the main event. The remaining difference can be explained by a non-uniform slip distribution with much of the moment release occurring at depths of less than 10 km. The rupture on the central and southeastern segments of the fault in the main shock propagated at a speed of 2.5 km/s southeastward toward the town of Yushu, located at the end of this segment, accounting for the considerable building damage. Strain accumulation since the last earthquake on the fault segment beyond Yushu is equivalent to an Mw 6.5 earthquake.
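    The moment magnitudes quoted above can be sanity-checked with the standard seismic-moment relation M0 = mu * A * s and the Hanks-Kanamori conversion Mw = (2/3)(log10 M0 - 9.1). The sketch below is a minimal illustration of that arithmetic; the rigidity and the average slip are assumptions chosen for the example (the paper reports a 1.5 m maximum slip, not an average), while the 80 km length and 10 km seismogenic depth come from the abstract.

```python
# Minimal sketch: back-of-the-envelope seismic moment / moment magnitude check
# for a rectangular fault, using the standard Hanks & Kanamori relation.
# Assumed inputs: rigidity 30 GPa, ~0.8 m average slip (illustrative values).
import math

def moment_magnitude(length_m, width_m, avg_slip_m, rigidity_pa=3.0e10):
    """Return (M0 in N*m, Mw) for a rectangular fault patch."""
    m0 = rigidity_pa * length_m * width_m * avg_slip_m   # seismic moment
    mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)            # Hanks & Kanamori
    return m0, mw

# ~80 km rupture length, slip confined to the upper 10 km, ~0.8 m average slip
m0, mw = moment_magnitude(80e3, 10e3, 0.8)
print(f"M0 = {m0:.2e} N*m, Mw = {mw:.1f}")   # comes out close to Mw 6.8
```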

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.
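    As a concrete illustration of the kind of technique such a survey covers, the sketch below clusters sensor nodes with a plain k-means loop so that each cluster can elect a head that aggregates traffic, a common energy-saving pattern in WSNs. The node positions, field size and number of clusters are invented for the example and are not drawn from the paper.

```python
# Minimal k-means sketch over sensor-node coordinates (assumed 100 m x 100 m field).
import random
import math

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign every node to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # move each centroid to the mean of its cluster (keep it if the cluster is empty)
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

rng = random.Random(1)
nodes = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(100)]
heads, clusters = kmeans(nodes, k=5)
for h, cl in zip(heads, clusters):
    print(f"cluster head near ({h[0]:.1f}, {h[1]:.1f}) serves {len(cl)} nodes")
```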

    Content-access QoS in peer-to-peer networks using a fast MDS erasure code

    This paper describes an enhancement of content-access Quality of Service in peer-to-peer (P2P) networks. The main idea is to use an erasure code to distribute the information over the peers. This distribution increases the users’ choice of disseminated encoded data and therefore statistically enhances the overall throughput of the transfer. A performance evaluation is presented, based on an original model using the results of a measurement campaign of sequential and parallel downloads in a real P2P network over the Internet. Based on a bandwidth distribution, statistical content-access QoS guarantees are given as a function of both the content replication level in the network and the file dissemination strategies. A simple application in the context of media streaming is proposed. Finally, the constraints that the proposed system places on the erasure code are analysed, and a new fast MDS erasure code is proposed, implemented and evaluated.
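    The statistical throughput argument can be illustrated with a small simulation: with an (n, k) MDS code, any k of the n encoded blocks reconstruct the file, so a downloader can fetch blocks in parallel from the k fastest peers it can reach instead of pulling the whole file from a single replica. The bandwidth distribution, code parameters and file size below are assumptions for illustration, not the measurements reported in the paper.

```python
# Sketch: mean download time with plain replication vs. (n, k) MDS dissemination.
import random

def simulate(file_mb=100.0, n=20, k=5, trials=10_000, seed=0):
    rng = random.Random(seed)
    t_replica, t_mds = 0.0, 0.0
    for _ in range(trials):
        # assumed peer upload bandwidths in Mbit/s
        bw = [rng.uniform(0.5, 10.0) for _ in range(n)]
        # plain replication: fetch the whole file from one randomly chosen peer
        t_replica += file_mb * 8 / rng.choice(bw)
        # MDS code: fetch k blocks of size file_mb/k in parallel from the k
        # fastest peers; the slowest of those k determines completion time
        fastest_k = sorted(bw, reverse=True)[:k]
        t_mds += max((file_mb / k) * 8 / b for b in fastest_k)
    return t_replica / trials, t_mds / trials

rep, mds = simulate()
print(f"mean download time: replication {rep:.0f} s, (20,5) MDS {mds:.0f} s")
```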

    Surface deformation and elasticity studies in the Virgin Islands

    The report consists of four sections. The first section describes tilt and leveling measurements on Anegada, the most northerly of the British Virgin Islands. The second section discusses sea-level measurements that were initiated in the region and played a significant role in the development of a network of sea-level monitors now telemetered via satellite from the Alaskan Shumagin Islands. The third section briefly describes surface deformation measurements in Iceland using equipment and techniques developed under the subject grant. The final section describes the predicted effects of block surface fragmentation in tectonic areas on the measurement of tilt and strain.

    Efficient Path Delay Test Generation with Boolean Satisfiability

    This dissertation focuses on improving the accuracy and efficiency of path delay test generation using a Boolean satisfiability (SAT) solver. As part of this research, one of the most commonly used SAT solvers, MiniSat, was integrated into the path delay test generator CodGen. A mixed structural-functional approach was implemented in CodGen, where the longest paths were detected using the K Longest Path Per Gate (KLPG) algorithm and path justification and dynamic compaction were handled with the SAT solver. Advanced techniques were implemented in CodGen to further speed up SAT-based path delay test generation using knowledge of the circuit structure. SAT solvers are inherently unaware of circuit structure, and significant speedups are possible if structural information about the circuit is provided to the solver. The advanced techniques explored include Dynamic SAT Solving (DSS), Circuit Observability Don’t Care (Cir-ODC), SAT-based static learning, dynamic learnt clause management and Approximate Observability Don’t Care (ACODC). Both ISCAS 89 and ITC 99 benchmarks as well as industrial circuits were used to demonstrate that the performance of CodGen was significantly improved with MiniSat and the use of circuit structure.
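    To make the SAT connection concrete, the sketch below shows the kind of gate-level CNF (Tseitin-style) encoding that SAT-based test generators rely on: each gate output becomes a Boolean variable constrained by a few clauses, and justifying a value along a path reduces to adding unit clauses and calling a solver such as MiniSat. The tiny three-gate circuit is invented for the example and is not CodGen's actual encoding.

```python
# Minimal Tseitin-style encoding of a three-gate circuit into CNF clauses,
# emitted in DIMACS form so an off-the-shelf SAT solver can consume it.

def and_gate(out, a, b):
    # out <-> (a AND b)
    return [[-a, -b, out], [a, -out], [b, -out]]

def or_gate(out, a, b):
    # out <-> (a OR b)
    return [[a, b, -out], [-a, out], [-b, out]]

def not_gate(out, a):
    # out <-> NOT a
    return [[a, out], [-a, -out]]

# Variables: 1 = x, 2 = y, 3 = z, 4 = g1 = x AND y, 5 = g2 = NOT z, 6 = f = g1 OR g2
clauses = and_gate(4, 1, 2) + not_gate(5, 3) + or_gate(6, 4, 5)
# Requiring the path endpoint f to be 1 is just a unit clause:
clauses.append([6])

print(f"p cnf 6 {len(clauses)}")
for c in clauses:
    print(" ".join(map(str, c)), 0)
```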

    Timing speculation and adaptive reliable overclocking techniques for aggressive computer systems

    Computers have changed our lives beyond our own imagination in the past several decades. The continued and progressive advancements in VLSI technology and numerous micro-architectural innovations have played a key role in the design of spectacular low-cost, high-performance computing systems that have become omnipresent in today's technology-driven world. Performance and dependability have become key concerns as these ubiquitous computing machines continue to drive our everyday life. Every application has unique demands, as applications run in diverse operating environments. Dependable, aggressive and adaptive systems improve efficiency in terms of speed, reliability and energy consumption.

    Traditional computing systems run at a fixed clock frequency, which is determined by taking into account the worst-case timing paths, operating conditions, and process variations. Timing speculation based reliable overclocking advocates going beyond worst-case limits to achieve the best performance while not avoiding, but detecting and correcting, a modest number of timing errors. The success of this design methodology relies on the fact that timing-critical paths are rarely exercised in a design, and typical execution happens much faster than the timing requirements dictated by worst-case design methodology. Better-than-worst-case design is advocated by several recent research pursuits, which exploit dependability techniques to enhance computer system performance.

    In this dissertation, we address different aspects of timing speculation based adaptive reliable overclocking schemes, and evaluate their role in the design of low-cost, high-performance, energy-efficient and dependable systems. We visualize various control knobs in the design that can be favorably controlled to ensure different design targets. As part of this research, we extend the SPRIT3E (Superscalar PeRformance Improvement Through Tolerating Timing Errors) framework, and characterize the extent of application-dependent performance acceleration achievable in superscalar processors by scrutinizing the various parameters that impact operation beyond worst-case limits. We study the limitations imposed by short-path constraints on our technique, and present ways to exploit them to maximize performance gains. We analyze the sensitivity of our technique's adaptiveness by exploring the necessary hardware requirements for dynamic overclocking schemes. Experimental analysis based on SPEC2000 benchmarks running on a SimpleScalar Alpha processor simulator, augmented with error rate data obtained from hardware simulations of a superscalar processor, is presented.

    Even though reliable overclocking guarantees functional correctness, it leads to higher power consumption. As a consequence, reliable overclocking without considering on-chip temperatures will reduce the lifetime reliability of the chip. In this thesis, we analyze how reliable overclocking impacts the on-chip temperature of a microprocessor and evaluate the effects of overheating, due to such reliable dynamic frequency tuning mechanisms, on the lifetime reliability of these systems. We then evaluate the effect of thermal throttling, a technique that clamps the on-chip temperature below a predefined value, on system performance and reliability. Our study shows that a reliably overclocked system with dynamic thermal management achieves a 25% performance improvement while lasting for 14 years when operated within a 353 K temperature limit.

    Over the past five decades, technology scaling, as predicted by Moore's law, has been the bedrock of semiconductor technology evolution. The continued downscaling of CMOS technology to deep sub-micron gate lengths has been the primary reason for its dominance in today's omnipresent silicon microchips. Even though the transition to the next technology node is indispensable, the initial cost and time associated with doing so present a non-level playing field for competitors in the semiconductor business. As part of this thesis, we evaluate the capability of speculative reliable overclocking mechanisms to maximize performance at a given technology level. We evaluate its competitiveness with technology scaling in terms of performance, power consumption, energy and energy-delay product. We present a comprehensive comparison for integer and floating point SPEC2000 benchmarks running on a simulated Alpha processor at three different technology nodes in normal and enhanced modes. Our results suggest that adopting reliable overclocking strategies can help skip a technology node altogether, or remain competitive in the market while porting to the next technology node.

    Reliability has become a serious concern as systems embrace nanometer technologies. In this dissertation, we propose a novel fault tolerant aggressive system that combines soft error protection and timing error tolerance. We replicate both the pipeline registers and the pipeline stage combinational logic. The replicated logic receives its inputs from the primary pipeline registers while writing its output to the replicated pipeline registers. The organization of redundancy in the proposed Conjoined Pipeline system supports overclocking, provides concurrent error detection and recovery capability for soft errors, intermittent faults and timing errors, and flags permanent silicon defects. The fast recovery process requires no checkpointing and takes three cycles. Back-annotated post-layout gate-level timing simulations, using 45 nm technology, of a conjoined two-stage arithmetic pipeline and a conjoined five-stage DLX pipeline processor with forwarding logic show that our approach, even under a severe fault injection campaign, achieves near 100% fault coverage and an average performance improvement of about 20% when dynamically overclocked.
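    A first-order way to see why timing speculation can pay off is to model effective throughput as raw clock frequency discounted by the recovery overhead of detected timing errors. The sketch below does exactly that; the error-rate curve and the recovery penalty are illustrative assumptions, not data from the dissertation.

```python
# Toy model: throughput under reliable overclocking = frequency discounted by
# the cost of correcting timing errors that appear above the worst-case-safe rate.
import math

F_SAFE = 1.0          # worst-case-safe frequency (normalized)
PENALTY = 10          # assumed recovery cycles charged per detected timing error

def error_rate(f):
    """Assumed errors per cycle: negligible below F_SAFE, rising steeply above it."""
    if f <= F_SAFE:
        return 0.0
    return min(1.0, 1e-6 * math.exp(25.0 * (f - F_SAFE)))

def effective_throughput(f):
    # useful work per unit time = f cycles/s, discounted by recovery overhead
    return f / (1.0 + error_rate(f) * PENALTY)

best = max((f / 100 for f in range(100, 151)), key=effective_throughput)
for f in (1.0, 1.1, 1.2, best, 1.4, 1.5):
    print(f"f = {f:.2f}: errors/cycle = {error_rate(f):.2e}, "
          f"throughput = {effective_throughput(f):.3f}")
```

    Under these assumed numbers the best operating point sits well above the worst-case-safe frequency, but throughput collapses once the error rate becomes large, which is exactly the trade-off an adaptive scheme has to track.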

    Enhancement of fault injection techniques based on the modification of VHDL code

    Deep submicrometer devices are expected to be increasingly sensitive to physical faults. For this reason, fault-tolerance mechanisms are more and more required in VLSI circuits, and validating their dependability is a priority in the design process. Fault injection techniques based on the use of hardware description languages offer important advantages with regard to other techniques. First, as these techniques can be applied during the design phase of the system, they help reduce the time-to-market. Second, they present high controllability and reachability. Among the different techniques, those based on the use of saboteurs and mutants are especially attractive due to their high fault modeling capability. However, automatically implementing these techniques in a fault injection tool is difficult; especially complex are the insertion of saboteurs and the generation of mutants. In this paper, we present new proposals to implement saboteurs and mutants for VHDL models that are easy to automate and whose philosophy can be generalized to other hardware description languages.
    Baraza Calvo, JC.; Gracia-Morán, J.; Blanc Clavero, S.; Gil Tomás, DA.; Gil Vicente, PJ. (2008). Enhancement of fault injection techniques based on the modification of VHDL code. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 16(6):693-706. doi:10.1109/TVLSI.2008.2000254
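    The saboteur concept itself is language-independent, even though the paper works at the VHDL level: a small component is inserted between a signal's driver and its reader, forwards the value transparently, and corrupts it according to a chosen fault model while an injection control input is asserted. The Python sketch below illustrates that behavior; the fault models and the driving loop are invented for the example and are not the paper's implementation.

```python
# Behavioral sketch of a saboteur placed on an 8-bit signal path.
import random

class Saboteur:
    def __init__(self, fault_model="bit_flip", bit=0, seed=0):
        self.fault_model = fault_model   # "bit_flip", "stuck_at_0", "stuck_at_1"
        self.bit = bit                   # target bit for stuck-at models
        self.active = False              # injection control input
        self.rng = random.Random(seed)

    def propagate(self, value: int, width: int = 8) -> int:
        """Forward `value` unchanged, or corrupt it while injection is active."""
        if not self.active:
            return value
        if self.fault_model == "stuck_at_0":
            return value & ~(1 << self.bit)
        if self.fault_model == "stuck_at_1":
            return value | (1 << self.bit)
        # default: flip one randomly chosen bit of the word
        return value ^ (1 << self.rng.randrange(width))

# Drive a constant value through the saboteur, activating injection in cycle 2.
sab = Saboteur(fault_model="bit_flip")
for cycle, v in enumerate([0x0F, 0x0F, 0x0F, 0x0F]):
    sab.active = (cycle == 2)
    print(cycle, hex(sab.propagate(v)))
```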

    Application of advanced technology to space automation

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly support this statement and should provide further incentive for the immediate development of specific automation technology as defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.