
    MOON: MapReduce On Opportunistic eNvironments

    Abstract—MapReduce offers a flexible programming model for processing and generating large data sets on dedicated resources, where only a small fraction of such resources are ever unavailable at any given time. In contrast, when MapReduce is run on volunteer computing systems, which opportunistically harness idle desktop computers via frameworks like Condor, it results in poor performance due to the volatility of the resources, in particular the high rate of node unavailability. Specifically, the data and task replication scheme adopted by existing MapReduce implementations is woefully inadequate for resources with high unavailability. To address this, we propose MOON, short for MapReduce On Opportunistic eNvironments. MOON extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms in order to offer reliable MapReduce services on a hybrid resource architecture, where volunteer computing systems are supplemented by a small set of dedicated nodes. The adaptive task and data scheduling algorithms in MOON distinguish between (1) different types of MapReduce data and (2) different types of node outages in order to strategically place tasks and data on both volatile and dedicated nodes. Our tests demonstrate that MOON can deliver a 3-fold performance improvement over Hadoop in volatile, volunteer computing environments.
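
    As a rough illustration of the placement trade-off such a scheduler navigates, the sketch below sizes a block's replica set against volunteer-node volatility and falls back to a dedicated copy when volatility is too high. The availability arithmetic is standard independent-failure math, not MOON's actual scheduling algorithm, and all parameter values are illustrative assumptions.

        import math

        def plan_replicas(p_unavail: float, target_avail: float,
                          max_volatile_replicas: int = 10):
            """Decide where to place replicas of a data block (illustrative).

            p_unavail: probability a volunteer node is unavailable at any instant.
            target_avail: required probability that at least one replica is reachable.
            Returns (volatile_replicas, use_dedicated_node).
            """
            # With k independent volatile replicas, the block is unreachable with
            # probability p_unavail ** k. Solve for the smallest k meeting the target.
            k = math.ceil(math.log(1.0 - target_avail) / math.log(p_unavail))
            if k <= max_volatile_replicas:
                return k, False
            # Volunteer nodes alone are too volatile: keep one copy on a
            # dedicated node, mirroring MOON's hybrid architecture.
            return max_volatile_replicas, True

        print(plan_replicas(p_unavail=0.4, target_avail=0.999))  # -> (8, False)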

    Reliable Linear, Sesquilinear and Bijective Operations On Integer Data Streams Via Numerical Entanglement

    A new technique is proposed for fault-tolerant linear, sesquilinear and bijective (LSB) operations on M integer data streams (M ≥ 3), such as: scaling, additions/subtractions, inner or outer vector products, permutations and convolutions. In the proposed method, the M input integer data streams are linearly superimposed to form M numerically-entangled integer data streams that are stored in place of the original inputs. A series of LSB operations can then be performed directly on these entangled data streams. The results are extracted from the M entangled output streams by additions and arithmetic shifts. Any soft errors affecting any single disentangled output stream are guaranteed to be detectable via a specific post-computation reliability check. In addition, when utilizing a separate processor core for each of the M streams, the proposed approach can recover all outputs after any single fail-stop failure. Importantly, unlike algorithm-based fault tolerance (ABFT) methods, the number of operations required for the entanglement, extraction and validation of the results is linearly related to the number of inputs and does not depend on the complexity of the performed LSB operations. We have validated our proposal on an Intel processor (Haswell architecture with AVX2 support) via fast Fourier transforms, circular convolutions, and matrix multiplication operations. Our analysis and experiments reveal that the proposed approach incurs a 0.03% to 7% reduction in processing throughput for a wide variety of LSB operations. This overhead is 5 to 1000 times smaller than that of the equivalent ABFT method that uses a checksum stream. Thus, our proposal can be used in fault-generating processor hardware or safety-critical applications, where high reliability is required without the cost of ABFT or modular redundancy.
    Comment: to appear in IEEE Trans. on Signal Processing, 201
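
    The sketch below is a toy rendition of the entanglement idea for M = 3: each stream is superimposed with a weighted copy of its cyclic neighbor, a linear operation (here, scaling) is applied directly to the entangled streams, and the outputs are recovered by a weighted combination. The weight K, the explicit division (the paper achieves extraction with additions and arithmetic shifts only), and the divisibility test standing in for the paper's reliability check are all illustrative assumptions, not the paper's exact construction.

        K = 1 << 16        # superposition weight (a power of two); assumed value
        D = 1 + K**3       # disentanglement divisor for M = 3 cyclic entanglement

        def entangle(x):
            # c[m] = x[m] + K * x[m+1 mod 3], stored in place of the inputs
            return [[x[m][i] + K * x[(m + 1) % 3][i] for i in range(len(x[0]))]
                    for m in range(3)]

        def disentangle(c):
            out, ok = [], True
            for m in range(3):
                stream = []
                for i in range(len(c[0])):
                    # c[m] - K*c[m+1] + K^2*c[m+2] telescopes to (1 + K^3) * x[m]
                    num = c[m][i] - K * c[(m + 1) % 3][i] + K * K * c[(m + 2) % 3][i]
                    q, r = divmod(num, D)
                    ok &= (r == 0)   # non-zero remainder flags a soft error (toy check)
                    stream.append(q)
                out.append(stream)
            return out, ok

        x = [[3, -1, 4], [1, 5, -9], [2, 6, 5]]
        c = entangle(x)
        c = [[7 * v for v in stream] for stream in c]    # LSB operation: scale by 7
        y, ok = disentangle(c)
        assert ok and y == [[7 * v for v in s] for s in x]
        c[1][2] ^= 1 << 5                                # inject a single-bit soft error
        _, ok = disentangle(c)
        assert not ok                                    # error detected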

    A Data Analytics Framework for Smart Grids: Spatio-temporal Wind Power Analysis and Synchrophasor Data Mining

    Under the framework of intelligent management of power grids by leveraging advanced information, communication and control technologies, a primary objective of this study is to develop novel data mining and data processing schemes for several critical applications that can enhance the reliability of power systems. Specifically, this study is broadly organized into the following two parts: I) spatio-temporal wind power analysis for wind generation forecast and integration, and II) data mining and information fusion of synchrophasor measurements toward secure power grids.

    Part I is centered around wind power generation forecast and integration. First, a spatio-temporal analysis approach for short-term wind farm generation forecasting is proposed. Specifically, using extensive measurement data from an actual wind farm, the probability distribution and the level crossing rate of wind farm generation are characterized using tools from graphical learning and time-series analysis. Built on these spatial and temporal characterizations, finite-state Markov chain models are developed, and a point forecast of wind farm generation is derived using the Markov chains. Then, multi-timescale scheduling and dispatch with stochastic wind generation and opportunistic demand response is investigated.

    Part II focuses on incorporating the emerging synchrophasor technology into the security assessment and the post-disturbance fault diagnosis of power systems. First, a data-mining framework is developed for on-line dynamic security assessment (DSA) by using adaptive ensemble decision tree learning of real-time synchrophasor measurements. Under this framework, novel on-line DSA schemes are devised, aiming to handle various factors (including variations of operating conditions, forced system topology change, and loss of critical synchrophasor measurements) that can have a significant impact on the performance of conventional data-mining-based on-line DSA schemes. Then, in the context of post-disturbance analysis, fault detection and localization of line outages is investigated using a dependency graph approach. It is shown that a dependency graph for voltage phase angles can be built according to the interconnection structure of the power system, and line outage events can be detected and localized through networked data fusion of the synchrophasor measurements collected from multiple locations of the power grid. Along a more practical avenue, a decentralized networked data fusion scheme is proposed for efficient fault detection and localization.
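
    As a minimal illustration of the Part I forecasting pipeline, the sketch below quantizes a wind power series into finite Markov states, estimates the transition matrix by counting, and issues a point forecast as the conditional mean over next-state bin centers. The state construction, smoothing, and synthetic data are assumptions, not the dissertation's calibrated models.

        import numpy as np

        def fit_markov_forecaster(power, n_states=10):
            """Fit a finite-state Markov chain to a power series and return a
            one-step point forecaster (illustrative state construction)."""
            edges = np.quantile(power, np.linspace(0, 1, n_states + 1))
            states = np.clip(np.searchsorted(edges, power, side="right") - 1,
                             0, n_states - 1)
            counts = np.ones((n_states, n_states))           # Laplace smoothing
            for s, t in zip(states[:-1], states[1:]):
                counts[s, t] += 1
            P = counts / counts.sum(axis=1, keepdims=True)   # transition matrix
            centers = np.array([power[states == s].mean() for s in range(n_states)])

            def forecast(current_power):
                s = int(np.clip(np.searchsorted(edges, current_power, side="right") - 1,
                                0, n_states - 1))
                return P[s] @ centers    # conditional mean over next-state centers
            return forecast

        rng = np.random.default_rng(0)
        series = np.abs(rng.normal(50, 20, 5000))   # stand-in for farm output (MW)
        forecast = fit_markov_forecaster(series)
        print(forecast(series[-1]))                 # one-step-ahead point forecast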

    Mechanistic modeling of architectural vulnerability factor

    Reliability against soft errors is a significant design challenge in modern microprocessors owing to an exponential increase in the number of transistors on chip and the reduction in operating voltages with each process generation. Architectural Vulnerability Factor (AVF) modeling using microarchitectural simulators enables architects to make informed performance, power, and reliability trade-offs. However, such simulators are time-consuming and do not reveal the microarchitectural mechanisms that influence AVF. In this article, we present an accurate first-order mechanistic analytical model to compute AVF, developed from the first principles of out-of-order superscalar execution. This model provides insight into the fundamental interactions between the workload and the microarchitecture that together influence AVF. We use the model to perform design space exploration, parametric sweeps, and workload characterization for AVF.
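
    A minimal sketch of what a first-order mechanistic AVF estimate can look like, using the standard Little's-law occupancy argument (average ACE bits resident = ACE-bit arrival rate x residency); the function and all parameter values below are illustrative assumptions, not the article's model.

        def avf_little(structure_bits, dispatch_rate, ace_fraction,
                       bits_per_entry, residency_cycles):
            """First-order AVF estimate for one hardware structure.
            AVF ~= average ACE-bit occupancy / total bits, where occupancy
            follows Little's law: arrival rate x residency (illustrative)."""
            ace_bits_resident = (dispatch_rate * ace_fraction
                                 * bits_per_entry * residency_cycles)
            return min(ace_bits_resident / structure_bits, 1.0)

        # e.g. a 32-entry, 64-bit issue queue; 2 instr/cycle dispatched, 70%
        # of them ACE, each resident ~9 cycles on average (assumed numbers):
        print(avf_little(32 * 64, dispatch_rate=2.0, ace_fraction=0.7,
                         bits_per_entry=64, residency_cycles=9))   # ~0.39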

    Demonstrating high-precision photometry with a CubeSat: ASTERIA observations of 55 Cancri e

    ASTERIA (Arcsecond Space Telescope Enabling Research In Astrophysics) is a 6U CubeSat space telescope (10 cm x 20 cm x 30 cm, 10 kg). ASTERIA's primary mission objective was demonstrating two key technologies for reducing systematic noise in photometric observations: high-precision pointing control and high-stability thermal control. ASTERIA demonstrated 0.5 arcsecond RMS pointing stability and ±10 millikelvin thermal control of its camera payload during its primary mission, a significant improvement in pointing and thermal performance compared to other spacecraft in ASTERIA's size and mass class. ASTERIA launched in August 2017 and deployed from the International Space Station (ISS) in November 2017. During the prime mission (November 2017 to February 2018) and the first extended mission that followed (March 2018 to May 2018), ASTERIA conducted opportunistic science observations, which included collection of photometric data on 55 Cancri, a nearby exoplanetary system with a transiting super-Earth. The 55 Cancri data were reduced using a custom pipeline to correct column-dependent gain variations in the CMOS detector. A Markov Chain Monte Carlo (MCMC) approach was used to simultaneously detrend the photometry with a simple baseline model and fit a transit model. ASTERIA made a marginal detection of the known transiting exoplanet 55 Cancri e (~2 Earth radii), measuring a transit depth of 374 ± 170 ppm. This is the first detection of an exoplanet transit by a CubeSat. The successful detection of super-Earth 55 Cancri e demonstrates that small, inexpensive spacecraft can deliver high-precision photometric measurements.
    Comment: 23 pages, 9 figures. Accepted in A
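
    A minimal sketch of the fitting step on synthetic data: a Metropolis sampler jointly fits a linear baseline and a box-shaped transit whose depth and per-point scatter borrow the numbers quoted above. The box transit model, single-walker sampler, and data generation are assumptions; ASTERIA's actual pipeline and MCMC setup are more elaborate.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(-0.1, 0.1, 400)                 # days from mid-transit
        depth_true, dur, sigma = 374e-6, 0.066, 170e-6  # assumed toy values
        flux = 1.0 + 2e-4 * t - depth_true * (np.abs(t) < dur / 2)
        flux += rng.normal(0, sigma, t.size)            # photometric noise

        def model(theta):
            depth, slope, offset = theta                # box transit + linear baseline
            return offset + slope * t - depth * (np.abs(t) < dur / 2)

        def log_post(theta):
            if theta[0] < 0:                            # depth must be non-negative
                return -np.inf
            return -0.5 * np.sum(((flux - model(theta)) / sigma) ** 2)

        # Metropolis sampler: jointly detrends and fits the transit depth
        theta, lp, chain = np.array([1e-4, 0.0, 1.0]), None, []
        lp = log_post(theta)
        for _ in range(20000):
            prop = theta + rng.normal(0, [3e-5, 3e-5, 1e-5])
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta[0])
        depths = np.array(chain[5000:])                 # discard burn-in
        print(f"depth = {depths.mean()*1e6:.0f} +/- {depths.std()*1e6:.0f} ppm")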

    Detection of replay attacks in cyber-physical systems using a frequency-based signature

    This paper proposes a frequency-based approach for the detection of replay attacks affecting cyber-physical systems (CPS). In particular, the method injects a sinusoidal signal with a time-varying frequency (the authentication signal) into the closed-loop system and checks whether the time profile of the frequency components in the output signal is compatible with the authentication signal. To achieve this, the couplings between inputs and outputs are eliminated using a dynamic decoupling technique based on vector fitting. In this way, a signature introduced on a specific input channel will affect only the output that is selected to be associated with that input, a property that can be exploited to determine which channels are being affected. A bank of band-pass filters is used to generate signals whose energies can be compared to reconstruct an estimate of the time-varying frequency profile. By matching the known frequency profile with its estimate, the detector can determine whether a replay attack is being carried out. The design of the signal generator and the detector is thoroughly discussed, and an example based on a quadruple-tank process is used to show the application and effectiveness of the proposed method.
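
    A toy sketch of the detector's core step, assuming an already-decoupled channel and a stepped authentication profile (the vector-fitting decoupling stage is omitted, and all signal parameters are invented): a bank of band-pass filters splits the measured output, per-window band energies are compared, and the dominant band reconstructs the frequency profile for matching against the known signal. A replayed recording made under a different profile would yield a mismatched estimate.

        import numpy as np
        from scipy.signal import butter, sosfilt

        fs = 1000.0
        t = np.arange(0, 4.0, 1 / fs)
        profile = np.array([40, 80, 120, 160])        # known frequency steps (Hz)
        f_inst = profile[np.minimum((t // 1.0).astype(int), 3)]
        phase = 2 * np.pi * np.cumsum(f_inst) / fs    # chirp with 1 s frequency steps
        y = np.sin(phase) + 0.3 * np.random.default_rng(2).normal(size=t.size)

        # Bank of band-pass filters, one per candidate frequency
        sos_bank = [butter(4, [f - 15, f + 15], btype="bandpass", fs=fs, output="sos")
                    for f in profile]
        bands = np.array([sosfilt(sos, y) for sos in sos_bank])

        # Per-second band energies: the dominant band tracks the true profile
        est = []
        for k in range(4):
            seg = bands[:, k * 1000:(k + 1) * 1000]
            est.append(profile[np.argmax((seg ** 2).sum(axis=1))])
        print("estimated profile:", est)              # [40, 80, 120, 160]
        print("replay suspected:", list(est) != list(profile))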

    Assessing the Technical Specifications of Predictive Maintenance: A Case Study of Centrifugal Compressor

    Dependability analyses in the design phase are common in the IEC 60300 standards to assess the reliability, risk, maintainability, and maintenance supportability of specific physical assets. Reliability and risk assessment uses well-known methods such as failure modes, effects, and criticality analysis (FMECA), fault tree analysis (FTA), and event tree analysis (ETA) to identify critical components and failure modes based on failure rate, severity, and detectability. Monitoring technology has evolved over time, and a new method, failure mode and symptom analysis (FMSA), was introduced in ISO 13379-1 to identify the critical symptoms and descriptors of failure mechanisms. FMSA is used to estimate monitoring priority, and this helps to determine the critical monitoring specifications. However, FMSA cannot determine the effectiveness of technical specifications that are essential for predictive maintenance, such as detection techniques (capability and coverage), diagnosis (fault type, location, and severity), or prognosis (precision and predictive horizon). This paper proposes a novel predictive maintenance (PdM) assessment matrix to overcome these problems, which is tested using a case study of a centrifugal compressor and validated using empirical data provided by the case study company. The paper also demonstrates the possible enhancements introduced by Industry 4.0 technologies.
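
    Purely as a hypothetical illustration of how such an assessment matrix might be operationalized (the paper's actual criteria, scale, and weights are not reproduced here), the sketch below scores monitoring techniques on the detection, diagnosis, and prognosis attributes named above and aggregates a weighted total.

        # Hypothetical PdM assessment matrix: techniques rated 1-5 per criterion.
        # Criteria weights and all scores are invented for illustration only.
        CRITERIA = {"detection_capability": 0.2, "detection_coverage": 0.2,
                    "diagnosis_type": 0.15, "diagnosis_location": 0.1,
                    "diagnosis_severity": 0.1, "prognosis_precision": 0.15,
                    "prognosis_horizon": 0.1}

        techniques = {
            "vibration analysis":  {"detection_capability": 5, "detection_coverage": 4,
                                    "diagnosis_type": 4, "diagnosis_location": 4,
                                    "diagnosis_severity": 3, "prognosis_precision": 3,
                                    "prognosis_horizon": 3},
            "oil debris analysis": {"detection_capability": 4, "detection_coverage": 2,
                                    "diagnosis_type": 3, "diagnosis_location": 2,
                                    "diagnosis_severity": 4, "prognosis_precision": 2,
                                    "prognosis_horizon": 4},
        }

        for name, scores in techniques.items():
            total = sum(CRITERIA[c] * scores[c] for c in CRITERIA)
            print(f"{name}: {total:.2f} / 5")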