
    Robust H∞ control for networked systems with random packet losses

    In this paper, the robust H∞ control problem is considered for a class of networked systems with random communication packet losses. Because of the limited bandwidth of the channels, such random packet losses could occur simultaneously in the communication channels from the sensor to the controller and from the controller to the actuator. The random packet loss is assumed to obey a Bernoulli random binary distribution, and the parameter uncertainties are norm-bounded and enter both the system and output matrices. In the presence of random packet losses, an observer-based feedback controller is designed to robustly exponentially stabilize the networked system in the mean-square sense and to achieve the prescribed H∞ disturbance-rejection attenuation level. Both the stability-analysis and controller-synthesis problems are thoroughly investigated. It is shown that the controller-design problem under consideration is solvable if certain linear matrix inequalities (LMIs) are feasible. A simulation example is exploited to demonstrate the effectiveness of the proposed LMI approach.
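
    The abstract does not reproduce the system model, but a discrete-time formulation consistent with its description (Bernoulli dropouts on both the sensor-to-controller and controller-to-actuator channels, norm-bounded uncertainty, mean-square exponential stability with a prescribed H∞ level) is sketched below in LaTeX; the symbols \alpha_k, \beta_k, \bar\alpha, \bar\beta and \gamma are illustrative names, not taken from the paper.

        % Sketch only: a typical networked-control model with Bernoulli packet losses,
        % consistent with (but not quoted from) the abstract.
        \begin{align}
          x_{k+1} &= (A + \Delta A)\,x_k + \beta_k\,B u_k + D w_k,\\
          y_k     &= \alpha_k\,(C + \Delta C)\,x_k + E w_k,
        \end{align}
        % where \alpha_k, \beta_k \in \{0,1\} are independent Bernoulli variables for the
        % sensor-to-controller and controller-to-actuator links, with
        % \Pr\{\alpha_k = 1\} = \bar\alpha and \Pr\{\beta_k = 1\} = \bar\beta, and
        % \Delta A, \Delta C are the norm-bounded uncertainties. The observer-based
        % controller must make the closed loop exponentially mean-square stable and
        % satisfy, under zero initial conditions,
        \begin{equation}
          \mathbb{E}\Bigl\{\sum_{k=0}^{\infty}\|z_k\|^2\Bigr\}
            \le \gamma^2 \sum_{k=0}^{\infty}\|w_k\|^2
          \qquad \text{for all nonzero } w \in \ell_2 .
        \end{equation}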

    Reliable Data Processing Enabled By Program Analysis

    Errors pose a serious threat to the output validity of modern data processing, which is often performed by computer programs. In scientific computation, data are collected through instruments or sensors that may be exposed to rough environmental conditions, leading to errors. Furthermore, during the computation process data may not be precisely represented due to the limited precision of the underlying machine, leading to representation errors. Computational processing of these data may hence produce unreliable output results or even faulty conclusions. We call them reliability problems.

    We consider the reliability problems that are caused by two kinds of errors. The first kind includes input and parameter errors, which originate from the external physical environment; we call these external errors. The other kind is due to the limited representation of floating-point values, occurring when values cannot be precisely represented by machines; we call them internal representation errors, or internal errors. They are usually at a much smaller scale compared to external errors. Nonetheless, such tiny errors may still lead to unreliable results and serious problems.

    In this dissertation, we develop program analysis techniques to enable reliable data processing. For external errors, we propose techniques to improve the sampling efficiency of Monte Carlo methods, namely execution coalescing and white-box sampling. For internal errors, we develop efficient monitoring techniques to detect instability problems at runtime in floating-point program executions.
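
    As a loose illustration of the internal representation errors described above (and not the dissertation's actual monitoring technique), one common way to surface floating-point instability at runtime is to shadow a limited-precision computation with a higher-precision one; the function name, data, and threshold below are hypothetical.

        # Illustrative only: a toy "shadow execution" check for floating-point
        # instability, comparing a float32 run against a float64 reference.
        # This is not the dissertation's technique; it merely demonstrates the
        # kind of internal representation error the abstract refers to.
        import numpy as np

        def accumulate(values, dtype):
            # Naive left-to-right summation in the requested precision.
            acc = dtype(0.0)
            for v in values:
                acc = dtype(acc + dtype(v))
            return float(acc)

        rng = np.random.default_rng(0)
        data = rng.normal(loc=1.0, scale=1e-4, size=100_000)

        low = accumulate(data, np.float32)    # limited-precision execution
        high = accumulate(data, np.float64)   # higher-precision "shadow" run

        rel_err = abs(low - high) / abs(high)
        if rel_err > 1e-6:                    # hypothetical instability threshold
            print(f"possible instability: relative error {rel_err:.2e}")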

    The uphill battle of environmental technologies: Analysis of local discourses on the acceptance and resistance of Green Bin programs

    Many Canadian municipalities have been looking for alternative sustainable waste management solutions, since landfill capacity has been decreasing and siting new facilities often results in vehement local opposition. In Ontario, there is no provincial mandate for organic waste diversion targets, yet most large municipalities have implemented a Green Bin program while other jurisdictions of varying size still have not. This paper uses discourse analysis to explore the predominant and counter discourses that have resulted in Guelph sustaining a Green Bin program while London has not implemented one. Manuscript one explores the interaction of provincial and local municipal discourses in London, Ontario, in not adopting a Green Bin program. The findings of this study contribute to understanding the power of discourses in technological and environmental debates to overcome the inertia of the status quo. To examine this further, manuscript two is a comparative case study of two municipalities, London and Guelph, each with a different approach to the management of organic waste as it relates to the Green Bin. This study identified that the prominent discourses representing eco-centric positions, as found in Guelph, are more often discursively juxtaposed against discourses of economic conservatism, such as in London. The discursive positions (eco-centric and conservative) are ingrained within the local municipal discourse and are highly representative of community coherence on an environmental issue. Overall, the implications of this study are that there is an interface between community coherence and the perceived risk of new technology: in the face of crisis or perceived risk, the community tends to be risk averse, prompting support for less risky intermediary options instead.

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg² field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ~24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg² with δ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320–1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg² region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ~ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world.

    Comment: 57 pages, 32 color figures, version with high-resolution figures available from https://www.lsst.org/overvie
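
    The quoted cadence figures can be sanity-checked with a short back-of-envelope calculation; the per-visit overhead and usable hours per night below are assumptions made for illustration, not numbers taken from the abstract.

        # Rough consistency check of "about 10,000 square degrees ... in three nights",
        # using the abstract's field of view, exposure time, and visit pattern.
        FIELD_OF_VIEW_DEG2 = 9.6           # from the abstract
        EXPOSURE_S = 15.0                  # one exposure; visits are pairs of these
        EXPOSURES_PER_VISIT = 2
        VISITS_PER_FIELD_PER_NIGHT = 2     # "two such visits in each pointing"
        OVERHEAD_PER_VISIT_S = 7.0         # assumed slew/readout overhead (not from abstract)
        USABLE_NIGHT_H = 10.0              # assumed usable dark time (not from abstract)

        fields = 10_000.0 / FIELD_OF_VIEW_DEG2
        visit_s = EXPOSURES_PER_VISIT * EXPOSURE_S + OVERHEAD_PER_VISIT_S
        hours_per_night = fields / 3 * VISITS_PER_FIELD_PER_NIGHT * visit_s / 3600
        print(f"~{fields:.0f} pointings; ~{hours_per_night:.1f} h of observing per night "
              f"({'within' if hours_per_night <= USABLE_NIGHT_H else 'exceeding'} "
              f"an assumed {USABLE_NIGHT_H:.0f} h night)")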

    Uncertainties in Galactic Chemical Evolution Models

    We use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions, along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.
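
    A minimal sketch of the kind of Monte Carlo propagation the abstract describes, drawing the uncertain parameters from assumed distributions, running many realizations, and reading off 68%/95% percentile bands, is shown below; the toy_model function and the parameter distributions are purely illustrative stand-ins, not the paper's one-zone code or its adopted constraints.

        # Illustrative Monte Carlo over uncertain model parameters, in the spirit of
        # the study's approach: draw parameters from assumed distributions, run a toy
        # stand-in model (NOT the paper's one-zone chemical evolution code), and
        # report 68% / 95% confidence bands on a predicted abundance ratio.
        import numpy as np

        rng = np.random.default_rng(42)
        n_runs = 500

        def toy_model(imf_slope, n_snia_per_msun, metallicity_grid):
            # Hypothetical stand-in: [X/Fe] declines as SNe Ia start contributing Fe.
            knee = -1.0 + 0.5 * (imf_slope + 2.35)       # made-up dependence
            amplitude = 0.4 * (n_snia_per_msun / 2e-3)   # made-up dependence
            return 0.4 - amplitude / (1.0 + np.exp(-(metallicity_grid - knee) / 0.3))

        feh = np.linspace(-3.0, 0.5, 50)                 # [Fe/H] grid
        samples = np.empty((n_runs, feh.size))
        for i in range(n_runs):
            imf_slope = rng.normal(-2.35, 0.2)           # assumed parameter PDF
            n_snia = rng.lognormal(np.log(2e-3), 0.3)    # assumed parameter PDF
            samples[i] = toy_model(imf_slope, n_snia, feh)

        median = np.percentile(samples, 50, axis=0)
        lo68, hi68 = np.percentile(samples, [16, 84], axis=0)
        lo95, hi95 = np.percentile(samples, [2.5, 97.5], axis=0)
        print("68% band width at [Fe/H]=0:", round(hi68[-1] - lo68[-1], 3), "dex")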