
    Indoor Environment Quality and Health in Energy-Efficient Buildings

    This Special Issue addresses a topic of great relevance: in developed countries, individuals spend most of their time indoors, and time at home ranges between 60% and 90% of the day depending on the person, with 30% of that time spent sleeping. Indoor residential environments therefore have a direct influence on human health, especially since, in developing countries, significant levels of indoor pollution make housing unsafe and harm the health of inhabitants. Housing is thus a key health factor for people all over the world, and parameters such as air quality, ventilation, hygrothermal comfort, lighting, the physical environment, and building efficiency can contribute to healthy architecture, while poor application of these parameters leads to unhealthy conditions. The articles in this Special Issue address indoor environmental quality (IEQ), described most simply as the conditions inside a building. IEQ includes air quality, but also access to daylight and views, pleasant acoustic conditions, and occupant control over lighting and thermal comfort. It also covers functional aspects of the space, such as whether the layout provides easy access to tools and people when needed and whether there is sufficient space for the occupants. Building managers and operators can increase occupant satisfaction by considering all aspects of IEQ rather than focusing on temperature or air quality alone.

    A formally verified compiler back-end

    This article describes the development and formal verification (a proof of semantic preservation) of a compiler back-end from Cminor (a simple imperative intermediate language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a verified compiler is useful in the context of formal methods applied to the certification of critical software: the verification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.

    Indoor Air Quality Design and Control in Low-Energy Residential Buildings, International Energy Agency, EBC Annex 68, Subtask 5 Final Report: Field measurements and case studies

    IEA-EBC Annex 68: Indoor Air Quality Design and Control in Low Energy Residential Buildings investigates how to ensure that future low-energy buildings can improve their energy performance while still providing comfortable and healthy indoor environments. More specifically, Subtask 5 of Annex 68 has dealt with the generation of data for verifying the models and strategies developed in the other Annex 68 subtasks, through controlled field tests and case-study presentations.

    Is CO2 a good proxy for Indoor Air Quality in school classrooms?

    Background The increasing interest in Indoor Air Quality (IAQ) of educational buildings has been underpinned by the rising incidence of asthma and respiratory disease among children, who spend a substantial amount of their lives on school premises. The susceptibility of children compared with adults has led to the formulation of guidelines regulating IAQ in school buildings. WHO guidelines provide the scientific basis for legally enforceable standards for non-industrial environments. Reflecting the relative difficulty and expense of obtaining measurements of specific pollutants, guidelines for the provision of adequate IAQ in UK schools have typically been framed around thermal conditions, carbon dioxide (CO2) levels, and estimated ventilation rates as a primary indicator of IAQ. Aim Drawing on detailed monitoring data from 15 primary and three nursery London classrooms, this thesis sets out to evaluate whether indoor CO2 levels in classrooms are a good indicator for ensuring a healthy and satisfactory school environment. To fully answer this question, this thesis aims to associate levels of specific indoor pollutants with CO2 levels and ventilation rates after controlling for environmental and behavioural factors, and to identify specific exposures in the classroom that may affect asthma prevalence, self-reported health symptoms and perceived IAQ. Method The study was organised as a case-crossover study of the heating and non-heating seasons, and employed a multi-disciplinary methodology, including direct-reading instrumental sampling, passive sampling for long-term measurements, and determination of microbiological contaminants with molecular methods. The monitored data were matched with school and classroom characteristics, and with the self-reported health symptoms and IAQ perception of 376 primary school students attending 15 classrooms, collected with standardised questionnaires.
The integrated database was analysed with Bayesian multilevel models, which provide concordance between theoretical approaches and statistical analysis while taking into account the hierarchy of the data. Results Indoor CO2 levels and estimated ventilation rates were a reliable predictor for some outcomes, such as indoor temperature, Particulate Matter (PM) and Volatile Organic Compound (VOC) levels. Overall, evidence from this study suggests that limiting CO2 levels below 1000 ppm (lower than the current guideline values of the BB101 performance standard in England (DfE, 2014)) is necessary to achieve indoor PM levels in classrooms below the WHO 2010 annual guideline values, after removing indoor furnishings acting as dust reservoirs. A strong relationship between indoor temperatures and Total VOC (TVOC) levels emerged, and the predictive models estimated that, after removing indoor TVOC sources, keeping indoor temperatures below 26 °C, and preferably below 22 °C depending on season, may keep indoor TVOC levels below 250 ppb. Based on self-reported satisfaction with IAQ at baseline and during the follow-up period, it was found that keeping indoor temperatures below 26 °C and CO2 levels below 1000 ppm may additionally reduce the predicted percentage of dissatisfaction with IAQ below 30%. The air was perceived as less acceptable with increasing indoor temperature and CO2 levels, stressing the importance of an integrated approach for the simultaneous provision of thermal comfort and IAQ. However, indoor CO2 levels were a poor predictor of traffic-related pollutants, such as indoor NO2 levels, which were significantly associated with the high asthma prevalence reported in this study (OR: 1.11, 95% CI: 1.04-1.19). Exposure to traffic-related pollution was additionally associated with increased IAQ dissatisfaction, and with a higher prevalence and incidence of Sick Building Syndrome (SBS) symptoms.
SBS describes a constellation of nonspecific health symptoms, including mucosal, dermal, respiratory and general symptoms, that have no clear aetiology and are attributable to exposure to a particular building environment. Recommendations for future research The methodological framework used in this study could potentially be applied to large-scale investigations, enhancing our understanding of the factors affecting indoor pollution levels in educational settings. More research is necessary to validate the predictive model of satisfaction with IAQ in different climatic and geographical areas. Implications for policy This study shows that complaints about poor air quality and health symptoms were related to deficiencies in the indoor school environment, and identified that the management and operation of classrooms are key in creating healthy and comfortable school buildings. Greening programmes around school buildings, simple passive measures for the building envelope, altering ventilation strategies between seasons, and timely control of ventilation may improve perceived IAQ and alleviate SBS symptoms. Together with increasing average and background ventilation rates, the elimination of indoor sources that impact IAQ is necessary.
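The NO2-asthma association above is reported as an odds ratio with a 95% confidence interval. As a hedged illustration only (the thesis itself uses Bayesian multilevel models, not this calculation, and the counts below are hypothetical), an odds ratio and a Wald-style interval can be computed from a 2x2 exposure table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
       a = exposed cases,   b = exposed non-cases,
       c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only, not the thesis data:
print(odds_ratio_ci(30, 70, 20, 80))
```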

    The prediction of chemosensory effects of volatile organic compounds in humans

    An introduction to indoor air pollution is given, and the chemosensory effects in humans of volatile organic compounds (VOCs), singly and in binary mixtures, are described, together with the bioassays already developed to quantify the effects of VOCs. The need for predictive models that can replace the bioassays is emphasised, and attention is drawn to the establishment of mathematical models to predict the chemosensory effects of VOCs in humans. Nasal pungency threshold (NPT), eye irritation threshold (EIT) and odour detection threshold (ODT) values are available for a series of VOCs that cover a large range of solute properties. Each of these sets of biological data is regressed against the corresponding solute descriptors E, S, A, B and L to obtain quantitative structure-activity relationships (QSARs) for log(1/NPT), log(1/ODT) and log(1/EIT), taking the form: log SP = c + e·E + s·S + a·A + b·B + l·L. The availability of solute descriptors is investigated. It is shown that the solute descriptors, E an excess molar refraction, S the solute dipolarity/polarizability, A the solute overall hydrogen-bond acidity, B the solute overall hydrogen-bond basicity, and L the logarithm of the solute Ostwald solubility coefficient in hexadecane at 298 K, can be obtained through various thermodynamic measurements. In this way, descriptors for some 300 solutes have been obtained. A headspace gas chromatographic method is also devised to determine the 1:1 complexation constant, K, between hydrogen-bond donors and hydrogen-bond acceptors in octan-1-ol. The 30 measured complexation constants are then correlated with the product of α2H and β2H, the solute 1:1 hydrogen-bond acidity and basicity respectively, to give: log K1:1 = 2.950·α2H·β2H − 0.74.
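The linear free-energy relationship log SP = c + e·E + s·S + a·A + b·B + l·L is an ordinary multiple linear regression once descriptor values are in hand. The following sketch fits it by least squares on synthetic descriptors and assumed coefficients (purely illustrative stand-ins, not the thesis data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic descriptor matrix for 40 hypothetical VOCs;
# columns are the Abraham descriptors E, S, A, B, L.
X = rng.uniform([0, 0, 0, 0, 1], [1.5, 1.8, 0.8, 1.0, 5.0], size=(40, 5))

# Assumed "true" coefficients (c, e, s, a, b, l), used only to
# generate demo threshold data with a little noise.
true_coefs = np.array([-6.0, 0.3, 1.0, 2.5, 1.4, 0.75])
y = true_coefs[0] + X @ true_coefs[1:] + rng.normal(0, 0.05, 40)

# Least-squares fit of log SP = c + e*E + s*S + a*A + b*B + l*L
A_design = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A_design, y, rcond=None)
print(np.round(coefs, 2))  # recovered c, e, s, a, b, l
```

With 40 points and mild noise, the recovered coefficients track the assumed ones closely, which is the sense in which the QSAR "explains" the threshold data.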

    Algorithms for enhancing pattern separability, feature selection and incremental learning with applications to gas sensing electronic nose systems

    Three major issues in pattern recognition and data analysis are addressed in this study and applied to the problem of identifying volatile organic compounds (VOCs) for gas sensing applications. Various approaches are proposed and discussed; they are applicable not only to VOC identification but also to a variety of pattern recognition and data analysis problems. In particular, the study investigates (1) enhancing pattern separability for challenging classification problems, (2) the optimum feature selection problem, and (3) incremental learning for neural networks. Three different approaches are proposed for enhancing pattern separability when classifying closely spaced, or possibly overlapping, clusters. In the neurofuzzy approach, a fuzzy inference system that considers the dynamic ranges of individual features is developed. Feature range stretching (FRS) is introduced as an alternative approach for increasing intercluster distances by mapping the tight dynamic range of each feature to a wider range through a nonlinear function. Finally, a third approach, nonlinear cluster transformation (NCT), is proposed, which increases intercluster distances while preserving intracluster distances. It is shown that NCT achieves comparable, or better, performance than the other two methods at a fraction of the computational burden. The implementation issues and relative advantages and disadvantages of these approaches are systematically investigated. Selection of optimum features is addressed using both a decision-tree-based approach and a wrapper approach; the hill-climb-search-based wrapper approach is applied to select the optimum features for gas sensing problems. Finally, a new method, Learn++, is proposed that gives classification algorithms the capability of incrementally learning from new data. Learn++ is introduced for incremental learning of new data when the original database is no longer available.
The Learn++ algorithm is based on strategically combining an ensemble of classifiers, each of which is trained to learn only a small portion of the pattern space. Furthermore, Learn++ is capable of learning new data even when new classes are introduced, and it also features a built-in mechanism for estimating the reliability of its classification decisions. All proposed methods are explained in detail, and simulation results are discussed along with directions for future work.
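The incremental-learning idea behind Learn++ can be sketched with a much-simplified stand-in: each new batch trains one weak classifier (here a nearest-centroid model, an assumption for brevity; the real Learn++ weights ensemble members using their training error), old batches are never revisited, and predictions combine all members by confidence-weighted voting, so a later batch can introduce a new class:

```python
import numpy as np

class IncrementalEnsemble:
    """Toy Learn++-style ensemble (illustrative only): each batch
    trains one nearest-centroid classifier; earlier batches are
    never stored or revisited."""
    def __init__(self):
        self.members = []  # each member: {class_label: centroid}

    def partial_fit(self, X, y):
        self.members.append({c: X[y == c].mean(axis=0) for c in np.unique(y)})

    def predict(self, X):
        n_classes = 1 + max(max(m) for m in self.members)
        score = np.zeros((len(X), n_classes))
        for m in self.members:
            labels = list(m)
            cents = np.stack([m[c] for c in labels])
            d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
            best = d.argmin(axis=1)
            w = 1.0 / (1.0 + d.min(axis=1))  # distance-based confidence
            for i, j in enumerate(best):
                score[i, labels[j]] += w[i]
        return score.argmax(axis=1)

rng = np.random.default_rng(1)
# Batch 1 contains classes 0 and 1; batch 2 introduces class 2.
X1 = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y1 = np.array([0] * 20 + [1] * 20)
X2 = rng.normal([-3, 3], 0.3, (20, 2))
y2 = np.array([2] * 20)

ens = IncrementalEnsemble()
ens.partial_fit(X1, y1)
ens.partial_fit(X2, y2)
print(ens.predict(np.array([[0.0, 0.0], [3.0, 3.0], [-3.0, 3.0]])))  # → [0 1 2]
```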

    Applying optical and acoustic diagnostic tools to probe turbomachinery wakes, cardiac flows, and multi-phase media

    This dissertation has three parts. In the first part, we study the complex structure of turbulent flow past an automotive cooling fan. A 360 gallon optically index-matched test facility is specially built for performing impeller phase-locked particle image velocimetry (PIV) measurements. A full-scale fan (30 cm diameter) and its shroud (70 cm outer diameter) designed by Robert Bosch LLC are installed in an acrylic test section. The fan and shroud are also made out of acrylic, and an aqueous solution of NaI (62% by weight), with the same refractive index as acrylic (n=1.49), is used as the working fluid, providing unobstructed optical access to the fan for flow measurements. A turbulence grid is placed two fan diameters upstream to simulate the effects of a radiator wake. Phase-locked 2D PIV measurements are performed focusing at the inlet, near wake and tip gap of the axial fan for multiple blade phases. To achieve high spatial resolution over large fields of view, nine sample areas cover the entire wake, three sample areas cover the inlet and a higher magnification region covers the tip gap. To sufficiently resolve the evolution of the wake and tip vortex system, data are acquired for 10 and 31 blade phases respectively. Inflow measurements quantify the turbulence intensity distribution generated by the grid upstream and the acceleration of the flow by the fan. Wake measurements reveal the formation and evolution of the tip vortex and three generations of the fan wake. The results show the entrainment of the blade wake by the tip vortex, as well as the decay in vortex strength, entrainment speed and wake velocity deficit with axial distance, as well as their associated turbulence characteristics. Higher magnification measurements focusing on the tip gap reveal the complex formation of the tip vortex due to the merging of multiple vortex filaments during the blade passage. 
In the second part, we study the complex blood flow characteristics that may be linked to the formation and cure of left ventricular thrombus (LVT) in cardiomyopathy patients. Echocardiographic PIV-PTV analysis is performed on routine in-vivo contrast ultrasound images acquired from four patients with LVT to obtain time-resolved velocity distributions. The contrast agent comprises gas-filled microbubbles that trace the blood flow and enhance the acoustic backscatter of the ultrasound signal. The concentration of the contrast agent is adjusted in each case to obtain time-resolved images with sufficiently discernible bubble motion. Due to constraints on the maximum possible spatial and temporal resolution, the data quality is not comparable to standard optical PIV. Hence, to obtain reliable velocity data, optimized procedures that integrate image enhancement, PIV and particle tracking velocimetry (PTV) are introduced. Initial steps involve performing cross-correlation-based PIV analysis over several image enhancement and cross-correlation parameter settings to obtain multiple velocity vectors at each grid point. Optimization is subsequently performed using outlier removal and smoothing to select the correct vector. These vectors are then used as part of a multi-parameter PTV procedure to further refine the data. Phase-averaged velocity and vorticity distributions reveal the LV vortex formation and evolution as it fragments and decays during the cardiac cycle. The formation of LVT predominantly occurs near the left ventricular (LV) apex. The efficacy of apical washing is estimated directly from the phase-time history of apical velocity as well as from the vortex-induced apical velocity, providing preliminary LVT risk quantification criteria. In the third part, we aim to characterize the aerosolization of crude oil-dispersant contaminated slicks due to bubble bursting. Bubble bursting observed in oceanic whitecaps is a well-known mechanism of marine aerosol generation.
Although oil spills occur frequently in the ocean, the emissions of oily marine aerosols from bubble bursting are not well characterized. Recent spills have seen the unprecedented use of chemical dispersants as a spill-response strategy. Dispersants significantly modify several properties of crude oil, potentially altering the size, concentration, and composition of aerosolized particles. Hence, in this study, bubble plumes with controlled size distributions are injected into a vertical seawater column. They rise to the surface contaminated with slicks of crude oil-dispersant mixtures and burst. The distributions of aerosolized particles (10-380 nm and 0.5-20 μm) above the slicks are monitored before, during and after bubble injection. Measurements are performed at the same air injection rate for varying bubble plumes (Φ 86 μm, 178 μm and 595 μm), slick thicknesses (50 and 500 μm), and interfacial mixtures (pure crude oil, pure dispersant Corexit 9500A, and dispersant premixed with crude oil at a dispersant-to-oil ratio (DOR) of 1:25). Results show an order-of-magnitude increase in the nano-size particle concentrations only when the largest bubble plumes burst on slicks of 500 μm DOR-1:25 oil or 50 μm pure dispersant. Small and medium-sized bubble plumes generate micron-size aerosols for thin crude oil slicks as well as those containing dispersants. Preliminary chemical analysis confirms the presence of crude oil in both micro- and nano-aerosols generated from slicks with DOR 1:25 oil. Potential operating mechanisms are discussed, and aerosol emission factors per bubble are provided, enabling the risk assessment associated with the inhalation of oily aerosols.
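The core step of the cross-correlation PIV analysis used throughout this dissertation is locating the correlation peak between two interrogation windows and reading off the displacement. A minimal sketch, using a synthetic particle pattern rather than the facility's data, and an FFT-based circular correlation as one common implementation choice:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer pixel shift between two interrogation
    windows via FFT-based cross-correlation (the core PIV step)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # Map the peak location to a signed shift (correlation is circular).
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Synthetic example: a random "particle" image shifted by (3, -2) pixels.
rng = np.random.default_rng(2)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))
print(piv_displacement(frame_a, frame_b))  # → (3, -2)
```

Sub-pixel peak fitting and outlier validation, as used in real PIV processing, would refine this integer estimate.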

    End-to-End Translation Validation for the Halide Language

    This paper considers the correctness of domain-specific compilers for tensor programming languages through the study of Halide, a popular representative. It describes a translation validation algorithm for affine Halide specifications, independently of the scheduling language. The algorithm relies on "prophetic" annotations added by the compiler to the generated array assignments. The annotations provide a refinement mapping from assignments in the generated code to the tensor definitions from the specification. Our implementation leverages an affine solver and a general SMT solver, and scales to complete Halide benchmarks.
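The flavour of the problem can be illustrated with a bounded numeric check: compare a pointwise tensor specification against "generated" code whose loop structure a schedule has changed. This is only an illustrative stand-in with made-up functions; the paper's algorithm checks the refinement mapping symbolically with affine and SMT solvers, not by testing:

```python
import numpy as np

# Specification: a 1-D box blur defined pointwise, Halide-func style.
def spec_blur(inp):
    n = len(inp) - 1
    return np.array([(inp[x] + inp[x + 1]) / 2 for x in range(n)])

# "Generated" code: the same computation after a hypothetical tiling
# schedule (illustrative, not real Halide output).
def generated_blur(inp, tile=4):
    n = len(inp) - 1
    out = np.empty(n)
    for base in range(0, n, tile):
        for x in range(base, min(base + tile, n)):
            # Each assignment maps back to the spec's definition of out[x];
            # that mapping is what the "prophetic" annotations record.
            out[x] = (inp[x] + inp[x + 1]) / 2
    return out

# Bounded validation on random inputs (a test, not a proof).
rng = np.random.default_rng(3)
for _ in range(100):
    inp = rng.random(17)
    assert np.allclose(spec_blur(inp), generated_blur(inp))
print("bounded validation passed")
```

Replacing this sampling loop with a symbolic proof that every generated assignment refines its spec definition is exactly what end-to-end translation validation provides.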

    Controlled and effective interpolation

    Model checking is a well established technique to verify systems, exhaustively and automatically. The state space explosion, known as the main difficulty in model checking scalability, has been successfully approached by symbolic model checking which represents programs using logic, usually at the propositional or first order theories level. Craig interpolation is one of the most successful abstraction techniques used in symbolic methods. Interpolants can be efficiently generated from proofs of unsatisfiability, and have been used as means of over-approximation to generate inductive invariants, refinement predicates, and function summaries. However, interpolation is still not fully understood. For several theories it is only possible to generate one interpolant, giving the interpolation-based application no chance of further optimization via interpolation. For the theories that have interpolation systems that are able to generate different interpolants, it is not understood what makes one interpolant better than another, and how to generate the most suitable ones for a particular verification task. The goal of this thesis is to address the problems of how to generate multiple interpolants for theories that still lack this flexibility in their interpolation algorithms, and how to aim at good interpolants. This thesis extends the state-of-the-art by introducing novel interpolation frameworks for different theories. For propositional logic, this work provides a thorough theoretical analysis showing which properties are desirable in a labeling function for the Labeled Interpolation Systems framework (LIS). The Proof-Sensitive labeling function is presented, and we prove that it generates interpolants with the smallest number of Boolean connectives in the entire LIS framework. Two variants that aim at controlling the logical strength of propositional interpolants while maintaining a small size are given. 
The new interpolation algorithms are compared to previous ones from the literature in different model checking settings, showing that they consistently lead to better overall verification performance. The Equalities and Uninterpreted Functions (EUF)-interpolation system, presented in this thesis, is a duality-based interpolation framework capable of generating multiple interpolants for a single proof of unsatisfiability, and provides control over the logical strength of the interpolants it generates using labeling functions. The labeling functions can be theoretically compared with respect to their strength, and we prove that two of them generate the interpolants with the smallest number of equalities. Our experiments follow the theory, showing that the generated interpolants indeed have different logical strength. We combine propositional and EUF interpolation in a model checking setting, and show that the strength of the interpolation algorithms for different theories has to be aligned in order to generate smaller interpolants. This work also introduces the Linear Real Arithmetic (LRA)-interpolation system, an interpolation framework for LRA. The framework is able to generate infinitely many interpolants of different logical strength using the duality of interpolants. The strength of the LRA interpolants can be controlled by a normalized strength factor, which makes it straightforward for an interpolation-based application to choose the level of strength it wants for the interpolants. Our experiments with the LRA-interpolation system and a model checker show that it is very important for the application to be able to fine-tune the strength of the LRA interpolants in order to achieve optimal performance. The interpolation frameworks were implemented and form the interpolation module in OpenSMT2, an open-source, efficient SMT solver.
OpenSMT2 has been integrated into the propositional interpolation-based model checkers FunFrog and eVolCheck, and into the first-order interpolation-based model checker HiFrog. This thesis presents real-life model checking experiments using the novel interpolation frameworks and the aforementioned tools, showing the viability and strengths of the techniques.
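A Craig interpolant I for an unsatisfiable pair (A, B) must be implied by A, be inconsistent with B, and mention only variables shared between A and B. The defining conditions can be checked by brute force on a tiny hand-picked propositional example (illustrative only; the interpolation systems above extract interpolants from resolution proofs rather than by enumeration):

```python
from itertools import product

# A = p ∧ q and B = ¬q ∧ r are jointly unsatisfiable;
# their only shared variable is q.
A = lambda p, q, r: p and q
B = lambda p, q, r: (not q) and r
I = lambda p, q, r: q  # candidate interpolant over the shared variable

assignments = list(product([False, True], repeat=3))
# 1. A implies I.
assert all(I(*v) for v in assignments if A(*v))
# 2. I and B are jointly unsatisfiable.
assert not any(I(*v) and B(*v) for v in assignments)
# 3. I mentions only the shared variable q (by construction).
print("q is a Craig interpolant for (A, B)")
```

For this pair both q and, say, p ∧ q satisfy conditions 1 and 2, but only q uses shared variables alone; choosing among admissible interpolants of different strength and size is precisely what the labeling-function frameworks control.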