709 research outputs found

    The use of fabrics as formwork for concrete structures and elements

    This paper presents a series of studies into the use of flexible fabrics as formwork for concrete structures, as an alternative to conventional rigid formwork.

    Factors associated with postharvest ripening heterogeneity of "Hass" avocados (Persea americana Mill.)

    Indexed in: Web of Science. Introduction. 'Hass' is the main avocado cultivar commercialized worldwide. The extended flowering period, very low percentage of fruit set and inability to ripen on the tree render the fruit heterogeneous and unpredictable during postharvest management. The growing "triggered" and "ready-to-eat" markets for 'Hass' avocados are affected by variable postharvest ripening, or ripening heterogeneity, which creates severe logistical problems for marketers and inconsistent quality delivery to consumers. Synthesis. The dry matter content, the current avocado harvest index, which correlates very well with oil content, has been used extensively to harvest 'Hass' avocados in compliance with minimum standards and so guarantee consumer satisfaction. However, previous work and empirical experience demonstrate that dry matter does not correlate on a fruit-to-fruit basis with the time to reach edible ripeness. Thus, avocados of very different ages are harvested from individual trees, resulting in heterogeneous postharvest ripening of fruit within a specific batch. Several preharvest factors related to environmental and growing conditions and crop management, as well as postharvest technology strategies, influence the observed variability of postharvest ripening. Conclusion. Modern approaches based on studying the composition of individual fruits displaying contrasting postharvest ripening behavior, combined with non-destructive phenotyping techniques, seem to offer practical solutions for the fresh avocado supply chain to sort fruit by ripening capacity.
    http://www.pubhort.org/fruits/2016/5/fruits160045.ht

    Influence of affluence on sustainable housing in Mysore, India

    Mysore, the second largest city in the state of Karnataka, India, can be identified as an early adopter of sustainable design practices. Between 1903 and 1947, the use of local construction materials, the Swadeshi movement of 1905, robust planning and clear legislation resulted in sustainable urban development. However, post-colonial development fuelled by economic globalisation after the 1980s has transformed perceptions of the house among the growing middle class, turning it into a commodity that demonstrates affluence and status. This paper examines the impact of the changing social and cultural values and aspirations of the growing middle classes on sustainable housing and neighbourhood development in Mysore. The methodology comprises literature and archive research to establish the historical context and review important recent trends, together with extensive fieldwork: questionnaires covering a wide range of participants (owners, builders and designers) and semi-structured interviews with key players, including academics, architects and government agencies. The focus of development has shifted from community to individual, and from energy conservation to a more consumerist attitude in the procurement of materials and finishes. The paper examines the impact of these changes. The results of the survey are summarised and reviewed under the categories of communities, site, entrance, house layout and materials.

    A closer look at declarative interpretations

    Three semantics have been proposed as the most promising candidates for a declarative interpretation of logic programs and pure Prolog programs: the least Herbrand model; the least term model, i.e., the C-semantics; and the I-semantics. Previous results show that a strictly increasing information ordering between these semantics exists for the class of all programs. In particular, the I-semantics allows us to model the computed answer substitutions, which is not the case for the other two. We study here the relationship between these three semantics for specific classes of programs. We show that for a large class of programs (which is Turing complete), these three semantics are isomorphic. As a consequence, given a query, we can extract from the least Herbrand model of a program in this class all computed answer substitutions. However, for specific programs the least Herbrand model is tedious to construct and reason about because it contains “ill-typed” facts. Therefore, we propose a fourth semantics that associates with a “correctly typed” program the “well-typed” subset of its least Herbrand model. This semantics is used to reason about partial correctness and absence of failures of correctly typed programs. The results are extended to programs with arithmetic.
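    The least Herbrand model discussed above can be computed as the least fixpoint of the immediate-consequence operator T_P. The following is a minimal sketch for a ground (variable-free) program; the toy program at the end is hypothetical and purely for illustration.

```python
# Ground (variable-free) clauses: (head, [body atoms]); facts have empty bodies.
# The least Herbrand model is the least fixpoint of the immediate-consequence
# operator T_P: repeatedly add every head whose body is already in the model.

def least_herbrand_model(clauses):
    model = set()
    while True:
        derived = {head for head, body in clauses
                   if all(b in model for b in body)}
        if derived <= model:  # fixpoint reached: T_P adds nothing new
            return model
        model |= derived

# Hypothetical toy program:
program = [
    ("p", []),          # p.
    ("q", ["p"]),       # q :- p.
    ("r", ["q", "p"]),  # r :- q, p.
    ("s", ["t"]),       # s :- t.  (t is never derivable, so s is excluded)
]
print(sorted(least_herbrand_model(program)))  # ['p', 'q', 'r']
```

    For general programs with variables, T_P ranges over all ground instances of the clauses; the same iteration then converges to the least Herbrand model for function-free (Datalog) programs, since the Herbrand base is finite.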

    GLocalX - From Local to Global Explanations of Black Box AI Models

    Artificial Intelligence (AI) has come to prominence as one of the major components of our society, with applications in most aspects of our lives. In this field, complex and highly nonlinear machine learning models such as ensemble models, deep neural networks, and Support Vector Machines have consistently shown remarkable accuracy in solving complex tasks. Although accurate, AI models are often “black boxes” that we are not able to understand. Relying on these models has a multifaceted impact and raises significant concerns about their transparency. Applications in sensitive and critical domains are a strong motivation for trying to understand the behavior of black boxes. We propose to address this issue by providing an interpretable layer on top of black box models, built by aggregating “local” explanations. We present GLOCALX, a “local-first” model-agnostic explanation method. Starting from local explanations expressed in the form of local decision rules, GLOCALX iteratively generalizes them into global explanations by hierarchically aggregating them. Our goal is to learn accurate yet simple interpretable models that emulate the given black box and, if possible, replace it entirely. We validate GLOCALX in a set of experiments in standard and constrained settings with limited or no access to either data or local explanations. Experiments show that GLOCALX is able to accurately emulate several models with simple and small models, reaching state-of-the-art performance against natively global solutions. Our findings show that it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other. This is a key requirement for trustworthy AI, necessary for adoption in high-stakes decision-making applications.
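    The hierarchical aggregation of local decision rules described above can be illustrated with a toy sketch. All names here are hypothetical, and the merge criterion is a deliberate simplification of the paper's method: two same-label rules are generalized by intersecting their conditions, and the merge is kept only if fidelity to the black-box labels stays above a threshold.

```python
from itertools import combinations

# Toy representation: a rule is (frozenset of atomic conditions, label);
# a data record is (frozenset of conditions it satisfies, black-box label).
# This is a simplified sketch, not GLOCALX's actual algorithm.

def fidelity(rules, data):
    """Fraction of records whose black-box label matches the first covering rule."""
    hits = 0
    for record, bb_label in data:
        pred = next((label for conds, label in rules if conds <= record), None)
        hits += (pred == bb_label)
    return hits / len(data)

def merge_rules(rules, data, min_fidelity=0.9):
    """Greedily generalize same-label rule pairs by intersecting their
    conditions, keeping a merge only if fidelity stays above the threshold."""
    rules = list(rules)
    changed = True
    while changed:
        changed = False
        for (i, (c1, l1)), (j, (c2, l2)) in combinations(enumerate(rules), 2):
            if l1 != l2:
                continue
            merged = (c1 & c2, l1)  # keep only the shared conditions
            candidate = [r for k, r in enumerate(rules) if k not in (i, j)] + [merged]
            if fidelity(candidate, data) >= min_fidelity:
                rules = candidate
                changed = True
                break  # restart the scan over the updated rule set
    return rules

# Two local "yes" rules generalize into one rule on their shared condition.
local_rules = [
    (frozenset({"age>30", "income>50k"}), "yes"),
    (frozenset({"age>30", "income<=50k"}), "yes"),
    (frozenset({"age<=30"}), "no"),
]
data = [(conds, label) for conds, label in local_rules]  # black box agrees here
global_rules = merge_rules(local_rules, data, min_fidelity=1.0)
```

    The fidelity check is what makes the generalization safe: a merge that over-generalizes (covering records the black box labels differently) is rejected, so the global rule set only grows simpler while it still emulates the black box.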

    Opening the black box: a primer for anti-discrimination

    The pervasive adoption of Artificial Intelligence (AI) models in the modern information society requires counterbalancing the growing decision power delegated to AI models with risk assessment methodologies. In this paper, we consider the risk of discriminatory decisions and review approaches for discovering discrimination and for designing fair AI models. We highlight the tight relations between discrimination discovery and explainable AI, with the latter being a more general approach for understanding the behavior of black boxes.

    Benchmarking and survey of explanation methods for black box models

    The rise of sophisticated black-box machine learning models in Artificial Intelligence systems has prompted the need for explanation methods that reveal how these models work in a way that is understandable to users and decision makers. Unsurprisingly, the state of the art currently exhibits a plethora of explainers providing many different types of explanations. With the aim of providing a compass for researchers and practitioners, this paper proposes a categorization of explanation methods from the perspective of the type of explanation they return, also considering the different input data formats. The paper accounts for the most representative explainers to date, also discussing similarities and discrepancies of the returned explanations through their visual appearance. A companion website to the paper is provided and continuously updated as new explainers appear. Moreover, a subset of the most robust and widely adopted explainers is benchmarked with respect to a repertoire of quantitative metrics.

    The FPGA based trigger and data acquisition system for the CERN NA62 experiment

    The main goal of the NA62 experiment at CERN is to measure the branching ratio of the ultra-rare K+ → π+νν̄ decay, collecting about 100 events to test the Standard Model of particle physics. Readout uniformity across sub-detectors, scalability, efficient online selection and lossless high-rate readout are key issues. The TDCB and TEL62 boards are the common building blocks of the NA62 TDAQ system. TDCBs measure hit times from the sub-detectors; TEL62s process and store them in a buffer, extracting only those requested by the trigger system following the matching of trigger primitives produced inside the TEL62s themselves. During the NA62 Technical Run at the end of 2012, the TALK board was used as a prototype version of the L0 Trigger Processor.

    Pre-Production and Quality Assurance of the Mu2e Calorimeter Silicon Photomultipliers

    The Mu2e electromagnetic calorimeter has to provide precise information on energy, time and position for ~100 MeV electrons. It is composed of 1348 un-doped CsI crystals, each coupled to two large-area Silicon Photomultipliers (SiPMs). A modular, custom SiPM layout consisting of a 3×2 array of 6×6 mm² UV-extended monolithic SiPMs has been developed to fulfill the Mu2e calorimeter requirements, and a pre-production of 150 prototypes has been procured from three international firms (Hamamatsu, SensL and Advansid). A detailed quality assurance process has been carried out on this first batch of photosensors: the breakdown voltage, the gain, the quenching time, the dark current and the Photon Detection Efficiency (PDE) have been determined for each monolithic cell of each SiPM array. One sample from each vendor has been exposed to a neutron fluence of up to ~8.5 × 10¹¹ 1 MeV (Si) eq. n/cm², and a linear increase of the dark current up to tens of mA has been observed. Another 5 samples from each vendor have undergone accelerated aging in order to verify a Mean Time To Failure (MTTF) higher than ~10⁶ hours.
    Comment: NDIP 2017 - New Developments In Photodetection, 3-7 July 2017, Tours (France)

    Silicon microcantilever sensors to detect the reversible conformational change of a molecular switch, Spiropyran

    The high sensitivity of silicon microcantilever sensors has expanded their use in areas ranging from gas sensing to bio-medical applications. Photochromic molecules also represent promising candidates for a large variety of sensing applications. In this work, the operating principles of these two sensing methods are combined in order to detect the reversible conformational change of a molecular switch, spiropyran. Arrays of silicon microcantilever sensors were functionalized with spiropyran on the gold-covered side and used as test microcantilevers. The microcantilever deflection response was observed over five sequential cycles as the transition from the spiropyran (SP, CLOSED) to the merocyanine (MC, OPEN) state and vice versa, induced by UV and white-light LED sources respectively, proving the reversibility capabilities of this type of sensor. The microcantilever was observed to deflect in one direction when changing to the MC state and in the opposite direction when changing back to the SP state. A tensile stress was induced in the microcantilever when the SP to MC transition took place, while a compressive stress was observed for the reverse transition. These different types of stress are believed to be related to the spatial conformational changes induced in the photochromic molecule upon photo-isomerisation.
