
    A Geometric Derivation of the Irwin-Hall Distribution

    The Irwin-Hall distribution is the distribution of the sum of a finite number of independent, identically distributed uniform random variables on the unit interval. Many applications arise since round-off errors have a transformed Irwin-Hall distribution and the distribution supplies spline approximations to normal distributions. We review some of the distribution's history. The present derivation is very transparent, since it is geometric and explicitly uses the inclusion-exclusion principle. In certain special cases, the derivation can be extended to linear combinations of independent uniform random variables on other intervals of finite length. The derivation adds to the literature on methodologies for finding distributions of sums of random variables, especially distributions whose domains have boundaries, so that the inclusion-exclusion principle might be employed.
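The closed form that such an inclusion-exclusion derivation arrives at can be checked numerically. The sketch below (illustrative, not taken from the paper) implements the standard Irwin-Hall CDF, F_n(x) = (1/n!) * sum_{k=0}^{floor(x)} (-1)^k C(n,k) (x-k)^n, and compares it against simulated sums of uniforms:

```python
import math
import random

def irwin_hall_cdf(x, n):
    """CDF of the sum of n i.i.d. Uniform(0,1) variables,
    via the standard inclusion-exclusion closed form."""
    if x <= 0:
        return 0.0
    if x >= n:
        return 1.0
    return sum((-1) ** k * math.comb(n, k) * (x - k) ** n
               for k in range(int(math.floor(x)) + 1)) / math.factorial(n)

# Monte Carlo sanity check against simulated sums of uniforms.
random.seed(0)
n, x, trials = 3, 1.5, 200_000
hits = sum(sum(random.random() for _ in range(n)) <= x for _ in range(trials))
print(irwin_hall_cdf(x, n))   # exactly 0.5 at x = n/2, by symmetry
print(hits / trials)          # should be close to 0.5
```

At x = n/2 the distribution is symmetric about its mean, so both the closed form and the simulation should agree on 0.5.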

    Intuitive and Accurate Material Appearance Design and Editing

    Creating and editing high-quality materials for photorealistic rendering can be a difficult task due to the diversity and complexity of material appearance. Material design is the process by which artists specify the reflectance properties of a surface, such as its diffuse color and specular roughness. Even with the support of commercial software packages, material design can be a time-consuming trial-and-error task due to the counter-intuitive nature of complex reflectance models. Moreover, many material design tasks require the physical realization of virtually designed materials as the final step, which makes the process even more challenging due to rendering artifacts and the limitations of fabrication. In this dissertation, we propose a series of studies and novel techniques to improve the intuitiveness and accuracy of material design and editing. Our goal is to understand how humans visually perceive materials, simplify user interaction in the design process, and improve the accuracy of the physical fabrication of designs. Our first work focuses on understanding the perceptual dimensions of measured material data. We build a perceptual space based on a low-dimensional reflectance manifold that is computed from crowd-sourced data using a multi-dimensional scaling model. Our analysis shows the proposed perceptual space is consistent with the physical interpretation of the measured data. We also put forward a new material editing interface that takes advantage of the proposed perceptual space. We visualize each dimension of the manifold to help users understand how it changes the material appearance. Our second work investigates the relationship between translucency and glossiness in material perception. We conduct two human subject studies to test whether subsurface scattering impacts gloss perception and examine how the shape of an object influences this perception.
Based on our results, we discuss why it is necessary to include transparent and translucent media in future research on gloss perception and material design. Our third work addresses user interaction in the material design system. We present a novel Augmented Reality (AR) material design prototype, which allows users to visualize their designs against a real environment and lighting. We believe introducing AR technology can make the design process more intuitive and improve the authenticity of the results for both novice and experienced users. To test this assumption, we conduct a user study comparing our prototype with a traditional material design system using a gray-scale background and synthetic lighting. The results demonstrate that with the help of AR techniques, users perform better in terms of objectively measured accuracy and time, and they are subjectively more satisfied with their results. Finally, our last work turns to the challenge presented by the physical realization of designed materials. We propose a learning-based solution to map the virtually designed appearance to a meso-scale geometry that can be easily fabricated. Essentially, this is a fitting problem, but compared with previous solutions, our method provides the fabrication recipe with higher reconstruction accuracy over a large fitting gamut. We demonstrate the efficacy of our solution by comparing our reconstructions with existing solutions and comparing fabrication results with the original design. We also provide an application of bi-scale material editing using the proposed method.
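The kind of low-dimensional perceptual embedding described above can be sketched with classical (Torgerson) multidimensional scaling, which recovers coordinates whose pairwise distances approximate a given dissimilarity matrix. This is a generic MDS sketch on a toy matrix, not the dissertation's crowd-sourced model:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points in k dimensions so that
    pairwise Euclidean distances approximate the dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]             # keep the top-k components
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale                  # n x k embedding

# Toy check: three points on a line are recovered up to rotation/reflection.
pts = np.array([[0.0], [1.0], [3.0]])
D = np.abs(pts - pts.T)                          # exact pairwise distances
X = classical_mds(D, k=1)
print(np.round(np.abs(X[:, 0] - X[0, 0]), 6))    # distances from first point: 0, 1, 3
```

Because the toy dissimilarities are exact Euclidean distances, the recovered 1-D coordinates reproduce them exactly; real perceptual dissimilarities would only be approximated.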

    Software reliability through fault-avoidance and fault-tolerance

    The use of back-to-back, or comparison, testing for regression testing or porting is examined. The efficiency and the cost of the strategy are compared with those of manual and table-driven single-version testing. Some of the key parameters that influence the efficiency and the cost of the approach are the failure identification effort during single-version program testing, the extent of implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault-masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.
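The core of back-to-back testing is running two versions of the same routine on identical (often random) inputs and flagging disagreements. A minimal sketch, with hypothetical "old" and "new" versions of a function (the seeded bug is purely illustrative):

```python
import random

# Hypothetical "old" (trusted) and "new" (ported) versions of the same routine.
def mean_old(xs):
    return sum(xs) / len(xs)

def mean_new(xs):
    # Ported version with a seeded bug: integer division truncates.
    return sum(xs) // len(xs)

def back_to_back(ref, candidate, cases, tol=1e-9):
    """Run both versions on the same inputs; collect any discrepancies."""
    failures = []
    for xs in cases:
        a, b = ref(xs), candidate(xs)
        if abs(a - b) > tol:
            failures.append((xs, a, b))
    return failures

random.seed(1)
cases = [[random.randint(0, 10) for _ in range(5)] for _ in range(100)]
fails = back_to_back(mean_old, mean_new, cases)
print(f"{len(fails)} of {len(cases)} random cases disagree")
```

Note the fault-masking issue the abstract mentions: cases whose sum happens to be divisible by the length produce identical outputs in both versions, so the bug survives those inputs undetected.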

    Determining the Effects of Cooling Rate on Magma Crystallisation Using a High Temperature Heating Stage

    In order to understand igneous rock textures, the history of erupted products, and the dynamics of lava flow emplacement, it is necessary to understand how crystals grow and what effects different cooling histories have on their growth. Previous studies of crystal growth have often assumed constant crystal growth rates and focused on crystallisation over long timescales more appropriate to igneous intrusions. This work aims to quantify and describe crystal growth rates, morphological variations, and the textural development of crystals growing from natural lava flow samples, primarily focussing on plagioclase feldspar. High Temperature Heating Stage experiments were carried out at temperatures and cooling rates appropriate to basaltic lava flows, in which wafers of the glassy rind from Blue Glassy Pahoehoe were melted and re-crystallised. It was possible to directly observe and record crystal growth over time at controlled cooling rates, and to extract information from the quenched products. The experiments in this study grew crystals at very low undercoolings, maintaining an interface-controlled growth regime and facetted crystal morphologies. Bulk evolution of crystal growth indicated that growth rates were not constant over time, decaying as the crystals grew. The morphology and aspect ratio of these crystals changed over time, with aspect ratio increasing as growth was significantly faster in the length direction during the observed period. The relationship between mean aspect ratio and crystallisation time proposed by Holness (2014) was experimentally verified. We also observed the ‘true’ crystallisation time of crystals, highlighting a need for better constraints on how crystal growth times are used to calculate growth rates. The results of this study will contribute to better future interpretations of magmatic histories and crystallisation conditions in natural basaltic lava flows, as well as refinement of Crystal Size Distribution studies.
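The non-constant, decaying growth rates reported above can be extracted from size-time observations by finite differences. The sketch below uses synthetic data under an assumed diffusion-like growth law (purely illustrative; the study's measured data and growth law may differ):

```python
import numpy as np

# Synthetic crystal-length measurements (micrometres) at times t (seconds),
# assuming for illustration a diffusion-like law L(t) = a * sqrt(t).
t = np.linspace(1.0, 100.0, 50)
L = 2.0 * np.sqrt(t)

# Instantaneous growth rate G(t) = dL/dt via finite differences.
G = np.gradient(L, t)

print(f"G(start) = {G[0]:.3f} um/s, G(end) = {G[-1]:.3f} um/s")
# A decaying rate shows up as a strictly decreasing G over the record.
print("rate decays monotonically:", bool(np.all(np.diff(G) < 0)))
```

This is the kind of check that distinguishes a decaying rate from the constant-rate assumption of earlier studies: under a constant rate, G would be flat rather than decreasing.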

    Beam-Induced Damage Mechanisms and their Calculation

    The rapid interaction of highly energetic particle beams with matter induces dynamic responses in the impacted component. If the beam pulse is sufficiently intense, extreme conditions can be reached, such as very high pressures, changes of material density, phase transitions, intense stress waves, material fragmentation, and explosions. Even at lower intensities and longer time scales, significant effects may be induced, such as vibrations, large oscillations, and permanent deformation of the impacted components. These lectures provide an introduction to the mechanisms that govern the thermomechanical phenomena induced by the interaction between particle beams and solids, and to the analytical and numerical methods that are available for assessing the response of impacted components. An overview of the design principles of such devices is also provided, along with descriptions of material selection guidelines and the experimental tests that are required to validate materials and components exposed to interactions with energetic particle beams. Comment: 69 pages, contribution to the 2014 Joint International Accelerator School: Beam Loss and Accelerator Protection, Newport Beach, CA, USA, 5-14 Nov 2014.
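A standard first-order estimate in this kind of assessment: when the beam pulse is much shorter than heat-diffusion time scales, the deposited energy heats the material adiabatically, so dT = dE / (m * c_p). The numbers below are illustrative assumptions, not values from the lectures:

```python
# First-order adiabatic estimate of the temperature rise in a beam-impacted
# component: for a pulse much shorter than heat-diffusion time scales, the
# deposited energy converts locally to heat, dT = dE / (m * c_p).
# All material and beam numbers below are illustrative assumptions.

def adiabatic_temp_rise(energy_j, mass_kg, cp_j_per_kg_k):
    return energy_j / (mass_kg * cp_j_per_kg_k)

cp_copper = 385.0      # J/(kg K), approximate room-temperature copper
mass = 1e-3            # kg of material in the heated volume (assumed)
energy = 500.0         # J deposited by the pulse in that volume (assumed)

dT = adiabatic_temp_rise(energy, mass, cp_copper)
print(f"estimated temperature rise: {dT:.0f} K")
```

Rises of this order can approach phase-transition thresholds, which is why the full thermomechanical analysis described in the lectures goes well beyond this single formula.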

    Group-based financial institutions for the rural poor in Bangladesh: an institutional- and household-level analysis

    Table of Contents: Tables, Figures, Foreword, Acknowledgments, and Summary; 1. Introduction; 2. Determinants of the Placement and Outreach of Group-Based Financial Institutions: A County-Level Analysis; 3. Group-Based Financial Institutions: Structure, Conduct, and Performance; 4. Household Participation in Financial Markets; 5. Analysis of the Household-Level Impact of Group-Based Credit Institutions in Bangladesh; 6. Conclusions and Implications for Policy; Appendix A: Survey Modules, Sampling Frame, and Location of Survey Sites; Appendix B: Adult Equivalent Consumption Units Differentiated by Age and Gender; References. Keywords: Rural poor, Financial institutions, Microenterprises, Household surveys.

    A Hardware Platform for Communication and Localization Performance Evaluation of Devices inside the Human Body

    Body area networks (BANs) are a technology gaining widespread attention for applications in medical examination, monitoring, and emergency therapy. The basic concept of a BAN is monitoring a set of sensors on or inside the human body that enable the transfer of vital parameters between the patient's location and the physician in charge. Body area networks have certain characteristics that impose new demands on the performance evaluation of systems for wireless access and localization of medical sensors. However, real-time performance evaluation and localization in wireless body area networks is extremely challenging because experimenting with actual devices inside the human body is infeasible. This thesis addresses the resulting need for a real-time hardware platform. We introduce a unique hardware platform for performance evaluation of body area wireless access and in-body localization. The platform combines a wideband multipath channel simulator, the Elektrobit PROPSim™ C8, with a typical medical implantable device, the Zarlink ZL70101 Advanced Development Kit. For simulation of BAN channels, we adopt the channel model defined for the Medical Implant Communication Service (MICS) band. Packet Reception Rate (PRR) is used as the criterion for evaluating wireless access performance. Several body area propagation scenarios simulated using this hardware platform are validated, compared, and analyzed. We show that, among the three modulations considered (two forms of 2FSK and 4FSK), the one with the lowest raw data rate achieves the best PRR, in other words the best wireless access performance. We also show that the channel model inside the human body predicts better wireless access performance than the one through the human body. For in-body localization, we focus on a Received Signal Strength (RSS) based localization algorithm. An improved maximum likelihood algorithm is introduced and applied.
A number of points along the propagation path in the small intestine are studied and compared. Localization error is analyzed for different sensor positions. We also compare our error results with the Cramér-Rao lower bound (CRLB), showing that our localization algorithm has acceptable performance. We evaluate multiple medical sensors as devices under test with our hardware platform, yielding satisfactory localization performance.
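RSS-based localization of the kind described here typically assumes a log-distance path-loss model; under i.i.d. Gaussian shadowing, the maximum-likelihood position estimate reduces to a least-squares fit over candidate positions. The sketch below uses a coarse grid search with illustrative parameters, not the thesis's improved algorithm or its measured MICS-band model:

```python
import numpy as np

# Log-distance path-loss model: RSS(d) = P0 - 10 * n_exp * log10(d / d0).
# All parameter values here are illustrative assumptions.
P0, n_exp, d0 = -40.0, 3.0, 0.01           # dBm at reference d0, path-loss exponent

def rss(sensors, target):
    d = np.linalg.norm(sensors - target, axis=-1)
    return P0 - 10.0 * n_exp * np.log10(d / d0)

# Four hypothetical body-surface receivers (metres) and an in-body target.
sensors = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3]])
true_pos = np.array([0.12, 0.18])
measured = rss(sensors, true_pos)           # noiseless for this sketch

# ML under Gaussian shadowing = least squares; here a coarse grid search.
grid = np.linspace(0.01, 0.29, 57)
best, best_err = None, np.inf
for x in grid:
    for y in grid:
        cand = np.array([x, y])
        err = np.sum((rss(sensors, cand) - measured) ** 2)
        if err < best_err:
            best, best_err = cand, err
print("estimated position:", best)          # recovers the true (0.12, 0.18)
```

With shadowing noise added, the residual error of such an estimator is what gets compared against the CRLB, as the thesis does for the small-intestine scenarios.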

    Fairness Testing: A Comprehensive Survey and Analysis of Trends

    Unfair behaviors of Machine Learning (ML) software have garnered increasing attention and concern among software engineers. To tackle this issue, extensive research has been dedicated to conducting fairness testing of ML software, and this paper offers a comprehensive survey of existing studies in this field. We collect 100 papers and organize them based on the testing workflow (i.e., how to test) and testing components (i.e., what to test). Furthermore, we analyze the research focus, trends, and promising directions in the realm of fairness testing. We also identify widely adopted datasets and open-source tools for fairness testing.
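One of the criteria commonly checked in fairness testing is statistical (demographic) parity: whether positive-prediction rates differ across demographic groups. A toy sketch on synthetic predictions (not tied to any specific surveyed tool or dataset):

```python
# Toy fairness test: statistical (demographic) parity difference.
# The predictions and group labels below are synthetic.

def parity_difference(preds, groups):
    """Difference in positive-prediction rates between two groups."""
    rate = {}
    for g in sorted(set(groups)):
        selected = [p for p, gg in zip(preds, groups) if gg == g]
        rate[g] = sum(selected) / len(selected)
    ga, gb = sorted(rate)
    return rate[ga] - rate[gb], rate

# Synthetic binary predictions for two demographic groups "a" and "b".
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

diff, rates = parity_difference(preds, groups)
print(rates)                               # {'a': 0.6, 'b': 0.4}
print(f"parity difference: {diff:+.1f}")   # +0.2 -> group 'a' favored
```

A fairness test would flag the model when this difference exceeds a chosen threshold; the surveyed literature covers many other criteria (e.g., individual and causal notions) beyond this group-level one.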

    Large state spaces and self-supervision in reinforcement learning

    Reinforcement Learning (RL) is an agent-oriented learning paradigm concerned with learning by interacting with an uncertain environment. Combined with deep neural networks as function approximators, deep reinforcement learning (Deep RL) has recently allowed us to tackle highly complex tasks and enable artificial agents to master classic games like Go, play video games from pixels, and solve robotic control tasks. However, a closer look at these remarkable empirical successes reveals some fundamental limitations. First, it has been challenging to combine desirable features of RL algorithms, such as off-policy and multi-step learning, with function approximation in a way that leads to both stable and efficient algorithms in large state spaces. Moreover, Deep RL algorithms tend to be very sample inefficient due to the rudimentary exploration-exploitation strategies these approaches employ. Finally, they require an enormous amount of supervised data and end up producing a narrow agent able to solve only the task that it was trained on. In this thesis, we propose novel solutions to the problems of off-policy learning and the exploration-exploitation dilemma in large state spaces, as well as self-supervision in RL. On the topic of off-policy learning, we provide two contributions.
First, for the problem of policy evaluation, we show that combining popular off-policy and multi-step learning methods with linear value function parameterization could lead to undesirable instability, and we derive a provably convergent variant of these methods. Second, for policy optimization, we propose to stabilize the policy improvement step through an off-policy divergence regularization that constrains the discounted state-action visitation induced by consecutive policies to be close to one another. Next, we study online learning in large state spaces and we focus on two structural assumptions to make the problem tractable: smooth and linear environments. For smooth environments, we propose an efficient online algorithm that actively learns an adaptive partitioning of the joint space by zooming in on more promising and frequently visited regions. For linear environments, we study a more realistic setting, where the environment is now allowed to evolve dynamically and even adversarially over time, but the total change is still bounded. To address this setting, we propose an efficient online algorithm based on weighted least squares value iteration. It uses exponential weights to smoothly forget data that are far in the past, which drives the agent to keep exploring to discover changes. Finally, beyond the classical RL setting, we consider an agent interacting with its environment without a reward signal. We propose to learn a pair of representations that map state-action pairs to some latent space. During the unsupervised phase, these representations are trained using reward-free interactions to encode long-range relationships between states and actions, via a predictive occupancy map. At test time, once a reward function is revealed, we show that the optimal policy for that reward is directly obtained from these representations, with no planning. This is a step towards building fully controllable agents.
A common theme in the thesis is the design of provable RL algorithms that generalize. In the first and second parts, we deal with generalization in large state spaces either by linear function approximation or state aggregation. In the last part, we focus on generalization over reward functions and we propose a task-agnostic representation learning framework that is provably able to solve all reward functions.
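The off-policy instability with linear value functions that the thesis analyzes shows up even in tiny examples. Below is the classic two-state "w, 2w" construction from the function-approximation literature (in the spirit of Tsitsiklis and Van Roy's counterexample, not the thesis's own analysis): with features phi(s1)=1, phi(s2)=2, zero rewards, and updates applied only on the s1 -> s2 transition, TD(0) diverges for gamma > 0.5:

```python
# Classic two-state off-policy divergence demo with linear function
# approximation: value estimates V(s1) = w, V(s2) = 2w, all rewards 0,
# deterministic transition s1 -> s2. Updating only that transition (an
# off-policy state weighting) makes the weight grow without bound when
# gamma > 0.5, since each update multiplies w by 1 + alpha*(2*gamma - 1).

gamma, alpha = 0.99, 0.1
w = 1.0
history = [w]
for _ in range(100):
    # TD(0) update on the s1 -> s2 transition: target = 0 + gamma * 2w.
    td_error = gamma * 2.0 * w - 1.0 * w
    w += alpha * td_error * 1.0      # semi-gradient step at phi(s1) = 1
    history.append(w)

print(f"w after 100 updates: {history[-1]:.3e}")   # grows instead of converging
```

The provably convergent variants developed in the thesis are designed precisely to rule out this kind of blow-up while keeping off-policy, multi-step updates.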