1,268 research outputs found

    Should we librarians all become computer scientists now? : Research data management, data preservation and curation as new fields of work

    More and more libraries are taking on tasks in the area of data preservation and curation. In the German public sector, computer scientists often carry the job title »Angestellte/r in der Datenverwaltung« ("employee in data administration"), even though managing stored data has long ceased to be a task for computer scientists alone. At the same time, library work now goes beyond classic literature provision and increasingly encompasses technical areas. Research data management is a highly topical field of activity for academic libraries. What does it involve, and what demands does it place on the librarian's professional profile? Should we all become computer scientists now?

    FLECSim-SoC: A Flexible End-to-End Co-Design Simulation Framework for System on Chips

    Hardware accelerators for deep neural networks (DNNs) have established themselves over the past decade. Most developments have aimed at higher efficiency with an individual application in mind, which highlights the strong relationship between co-designing the accelerator and the requirements of the application. A structured design flow, however, currently lacks a tool to evaluate a DNN accelerator embedded in a System on Chip (SoC) platform. To address this gap in the state of the art, we introduce FLECSim, a tool framework that enables an end-to-end simulation of an SoC with dedicated accelerators, CPUs and memories. FLECSim offers flexible configuration of the system and straightforward integration of new accelerator models in both SystemC and RTL, which allows for early design verification. During the simulation, FLECSim provides metrics of the SoC that can be used to explore the design space. Finally, we present the capabilities of FLECSim, perform an exemplary evaluation with a systolic-array-based accelerator, and explore the design parameters in terms of accelerator size, power, and performance.
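
    To make the design-space exploration concrete, the following is a minimal Python sketch of the kind of sweep such a simulator enables. The simulate() stub and its metrics are illustrative assumptions for this sketch, not FLECSim's actual API.

        # Hypothetical design-space sweep over systolic-array sizes.
        # simulate() is a stand-in for a full SoC simulation run; a
        # framework like FLECSim would supply these metrics instead.
        def simulate(array_dim: int) -> dict:
            macs = array_dim * array_dim                # MACs per cycle
            return {
                "dim": array_dim,
                "throughput_gops": 2 * macs,            # toy model: 1 GHz clock
                "power_w": 0.05 + 0.002 * macs,         # toy linear power model
            }

        # Pick the configuration with the best energy efficiency (GOPS/W).
        best = max((simulate(d) for d in (8, 16, 32, 64)),
                   key=lambda m: m["throughput_gops"] / m["power_w"])
        print(best)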

    Data Movement Reduction for DNN Accelerators: Enabling Dynamic Quantization Through an eFPGA

    Computational requirements for deep neural networks (DNNs) have been on a rising trend for years. Moreover, network dataflows and topologies are becoming more sophisticated to address more challenging applications. DNN accelerators cannot adapt quickly to constantly changing DNNs. In this paper, we describe our approach to making a static accelerator more versatile by adding an embedded FPGA (eFPGA). The eFPGA is tightly coupled to the on-chip network, which allows us to pass data through the eFPGA before and after it is processed by the DNN accelerator. Hence, the proposed solution is able to address changing requirements quickly. To show the benefits of this approach, we propose an eFPGA application that enables dynamic quantization of data. We can fit four number converters on a 1.5 mm² eFPGA, which can process 400M data elements per second. We will validate our work in practice in the near future, with an SoC tapeout in the ongoing EPI project.
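
    As a rough illustration of what such a number converter computes, here is a minimal per-tensor dynamic int8 quantization sketch in Python. It shows the general technique only and is an assumption about the approach, not the paper's hardware implementation.

        import numpy as np

        # Dynamic quantization derives the scale from the data at run time,
        # so the converter can adapt to each tensor passing through it.
        def quantize_dynamic(x: np.ndarray):
            m = float(np.abs(x).max())
            scale = m / 127.0 if m > 0 else 1.0          # avoid division by zero
            q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
            return q, scale

        def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
            return q.astype(np.float32) * scale

        x = np.random.randn(8).astype(np.float32)
        q, s = quantize_dynamic(x)
        print(np.max(np.abs(x - dequantize(q, s))))      # small round-off error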

    Essays On the Optimal Interplay of Early and Late Education Subsidies and Taxation

    In this dissertation I extend the model of Krueger and Ludwig (2013, 2016) by (i) linking the prospect of successful college completion to human capital, (ii) introducing taste shocks to deal with potential issues caused by the discrete college choice, and (iii) incorporating the human capital process during primary and secondary education. This results in a large-scale OLG model that accounts for the core mechanisms of the human capital literature and allows the whole human capital process across primary, secondary, and tertiary education to respond endogenously to changes in college subsidies and in non-tertiary education investments by the government. Embedding this setup in a large-scale OLG environment makes it possible to compare the different channels through which the policy measures operate in a realistic framework. In partial equilibrium, non-tertiary education investments and college subsidies differ in their distributional consequences. Both measures increase average human capital, which leads to higher aggregate production and consumption. However, while government spending on primary and secondary education raises the human capital of children from all household types, college subsidies do not increase the human capital investments of education- and income-poor parents, which increases inequality. When wages respond to shifts in the labor market, both policy measures increase equality. This works through an endogenous decrease in the college wage premium, which lowers the return on parental human capital investments. The key difference between the two measures is that, while both instruments result in a more equal distribution of human capital, non-tertiary government investments compensate for the decreased human capital expenditures of parents and shift average human capital to a higher level. The bivariate experiments underline the interdependence of the two policy measures: the benefits of college subsidies can only be realized if young adults have the skills to complete college successfully, and early human capital investments remain unused if college education is affordable only for a small fraction of the population. In all experiments performed, the best policy mix calls for an increase in primary, secondary, and tertiary education investments by the government, financed by higher labor taxes relative to the current status quo.

    Broad redshifted line as a signature of outflow

    We formulate and solve the diffusion problem of line-photon propagation in a bulk outflow from a compact object (black hole or neutron star) using a generic assumption regarding the distribution of line photons within the outflow. Thomson scattering of the line photons within the expanding flow leads to a decrease of their energy that is of first order in v/c, where v is the outflow velocity and c is the speed of light. We demonstrate that the emergent line profile is closely related to the time distribution of photons diffusing through the flow (the light curve) and consists of a broad redshifted feature. We analyze the line profiles for a general outflow density distribution. We emphasize that the redshifted lines are intrinsic properties of the powerful outflows that are expected in many compact objects. Comment: 16 pages, 1 black-and-white figure and 2 color figures; accepted for publication in the Astrophysical Journal.
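
    A minimal sketch of the first-order effect referred to above, using the standard Doppler-shift formula for Thomson scattering off bulk-moving electrons (my framing, not necessarily the paper's derivation):

        % Energy change in one scattering off electrons with bulk velocity v,
        % for incoming/outgoing photon directions \hat{n} and \hat{n}':
        E' \simeq E \left[ 1 + \frac{\vec{v} \cdot (\hat{n}' - \hat{n})}{c} \right]
        % Averaged over many scatterings in a diverging outflow
        % (\nabla \cdot \vec{v} > 0), the photon loses energy at first order
        % in v/c over each mean free path \lambda:
        \frac{\langle \Delta E \rangle}{E} \simeq -\frac{\lambda}{3c} \, \nabla \cdot \vec{v}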

    Joint Representations for Reinforcement Learning with Multiple Sensors

    Combining inputs from multiple sensor modalities effectively in reinforcement learning (RL) is an open problem. While many self-supervised representation learning approaches exist to improve performance and sample complexity for image-based RL, they usually neglect other available information, such as robot proprioception. However, using this proprioception for representation learning can help algorithms focus on relevant aspects and guide them toward finding better representations. In this work, we systematically analyze representation learning for RL from multiple sensors by building on Recurrent State Space Models. We propose a combination of reconstruction-based and contrastive losses, which allows us to choose the most appropriate method for each sensor modality. We demonstrate the benefits of joint representations, particularly with distinct loss functions for each modality, for model-free and model-based RL on complex tasks. These include tasks where the images contain distractions or occlusions, as well as a new locomotion suite. We show that combining reconstruction-based and contrastive losses for joint representation learning improves performance significantly compared to a post hoc combination of image representations and proprioception, and can also improve the quality of learned models for model-based RL.
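
    A minimal PyTorch-style sketch of the per-modality loss combination described above, assuming image latents are trained contrastively (InfoNCE) and proprioception via reconstruction. Names, shapes, and the specific loss pairing are illustrative assumptions, not the authors' code.

        import torch
        import torch.nn.functional as F

        def joint_loss(img_z, img_z_aug, prop_pred, prop_obs, temperature=0.1):
            # Contrastive (InfoNCE) term on image latents: matching rows of
            # img_z and img_z_aug are positives, all other rows negatives.
            img_z = F.normalize(img_z, dim=1)
            img_z_aug = F.normalize(img_z_aug, dim=1)
            logits = img_z @ img_z_aug.t() / temperature   # (B, B) similarity
            labels = torch.arange(img_z.size(0))
            contrastive = F.cross_entropy(logits, labels)
            # Reconstruction term on proprioception: prediction vs. observation.
            recon = F.mse_loss(prop_pred, prop_obs)
            return contrastive + recon

        B, D = 32, 64
        loss = joint_loss(torch.randn(B, D), torch.randn(B, D),
                          torch.randn(B, 8), torch.randn(B, 8))
        print(float(loss))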

    The Memory of Beta Factors

    Researchers and practitioners employ a variety of time-series processes to forecast betas, using either short-memory models or implicitly imposing infinite memory. We find that both approaches are inadequate: beta factors show consistent long-memory properties. For the vast majority of stocks, we reject both the short-memory and difference-stationary (random walk) alternatives. A pure long-memory model reliably provides superior beta forecasts compared to all alternatives. Finally, we document the relation of firm characteristics with the forecast error differentials that result from inadequately imposing short-memory or random walk instead of long-memory processes.
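
    For reference, the standard way to formalize long memory is the fractionally integrated ARFIMA(0, d, 0) process; this is a textbook illustration, not necessarily the authors' exact specification:

        % Fractional differencing: d = 0 is short memory, d = 1 a random walk,
        % and 0 < d < 1/2 a stationary long-memory process.
        (1 - L)^d \, \beta_t = \varepsilon_t,
        \qquad
        (1 - L)^d = \sum_{k=0}^{\infty} \binom{d}{k} (-L)^k
        % For 0 < d < 1/2 the autocorrelations decay hyperbolically rather
        % than exponentially:
        \rho(k) \sim C \, k^{2d - 1} \quad \text{as } k \to \infty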

    Measuring the polymerization stress of self-adhesive resin composite cements by crack propagation

    OBJECTIVES To test the polymerization stress of nine self-adhesive resin composite cements (G-CEM, iCEM, Bifix SE, Maxcem Elite, PANAVIA SA, SoloCem, SmartCem 2, SpeedCEM, RelyX Unicem 2) and one glass ionomer cement (control group; Ketac Cem). MATERIALS AND METHODS The crack propagation of a feldspar ceramic (n = 130) was determined by measuring crack lengths originating from Vickers indentations before and after the application and polymerization of the self-adhesive resin cements. Results for crack propagation were converted to polymerization stress values, and statistical analysis was performed using one-way ANOVA followed by the Scheffé post hoc test. RESULTS SmartCem 2 presented higher stress values than iCEM, SoloCem, and Ketac Cem, while Ketac Cem showed lower values than Bifix SE, Maxcem Elite, SmartCem 2, SpeedCEM, and RelyX Unicem 2. CONCLUSIONS Self-adhesive resin composite cements differ in their polymerization stress, which may affect the durability of the restoration. For restorations made from ceramics with lower flexural strength, such as feldspar ceramics, resin composite cements with lower polymerization stress should be preferred. CLINICAL RELEVANCE As high polymerization shrinkage may increase crack propagation, determining the polymerization stress of self-adhesive resin composite cements used for fixing all-ceramic restorations is an important factor.
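
    One common way such crack-length measurements are converted to stress is standard indentation fracture mechanics in the spirit of Marshall and Lawn; this is a generic sketch, not necessarily this study's exact calibration:

        % P: indentation load, c: crack length, K_{Ic}: fracture toughness,
        % \chi and \psi: geometry/material calibration constants.
        % Stress intensity at an indentation crack combines the residual
        % indentation term and the applied-stress term:
        K = \chi \frac{P}{c^{3/2}} + \psi \, \sigma \sqrt{c}
        % Setting K = K_{Ic} at equilibrium and solving for the stress
        % acting on the crack gives
        \sigma = \frac{K_{Ic} - \chi P c^{-3/2}}{\psi \sqrt{c}}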