Search literacy: learning to search to learn
People can often find themselves out of their depth when they face knowledge-based problems, such as faulty technology or medical concerns. This can also happen in everyday domains that users are simply inexperienced with, like cooking. These are common exploratory search conditions, where users do not know enough about the domain to judge whether they are submitting a good query, or whether the results directly resolve their need or can be adapted to do so. In such situations, people turn to their friends for help, or to forums like StackOverflow, so that someone can explain things to them and translate information to their specific need. This short paper describes work in progress within a Google-funded project focusing on Search Literacy in these situations, where improved search skills will help users to learn as they search, to search better, and to better comprehend the results. Focusing on the technology-problem domain, we present initial results from a qualitative study of questions asked and answers given on StackOverflow, and present plans for designing search-engine support to help searchers learn as they search.
Better Together: Unifying Datalog and Equality Saturation
We present egglog, a fixpoint reasoning system that unifies Datalog and equality saturation (EqSat). Like Datalog, it supports efficient incremental execution, cooperating analyses, and lattice-based reasoning. Like EqSat, it supports term rewriting, efficient congruence closure, and extraction of optimized terms.
We identify two recent applications, a unification-based pointer analysis in Datalog and an EqSat-based floating-point term rewriter, that have been hampered by features missing from Datalog but found in EqSat, or vice versa. We evaluate egglog by reimplementing those projects in egglog. The resulting systems are faster, simpler, and fix bugs found in the originals.
Comment: PLDI 202
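The EqSat half of this combination rests on congruence closure over an e-graph. A minimal sketch in Python (illustrative only, not egglog's actual API or data layout): e-nodes are hash-consed against canonical child classes, a union-find tracks merged e-classes, and a rebuild step restores congruence after merges, so that merging a with b also merges f(a) with f(b).

```python
# Toy e-graph: hash-consed e-nodes, union-find over e-class ids,
# and a rebuild pass restoring congruence after unions.
# This is an illustrative sketch, not egglog's real implementation.

class EGraph:
    def __init__(self):
        self.uf = {}      # e-class id -> parent id (union-find)
        self.memo = {}    # canonical e-node (op, child classes) -> e-class id
        self.next_id = 0

    def find(self, x):
        # Union-find lookup with path halving.
        while self.uf[x] != x:
            self.uf[x] = self.uf[self.uf[x]]
            x = self.uf[x]
        return x

    def canon(self, node):
        # Canonicalize an e-node by canonicalizing its child classes.
        op, args = node
        return (op, tuple(self.find(a) for a in args))

    def add(self, op, *args):
        # Hash-cons: identical canonical e-nodes share one e-class.
        node = self.canon((op, args))
        if node in self.memo:
            return self.find(self.memo[node])
        cid = self.next_id
        self.next_id += 1
        self.uf[cid] = cid
        self.memo[node] = cid
        return cid

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a != b:
            self.uf[a] = b
        return self.find(b)

    def rebuild(self):
        # Restore congruence: re-canonicalize every e-node; nodes that
        # collide after canonicalization force their e-classes to merge.
        changed = True
        while changed:
            changed = False
            new_memo = {}
            for node, cid in self.memo.items():
                node, cid = self.canon(node), self.find(cid)
                if node in new_memo:
                    if self.find(new_memo[node]) != cid:
                        self.union(new_memo[node], cid)
                        changed = True
                else:
                    new_memo[node] = cid
            self.memo = new_memo

eg = EGraph()
a, b = eg.add("a"), eg.add("b")
fa, fb = eg.add("f", a), eg.add("f", b)
eg.union(a, b)     # assert a == b, e.g. from a rewrite rule
eg.rebuild()       # congruence now forces f(a) == f(b)
assert eg.find(fa) == eg.find(fb)
```

egglog's contribution is running rules like this to a fixpoint alongside Datalog-style relations and lattices; the sketch above covers only the congruence-closure kernel.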
Effect of positioning systems on the field distribution in a GTEM cell
A GTEM cell (Gigahertz Transverse Electromagnetic Cell) is a standardized measurement and test environment for electromagnetic compatibility (EMC). It is characterized by the generation of homogeneous electromagnetic fields over a broad frequency spectrum [1]. For the investigation of field-coupled EMC problems, the precise measurement [2] of electromagnetic fields is of great importance alongside analytical and numerical studies. Automated positioning systems for field probes [3, 4] reduce the required measurement time and increase reproducibility; ideally, the system used should have no influence on the field. In this work, the effects of two different positioning systems on the field to be measured in a GTEM cell are investigated and demonstrated. For this purpose, an automated cross table and a positioning system with a suspended field probe are placed in the cell.
Reconstruction of primary vertices at the ATLAS experiment in Run 1 proton–proton collisions at the LHC
This paper presents the method and performance of primary vertex reconstruction in proton–proton collision data recorded by the ATLAS experiment during Run 1 of the LHC. The studies presented focus on data taken during 2012 at a centre-of-mass energy of √s = 8 TeV. The performance has been measured as a function of the number of interactions per bunch crossing over a wide range, from one to seventy. The measurement of the position and size of the luminous region and its use as a constraint to improve the primary vertex resolution are discussed. A longitudinal vertex position resolution of about 30 μm is achieved for events with a high multiplicity of reconstructed tracks. The transverse position resolution is better than 20 μm and is dominated by the precision on the size of the luminous region. An analytical model is proposed to describe the primary vertex reconstruction efficiency as a function of the number of interactions per bunch crossing and of the longitudinal size of the luminous region. Agreement between the data and the predictions of this model is better than 3% up to seventy interactions per bunch crossing.
Fluent integration of laboratory data into biocatalytic process simulation using EnzymeML, DWSIM, and ontologies
The importance of biocatalysis for ecologically sustainable syntheses in the chemical industry and for applications in everyday life is increasing. Designing efficient applications requires knowledge of the relevant enzyme kinetics; however, their measurement is laborious and error-prone. Flow reactors are suitable for rapid screening of reaction parameters. Here, a novel workflow is proposed that includes digital image processing (DIP) for the quantification of product concentrations, together with structured data acquisition using EnzymeML spreadsheets combined with ontology-based semantic information, leading to rapid and smooth data integration into a simulation tool for kinetics evaluation. One of the major findings is that a flexibly adaptive ontology is essential for FAIR (findability, accessibility, interoperability, reusability) data handling. Furthermore, Python interfaces enable consistent data transfer.
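The final step of such a workflow, turning measured concentrations into kinetic parameters, can be sketched in a few lines of dependency-free Python. The sketch below assumes Michaelis–Menten kinetics and uses hypothetical synthetic data and a coarse grid search; the paper's actual pipeline relies on EnzymeML, ontologies, and DWSIM, none of which is reproduced here.

```python
# Minimal kinetics-evaluation sketch: fit Michaelis-Menten parameters
# (Vmax, Km) to (substrate concentration, reaction rate) pairs.
# Data and parameter ranges are hypothetical, for illustration only.

def michaelis_menten(s, vmax, km):
    """Reaction rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def fit_mm(data, vmax_range, km_range, steps=200):
    """Return (vmax, km) minimizing squared error over a parameter grid."""
    best = (None, None, float("inf"))
    for i in range(steps + 1):
        vmax = vmax_range[0] + (vmax_range[1] - vmax_range[0]) * i / steps
        for j in range(steps + 1):
            km = km_range[0] + (km_range[1] - km_range[0]) * j / steps
            err = sum((v - michaelis_menten(s, vmax, km)) ** 2
                      for s, v in data)
            if err < best[2]:
                best = (vmax, km, err)
    return best[:2]

# Noise-free synthetic measurements generated from Vmax = 2.0, Km = 0.5:
data = [(s / 10, michaelis_menten(s / 10, 2.0, 0.5)) for s in range(1, 21)]
vmax, km = fit_mm(data, vmax_range=(0.5, 4.0), km_range=(0.1, 2.0))
```

On real, noisy measurements one would replace the grid search with a proper nonlinear least-squares routine; the grid keeps the example self-contained.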
Hoosiers’ Health in a Changing Climate: A Report from the Indiana Climate Change Impacts Assessment
In the coming decades, Indiana’s changing climate will bring with it higher temperatures, longer heat waves, more extremely hot days, and more frequent extreme storm events. Those changes will affect the health of Hoosiers in every part of the state. This report from the Indiana Climate Change Impacts Assessment (IN CCIA) describes historical and future climate-related health impacts that affect Hoosiers.
Transcript errors generate amyloid-like proteins in human cells
Aging is characterized by the accumulation of proteins that display amyloid-like behavior. However, the molecular mechanisms by which these proteins arise remain unclear. Here, we demonstrate that amyloid-like proteins are produced in a variety of human cell types, including stem cells, brain organoids, and fully differentiated neurons, by mistakes that occur in messenger RNA molecules. Some of these mistakes generate mutant proteins already known to cause disease, while others generate proteins that have not been observed before. Moreover, we show that these mistakes increase when cells are exposed to DNA damage, a major hallmark of human aging. Taken together, these experiments suggest a mechanistic link between the normal aging process and age-related diseases.
