A Back-to-Basics Empirical Study of Priority Queues
The theory community has proposed several new heap variants in the recent
past which have remained largely untested experimentally. We take the field
back to the drawing board, with straightforward implementations of both classic
and novel structures using only standard, well-known optimizations. We study
the behavior of each structure on a variety of inputs, including artificial
workloads, workloads generated by running algorithms on real map data, and
workloads from a discrete event simulator used in recent systems networking
research. We provide observations about which characteristics are most
correlated to performance. For example, we find that the L1 cache miss rate
appears to be strongly correlated with wallclock time. We also provide
observations about how the input sequence affects the relative performance of
the different heap variants. For example, we show (both theoretically and in
practice) that certain random insertion-deletion sequences are degenerate and
can lead to misleading results. Overall, our findings suggest that while the
conventional wisdom holds in some cases, it is sorely mistaken in others.
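The degenerate random insertion-deletion sequences the abstract warns about are easy to reproduce. A minimal sketch of such an artificial workload using Python's `heapq` binary heap, assuming a 50/50 mix of inserts and delete-mins with uniformly random keys (the function name and parameters are illustrative, not taken from the paper's benchmark harness):

```python
import heapq
import random

def random_insert_delete_workload(n_ops, seed=0):
    """Run a random sequence of inserts and delete-mins on a binary heap.

    Hypothetical illustration of the artificial workloads the study
    describes; the paper's actual benchmark harness is not shown here.
    """
    rng = random.Random(seed)
    heap = []      # the live heap
    deleted = []   # values returned by delete-min, in order
    for _ in range(n_ops):
        # With the heap empty we must insert; otherwise flip a fair coin.
        if not heap or rng.random() < 0.5:
            heapq.heappush(heap, rng.random())
        else:
            deleted.append(heapq.heappop(heap))
    return heap, deleted

heap, deleted = random_insert_delete_workload(10_000)
# Sanity-check the binary-heap invariant: each parent <= both children.
assert all(heap[i] <= heap[c]
           for i in range(len(heap))
           for c in (2 * i + 1, 2 * i + 2) if c < len(heap))
```

Under random keys, a delete-min tends to remove recently inserted small elements near the top of the heap, so such sequences may exercise the structure far less than real workloads do, which is one way a benchmark built on them can mislead.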
Scaling of information costs in firms
What a firm does is more revealing than how much it makes, but firms are
often described with economic metrics. Here, we characterize what firms read,
their information footprint, using a data set of hundreds of millions of
records of news articles accessed by employees in millions of firms. We relate
a firm's information footprint with economic variables, showing that the former
grows superlinearly with the latter. This exaggerates the classic Zipf's law
inequality in the economic size of firms and reveals an economy of scale with
respect to information. Second, we discover that the reading habits of firms
are of limited diversity. Firms above a certain size reduce the relative
diversity of information they consume, indicating the sudden onset of a
coordination cost. Third, we reconstruct the topic graph firms inhabit and
propose a simple model of firm growth in this space. Firms adhere to a mixed
strategy of local exploration and recurrent exploitation on the topic graph.
This strategy is costly, and we predict that firms consume a prodigious amount
of information over their lifetime because they cumulatively add to their
information portfolio instead of diversifying. This shows that the costs of
information and the structure of the space of ideas provide a useful but little-explored perspective on firm growth.
Forest Management: are Double or Mixed Rotations Desirable?
In this paper, we study a particular uneven-aged forest stand management pattern that is often advocated in practice. The forest structure under consideration is similar to a normalized forest à la Faustmann, with the following difference: rather than being single-aged, each forest tract contains trees of two age classes, so that it is subject to a form of selective cutting. Each harvest involves all of the older trees and only a fraction of the younger ones; hence the name mixed rotation. Trees left standing at harvest help stimulate natural regeneration and improve various environmental and amenity characteristics of the forest. We model this effect by using a cost function that varies with the harvest rate of younger trees. We derive the properties that this cost function must exhibit in order for some form of mixed rotation to be superior to the conventional single rotation à la Faustmann; we also characterize the mixed rotation in terms of duration and the harvest rate of younger trees, and we compare its properties with Faustmann's rule.
Keywords: forest management, Faustmann's rule, normal forest, synchronized forest, uneven-aged lots, amenity value, mixed rotation, selective cutting
How binding are legal limits? Transitions from temporary to permanent work in Spain
This paper studies the duration pattern of fixed-term contracts and the determinants of their conversion into permanent ones in Spain, where the share of fixed-term employment is the highest in Europe. We estimate a duration model for temporary employment, with competing risks of terminating into permanent employment versus alternative states, and flexible duration dependence. We find that conversion rates are generally below 10%. Our estimated conversion rates roughly increase with tenure, with a pronounced spike at the legal limit, when there is no legal way to retain the worker on a temporary contract. We argue that estimated differences in conversion rates across categories of workers can stem from differences in worker outside options and thus the power to credibly threaten to quit temporary jobs.
Keywords: fixed-term contracts, duration models
Reducing CSF partial volume effects to enhance diffusion tensor imaging metrics of brain microstructure
Technological advances over recent decades now allow for in vivo observation of human brain tissue through the use of neuroimaging methods. While this field originated with techniques capable of capturing macrostructural details of brain anatomy, modern methods such as diffusion tensor imaging (DTI) that are now regularly implemented in research protocols have the ability to characterize brain microstructure. DTI has been used to reveal subtle micro-anatomical abnormalities in the prodromal phase of various diseases and also to delineate "normal" age-related changes in brain tissue across the lifespan. Nevertheless, imaging artifacts in DTI remain a significant limitation for identifying true neural signatures of disease and brain-behavior relationships. Cerebrospinal fluid (CSF) contamination of brain voxels is a main source of error on DTI scans that causes partial volume effects and reduces the accuracy of tissue characterization. Several methods have been proposed to correct for CSF artifact, though many of these methods introduce new limitations that may preclude certain applications. The purpose of this review is to discuss the complexity of signal acquisition as it relates to CSF artifact on DTI scans and to review methods of CSF suppression in DTI. We will then discuss a technique that has recently been shown to effectively suppress the CSF signal in DTI data, resulting in fewer errors and improved measurement of brain tissue. This approach and related techniques have the potential to significantly improve our understanding of "normal" brain aging and of neuropsychiatric and neurodegenerative diseases. Considerations for next-level applications are discussed.
Combining Binary Search Trees
We present a general transformation for combining a constant number of binary search tree data structures (BSTs) into a single BST whose running time is within a constant factor of the minimum of any "well-behaved" bound on the running time of the given BSTs, for any online access sequence. (A BST has a well-behaved bound with f(n) overhead if it spends at most O(f(n)) time per access and its bound satisfies a weak sense of closure under subsequences.) In particular, we obtain a BST data structure that is O(log log n) competitive, satisfies the working set bound (and thus satisfies the static finger bound and the static optimality bound), satisfies the dynamic finger bound, satisfies the unified bound with an additive O(log log n) factor, and performs each access in worst-case O(log n) time.
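The working set bound mentioned above charges each access a cost logarithmic in its working-set number: the number of distinct keys accessed since that key's previous access (or since the start of the sequence for a first access). A minimal sketch computing these numbers for an access sequence, under that standard definition (this is background for reading the bound, not part of the paper's transformation):

```python
def working_set_numbers(accesses):
    """For each access, return the number of distinct keys touched since
    the same key was previously accessed, counting the key itself.
    For a first access, the whole history so far is the window.
    """
    last_seen = {}  # key -> index of its most recent access
    out = []
    for i, key in enumerate(accesses):
        if key in last_seen:
            # Window from the previous access of `key` up to now, inclusive.
            window = accesses[last_seen[key]: i + 1]
        else:
            window = accesses[: i + 1]
        out.append(len(set(window)))
        last_seen[key] = i
    return out

# Recently accessed keys get small working-set numbers.
print(working_set_numbers(list("abcabc")))  # → [1, 2, 3, 3, 3, 3]
```

The working set bound then says total access cost is O(sum of log of these numbers); a structure meeting it is automatically fast on sequences with strong temporal locality, which is why the bound implies static optimality.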