
    An O(n log n) algorithm for the two-machine flow shop problem with controllable machine speeds

    Paper presented at the AEA Conference in Göteborg, Sweden, 9-11 May 1996. In this paper we discuss the influence of tax shifting on wages and employment. The paper is related to earlier research in this field, both for the Netherlands and for other European welfare states. Our approach differs in that we pay explicit attention to the well-known theoretical result that it does not matter which side of the market is taxed (Dalton's Law). We analyse the mechanisms behind tax shifting, and whether a shift from the employers' to the employees' burden has an influence on wages and employment. The paper discusses the influence of taxes on wages and employment in various bargaining settings: the perfect competition model, right-to-manage models (including that of bilateral monopoly) and efficiency wage models are analysed. We conclude that the results depend on the framework used to describe wage-setting behaviour. The theorem that it is irrelevant which side of the market is taxed does not hold for right-to-manage and efficiency wage models. In estimations for the Netherlands, the elasticity of wage costs with respect to employers' taxes is usually found to lie around 0.9, whereas the elasticity with respect to employees' taxes is usually found to lie around 0.4. This apparent violation of Dalton's Law has never been explained before; however, it can be explained from our analysis. Moreover, we show the importance of this result for the impact of a recent tax reform in the Netherlands on wages. Keywords: public economics
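    As a notational aside (the symbols below are assumptions for illustration, not taken from the paper), the two elasticities quoted above can be written as

    \[ \varepsilon_{er} = \frac{\partial \ln W_c}{\partial \ln (1 + t_{er})} \approx 0.9, \qquad \varepsilon_{ee} = \frac{\partial \ln W_c}{\partial \ln \frac{1}{1 - t_{ee}}} \approx 0.4, \]

    where \(W_c\) denotes wage costs and \(t_{er}\), \(t_{ee}\) the employers' and employees' tax rates. Under Dalton's Law only the total tax wedge \((1+t_{er})/(1-t_{ee})\) would matter, so the two elasticities should coincide; the gap between 0.9 and 0.4 is the apparent violation the abstract refers to.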

    Diffusion and electrical properties of Boron and Arsenic doped poly-Si and poly-Ge_xSi_{1-x} (x ≈ 0.3) as gate material for sub-0.25 µm complementary metal oxide semiconductor applications

    In this paper the texture, morphology, diffusion and electrical (de)activation of dopants in polycrystalline GexSi1-x and Si have been studied in detail. For gate doping B+, BF2+ and As+ were used, and thermal budgets were chosen to be compatible with deep submicron CMOS processes. Diffusion of dopants is different in GeSi alloys: B diffuses significantly more slowly, whereas As diffuses much faster in GeSi. For B-doped samples both electrical activation and mobility are higher compared to poly-Si. Also, for the first time, data for BF2+-doped layers are presented; these show the same trend as the B-doped samples but with an overall higher sheet resistance. For arsenic doping, activation and mobility are lower compared to poly-Si, resulting in a higher sheet resistance. The dopant deactivation due to long low-temperature steps after the final activation anneal is also found to be quite different: boron-doped GeSi samples show considerably reduced deactivation, whereas arsenic shows a higher deactivation rate. The electrical properties are interpreted in terms of the different grain size, the quality and properties of the grain boundaries, defects, dopant clustering and segregation, and the solid solubility of the dopants.

    Multisource Self-calibration for Sensor Arrays

    Calibration of a sensor array is more involved if the antennas have direction-dependent gains and multiple calibrator sources are simultaneously present. We study this case for a sensor array with arbitrary geometry but identical elements, i.e. elements with the same direction-dependent gain pattern. A weighted alternating least squares (WALS) algorithm is derived that iteratively solves for the direction-independent complex gains of the array elements, their noise powers and their gains in the direction of the calibrator sources. An extension of the problem is the case where the apparent calibrator source locations are unknown, e.g., due to refractive propagation paths. For this case, the WALS method is supplemented with weighted subspace fitting (WSF) direction finding techniques. Using Monte Carlo simulations we demonstrate that both methods are asymptotically statistically efficient and converge within two iterations even in cases of low SNR.
    Comment: 11 pages, 8 figures
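    As a rough illustration of the covariance data model behind this abstract (a minimal sketch, not the authors' WALS algorithm), the snippet below fits per-element complex gains, calibrator source powers and noise powers to a sample covariance with a generic nonlinear least-squares solver; the array size, source count and steering matrix are invented, and the usual scale and common-phase ambiguities are left unresolved.

        # Sketch only: fit R = diag(g) A diag(sigma) A^H diag(g)^H + diag(noise)
        # with generic nonlinear least squares (not the paper's WALS iteration).
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        P, Q = 8, 2                                    # elements and calibrator sources (assumed)
        A = np.exp(2j * np.pi * rng.random((P, Q)))    # assumed known steering matrix

        def model_cov(g, sigma, noise):
            G = np.diag(g)
            return G @ A @ np.diag(sigma) @ A.conj().T @ G.conj().T + np.diag(noise)

        def residuals(x):
            g = x[:P] + 1j * x[P:2 * P]                # complex gains packed as real + imag
            sigma, noise = x[2 * P:2 * P + Q], x[2 * P + Q:]
            d = model_cov(g, sigma, noise) - R_meas
            return np.concatenate([d.real.ravel(), d.imag.ravel()])

        # Simulate a "measured" covariance for the demo.
        g_true = (1 + 0.1 * rng.standard_normal(P)) * np.exp(0.2j * rng.standard_normal(P))
        R_meas = model_cov(g_true, np.array([2.0, 1.0]), 0.1 * np.ones(P))

        x0 = np.concatenate([np.ones(P), np.zeros(P), np.ones(Q), 0.5 * np.ones(P)])
        fit = least_squares(residuals, x0)
        print("residual norm after fit:", np.linalg.norm(fit.fun))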

    Fundamental Imaging Limits of Radio Telescope Arrays

    The fidelity of radio astronomical images is generally assessed by practical experience, i.e. using rules of thumb, although some aspects and cases have been treated rigorously. In this paper we present a mathematical framework capable of describing the fundamental limits of radio astronomical imaging problems. Although the data model assumes a single snapshot observation, i.e. variations in time and frequency are not considered, this framework is sufficiently general to allow extension to synthesis observations. Using tools from statistical signal processing and linear algebra, we discuss the tractability of the imaging and deconvolution problem, the redistribution of noise in the map by the imaging and deconvolution process, the covariance of the image values due to propagation of calibration errors and thermal noise, and the upper limit on the number of sources tractable by self-calibration. The combination of the covariance of the image values and the number of tractable sources determines the effective noise floor achievable in the imaging process. The effective noise provides a better figure of merit than dynamic range since it includes the spatial variations of the noise. Our results provide handles for improving the imaging performance by design of the array.
    Comment: 12 pages, 8 figures

    Optimal staffing under an annualized hours regime using Cross-Entropy optimization

    This paper discusses staffing under annualized hours. Staffing is the selection of the most cost-efficient workforce to cover workforce demand. Annualized hours measure working time per year instead of per week, relaxing the restriction that employees work the same number of hours every week. To solve the underlying combinatorial optimization problem, this paper develops a Cross-Entropy optimization implementation that includes a penalty function and a repair function to guarantee feasible solutions. Our experimental results show that Cross-Entropy optimization is efficient across a broad range of instances: real-life sized instances are solved in seconds, significantly outperforming an MILP formulation solved with CPLEX, while the solution quality of Cross-Entropy closely approaches the optimal solutions obtained by CPLEX. Our Cross-Entropy implementation offers an outstanding method for real-time decision making, for example in response to unexpected staff illnesses, and for scenario analysis.
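    As a toy illustration of the Cross-Entropy approach described above (a sketch under invented data, not the paper's implementation; the repair function is omitted and only a penalty is used), the snippet below selects shift patterns to cover per-period demand at low cost.

        # Sketch: Cross-Entropy optimization over binary pattern selections with a
        # penalty for uncovered demand. All data and parameters are made up.
        import numpy as np

        rng = np.random.default_rng(1)
        n_patterns, n_periods = 30, 7
        cover = rng.integers(0, 2, size=(n_patterns, n_periods))   # periods each pattern covers
        cost = rng.integers(1, 10, size=n_patterns).astype(float)  # cost of selecting each pattern
        demand = rng.integers(1, 5, size=n_periods)                # staff required per period
        PENALTY = 100.0                                            # assumed penalty weight

        def objective(x):
            shortfall = np.maximum(demand - x @ cover, 0).sum()    # uncovered demand
            return float(x @ cost + PENALTY * shortfall)

        p = np.full(n_patterns, 0.5)                # Bernoulli sampling probabilities
        n_samples, n_elite, smooth = 200, 20, 0.7
        best_x, best_val = None, np.inf
        for _ in range(50):
            X = (rng.random((n_samples, n_patterns)) < p).astype(int)
            scores = np.array([objective(x) for x in X])
            order = np.argsort(scores)
            if scores[order[0]] < best_val:
                best_val, best_x = scores[order[0]], X[order[0]]
            elite = X[order[:n_elite]]                              # best-scoring samples
            p = smooth * elite.mean(axis=0) + (1 - smooth) * p      # smoothed CE update
        print("patterns selected:", int(best_x.sum()), "best penalized cost:", best_val)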

    Metabifurcation analysis of a mean field model of the cortex

    Mean field models (MFMs) of cortical tissue incorporate salient features of neural masses to model activity at the population level. One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes relevant to brain dynamics. We study the physiological parameter space of an MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two families. After investigating and characterizing these, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli, distribution of model parameters, and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They emerge when the parameter space is partitioned according to bifurcation responses. This partitioning cannot be achieved by the investigation of only a small number of parameter sets, but is the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible sets. Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.

    Calibration Challenges for Future Radio Telescopes

    Instruments for radio astronomical observations have come a long way. While the first telescopes were based on very large dishes and 2-antenna interferometers, current instruments consist of dozens of steerable dishes, whereas future instruments will be even larger distributed sensor arrays with a hierarchy of phased array elements. For such arrays to provide meaningful output (images), accurate calibration is of critical importance. Calibration must solve for the unknown antenna gains and phases, as well as the unknown atmospheric and ionospheric disturbances. Future telescopes will have a large number of elements and a large field of view. In this case the parameters are strongly direction dependent, resulting in a large number of unknown parameters even if appropriately constrained physical or phenomenological descriptions are used. This makes calibration a daunting parameter estimation task, which is reviewed from a signal processing perspective in this article.
    Comment: 12 pages, 7 figures, 20 subfigures. The title quoted in the metadata is the title after release / final editing.

    An Attempt to Detect the Galactic Bulge at 12 microns with IRAS

    Surface brightness maps at 12 microns, derived from observations with the Infrared Astronomical Satellite (IRAS), are used to estimate the integrated flux at this wavelength from the Galactic bulge as a function of galactic latitude along the minor axis. A simple model was used to remove Galactic disk emission (e.g. unresolved stars and dust) from the IRAS measurements. The resulting estimates are compared with predictions for the 12 micron bulge surface brightness based on observations of complete samples of optically identified M giants in several minor-axis bulge fields. No evidence is found for any significant component of 12 micron emission in the bulge other than that expected from the optically identified M star sample plus normal, lower-luminosity stars. Known large-amplitude variables and point sources from the IRAS catalogue contribute only a small fraction of the total 12 micron flux.
    Comment: Accepted for publication in ApJ; 13 pages of text including tables in MS WORD97-generated postscript; 3 figures in postscript by Sigma Plot

    OH-selected AGB and post-AGB objects I. Infrared and maser properties

    Using 766 compact objects from a survey of the Galactic plane in the 1612-MHz OH line, new light is cast on the infrared properties of evolved stars on the TP-AGB and beyond. The usual mid-infrared selection criteria, based on IRAS colours, largely fail to distinguish early post-AGB stages. A two-colour diagram from narrower-band MSX flux densities, with bimodal distributions, provides a better tool for doing so. Four mutually consistent selection criteria for OH-masing red PPNe are given, as well as two for early post-AGB masers and one for all post-AGB masers, including the earliest ones. All these criteria miss a group of blue, high-outflow post-AGB sources with 60-µm excess; these will be discussed in detail in Paper II. The majority of post-AGB sources show regular double-peaked spectra in the OH 1612-MHz line, with fairly low outflow velocities, although the fractions of single peaks and irregular spectra may vary with age and mass. The OH flux density shows a fairly regular relation with the stellar flux and the envelope optical depth, with the maser efficiency increasing with IRAS colour R21. The OH flux density is linearly correlated with the 60-µm flux density.
    Comment: 16 pages, LaTeX, 22 figures, AJ (accepted)

    Tidewater calving

    This is the publisher's version, copyright by the International Glaciological Society. Data from Columbia Glacier are used to identify processes that control calving from a temperate tidewater glacier and to re-evaluate models that have been proposed to describe iceberg calving. Since 1981, Columbia Glacier has been retreating rapidly, with an almost seven-fold increase in calving rate from the mid-1970s to 1993. At the same time, the speed of the glacier increased almost as much, so that the actual rate of retreat increased more slowly. According to the commonly accepted model, the calving rate is linearly related to the water depth at the terminus, with retreat of the glacier snout into deeper water leading to larger calving rates and accelerated retreat. The Columbia Glacier data show that the calving rate is not simply linked to observed quantities such as water depth or stretching rate near the terminus. During the retreat, the thickness at the terminus appears to be linearly correlated with the water depth; at the terminus, the thickness in excess of flotation remained at about 50 m. This suggests that retreat may be initiated when the terminal thickness becomes too small, with the rate of retreat controlled by the rate at which the snout is thinning and by the basal slope. The implication is that the rapid retreat of Columbia Glacier (and other comparable tidewater glaciers) is not the result of an increase in calving as the glacier retreated into deeper water. Instead, the retreat was initiated and maintained by thinning of the glacier. For Columbia Glacier, the continued thinning is probably associated with the increase in glacier speed, and retreat may be expected to continue as long as these large speeds are maintained. It is not clear what mechanism may be responsible for the speed-up, but the most likely candidate is a change in basal conditions or subglacial drainage. Consequently, the behavior of tidewater glaciers may be controlled by processes acting at the glacier bed rather than by what happens at the glacier terminus.
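    The "thickness in excess of flotation" quoted above follows the standard flotation criterion; as a brief sketch using typical densities (assumed here, not taken from the paper):

    \[ H_f = \frac{\rho_w}{\rho_i}\, d_w, \qquad H_{\mathrm{ex}} = H - H_f, \]

    where \(d_w\) is the water depth at the terminus, \(H\) the ice thickness, \(\rho_w \approx 1030\ \mathrm{kg\,m^{-3}}\) for sea water and \(\rho_i \approx 917\ \mathrm{kg\,m^{-3}}\) for ice. With \(H_{\mathrm{ex}}\) held near 50 m, the terminus thickness tracks the water depth roughly linearly, as the abstract reports.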