The Effects of Privatization on Efficiency: How Does Privatization Work?
Uncovering the effects of privatization is difficult, because the privatization of a particular firm is usually not an accident. This paper tests the effects of privatization on productive and allocative (market) efficiency using a rich panel data set of 22 privatized cement plants from Turkey over the 1983-99 period. Since all public cement firms were privatized and we have pre- and post-privatization data for all of them, we are able to avoid the problem of endogeneity associated with sample selection. Our analysis goes beyond examining privatization effects and explores how privatization actually works. Changes in the objectives of the firm (ownership effect) and changes in market structure (environment effect) may both be responsible for privatization outcomes. We find that ownership effects are sufficient to achieve improvements in labor productivity. Our results on allocative efficiency, however, depend on changes in the competitive environment. While all plants seem to improve labor productivity through workforce reductions, plants privatized to foreign buyers also increase their capital and investment significantly. (c) 2006 Elsevier Ltd. All rights reserved.
Rapid Corner Detection Using FPGAs
In order to perform precision landings for space missions, a control system must be accurate to within ten meters. Feature detection applied to images taken during descent and correlated against the provided base image is computationally expensive, requiring tens of seconds of processing time for a single image, while the goal is to process multiple images per second. To solve this problem, this algorithm moves that processing load from the central processing unit (CPU) to a reconfigurable field programmable gate array (FPGA), which can compute data in parallel at very high clock speeds. The workload of the processor then becomes simpler: read an image from the camera, transfer it into the FPGA, and read the results back from the FPGA. The Harris Corner Detector uses the determinant and trace to compute a corner score, with each step of the computation occurring on independent clock cycles. Essentially, the image is converted into x and y derivative maps. Once three lines of pixel information have been queued up, valid pixel derivatives are clocked into the product and averaging phase of the pipeline. Each x and y derivative is squared, the product of the Ix and Iy derivatives is computed, and each value is stored in a WxN buffer, where W represents the size of the integration window and N is the width of the image. In this particular case, a window size of 5 was chosen, and the image is 640 × 480. Over the WxN window, an equidistant Gaussian is applied (to bring out the stronger corners), and then each value in the window is summed and stored. The required components of the equation are then in place, and it is just a matter of taking the determinant and trace. It should be noted that the trace is weighted by a constant k, a value found empirically to be within 0.04 to 0.15 (0.05 in this implementation). The constant k determines the number of corners available to be compared against a threshold sigma to mark a valid corner. After a fixed delay from when the first pixel is clocked in (to fill the pipeline), a score is produced on each successive clock. This score corresponds to an (x, y) location within the image. If the score is higher than the predetermined threshold sigma, a flag is set high and the location is recorded.
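For reference, the score described here is the standard Harris measure, R = det(M) - k * trace(M)^2, accumulated over an integration window of derivative products. The following is a minimal NumPy/SciPy software sketch of that computation, not the streaming FPGA pipeline itself; the function name, derivative kernels, and Gaussian width are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def harris_response(image, k=0.05, window=5, threshold=None):
    """Software sketch of the Harris corner score: R = det(M) - k * trace(M)^2."""
    img = image.astype(np.float64)

    # x and y derivative maps (simple central-difference kernels for illustration)
    kx = np.array([[-1.0, 0.0, 1.0]])
    ix = convolve(img, kx)        # horizontal derivative Ix
    iy = convolve(img, kx.T)      # vertical derivative Iy

    # Products of derivatives, smoothed over the WxW integration window
    # (the FPGA design applies a Gaussian over the window to favor strong corners)
    sigma = window / 3.0
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)

    # Corner score from determinant and trace of the structure matrix M
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    score = det - k * trace * trace

    if threshold is None:
        return score
    # Flag the (x, y) locations whose score exceeds the threshold
    ys, xs = np.nonzero(score > threshold)
    return score, list(zip(xs, ys))
```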
Field Programmable Gate Array Apparatus, Method, and Computer Program
An apparatus is provided that includes a plurality of modules, a plurality of memory banks, and a multiplexor. Each module includes at least one agent that interfaces between the module and a memory bank. Each memory bank includes an arbiter that interfaces between the at least one agent of each module and the memory bank. The multiplexor is configured to assign data paths between the at least one agent of each module and the corresponding arbiter of each memory bank. Based on the assigned data path, the at least one agent of each module is configured to read data from the corresponding arbiter of the memory bank or to write modified data to the corresponding arbiter of the memory bank.
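As a rough behavioral model of the arrangement described above (the class names, the FIFO grant policy, and the Python framing are assumptions for illustration, not taken from the patent), the multiplexor assigns each module's agent a data path to one bank's arbiter, and the arbiter then services that agent's reads and writes:

```python
from collections import deque

class Arbiter:
    """One arbiter per memory bank; services one pending request per cycle (FIFO assumed)."""
    def __init__(self, bank):
        self.bank = bank          # backing storage, e.g. a list of words
        self.queue = deque()      # pending (agent, op, addr, data) requests

    def request(self, agent, op, addr, data=None):
        self.queue.append((agent, op, addr, data))

    def tick(self):
        if not self.queue:
            return
        agent, op, addr, data = self.queue.popleft()
        if op == "read":
            agent.receive(self.bank[addr])
        else:                     # "write": store modified data back into the bank
            self.bank[addr] = data

class Agent:
    """Interfaces a module to the memory bank chosen for it by the multiplexor."""
    def __init__(self, name):
        self.name, self.arbiter, self.last_read = name, None, None
    def receive(self, value):
        self.last_read = value

class Multiplexor:
    """Assigns the data path between each module's agent and a bank's arbiter."""
    def __init__(self, arbiters):
        self.arbiters = arbiters
    def assign(self, agent, bank_index):
        agent.arbiter = self.arbiters[bank_index]

# Example: two banks, two agents, one read and one write over the assigned paths
banks = [Arbiter([0] * 16), Arbiter([0] * 16)]
mux = Multiplexor(banks)
a, b = Agent("a"), Agent("b")
mux.assign(a, 0)
mux.assign(b, 1)
a.arbiter.request(a, "read", 3)
b.arbiter.request(b, "write", 3, data=42)
for arb in banks:
    arb.tick()
```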
SAD5 Stereo Correlation Line-Striping in an FPGA
High-precision SAD5 stereo computations can be performed in an FPGA (field-programmable gate array) at much higher speeds than possible in a conventional CPU (central processing unit), but this uses large amounts of FPGA resources that scale with image size. Of the two key resources in an FPGA, Slices and BRAM (block RAM), Slices scale linearly with image size in the new algorithm, while BRAM scales quadratically with image size. An approach was developed to trade latency for BRAM by sub-windowing the image vertically into overlapping strips and stitching the outputs together to create a single continuous disparity output. In stereo, the general rule of thumb is that the disparity search range must be 1/10 of the image width. In the new algorithm, BRAM usage scales linearly with the disparity search range and again linearly with line width, so a doubling of image size, say from 640 to 1,280, would in the previous design mean an effective 4× increase in BRAM usage: 2× for line width, 2× again for disparity search range. The minimum strip size is twice the search range and produces an output strip width equal to the disparity search range. So, assuming a disparity search range of 1/10 of the image width, 10 sequential runs of the minimum strip size produce a full output image. This approach allowed the innovators to fit 1280 × 960 SAD5 stereo disparity in less than 80 BRAMs and 52k Slices on a Virtex-5 LX330T, 25% and 24% of resources, respectively. Using a 100-MHz clock, this build would perform stereo at 39 Hz. Of particular interest to JPL is that there is a flight-qualified version of the Virtex-5: this could produce stereo results even for very large image sizes, three orders of magnitude faster than could be computed on the PowerPC 750 flight computer. The work covered in the report allows the stereo algorithm to run on much larger images than before, using much less BRAM. This opens up choices for a smaller flight FPGA (which saves power and space), or for other algorithms in addition to SAD5 to be run on the same FPGA.
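The striping arithmetic can be sketched as follows; this is an illustrative Python plan of how overlapping vertical strips might be laid out under the stated rule (minimum strip width of twice the search range, valid output strip equal to the search range), with hypothetical function and variable names rather than the report's actual implementation, and assuming the disparity search extends toward lower column indices.

```python
def stripe_plan(image_width, disparity_range, strip_width=None):
    """
    Plan overlapping vertical strips for SAD stereo so each run emits one strip
    of valid disparities; stitching the output strips reproduces a full-width
    disparity image. The minimum strip width is twice the search range.
    """
    if strip_width is None:
        strip_width = 2 * disparity_range            # minimum strip size
    assert strip_width >= 2 * disparity_range
    out_width = strip_width - disparity_range        # valid output columns per strip
    strips = []
    x_out = 0
    while x_out < image_width:
        x0 = max(0, x_out - disparity_range)         # extra columns needed for the search
        x1 = min(image_width, x0 + strip_width)
        strips.append((x0, x1, x_out, min(image_width, x_out + out_width)))
        x_out += out_width
    return strips

# Example: 1,280-pixel-wide image with a disparity search range of 128 (1/10 of the width)
for in_start, in_end, out_start, out_end in stripe_plan(1280, 128):
    print(f"process columns [{in_start}, {in_end}) -> disparities for [{out_start}, {out_end})")
```

With a 1,280-pixel-wide image and a 128-pixel search range, the plan above yields the 10 sequential runs of the minimum strip size mentioned in the abstract.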
FPGA Vision Data Architecture
JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with its own custom interface. Each memory module had also been designed for direct access to memory or to another memory module.
A bargaining procedure leading to the serial rule in games with veto players
This paper studies an allocation procedure for coalitional games with veto players. The procedure is similar to the one presented by Arin and Feltkamp (J Math Econ 43:855-870, 2007), which is based on Dagan et al. (Games Econ Behav 18:55-72, 1997). A distinguished player makes a proposal that the remaining players must accept or reject, and conflict is resolved bilaterally between the rejector and the proposer. We allow the proposer to make sequential proposals over several periods. If responders are myopic maximizers (i.e., they consider each period in isolation), the only equilibrium outcome is the serial rule of Arin and Feltkamp (Eur J Oper Res 216:208-213, 2012), regardless of the order of moves. If all players are fully rational, the serial rule still arises as the unique subgame perfect equilibrium outcome if the order of moves is such that stronger players respond to the proposal after weaker ones.
Quanta Burst Photography
Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons, and capturing their time-of-arrival with high timing precision. While these sensors were limited to single-pixel or low-resolution devices in the past, recently, large (up to 1 MPixel) SPAD arrays have been developed. These single-photon cameras (SPCs) are capable of capturing high-speed sequences of binary single-photon images with no read noise. We present quanta burst photography, a computational photography technique that leverages SPCs as passive imaging devices for photography in challenging conditions, including ultra low-light and fast motion. Inspired by recent success of conventional burst photography, we design algorithms that align and merge binary sequences captured by SPCs into intensity images with minimal motion blur and artifacts, high signal-to-noise ratio (SNR), and high dynamic range. We theoretically analyze the SNR and dynamic range of quanta burst photography, and identify the imaging regimes where it provides significant benefits. We demonstrate, via a recently developed SPAD array, that the proposed method is able to generate high-quality images for scenes with challenging lighting, complex geometries, high dynamic range and moving objects. With the ongoing development of SPAD arrays, we envision quanta burst photography finding applications in both consumer and scientific photography.
Comment: A version with better-quality images can be found on the project webpage: http://wisionlab.cs.wisc.edu/project/quanta-burst-photography
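As a rough illustration of the merging step only (the paper's alignment and weighting schemes are omitted), already-aligned binary SPAD frames can be averaged and the photon flux recovered by inverting the exponential saturation of the binary measurement model; the sensor parameters and function name below are hypothetical, not taken from the paper.

```python
import numpy as np

def merge_binary_frames(binary_frames, exposure_s=1e-4, quantum_efficiency=0.4, eps=1e-6):
    """
    Merge a stack of already-aligned single-photon (0/1) frames into a linear
    intensity image. Each pixel of a binary SPAD frame is 1 with probability
    1 - exp(-eta * phi * tau), so the maximum-likelihood flux estimate from the
    empirical detection rate p is phi_hat = -ln(1 - p) / (eta * tau).
    """
    frames = np.asarray(binary_frames, dtype=np.float64)   # shape: (T, H, W)
    p = frames.mean(axis=0)                                 # per-pixel detection rate
    p = np.clip(p, 0.0, 1.0 - eps)                          # avoid log(0) at saturated pixels
    flux = -np.log1p(-p) / (quantum_efficiency * exposure_s)
    return flux

# Example with synthetic data: 2,000 binary frames of a constant-flux scene
rng = np.random.default_rng(0)
true_flux = 5000.0                                          # photons per second
prob = 1.0 - np.exp(-0.4 * true_flux * 1e-4)
frames = rng.random((2000, 8, 8)) < prob
print(merge_binary_frames(frames).mean())                   # close to true_flux
```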
Measuring Children’s Perceptions of Their Mother’s Depression: The Children’s Perceptions of Others’ Depression Scale – Mother Version
Several theoretical perspectives suggest that knowledge of children’s perceptions of and beliefs about their parents’ depression may be critical for understanding its impact on children. This paper describes the development of, and preliminary evidence for the psychometric properties of, a new measure, the Children’s Perceptions of Others’ Depression – Mother Version (CPOD-MV), which assesses theoretically and empirically driven constructs related to children’s understanding of and beliefs about their mothers’ depression. These constructs include children’s perceptions of the severity, chronicity, and impairing nature of their mothers’ depression; self-blame for their mother’s depression; and beliefs about their ability to deal with their mother’s depression by personally coping or alleviating the mother’s depression. The CPOD-MV underwent two stages of development. The first stage comprised (1) a review of the literature to identify the key constructs, (2) focus groups to help generate items, and (3) clinicians’ ratings of the relevance and comprehensibility of the drafted items. The second stage was a study of the measure’s psychometric properties. The literature review, focus groups, and item reduction techniques yielded a 21-item measure. Reliability, factor structure, and discriminant, convergent, and concurrent validity were tested in a sample of 91 10- to 17-year-old children whose mothers had been treated for depression. The scale had good internal consistency, a factor structure suggestive of a single construct, and discriminant, concurrent, convergent, and incremental validity. These results suggest the importance of measuring children’s perceptions of their mothers’ depression, beyond knowledge of mothers’ depression symptom level, when identifying which children of depressed mothers are at greatest risk for emotional and behavioral problems. These findings support continued development and initial clinical applications of the scale.
Is Roger Federer more loss averse than Serena Williams?
Using data from the high-stakes 2013 Dubai professional tennis tournament, we find that, compared with a tied score, (i) male players have a higher serve speed, and thus exhibit more effort, when behind in score, and their serve speeds become less sensitive to losses or gains when the score difference gets too large, and (ii) female players do not change their serve speed when behind, while serving more slowly when ahead. Thus, male players comply more with Prospect Theory, exhibiting greater loss aversion and a stronger reflection effect. Our results are robust to controlling for player fixed effects, and for player characteristics with player random effects. © 2016 Informa UK Limited, trading as Taylor & Francis Group.
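The type of specification described (serve speed regressed on score state, with player fixed or random effects) could be sketched with statsmodels as below; the column names, synthetic data, and coefficients are purely illustrative and are not drawn from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative synthetic data (not the paper's dataset): serve speed by score state
rng = np.random.default_rng(1)
n = 600
state = rng.choice(["tied", "behind", "ahead"], size=n)
df = pd.DataFrame({
    "player": rng.choice([f"p{i}" for i in range(12)], size=n),
    "behind": (state == "behind").astype(int),   # serving while behind in score
    "ahead": (state == "ahead").astype(int),     # serving while ahead in score
})
df["serve_speed"] = 175 + 4 * df["behind"] - 1 * df["ahead"] + rng.normal(0, 8, size=n)

# Player fixed effects via player dummies
fe = smf.ols("serve_speed ~ behind + ahead + C(player)", data=df).fit()

# Player random intercepts (random-effects specification)
re = smf.mixedlm("serve_speed ~ behind + ahead", data=df, groups=df["player"]).fit()

print(fe.params[["behind", "ahead"]])
print(re.params[["behind", "ahead"]])
```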
HIV provider and patient perspectives on the Development of a Health Department “Data to Care” Program: a qualitative study
Background: U.S. health departments have not historically used HIV surveillance data for disease control interventions with individuals, but advances in HIV treatment and surveillance are changing public health practice. Many U.S. health departments are in the early stages of implementing “Data to Care” programs to assist persons living with HIV (PLWH) with engaging in care, based on information collected for HIV surveillance. Stakeholder engagement is a critical first step in the development of these programs. In Seattle-King County, Washington, the health department conducted interviews with HIV medical care providers and PLWH to inform its Data to Care program. This paper describes the key themes of these interviews and traces the evolution of the resulting program. Methods: Disease intervention specialists conducted individual, semi-structured qualitative interviews with 20 PLWH randomly selected from HIV surveillance who had HIV RNA levels >10,000 copies/mL in 2009–2010. A physician investigator conducted key informant interviews with 15 HIV medical care providers. Investigators analyzed de-identified interview transcripts, developed a codebook of themes, independently coded the interviews, and identified the most frequently used codes as well as illustrative quotes for these key themes. We also trace the evolution of the program from 2010 to 2015. Results: PLWH generally accepted the idea of the health department helping PLWH engage in care, and described how hearing about the treatment experiences of HIV-seropositive peers would assist them with engagement in care. Although many physicians were supportive of the Data to Care concept, others expressed concern about potential health department intrusion on patient privacy and the patient-physician relationship. Providers emphasized the need for the health department to coordinate with existing efforts to improve patient engagement. As a result of the interviews, the Data to Care program in Seattle-King County was designed to incorporate an HIV-positive peer component and to ensure coordination with HIV care providers in the process of relinking patients to care. Conclusions: Health departments can build support for Data to Care efforts by gathering the input of key stakeholders, such as HIV medical and social service providers, and by coordinating with clinic-based efforts to re-engage patients in care.