
    Testing Transitivity of Preferences on Two-Alternative Forced Choice Data

    As Duncan Luce and other prominent scholars have pointed out on several occasions, testing algebraic models against empirical data raises difficult conceptual, mathematical, and statistical challenges. Empirical data often result from statistical sampling processes, whereas algebraic theories are nonprobabilistic. Many probabilistic specifications lead to statistical boundary problems and are subject to nontrivial order-constrained statistical inference. The present paper discusses Luce's challenge for a particularly prominent axiom: transitivity. The axiom of transitivity is a central component of many algebraic theories of preference and choice. We offer the most complete solution to date to the challenge in the case of transitivity of binary preference on the theory side and two-alternative forced choice on the empirical side, explicitly for up to five, and implicitly for up to seven, choice alternatives. We also discuss the relationship between our proposed solution and weak stochastic transitivity. We recommend abandoning the latter as a model of transitive individual preferences.
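
    A minimal illustration of the weak stochastic transitivity (WST) criterion discussed in this abstract, assuming a hypothetical matrix of 2AFC choice proportions; the numbers and the `wst_violations` helper are illustrative, not from the paper (which in fact argues against WST as a model of transitive individual preferences):

```python
# Hypothetical sketch: flagging weak stochastic transitivity (WST) violations
# in a matrix of two-alternative forced choice (2AFC) proportions.
# WST: if P(a>b) >= 0.5 and P(b>c) >= 0.5, then P(a>c) >= 0.5.
from itertools import permutations

import numpy as np

# p[i, j] = proportion of trials on which alternative i was chosen over j
# (illustrative numbers only, not data from the paper)
p = np.array([
    [np.nan, 0.65,   0.55,   0.40],
    [0.35,   np.nan, 0.70,   0.60],
    [0.45,   0.30,   np.nan, 0.80],
    [0.60,   0.40,   0.20,   np.nan],
])

def wst_violations(p):
    """Return all ordered triples (a, b, c) that violate WST."""
    n = p.shape[0]
    return [
        (a, b, c)
        for a, b, c in permutations(range(n), 3)
        if p[a, b] >= 0.5 and p[b, c] >= 0.5 and p[a, c] < 0.5
    ]

print(wst_violations(p))  # non-empty here: e.g. (0, 1, 3) forms a cycle
```

    Such a descriptive check says nothing about sampling variability; the paper's point is precisely that testing transitivity on sampled choice proportions requires nontrivial order-constrained statistical inference.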

    Cooperation Without Coordination: Influence Dynamics and the Emergence of Synchrony in Inter-Organizational Networks

    This paper explores the emergence of synchrony in cooperative inter-organizational networks. While some research suggests that synchronizing organizational actions like product releases is a form of collective behavior that generates advantages for organizations, most existing network theory focuses on dyads rather than the larger organizational groups where networked cooperation is relevant. As a result, we know a lot about resource mobilization and information diffusion across dyads, but very little about how cooperation occurs in larger networks where collective behaviors like synchrony are important. Using a simple computational model grounded in prior research on inter-organizational networks, this paper develops a theoretical framework linking temporal dynamics to network theory that sheds light on the emergence of synchrony, why it emerges faster in some networks than others, and how organizations can shape synchrony to their own advantage. Specifically, I find that synchrony emerges from influence across network ties without the need for a central coordinator or an exogenous technology cycle. It emerges through a series of cooptation events across network ties wherein social influence accumulates to synchronize some organizations with others. The magnitude of synchrony and the time to reach it vary predictably with features of network structure such as network size (N), mean degree (K), and tie strength (e), although an unexpected finding is that clustering (CC) diminishes synchrony by generating coalitions with rhythms that vary too widely. These dependencies can be understood with reference to three mechanisms – accelerated, coalitional, and conflicting influence – that shape cooptation dynamics. Finally, intentional coordination across inter-organizational relationships accelerates the time to synchronize the entire network, creating temporal spillovers to non-coordinating organizations; moreover, coordinating organizations benefit from increased synchrony performance – i.e., they increase the relative likelihood that network synchronization tips to their own underlying rhythm. The magnitude of this performance advantage depends on network size (N) and mean degree (K), but not on tie strength (e) or clustering (CC). Support for this research was provided by MIT’s Sloan School of Management.
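
    A minimal sketch of the kind of influence-driven synchronization this abstract describes, assuming a Kuramoto-style phase model on a random network; the paper's actual model specification is not reproduced here, so the dynamics, parameter values, and variable names below are illustrative only:

```python
# A minimal sketch, assuming a Kuramoto-style phase model on a random network.
# Parameters mirror the abstract: network size N, mean degree K, tie strength e.
import numpy as np

rng = np.random.default_rng(0)

N, K, e = 50, 6, 0.5          # network size, mean degree, tie strength
steps, dt = 2000, 0.05

# Erdős–Rényi adjacency matrix with expected mean degree K
A = (rng.random((N, N)) < K / (N - 1)).astype(float)
A = np.triu(A, 1); A = A + A.T

omega = rng.normal(0.0, 0.1, N)        # each organization's intrinsic rhythm
theta = rng.uniform(0, 2 * np.pi, N)   # initial phases

for _ in range(steps):
    # each node is pulled toward the phases of its network neighbors
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + e * coupling)

# order parameter r in [0, 1]: 1 means the whole network beats in unison
r = abs(np.exp(1j * theta).mean())
print(f"synchrony r = {r:.2f}")
```

    Sweeping N, K, and e (and rewiring the graph to raise clustering) in such a toy model would reproduce the kind of structural comparisons the abstract reports, though the paper's own mechanisms and measures may differ.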

    Computed tomography measures of nutrition in patients with end-stage liver disease provide a novel approach to characterize deficits

    Aim: Patients with cirrhosis and end-stage liver disease (ESLD) develop severe nutrition deficits that impact morbidity and mortality. Laboratory measures of nutrition fail to fully assess clinical deficits in muscle mass and fat stores. This study employs computed tomography (CT) imaging to assess muscle mass and subcutaneous and visceral fat stores in patients with ESLD. Methods: This 1:1 case-control study compares ESLD patients with healthy controls. Study patients were selected from a database of ESLD patients using a stratified method to ensure a representative sample based on age, body mass index (BMI), gender, and model for end-stage liver disease (MELD) score. Control patients were trauma patients with a low injury severity score (<10) who had a CT scan during evaluation. Cases and controls were matched for age +/- 5 years, gender, and BMI +/- 2. Results: There were 90 subjects and 90 controls. ESLD patients had lower albumin levels (p<0.001) but similar total protein levels (p=0.72). ESLD patients had a deficit in muscle mass (-19%, p<0.001) and visceral fat (-13%, p<0.001), but similar subcutaneous fat (-1%, p=0.35). ESLD patients at highest risk for sarcopenia included those over age 60, with BMI < 25.0, and of female gender. We found the degree of sarcopenia to be independent of MELD score. Conclusions: These results support previous research demonstrating substantial nutrition deficits in ESLD patients that are not adequately measured by laboratory testing. Patients with ESLD have significant deficits of muscle and visceral fat stores, but a similar amount of subcutaneous fat.
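
    For illustration, the matching rule described in the Methods (age +/- 5 years, same gender, BMI +/- 2) can be expressed as a simple greedy 1:1 match; the records, identifiers, and greedy strategy below are hypothetical, not the study's actual procedure:

```python
# Illustrative sketch of 1:1 case-control matching on age, gender, and BMI.
# Records and the greedy strategy are hypothetical, not the study's procedure.
from dataclasses import dataclass

@dataclass
class Patient:
    pid: str
    age: float
    gender: str
    bmi: float

def greedy_match(cases, controls, age_tol=5.0, bmi_tol=2.0):
    """Pair each case with the first unused control meeting the criteria."""
    unused = list(controls)
    pairs = []
    for case in cases:
        for ctrl in unused:
            if (ctrl.gender == case.gender
                    and abs(ctrl.age - case.age) <= age_tol
                    and abs(ctrl.bmi - case.bmi) <= bmi_tol):
                pairs.append((case, ctrl))
                unused.remove(ctrl)
                break
    return pairs

cases = [Patient("ESLD-01", 58, "F", 24.1), Patient("ESLD-02", 63, "M", 27.8)]
controls = [Patient("CTRL-01", 60, "F", 25.5), Patient("CTRL-02", 66, "M", 28.9)]
print(greedy_match(cases, controls))
```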

    “You Can Sort of Feel It”: Exploring Metacognition and the Feeling of Knowing Among Undergraduate Students

    The metacognitive practice of calibration has traditionally been investigated primarily through quantitative experimental methodologies. This article expands the research scope of metacognitive calibration by offering a qualitative approach to the growing body of literature. More specifically, the current study investigates the learners’ perspective on the calibration process. Ten undergraduate students were selected to participate in a structured interview on their previous calibration performances (five students low in calibration processing and five proficient in calibration processing); ultimately, nine students (N=9) participated in individual interviews. Participant interviews are qualitatively assessed through the lenses of (1) Serra and Metcalfe’s original work on the “feeling of knowing” and (2) self-regulated learning (SRL) theory. Results indicate a difference in feelings of knowing between low and proficient calibrators across a battery of themes: effort, strategies, planning, and evaluation. Implications of the results and directions for future research are explored.

    Signal Intensity Analysis and Optimization for in Vivo Imaging of Cherenkov and Excited Luminescence.

    During external beam radiotherapy (EBRT), in vivo Cherenkov optical emissions can be used as a dosimetry tool or to excite luminescence, termed Cherenkov-excited luminescence (CEL), with microsecond-level time-gated cameras. The goal of this work was to develop a complete theoretical foundation for the detectable signal strength, in order to provide guidance on optimizing the limits of detection and near-real-time imaging. The key parameters affecting photon production, propagation, and detection were considered, and experimental validation with both tissue phantoms and a murine model is shown. Both the theoretical analysis and experimental data indicate that the detection level is near a single photon per pixel for the detection geometry and frame rates commonly used, with the strongest factor being the decrease of signal with the square of the distance from tissue to camera. Experimental data demonstrate how the SNR improves with increasing integration time, but only up to the point where camera read noise no longer dominates and stray photon noise, which cannot be suppressed, takes over. For the current camera in a fixed geometry, the signal-to-background ratio limits the detection of light signals, and the observed in vivo Cherenkov emission is on the order of 100× stronger than CEL signals. As a result, imaging signals from depths < 15 mm is reasonable for Cherenkov light, and depths < 3 mm for CEL imaging. The current investigation modeled Cherenkov and CEL imaging of two oxygen-sensing phosphorescent compounds, but the modularity of the code allows for easy comparison of different agents or alternative cameras, geometries, or tissues.
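
    A hedged sketch of the integration-time trade-off described above, assuming a standard shot-noise-plus-read-noise model; the photon rates and noise figures are illustrative, not the paper's measured values:

```python
# Toy SNR model: longer integration helps while read noise dominates, then
# gains flatten once stray (background) photon shot noise takes over.
# All rates and noise values below are illustrative assumptions.
import numpy as np

signal_rate = 5.0        # detected signal photons per pixel per second
background_rate = 50.0   # stray-light photons per pixel per second
read_noise = 20.0        # camera read noise per frame (RMS)

def snr(t):
    """SNR for one frame of integration time t under shot + read noise."""
    signal = signal_rate * t
    noise = np.sqrt(signal + background_rate * t + read_noise**2)
    return signal / noise

for t in (0.1, 1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} s  SNR = {snr(t):6.2f}")
# Read-noise-limited regime: SNR grows roughly linearly with t.
# Background-limited regime: SNR grows only as sqrt(t).
```

    In this toy model the SNR grows roughly linearly with integration time while read noise dominates and only as the square root of time once background shot noise takes over, matching the qualitative behavior the abstract describes.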

    The Masses of Population II White Dwarfs

    Globular star clusters are among the first stellar populations to have formed in the Milky Way, and thus only a small sliver of their initial spectrum of stellar types is still burning hydrogen on the main sequence today. Almost all of the stars born with more than 0.8 M_sun have evolved to form the white dwarf cooling sequence of these systems, and the distribution and properties of these remnants uniquely hold clues related to the nature of the now-evolved progenitor stars. With ultra-deep HST imaging observations, rich white dwarf populations of four nearby Milky Way globular clusters have recently been uncovered, and are found to extend an impressive 5 - 8 magnitudes in the faint-blue region of the H-R diagram. In this paper, we characterize the properties of these population II remnants by presenting the first direct mass measurements of individual white dwarfs near the tip of the cooling sequence in the nearest of the Milky Way globulars, M4. Based on Gemini/GMOS and Keck/LRIS multiobject spectroscopic observations, our results indicate that 0.8 M_sun population II main-sequence stars evolving today form 0.53 +/- 0.01 M_sun white dwarfs. We discuss the implications of this result as it relates to our understanding of the stellar structure and evolution of population II stars and for the age of the Galactic halo, as measured with white dwarf cooling theory. Comment: Accepted for publication in Astrophys. J. on Aug. 5th, 2009. 19 pages including 9 figures and 2 tables (journal format).