
    Experiences of Aiding Autobiographical Memory Using the SenseCam

    Human memory is a dynamic system that makes accessible certain memories of events based on a hierarchy of information, arguably driven by personal significance. Not all events are remembered, but those that are tend to be more psychologically relevant. In contrast, lifelogging is the process of automatically recording aspects of one's life in digital form without loss of information. In this article we share our experiences in designing computer-based solutions to assist people in reviewing their visual lifelogs and to address this contrast. The technical basis for our work is automatically segmenting visual lifelogs into events, allowing event similarity and event importance to be computed, ideas that are motivated by cognitive science considerations of how human memory works and can be assisted. Our work has been based on visual lifelogs gathered by dozens of people, some of them with collections spanning multiple years. In this review article we summarize a series of studies that have led to the development of a browser that is based on human memory systems, and we discuss the inherent tension in storing large amounts of data while making the most relevant material the most accessible.
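
    The event segmentation and importance scoring described above lend themselves to a simple illustration. Below is a minimal Python sketch, assuming each lifelog image has already been reduced to a feature vector; the function names, the cosine-similarity boundary test, and the threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def segment_into_events(features, threshold=0.5):
    """Split a day's lifelog images into events.

    features: (n_images, d) array of per-image descriptors
    (e.g. colour histograms), in capture order. A new event
    starts wherever consecutive images are sufficiently
    dissimilar; the threshold is illustrative.
    """
    events, current = [], [0]
    for i in range(1, len(features)):
        a, b = features[i - 1], features[i]
        sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if sim < threshold:          # boundary: dissimilar neighbours
            events.append(current)
            current = []
        current.append(i)
    events.append(current)
    return events

def event_importance(features, events):
    """Score each event by how atypical it is: the mean distance of
    its images from the global centroid. Distinctive-looking events
    rank higher, echoing the idea that significant events should be
    the most accessible."""
    centroid = features.mean(axis=0)
    return [np.mean(np.linalg.norm(features[idx] - centroid, axis=1))
            for idx in events]
```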

    Late gadolinium enhancement and subclinical cardiac dysfunction on cardiac MRI in asymptomatic HIV-positive men

    Background: HIV is associated with an increased risk of cardiovascular disease (CVD) and related clinical events. While traditional risk factors play an important role in the pathology of cardiovascular disease, HIV infection and its sequelae of immune activation and inflammation may have significant effects on the myocardium before becoming clinically evident. Cardiac MRI (CMR) can be used to detect the pattern of these subclinical changes. This will lead to a better understanding of risk factors contributing to cardiovascular disease before it becomes clinically significant in HIV-positive patients. Methods: Prospective cohort study of 127 asymptomatic HIV-positive men on ART compared to 35 matched controls. Baseline demographics, HIV parameters, 12-lead ECG, routine biochemistry, and traditional cardiovascular risk factors were recorded. Images were acquired on a 3T Achieva Philips MRI scanner with a 5-channel phased-array cardiac coil, and weight-based IV gadolinium was given at a dose of 0.15 mmol/kg with post-contrast inversion recovery imaging after 10 minutes. Results: 6/127 (4.7%) of asymptomatic HIV-positive men had late gadolinium enhancement (LGE) on MRI versus 1/35 (2.9%) in the control group. In 3/6 (50%) of cases this was in a classical infarction pattern with subendocardial involvement; 3/6 (50%) were consistent with prior myocarditis. There was no significant difference in mean LVEF (66.93% vs 65.18%), LVMI (60.05 g/m2 vs 55.94 g/m2), or posterolateral wall thickness (8.28 mm vs 8.16 mm) between cases and controls, respectively. There was significantly more diastolic dysfunction (E:A ratio < 1) found in the HIV-positive group: 18% vs 7% of controls (p = 0.037). Framingham risk did not predict either of these outcomes. Conclusions: There is an increased incidence of LGE detected on CMR in this asymptomatic HIV-positive cohort. Two distinct pathological processes were identified as causing these changes: myocardial infarction and myocarditis. These findings were independent of traditional cardiac risk factors, duration of HIV infection, and ART. Subclinical cardiac dysfunction may be underreported in other cardiac evaluation studies. The true impact of other potential risk factors may also be underestimated, highlighting the need for the development of more complex prediction models.
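
    The diastolic dysfunction comparison above (18% of 127 HIV-positive men versus 7% of 35 controls, p = 0.037) is a two-proportion contrast on a 2x2 table. A minimal sketch of such a test follows; the counts are reconstructed from the reported percentages (23/127 and 2/35) and the choice of Fisher's exact test is an assumption, since the abstract does not name the test used.

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = HIV-positive / controls,
# columns = E:A < 1 present / absent.
# Counts inferred from 18% of 127 and 7% of 35 (approximate).
table = [[23, 104],
         [2, 33]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```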

    Towards Precision LSST Weak-Lensing Measurement - I: Impacts of Atmospheric Turbulence and Optical Aberration

    The weak-lensing science of the LSST project drives the need to carefully model and separate the instrumental artifacts from the intrinsic lensing signal. The dominant source of the systematics for all ground based telescopes is the spatial correlation of the PSF modulated by both atmospheric turbulence and optical aberrations. In this paper, we present a full FOV simulation of the LSST images by modeling both the atmosphere and the telescope optics with the most current data for the telescope specifications and the environment. To simulate the effects of atmospheric turbulence, we generated six-layer phase screens with the parameters estimated from the on-site measurements. For the optics, we combined the ray-tracing tool ZEMAX and our simulated focal plane data to introduce realistic aberrations and focal plane height fluctuations. Although this expected flatness deviation for LSST is small compared with that of other existing cameras, the fast f-ratio of the LSST optics makes this focal plane flatness variation and the resulting PSF discontinuities across the CCD boundaries significant challenges in our removal of the systematics. We resolve this complication by performing PCA CCD-by-CCD, and interpolating the basis functions using conventional polynomials. We demonstrate that this PSF correction scheme reduces the residual PSF ellipticity correlation below 10^-7 over the cosmologically interesting scale. From a null test using HST/UDF galaxy images without input shear, we verify that the amplitude of the galaxy ellipticity correlation function, after the PSF correction, is consistent with the shot noise set by the finite number of objects. Therefore, we conclude that the current optical design and specification for the accuracy in the focal plane assembly are sufficient to enable the control of the PSF systematics required for weak-lensing science with the LSST.
    Comment: Accepted to PASP. High-resolution version is available at http://dls.physics.ucdavis.edu/~mkjee/LSST_weak_lensing_simulation.pd
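
    The PCA-per-CCD correction lends itself to a short sketch. The following is a minimal Python illustration of the general scheme, performing PCA on flattened star images from a single CCD and interpolating the PCA coefficients with conventional 2-D polynomials; the array layout, component count, and polynomial degree are illustrative assumptions rather than the paper's actual pipeline.

```python
import numpy as np

def fit_psf_model(star_xy, star_psf, n_components=4, deg=3):
    """Per-CCD PSF model in the spirit described above: PCA on
    flattened star images from one CCD, then a 2-D polynomial fit
    of each PCA coefficient across the chip.

    star_xy:  (n_stars, 2) star positions on the CCD
    star_psf: (n_stars, n_pix) flattened PSF postage stamps
    n_components and deg are illustrative choices.
    """
    mean = star_psf.mean(axis=0)
    # PCA via SVD of the centred stamps
    _, _, vt = np.linalg.svd(star_psf - mean, full_matrices=False)
    basis = vt[:n_components]                  # (n_components, n_pix)
    coeffs = (star_psf - mean) @ basis.T       # (n_stars, n_components)

    def design(xy):
        """2-D polynomial design matrix up to total degree `deg`."""
        x, y = xy[:, 0], xy[:, 1]
        return np.column_stack([x**i * y**j
                                for i in range(deg + 1)
                                for j in range(deg + 1 - i)])

    # Least-squares fit of each PCA coefficient as a polynomial in (x, y)
    poly, *_ = np.linalg.lstsq(design(star_xy), coeffs, rcond=None)

    def predict(xy):
        """Reconstructed PSF stamps at arbitrary positions on this CCD."""
        return design(xy) @ poly @ basis + mean

    return predict
```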

    Generalized sine-Gordon/massive Thirring models and soliton/particle correspondences

    We consider a real Lagrangian off-critical submodel describing the soliton sector of the so-called conformal affine sl(3)^{(1)} Toda model coupled to matter fields (CATM). The theory is treated as a constrained system in the context of the Faddeev-Jackiw and symplectic schemes. We exhibit the parent Lagrangian nature of the model, from which generalizations of the sine-Gordon (GSG) or the massive Thirring (GMT) models are derivable. The dual description of the model is further emphasized by providing the relationships between bilinears of GMT spinors and relevant expressions of the GSG fields. In this way we exhibit the strong/weak coupling phases and the (generalized) soliton/particle correspondences of the model. The sl(n)^{(1)} case is also outlined.
    Comment: 22 pages, LaTeX; some comments and references added, conclusions unchanged; to appear in J. Math. Phys.
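
    For orientation, the sl(2) ancestors of these generalized models are the ordinary sine-Gordon and massive Thirring theories, related by Coleman's soliton/particle correspondence, which the sl(n)^{(1)} construction generalizes. A brief reminder of the standard statement (not taken from this paper):

```latex
% Sine-Gordon and massive Thirring Lagrangians
\mathcal{L}_{\rm SG} = \tfrac{1}{2}\,\partial_\mu\varphi\,\partial^\mu\varphi
  + \frac{m^2}{\beta^2}\cos(\beta\varphi),
\qquad
\mathcal{L}_{\rm MT} = \bar\psi\,(i\gamma^\mu\partial_\mu - m_F)\,\psi
  - \tfrac{g}{2}\,(\bar\psi\gamma^\mu\psi)(\bar\psi\gamma_\mu\psi)

% Coleman's strong/weak coupling map and the bosonization of the
% fermion current, underlying the soliton/particle duality:
\frac{\beta^2}{4\pi} = \frac{1}{1 + g/\pi},
\qquad
\bar\psi\gamma^\mu\psi \;\propto\; \epsilon^{\mu\nu}\partial_\nu\varphi
```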

    Insight and Evidence Motivating the Simplification of Dual-Analysis Hybrid Systems into Single-Analysis Hybrid Systems

    Many hybrid data assimilation systems currently used for NWP employ some form of dual-analysis approach. Typically a hybrid variational analysis is responsible for creating initial conditions for high-resolution forecasts, and an ensemble analysis system is responsible for creating the sample perturbations used to form the flow-dependent part of the background error covariance required in the hybrid analysis component. In many of these, the two analysis components employ different methodologies, e.g., variational and ensemble Kalman filter. In such cases, it is not uncommon for observations to be treated rather differently between the two analysis components; recentering of the ensemble analysis around the hybrid analysis is used to compensate for such differences. Furthermore, in many cases, the hybrid variational high-resolution system implements some type of four-dimensional approach, whereas the underlying ensemble system relies on a three-dimensional approach, which again introduces discrepancies into the overall system. Connected to these issues is the expectation that one can reliably estimate observation impact on forecasts issued from hybrid analyses by using an ensemble approach based on the underlying ensemble strategy of dual-analysis systems. The mere realization that the ensemble analysis makes substantially different use of observations than its hybrid counterpart should serve as sufficient evidence of the implausibility of that expectation. This presentation assembles a body of anecdotal evidence to illustrate that hybrid dual-analysis systems must, at the very minimum, strive for consistent use of the observations in both analysis sub-components. More simply, this work suggests that hybrid systems can be reliably constructed without the need to employ a dual-analysis approach. In practice, the idea of relying on a single analysis system is appealing from a cost-maintenance perspective. More generally, single-analysis systems avoid contradictions such as having one sub-component generate performance diagnostics for another, possibly not fully consistent, component.
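
    The "flow-dependent part of the background error covariance" blended into the hybrid analysis is conventionally a weighted sum of a static covariance and an ensemble sample covariance. A minimal Python sketch of that blending follows, with toy dimensions and weights as illustrative assumptions (operational systems also tune and localize these terms).

```python
import numpy as np

def hybrid_covariance(B_static, ensemble, beta_static=0.5, beta_ens=0.5):
    """Blend a static background-error covariance with a
    flow-dependent one estimated from ensemble perturbations.

    B_static: (n, n) climatological covariance
    ensemble: (n_members, n) ensemble of model states
    The weights are illustrative; real systems tune them and
    typically localize the ensemble term as well.
    """
    perts = ensemble - ensemble.mean(axis=0)
    P_ens = perts.T @ perts / (len(ensemble) - 1)   # sample covariance
    return beta_static * B_static + beta_ens * P_ens

# Toy usage: 3-variable state, 10-member ensemble
rng = np.random.default_rng(0)
B_hybrid = hybrid_covariance(np.eye(3), rng.normal(size=(10, 3)))
```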

    Non-Equilibrium Statistical Physics of Currents in Queuing Networks

    We consider a stable open queuing network as a steady non-equilibrium system of interacting particles. The network is completely specified by its underlying graphical structure, the type of interaction at each node, and the Markovian transition rates between nodes. For such systems, we ask the question "What is the most likely way for large currents to accumulate over time in a network?", where time is large compared to the system correlation time scale. We identify two interesting regimes. In the first regime, in which the accumulation of currents over time exceeds the expected value by a small to moderate amount (moderate large deviation), we find that the large-deviation distribution of currents is universal (independent of the interaction details), and there is no long-time, time-averaged accumulation of particles (condensation) at any node. In the second regime, in which the accumulation of currents over time exceeds the expected value by a large amount (severe large deviation), we find that the large-deviation current distribution is sensitive to interaction details, and there is a long-time accumulation of particles (condensation) at some nodes. The transition between the two regimes can be described as a dynamical second-order phase transition. We illustrate these ideas using the simple, yet non-trivial, example of a single node with feedback.
    Comment: 26 pages, 5 figures
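
    The closing example, a single node with feedback, is easy to simulate. Below is a minimal Python sketch of one M/M/1-type queue in which a completed job rejoins the queue with some probability; the time-averaged output current is returned so its large-deviation statistics could be histogrammed over many runs. All rates and probabilities are illustrative assumptions, not parameters from the paper.

```python
import random

def simulate_node_with_feedback(T=10_000.0, lam=1.0, mu=2.0, p_fb=0.3, seed=0):
    """Continuous-time simulation of one queue with feedback.
    Jobs arrive at rate lam; service completes at rate mu when the
    queue is non-empty; a completed job re-enters the queue with
    probability p_fb, otherwise it leaves (counted as output current)."""
    rng = random.Random(seed)
    t, n, departures = 0.0, 0, 0
    while t < T:
        rate = lam + (mu if n > 0 else 0.0)
        t += rng.expovariate(rate)              # time to next event
        if rng.random() < lam / rate:           # arrival event
            n += 1
        elif rng.random() < p_fb:               # completion, fed back
            pass                                # queue length unchanged
        else:                                   # completion, departs
            n -= 1
            departures += 1
    return departures / T                       # time-averaged output current

print(simulate_node_with_feedback())
```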

    Genic and nongenic contributions to natural variation of quantitative traits in maize

    The complex genomes of many economically important crops present tremendous challenges to understanding the genetic control of quantitative traits of great importance in crop production, adaptation, and evolution. Advances in genomic technology need to be integrated with strategic genetic design and novel perspectives to break new ground. Complementary to individual-gene-targeted research, which remains challenging, a global assessment of the genomic distribution of trait-associated SNPs (TASs) discovered from genome scans of quantitative traits can provide insights into the genetic architecture and contribute to the design of future studies. Here we report the first systematic tabulation of the relative contributions of different genomic regions to quantitative trait variation in maize. We found that TASs were enriched in nongenic regions, particularly within a 5-kb window upstream of genes, which highlights the importance of polymorphisms regulating gene expression in shaping natural variation. Consistent with these findings, TASs collectively explained 44%-59% of the total phenotypic variation across maize quantitative traits, and on average, 79% of the explained variation could be attributed to TASs located in genes or within 5 kb upstream of genes, which together comprise only 13% of the genome. Our findings suggest that efficient, cost-effective genome-wide association studies (GWAS) in species with complex genomes can focus on genic and promoter regions.
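
    The genic/promoter/intergenic partition underlying these enrichment figures can be made concrete with a small sketch. The following Python function classifies SNP positions against gene coordinates using the 5-kb upstream window mentioned above; the data structures and single-chromosome layout are illustrative assumptions.

```python
def classify_tas(snp_pos, genes, promoter_window=5_000):
    """Assign each SNP to 'genic', 'promoter' (within 5 kb upstream
    of a gene), or 'intergenic'. genes: list of (start, end, strand)
    tuples on one chromosome, strand '+' or '-'. Illustrative only."""
    labels = []
    for pos in snp_pos:
        label = "intergenic"
        for start, end, strand in genes:
            if start <= pos <= end:
                label = "genic"          # inside the gene body
                break
            # upstream window depends on strand orientation
            upstream = (start - promoter_window <= pos < start) \
                       if strand == "+" else \
                       (end < pos <= end + promoter_window)
            if upstream:
                label = "promoter"
        labels.append(label)
    return labels

# Toy usage: one '+'-strand gene spanning 10,000-12,000
print(classify_tas([9_000, 11_000, 50_000], [(10_000, 12_000, "+")]))
# -> ['promoter', 'genic', 'intergenic']
```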

    The impact of the revised 17O(p,α)14N reaction rate on 17O stellar abundances and yields

    Context. Material processed by the CNO cycle in stellar interiors is enriched in 17O. When mixing processes from the stellar surface reach these layers, as occurs when stars become red giants and undergo the first dredge up, the abundance of 17O increases. Such an occurrence explains the drop of the 16O/17O ratio observed in RGB stars with masses larger than 1.5 M☉. As a consequence, the interstellar medium is continuously polluted by the winds of evolved stars enriched in 17O. Aims. Recently, the Laboratory for Underground Nuclear Astrophysics (LUNA) collaboration released an improved rate for the 17O(p,α)14N reaction. In this paper we discuss the impact that the revised rate has on the 16O/17O ratio at the stellar surface and on 17O stellar yields. Methods. We computed stellar models of initial mass between 1 and 20 M☉ and compared the results obtained by adopting the revised rate of the 17O(p,α)14N reaction to those obtained using previous rates. Results. The post-first dredge up 16O/17O ratios are about 20% larger than previously obtained. Negligible variations are found in the case of the second and the third dredge up. In spite of the larger 17O(p,α)14N rate, we confirm previous claims that an extra-mixing process on the red giant branch, commonly invoked to explain the low carbon isotopic ratio observed in bright low-mass giant stars, marginally affects the 16O/17O ratio. Possible effects on AGB extra-mixing episodes are also discussed. As a whole, a substantial reduction of the 17O stellar yields is found. In particular, the net yield of stars with masses ranging between 2 and 20 M☉ is 15 to 40% smaller than previously estimated. Conclusions. The revision of the 17O(p,α)14N rate has a major impact on the interpretation of the 16O/17O ratios observed in evolved giants and in stardust grains, and on the 17O stellar yields.
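
    The direction of the reported change follows from a standard one-zone CNO balance: the equilibrium 17O abundance is set by production from 16O against destruction by proton capture, so a larger (p,α) rate raises the equilibrium 16O/17O ratio. A brief sketch of this textbook argument (not taken from the paper; λ denotes the thermally averaged rate per particle pair):

```latex
% One-zone balance for the 17O abundance under proton burning:
\frac{dY_{17}}{dt} = Y_{16}\,Y_p\,\rho\,\lambda_{16(p,\gamma)}
  - Y_{17}\,Y_p\,\rho\,\bigl(\lambda_{17(p,\alpha)} + \lambda_{17(p,\gamma)}\bigr)

% Setting dY_{17}/dt = 0 gives the equilibrium ratio:
\left(\frac{^{16}\mathrm{O}}{^{17}\mathrm{O}}\right)_{\rm eq}
  = \frac{\lambda_{17(p,\alpha)} + \lambda_{17(p,\gamma)}}{\lambda_{16(p,\gamma)}}

% An increased \lambda_{17(p,\alpha)} therefore raises the equilibrium
% 16O/17O, consistent with the ~20% larger post-dredge-up ratios.
```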