4,427 research outputs found

    Observational Constraints on Exponential Gravity

    We study the observational constraints on the exponential gravity model f(R) = -beta*Rs*(1 - e^(-R/Rs)). We use the latest observational data, including the Supernova Cosmology Project (SCP) Union2 compilation, the Two-Degree Field Galaxy Redshift Survey (2dFGRS), the Sloan Digital Sky Survey Data Release 7 (SDSS DR7), and the Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP7), in our analysis. From these observations, we obtain a lower bound on the model parameter beta of 1.27 (95% CL) but no appreciable upper bound. The constraint on the present matter density parameter is 0.245 < Omega_m^0 < 0.311 (95% CL). We also obtain the best-fit values of the model parameters in several cases. Comment: 14 pages, 3 figures, accepted by PR
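    As a quick numerical illustration (a sketch only; the beta and Rs values below are hypothetical, not the paper's fitted constraints), the correction f(R) = -beta*Rs*(1 - e^(-R/Rs)) vanishes at R = 0 and approaches the constant -beta*Rs at high curvature, which is what lets the model mimic a cosmological constant:

    ```python
    import math

    def f_exp(R, beta, Rs):
        """Exponential-gravity correction f(R) = -beta * Rs * (1 - exp(-R/Rs))."""
        return -beta * Rs * (1.0 - math.exp(-R / Rs))

    beta, Rs = 2.0, 1.0            # hypothetical values for illustration
    print(f_exp(0.0, beta, Rs))    # 0.0: no correction at zero curvature
    print(f_exp(100.0, beta, Rs))  # close to -beta*Rs = -2.0 at high curvature
    ```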

    Sensor Selection and Integration to Improve Video Segmentation in Complex Environments

    Background subtraction is often considered a required stage of any video surveillance system used to detect objects in a single frame and/or track objects across multiple frames in a video sequence. Most current state-of-the-art techniques for object detection and tracking utilize some form of background subtraction, developing a model of the background at the pixel, region, or frame level and designating any elements that deviate from the background model as foreground. However, most existing approaches can segment a number of distinct components but cannot distinguish between the desired object of interest and a complex, dynamic background such as moving water and strong reflections. In this paper, we propose a technique to integrate spatiotemporal signatures of an object of interest from different sensing modalities into a video segmentation method in order to improve object detection and tracking in dynamic, complex scenes. Our proposed algorithm utilizes the dynamic interaction information between the object of interest and the background to differentiate between mistakenly segmented components and the desired component. Experimental results on two complex data sets demonstrate that our proposed technique significantly improves the accuracy and utility of state-of-the-art video segmentation techniques. © 2014 Adam R. Reckley et al.
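    A minimal sketch of the pixel-level background modeling the abstract refers to (this is the generic running-average baseline, not the authors' multi-sensor method; the learning rate and threshold are illustrative assumptions):

    ```python
    import numpy as np

    def update_background(bg, frame, alpha=0.05):
        """Exponential running-average background model, updated per pixel."""
        return (1.0 - alpha) * bg + alpha * frame

    def foreground_mask(bg, frame, thresh=25.0):
        """Mark pixels that deviate from the background model as foreground."""
        return np.abs(frame.astype(float) - bg) > thresh

    # Toy 4x4 "video": static background of intensity 100 with one bright object pixel.
    bg = np.full((4, 4), 100.0)
    frame = bg.copy()
    frame[1, 2] = 200.0                 # the moving object
    mask = foreground_mask(bg, frame)
    print(int(mask.sum()))              # 1: only the object pixel is flagged
    bg = update_background(bg, frame)   # background slowly absorbs the scene
    ```

    A dynamic background (e.g. moving water) defeats this simple per-pixel threshold, which is exactly the failure mode the paper's multi-sensor integration targets.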

    Observational Constraints on Teleparallel Dark Energy

    We use data from Type Ia Supernovae (SNIa), Baryon Acoustic Oscillations (BAO), and Cosmic Microwave Background (CMB) observations to constrain the recently proposed teleparallel dark energy scenario, based on the teleparallel equivalent of General Relativity, in which one adds a canonical scalar field, allowing also for a nonminimal coupling with gravity. Using the power-law, exponential, and inverse hyperbolic cosine potential ansatzes, we show that the scenario is compatible with observations. In particular, the data favor a nonminimal coupling, and although the scalar field is canonical, the model can describe both the quintessence and phantom regimes. Comment: 19 pages, 6 figures, version accepted by JCA

    VLSI Implementation of a Cost-Efficient Loeffler-DCT Algorithm with Recursive CORDIC for DCT-Based Encoder

    This paper presents a low-cost, high-quality, hardware-oriented, two-dimensional discrete cosine transform (2-D DCT) signal analyzer for image and video encoders. In order to reduce the memory requirement and improve image quality, a novel Loeffler DCT based on a coordinate rotation digital computer (CORDIC) technique is proposed. In addition, the proposed algorithm is realized by a recursive CORDIC architecture instead of an unfolded CORDIC architecture with approximated scale factors. In the proposed design, a fully pipelined architecture is developed to efficiently increase the operating frequency and throughput, and the scale factors are implemented by using four hardware-sharing machines for complexity reduction. Thus, the computational complexity can be decreased significantly, with only a 0.01 dB loss relative to the optimal image quality of the Loeffler DCT. Experimental results show that the proposed 2-D DCT spectral analyzer not only achieves a superior average peak signal-to-noise ratio (PSNR) compared to previous CORDIC-DCT algorithms but also offers a cost-efficient architecture for very-large-scale integration (VLSI) implementation. The proposed design was realized using a UMC 0.18-μm CMOS process with a synthesized gate count of 8.04 k and a core area of 75,100 μm². Its operating frequency was 100 MHz and its power consumption was 4.17 mW. Moreover, this work achieves at least a 64.1% gate-count reduction and saves at least 22.5% in power consumption compared to previous designs.
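    For context, a floating-point sketch of the rotation-mode CORDIC iteration that underlies the recursive architecture above (illustrative only; the hardware design uses fixed-point shift-and-add stages and the shared scale-factor machines described in the abstract):

    ```python
    import math

    def cordic_cos_sin(theta, n_iter=32):
        """Rotate (1, 0) toward angle theta using shift-like micro-rotations;
        the accumulated scale factor K is applied once at the end
        (converges for |theta| below about 1.74 rad)."""
        angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
        K = 1.0
        for i in range(n_iter):
            K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = 1.0, 0.0, theta
        for i in range(n_iter):
            d = 1.0 if z >= 0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * angles[i]
        return x * K, y * K

    c, s = cordic_cos_sin(math.pi / 6)  # matches math.cos/math.sin closely
    ```

    In hardware, the multiplications by 2^-i become plain bit shifts, which is why CORDIC-based rotations are attractive for low-cost DCT datapaths.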

    Correction: Dynamic Remodeling of Dendritic Arbors in GABAergic Interneurons of Adult Visual Cortex

    Chronic in vivo imaging of fluorescently labeled neurons in adult mice reveals extension and retraction of dendrites in GABAergic non-pyramidal interneurons of the cerebral cortex.

    Micro-spec: an Integrated Direct-detection Spectrometer for Far-infrared Space Telescopes

    The far-infrared and submillimeter portions of the electromagnetic spectrum provide a unique view of the astrophysical processes present in the early universe. Our ability to fully explore this rich spectral region has been limited, however, by the size and cost of the cryogenic spectrometers required to carry out such measurements. Micro-Spec (μ-Spec) is a high-sensitivity, direct-detection spectrometer concept working in the 450–1000 μm wavelength range which will enable a wide range of flight missions that would otherwise be challenging due to the large size of current instruments with the required spectral resolution and sensitivity. The spectrometer design utilizes two internal antenna arrays, one for transmitting and one for receiving, superconducting microstrip transmission lines for power division and phase delay, and an array of microwave kinetic inductance detectors (MKIDs) to achieve these goals. The instrument will be integrated on an approximately 10 sq cm silicon chip and can therefore become an important capability under the low-background conditions accessible via space and high-altitude borne platforms. In this paper, an optical design methodology for Micro-Spec is presented, with particular attention given to its two-dimensional diffractive region, where the light of different wavelengths is focused on the different detectors. The method is based on the maximization of the instrument resolving power and the minimization of the RMS phase error on the instrument focal plane. This two-step optimization can generate geometrical configurations given specific requirements on spectrometer size, operating spectral range, and performance. Two point designs with resolving powers of 260 and 520 and an RMS phase error of less than approximately 0.004 radians were developed for initial demonstration and will be the basis of future instruments with resolving power up to about 1200.

    Micro-Spec: An Ultra-Compact, High-Sensitivity Spectrometer for Far-Infrared and Sub-Millimeter Astronomy

    High-performance, integrated spectrometers operating in the far-infrared and sub-millimeter promise to be powerful tools for the exploration of the epochs of reionization and initial galaxy formation. These devices, using high-efficiency superconducting transmission lines, can achieve the performance of a meter-scale grating spectrometer in an instrument implemented on a four-inch silicon wafer. Such a device, when combined with a cryogenic telescope in space, provides an enabling capability for studies of the early universe. Here, the optical design process for Micro-Spec (μ-Spec) is presented, with particular attention given to its two-dimensional diffractive region, where the light of different wavelengths is focused on the different detectors. The method is based on the stigmatization and minimization of the light path function in this bounded region, which results in an optimized geometrical configuration. A point design with an efficiency of approximately 90% has been developed for initial demonstration and can serve as the basis for future instruments. Design variations on this implementation are also discussed, which can lead to lower efficiencies due to diffractive losses in the multimode region.

    Development of the State Optimism Measure

    Background: Optimism, or positive expectations about the future, is associated with better health. It is commonly assessed as a trait, but it may change over time and circumstance. Accordingly, we developed a measure of state optimism. Methods: An initial 29-item pool was generated based on literature reviews and expert consultations. It was administered to three samples: sample 1 was a general healthy population (n = 136), sample 2 was people with cardiac disease (n = 96), and sample 3 was persons recovering from problematic substance use (n = 265). Exploratory factor analysis and item-level descriptive statistics were used to select items to form a unidimensional State Optimism Measure (SOM). Confirmatory factor analysis (CFA) was performed to test fit. Results: The selected seven SOM items demonstrated acceptable to high factor loadings on a single dominant factor (loadings: 0.64–0.93). There was high internal reliability across samples (Cronbach's alphas: 0.92–0.96), and strong convergent validity correlations in hypothesized directions. The SOM's correlations with other optimism measures indicate preliminary construct validity. CFA statistics indicated acceptable fit of the SOM model. Conclusions: We developed a psychometrically-sound measure of state optimism that can be used in various settings. Predictive and criterion validity will be tested in future studies.
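    The internal-reliability statistic quoted above (Cronbach's alpha) can be computed directly from a respondents-by-items score matrix; a minimal sketch with hypothetical toy data (not the study's samples):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
        return (k / (k - 1)) * (1.0 - item_vars / total_var)

    # Five hypothetical respondents answering three consistent items.
    scores = [[4, 4, 5], [2, 2, 2], [5, 5, 4], [3, 3, 3], [1, 2, 1]]
    alpha = cronbach_alpha(scores)
    print(round(alpha, 2))  # high alpha, since the items track each other
    ```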