Observational Constraints on Exponential Gravity
We study the observational constraints on the exponential gravity model of f(R) = -βR_s(1 - e^(-R/R_s)). We use the latest observational data, including the Supernova Cosmology Project (SCP) Union2 compilation, the Two-Degree Field Galaxy Redshift Survey (2dFGRS), the Sloan Digital Sky Survey Data Release 7 (SDSS DR7), and the Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP7), in our analysis. From these observations, we obtain a lower bound on the model parameter β of 1.27 (95% CL) but no appreciable upper bound. The constraint on the present matter density parameter is 0.245 < Ω_m^0 < 0.311 (95% CL). We also find the best-fit values of the model parameters in several cases.
Comment: 14 pages, 3 figures, accepted by PR
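Read in standard notation, the correction term saturates at high curvature (this restatement and the limit are an interpretive note, not text from the paper):

```latex
f(R) = -\beta R_s \left(1 - e^{-R/R_s}\right)
\;\longrightarrow\; -\beta R_s \equiv -2\Lambda_{\mathrm{eff}}
\quad \text{for } R \gg R_s ,
```

so at fixed effective cosmological constant, larger β pushes the background evolution toward ΛCDM. This is consistent with the result above: the data bound β from below but place no appreciable upper bound on it.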
Sensor Selection and Integration to Improve Video Segmentation in Complex Environments
Background subtraction is often considered a required stage of any video surveillance system used to detect objects in a single frame and/or track objects across multiple frames in a video sequence. Most current state-of-the-art techniques for object detection and tracking utilize some form of background subtraction that involves developing a model of the background at a pixel, region, or frame level and designating any elements that deviate from the background model as foreground. However, most existing approaches can segment a number of distinct components but are unable to distinguish between the desired object of interest and complex, dynamic backgrounds such as moving water and strong reflections. In this paper, we propose a technique to integrate spatiotemporal signatures of an object of interest from different sensing modalities into a video segmentation method in order to improve object detection and tracking in dynamic, complex scenes. Our proposed algorithm utilizes the dynamic interaction information between the object of interest and the background to differentiate between mistakenly segmented components and the desired component. Experimental results on two complex data sets demonstrate that our proposed technique significantly improves the accuracy and utility of a state-of-the-art video segmentation technique.
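As context for the pixel-level background models the abstract describes, below is a minimal sketch of a generic running-Gaussian baseline with per-pixel deviation thresholding. This is not the paper's multi-sensor method; the function name and parameter values are illustrative.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.02, k=2.5):
    """One step of a running-Gaussian per-pixel background model.

    frame     : current grayscale frame, float array in [0, 1]
    mean, var : per-pixel background mean and variance (updated in place)
    alpha     : learning rate
    k         : deviation threshold in standard-deviation units
    Returns the boolean foreground mask.
    """
    diff = frame - mean
    # Pixels deviating more than k standard deviations are foreground.
    foreground = diff ** 2 > (k ** 2) * var
    # Adapt the model only where the pixel still looks like background.
    bg = ~foreground
    mean[bg] += alpha * diff[bg]
    var[bg] += alpha * (diff[bg] ** 2 - var[bg])
    return foreground
```

The failure mode targeted by the paper is visible in such a baseline: dynamic backgrounds like moving water persistently exceed the threshold and are mistakenly segmented as foreground, which is what the integrated spatiotemporal signatures are meant to resolve.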
Observational Constraints on Teleparallel Dark Energy
We use data from Type Ia Supernovae (SNIa), Baryon Acoustic Oscillations
(BAO), and Cosmic Microwave Background (CMB) observations to constrain the
recently proposed teleparallel dark energy scenario based on the teleparallel
equivalent of General Relativity, in which one adds a canonical scalar field,
allowing also for a nonminimal coupling with gravity. Using the power-law, the
exponential and the inverse hyperbolic cosine potential ansatzes, we show that
the scenario is compatible with observations. In particular, the data favor a
nonminimal coupling, and although the scalar field is canonical, the model can describe both the quintessence and phantom regimes.
Comment: 19 pages, 6 figures, version accepted by JCA
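For orientation, the class of action the abstract refers to, as it is usually written in the teleparallel dark energy literature (a reference sketch; sign and coupling conventions vary, and this is not quoted from the paper):

```latex
S = \int d^4x \, e \left[ \frac{T}{2\kappa^2}
  + \frac{1}{2}\left( \partial_\mu \phi \, \partial^\mu \phi + \xi T \phi^2 \right)
  - V(\phi) \right] + S_m ,
```

where e = det(e^A_μ) is the vierbein determinant, T is the torsion scalar replacing the curvature scalar of General Relativity, ξ is the nonminimal coupling the data are found to favor, and V(φ) is one of the three potential ansatzes considered.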
VLSI Implementation of a Cost-Efficient Loeffler-DCT Algorithm with Recursive CORDIC for DCT-Based Encoder
This paper presents a low-cost, high-quality, hardware-oriented, two-dimensional discrete cosine transform (2-D DCT) signal analyzer for image and video encoders. In order to reduce the memory requirement and improve image quality, a novel Loeffler DCT based on a coordinate rotation digital computer (CORDIC) technique is proposed. In addition, the proposed algorithm is realized by a recursive CORDIC architecture instead of an unfolded CORDIC architecture with approximated scale factors. In the proposed design, a fully pipelined architecture is developed to efficiently increase operating frequency and throughput, and scale factors are implemented using four hardware-sharing machines for complexity reduction. Thus, the computational complexity can be decreased significantly with only a 0.01 dB deviation from the optimal image quality of the Loeffler DCT. Experimental results show that the proposed 2-D DCT spectral analyzer not only achieves a superior average peak signal-to-noise ratio (PSNR) compared to previous CORDIC-DCT algorithms but also offers a cost-efficient architecture for very large scale integration (VLSI) implementation. The proposed design was realized using a UMC 0.18-μm CMOS process with a synthesized gate count of 8.04 k and a core area of 75,100 μm². Its operating frequency was 100 MHz and its power consumption was 4.17 mW. Moreover, this work achieves at least a 64.1% gate-count reduction and saves at least 22.5% in power consumption compared to previous designs.
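The building block behind the design is the CORDIC rotation, which replaces the multipliers of a rotation stage with shift-and-add iterations. Below is a minimal software model of a circular-rotation CORDIC, illustrative only and not the paper's hardware architecture:

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate (x, y) by `angle` radians with shift-and-add CORDIC steps.

    Each iteration rotates by ±atan(2^-i), steering the residual angle
    toward zero; the accumulated CORDIC gain K is divided out at the end.
    """
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    k = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(iterations))
    return x / k, y / k

# Example: rotate the unit vector (1, 0) by 30 degrees.
print(cordic_rotate(1.0, 0.0, math.radians(30)))  # ~ (0.866, 0.5)
```

In an unfolded hardware realization each iteration is its own pipeline stage; a recursive architecture like the one described above instead reuses a single stage across iterations, which is the main source of the gate-count savings reported.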
Radiological contamination penetration depth in Fernald transite panels
To characterize the penetration depth of radiological contamination through the thickness of transite (an asbestos-cement building material) from the Department of Energy (DOE) Fernald site, both destructive and non-destructive analysis techniques were used. The destructive techniques were based on progressively removing layers of material and subsequent direct analysis of successive surfaces. These laminar analyses included quantitative measurements using a Geiger-Mueller (G-M) detector and qualitative measurements based on autoradiography and ultraviolet photography. G-M detector measurements during layer removal provided quantitative distributions consistent with diffusion theory and have served to validate a novel non-destructive technique. The ultraviolet analysis provided qualitative information with the advantage of instantaneous results that may be useful for screening samples. The autoradiographic analysis also provided qualitative results for comparison and image analysis. Both quantitative and qualitative results from this study indicated that the contamination did penetrate into the volume of the transite. However, this penetration depth was observed to be strongly dependent on the manner in which the transite was exposed to the contamination. Consequently, it is likely that significantly different penetration depths will be observed for different processes, buildings, and sites.
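The note that the laminar G-M profiles were consistent with diffusion theory suggests the standard one-dimensional constant-source solution (an interpretive sketch; the abstract does not state the model explicitly):

```latex
C(x, t) = C_0 \, \operatorname{erfc}\!\left( \frac{x}{2\sqrt{D t}} \right),
```

where C_0 is the surface activity, D is an effective diffusion coefficient of the contaminant in the transite, and t is the exposure time. The strong dependence of penetration depth on how the transite was exposed would then enter through D and t.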
Identifying temporal molecular signatures underlying cardiovascular diseases: A data science platform
Objective: During cardiovascular disease progression, molecular systems of the myocardium (e.g., the proteome) undergo diverse and distinct changes. Dynamic, temporally regulated alterations of individual molecules underlie the collective response of the heart to pathological drivers and the ultimate development of pathogenesis. Advances in high-throughput omics technologies have enabled cost-effective, temporal profiling of targeted systems in animal models of human diseases. However, computational analysis of temporal patterns from omics data remains challenging. In particular, bioinformatic pipelines involving unsupervised statistical approaches to support cardiovascular investigations are lacking, which hinders one's ability to extract biomedical insights from these complex datasets.
Approach and results: We developed a non-parametric data analysis platform to resolve computational challenges unique to temporal omics datasets. Our platform consists of three modules. Module I preprocesses the temporal data using either cubic splines or principal component analysis (PCA), simultaneously accomplishing missing-data imputation and denoising. Module II performs unsupervised classification by K-means or hierarchical clustering. Module III evaluates and identifies biological entities (e.g., molecular events) that exhibit strong associations with specific temporal patterns. The jackstraw method for cluster membership was applied to estimate p-values and posterior inclusion probabilities (PIPs), both of which guided feature selection. To demonstrate the utility of the analysis platform, we employed a temporal proteomics dataset that captured the proteome-wide dynamics of oxidative stress-induced post-translational modifications (O-PTMs) in mouse hearts undergoing isoproterenol (ISO)-induced hypertrophy.
Conclusion: We have created a platform, CV.Signature.TCP, to identify distinct temporal clusters in omics datasets. We presented a cardiovascular use case to demonstrate its utility in unveiling biological insights underlying O-PTM regulation in cardiac remodeling. The platform is implemented in an open-source R package (https://github.com/UCLA-BD2K/CV.Signature.TCP).
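As a minimal outline of the three-module flow described above (the platform itself is an R package; this Python sketch only mirrors the described steps, with illustrative names and parameters, and omits the jackstraw significance step of Module III):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def temporal_clusters(data, timepoints, n_clusters=5, n_components=3):
    """Cluster temporal trajectories of molecular features.

    data       : (n_features, n_timepoints) matrix of measurements
    timepoints : strictly increasing sampling times
    Module I   : smooth each trajectory with a cubic spline, then
                 denoise/reduce with PCA.
    Module II  : unsupervised K-means clustering of the trajectories.
    """
    dense_t = np.linspace(timepoints[0], timepoints[-1], 50)
    smoothed = np.array([CubicSpline(timepoints, row)(dense_t) for row in data])
    reduced = PCA(n_components=n_components).fit_transform(smoothed)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)
```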
Correction: Dynamic Remodeling of Dendritic Arbors in GABAergic Interneurons of Adult Visual Cortex
Chronic in vivo imaging of fluorescently labeled neurons in adult mice reveals extension and retraction of dendrites in GABAergic non-pyramidal interneurons of the cerebral cortex.
Micro-spec: an Integrated Direct-detection Spectrometer for Far-infrared Space Telescopes
The far-infrared and submillimeter portions of the electromagnetic spectrum provide a unique view of the astrophysical processes present in the early universe. Our ability to fully explore this rich spectral region has been limited, however, by the size and cost of the cryogenic spectrometers required to carry out such measurements. Micro-Spec (µ-Spec) is a high-sensitivity, direct-detection spectrometer concept working in the 450–1000 µm wavelength range which will enable a wide range of flight missions that would otherwise be challenging due to the large size of current instruments with the required spectral resolution and sensitivity. The spectrometer design utilizes two internal antenna arrays, one for transmitting and one for receiving; superconducting microstrip transmission lines for power division and phase delay; and an array of microwave kinetic inductance detectors (MKIDs) to achieve these goals. The instrument will be integrated on an approximately 10 cm² silicon chip and can therefore become an important capability under the low-background conditions accessible from space and high-altitude platforms. In this paper, an optical design methodology for Micro-Spec is presented, with particular attention given to its two-dimensional diffractive region, where the light of different wavelengths is focused on the different detectors. The method is based on maximizing the instrument resolving power and minimizing the RMS phase error on the instrument focal plane. This two-step optimization can generate geometrical configurations given specific requirements on spectrometer size, operating spectral range, and performance. Two point designs with resolving powers of 260 and 520 and an RMS phase error of less than approximately 0.004 radians were developed for initial demonstration and will be the basis of future instruments with resolving powers up to about 1200.
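For reference, the two figures of merit being optimized are the resolving power and the focal-plane RMS phase error, with their standard definitions (not restated in the abstract):

```latex
\mathcal{R} = \frac{\lambda}{\Delta\lambda}, \qquad
\sigma_\phi = \sqrt{ \left\langle \left( \phi_i - \bar{\phi} \right)^2 \right\rangle },
```

so the point designs correspond to resolving powers of 260 and 520 with σ_φ below roughly 0.004 radians across the focal plane.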
Micro-Spec: An Ultra-Compact, High-Sensitivity Spectrometer for Far-Infrared and Sub-Millimeter Astronomy
High-performance, integrated spectrometers operating in the far-infrared and sub-millimeter promise to be powerful tools for the exploration of the epochs of reionization and initial galaxy formation. These devices, using high-efficiency superconducting transmission lines, can achieve the performance of a meter-scale grating spectrometer in an instrument implemented on a four-inch silicon wafer. Such a device, when combined with a cryogenic telescope in space, provides an enabling capability for studies of the early universe. Here, the optical design process for Micro-Spec (µ-Spec) is presented, with particular attention given to its two-dimensional diffractive region, where the light of different wavelengths is focused on the different detectors. The method is based on the stigmatization and minimization of the light path function in this bounded region, which results in an optimized geometrical configuration. A point design with an efficiency of approximately 90% has been developed for initial demonstration and can serve as the basis for future instruments. Design variations on this implementation are also discussed, which can lead to lower efficiencies due to diffractive losses in the multimode region.
Development of the State Optimism Measure
Background: Optimism, or positive expectations about the future, is associated with better health. It is commonly assessed as a trait, but it may change over time and circumstance. Accordingly, we developed a measure of state optimism.
Methods: An initial 29-item pool was generated based on literature reviews and expert consultations. It was administered to three samples: sample 1 was a general healthy population (n = 136), sample 2 was people with cardiac disease (n = 96), and sample 3 was persons recovering from problematic substance use (n = 265). Exploratory factor analysis and item-level descriptive statistics were used to select items to form a unidimensional State Optimism Measure (SOM). Confirmatory factor analysis (CFA) was performed to test fit.
Results: The selected seven SOM items demonstrated acceptable to high factor loadings on a single dominant factor (loadings: 0.64–0.93). There was high internal reliability across samples (Cronbach's alphas: 0.92–0.96) and strong convergent validity correlations in hypothesized directions. The SOM's correlations with other optimism measures indicate preliminary construct validity. CFA statistics indicated acceptable fit of the SOM model.
Conclusions: We developed a psychometrically sound measure of state optimism that can be used in various settings. Predictive and criterion validity will be tested in future studies.
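The internal-reliability figures quoted above are Cronbach's alpha values; as a worked illustration, a minimal computation of the standard formula (illustrative scores, not study data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / var(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Example: 4 respondents answering 7 Likert-type items (made-up scores).
scores = np.array([[4, 5, 4, 4, 5, 4, 5],
                   [2, 2, 3, 2, 2, 3, 2],
                   [5, 5, 5, 4, 5, 5, 4],
                   [3, 3, 2, 3, 3, 3, 3]])
print(cronbach_alpha(scores))  # high alpha: items co-vary strongly
```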