
    The Complete Calibration of the Color-Redshift Relation (C3R2) Survey: Survey Overview and Data Release 1

    A key goal of the Stage IV dark energy experiments Euclid, LSST and WFIRST is to measure the growth of structure with cosmic time from weak lensing analysis over large regions of the sky. Weak lensing cosmology will be challenging: in addition to highly accurate galaxy shape measurements, statistically robust and accurate photometric redshift (photo-z) estimates for billions of faint galaxies will be needed in order to reconstruct the three-dimensional matter distribution. Here we present an overview of and initial results from the Complete Calibration of the Color-Redshift Relation (C3R2) survey, designed specifically to calibrate the empirical galaxy color-redshift relation to the Euclid depth. These redshifts will also be important for the calibrations of LSST and WFIRST. The C3R2 survey is obtaining multiplexed observations with Keck (DEIMOS, LRIS, and MOSFIRE), the Gran Telescopio Canarias (GTC; OSIRIS), and the Very Large Telescope (VLT; FORS2 and KMOS) of a targeted sample of galaxies most important for the redshift calibration. We focus spectroscopic efforts on under-sampled regions of galaxy color space identified in previous work in order to minimize the number of spectroscopic redshifts needed to map the color-redshift relation to the required accuracy. Here we present the C3R2 survey strategy and initial results, including the 1283 high confidence redshifts obtained in the 2016A semester and released as Data Release 1. Comment: Accepted to ApJ. 11 pages, 5 figures. Redshifts can be found at http://c3r2.ipac.caltech.edu/c3r2_DR1_mrt.tx
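    The targeting strategy described above lends itself to a simple illustration: partition color space into cells, find cells that are populated photometrically but lack spectroscopic anchors, and spend spectroscopy where it calibrates the most galaxies. Below is a minimal sketch under that reading, assuming a plain grid binning of a mock two-color catalog; the survey's actual partition of color space is more sophisticated, and every number and column choice here is illustrative.

```python
# Minimal sketch of the targeting idea: find regions of galaxy color space
# that are populated photometrically but lack spectroscopic redshifts, then
# rank them by how many galaxies they would calibrate. The grid binning and
# mock catalog are illustrative assumptions, not the survey's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Mock catalog: two broadband colors per galaxy, plus a flag marking the
# small subset that already has a high-confidence spectroscopic redshift.
colors = rng.normal(size=(100_000, 2))          # e.g. (g-r, r-i)
has_specz = rng.random(100_000) < 0.01          # ~1% spectroscopic coverage

# Partition color space into a coarse grid of cells.
n_bins = 20
edges = [np.linspace(-3, 3, n_bins + 1)] * 2
all_counts, _ = np.histogramdd(colors, bins=edges)
spec_counts, _ = np.histogramdd(colors[has_specz], bins=edges)

# Under-sampled cells: populated by galaxies but with no spec-z anchor.
undersampled = (all_counts > 0) & (spec_counts == 0)
priority = np.where(undersampled, all_counts, 0)

# Rank cells so spectroscopy is spent where it calibrates the most galaxies.
order = np.dstack(np.unravel_index(np.argsort(priority, axis=None)[::-1],
                                   priority.shape))[0]
print(f"{undersampled.sum()} of {n_bins**2} cells lack spectroscopic coverage")
print("highest-priority cell (i, j):", order[0])
```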

    Joint Hybrid Precoder and Combiner Design for mmWave Spatial Multiplexing Transmission

    Millimeter-wave (mmWave) communications are considered a key technology for future 5G wireless networks because their bandwidth is orders of magnitude wider than current cellular bands. In this paper, we consider the problem of codebook-based joint analog-digital hybrid precoder and combiner design for spatial multiplexing transmission in a mmWave multiple-input multiple-output (MIMO) system. We propose to successively select an analog precoder and combiner pair for each data stream, aiming to maximize the channel gain while suppressing the interference between different data streams. After all analog precoder/combiner pairs have been determined, we obtain the effective baseband channel. The digital precoder and combiner are then computed from this effective baseband channel to further mitigate the interference and maximize the sum-rate. Simulation results demonstrate that our proposed algorithm exhibits prominent advantages in combating interference between data streams and offers a satisfactory performance improvement over existing codebook-based hybrid beamforming schemes.
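    The successive selection loop can be sketched with plain linear algebra: each stream greedily picks the analog codeword pair with the largest effective gain |w^H H f|, the channel is then deflated so later streams avoid the directions already taken, and an SVD on the resulting effective baseband channel stands in for the digital stage. In the sketch below, the DFT codebooks and the projection-based deflation are generic assumptions, not the paper's exact interference-suppression rule.

```python
# Hedged sketch of codebook-based successive analog pair selection for
# mmWave hybrid beamforming. The deflation step is a generic projection;
# the paper's interference-suppression rule may differ in detail.
import numpy as np

def dft_codebook(n_ant, n_words):
    """Unit-norm DFT beamforming codebook (columns are codewords)."""
    grid = np.outer(np.arange(n_ant), np.arange(n_words)) / n_words
    return np.exp(2j * np.pi * grid) / np.sqrt(n_ant)

def joint_pair_selection(H, F_cb, W_cb, n_streams):
    """Pick one analog precoder/combiner pair per data stream, greedily."""
    Nr, Nt = H.shape
    H_res = H.copy()
    F, W = [], []
    for _ in range(n_streams):
        gains = np.abs(W_cb.conj().T @ H_res @ F_cb)  # |w^H H f| for all pairs
        i, j = np.unravel_index(np.argmax(gains), gains.shape)
        w, f = W_cb[:, [i]], F_cb[:, [j]]
        W.append(w); F.append(f)
        # Deflate: project out the chosen directions so the next stream's
        # pair is selected (approximately) orthogonal to those already used.
        H_res = (np.eye(Nr) - w @ w.conj().T) @ H_res @ (np.eye(Nt) - f @ f.conj().T)
    F, W = np.hstack(F), np.hstack(W)
    # Digital stage on the effective baseband channel (here: plain SVD).
    H_eff = W.conj().T @ H @ F
    U, _, Vh = np.linalg.svd(H_eff)
    return F @ Vh.conj().T, W @ U  # hybrid precoder, hybrid combiner

# Example: 64x16 channel, 2 streams, 32-word codebooks at each end.
rng = np.random.default_rng(0)
H = (rng.normal(size=(64, 16)) + 1j * rng.normal(size=(64, 16))) / np.sqrt(2)
P, C = joint_pair_selection(H, dft_codebook(16, 32), dft_codebook(64, 32), 2)
```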

    Cosmological Horizons, Uncertainty Principle and Maximum Length Quantum Mechanics

    The cosmological particle horizon is the maximum measurable length in the Universe. The existence of such a maximum observable length scale implies a modification of the quantum uncertainty principle. Thus, due to the non-locality of quantum mechanics, the global properties of the Universe could produce a signature on the behaviour of local quantum systems. A Generalized Uncertainty Principle (GUP) that is consistent with the existence of such a maximum observable length scale $l_{max}$ is $\Delta x \Delta p \geq \frac{\hbar}{2}\,\frac{1}{1-\alpha \Delta x^2}$, where $\alpha = l_{max}^{-2} \simeq (H_0/c)^2$ ($H_0$ is the Hubble parameter and $c$ is the speed of light). In addition to the existence of a maximum measurable length $l_{max}=\frac{1}{\sqrt{\alpha}}$, this form of GUP also implies the existence of a minimum measurable momentum $p_{min}=\frac{3\sqrt{3}}{4}\hbar\sqrt{\alpha}$. Using an appropriate representation of the position and momentum quantum operators, we show that the spectrum of the one-dimensional harmonic oscillator becomes $\bar{\mathcal{E}}_n = 2n+1+\lambda_n \bar{\alpha}$, where $\bar{\mathcal{E}}_n \equiv 2E_n/\hbar\omega$ is the dimensionless, properly normalized $n^{th}$ energy level, $\bar{\alpha}$ is a dimensionless parameter with $\bar{\alpha} \equiv \alpha\hbar/m\omega$, and $\lambda_n \sim n^2$ for $n \gg 1$ (we show the full form of $\lambda_n$ in the text). For a typical vibrating diatomic molecule and $l_{max}=c/H_0$, we find $\bar{\alpha} \sim 10^{-77}$; therefore, for such a system, this effect is beyond the reach of current experiments. However, this effect could be more important in the early universe and could produce signatures in the primordial perturbation spectrum induced by quantum fluctuations of the inflaton field. Comment: 11 pages, 7 Figures. The Mathematica file that was used for the production of the Figures may be downloaded from http://leandros.physics.uoi.gr/maxlenqm
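    The quoted minimum momentum follows in one line from the stated GUP; as a worked check (using nothing beyond the formula above), minimize the right-hand side of the bound on $\Delta p$ over $\Delta x$:

```latex
% Worked check of p_min from the stated GUP (no external assumptions).
\begin{align}
  \Delta p &\geq \frac{\hbar}{2}\,
     \frac{1}{\Delta x\left(1-\alpha\,\Delta x^{2}\right)}, \\
  \frac{d}{d\,\Delta x}\left[\Delta x\left(1-\alpha\,\Delta x^{2}\right)\right]
     &= 1-3\alpha\,\Delta x^{2}=0
     \;\Longrightarrow\;
     \Delta x_{*}=\frac{1}{\sqrt{3\alpha}}, \\
  \Delta p_{min}
     &= \frac{\hbar}{2}
        \left[\Delta x_{*}\left(1-\tfrac{1}{3}\right)\right]^{-1}
      = \frac{\hbar}{2}\cdot\frac{3\sqrt{3\alpha}}{2}
      = \frac{3\sqrt{3}}{4}\,\hbar\sqrt{\alpha}.
\end{align}
```

    The same denominator also makes the maximum length manifest: the bound diverges as $\Delta x \to 1/\sqrt{\alpha} = l_{max}$.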

    Is "Better Data" Better than "Better Data Miners"? (On the Benefits of Tuning SMOTE for Defect Prediction)

    We report and fix an important systematic error in prior studies that ranked classifiers for software analytics. Those studies did not (a) assess classifiers on multiple criteria and did not (b) study how variations in the data affect the results. Hence, this paper applies (a) multi-criteria tests while (b) fixing the weaker regions of the training data (using SMOTUNED, a self-tuning version of SMOTE). This approach leads to dramatic improvements in software defect prediction. When applied in a 5*5 cross-validation study of 3,681 JAVA classes (containing over a million lines of code) from open source systems, SMOTUNED increased AUC and recall by 60% and 20%, respectively. These improvements are independent of the classifier used to predict quality. The same pattern of improvement was observed when SMOTE and SMOTUNED were compared against the most recent class imbalance technique. In conclusion, for software analytics tasks like defect prediction, (1) data pre-processing can be more important than classifier choice, (2) ranking studies are incomplete without such pre-processing, and (3) SMOTUNED is a promising candidate for pre-processing. Comment: 10 pages + 2 references. Accepted to the International Conference on Software Engineering (ICSE), 201
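    The core move here, tuning the pre-processor against the target metric instead of using SMOTE's defaults, can be sketched with off-the-shelf tools. The snippet below grid-searches only SMOTE's neighbor count by cross-validated AUC on mock imbalanced data; SMOTUNED itself searches more of SMOTE's parameters with a more powerful optimizer, so treat this as an illustration of the idea, not a reimplementation.

```python
# Minimal sketch of "tune the pre-processor, not (only) the learner":
# grid-search SMOTE's k_neighbors by cross-validated AUC before training a
# fixed classifier. Data and the parameter grid are illustrative.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Imbalanced mock data standing in for defect/no-defect class labels.
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("clf", RandomForestClassifier(random_state=0)),
])

# Tune the *pre-processor*: vary how many neighbors SMOTE interpolates over.
search = GridSearchCV(
    pipe,
    param_grid={"smote__k_neighbors": [1, 3, 5, 7, 11]},
    scoring="roc_auc",
    cv=5,
)
search.fit(X, y)
print("best k_neighbors:", search.best_params_, "AUC:", search.best_score_)
```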

    Discussion on "Sparse graphs using exchangeable random measures" by F. Caron and E. B. Fox

    Discussion on "Sparse graphs using exchangeable random measures" by F. Caron and E. B. Fox. In this discussion we contribute to the analysis of the GGP model as compared to the Erdos-Renyi (ER) and preferential attachment (AB) models, using measures such as the number of connected components, the global clustering coefficient, the assortativity coefficient, and the share of nodes in the core. Comment: 2 pages, 1 figure
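    The comparison statistics are all standard and cheap to reproduce; a sketch with networkx follows. It covers only the ER and AB baselines (sampling the GGP model would require the exchangeable-random-measure construction from the paper), graph sizes are placeholders, and "core" is read here as the 2-core, which is one reasonable interpretation.

```python
# Sketch of the summary statistics used in the model comparison: number of
# connected components, global clustering coefficient, degree assortativity,
# and share of nodes in the core (taken here as the 2-core, an assumption).
import networkx as nx

models = {
    "Erdos-Renyi (ER)": nx.erdos_renyi_graph(1000, 0.005, seed=0),
    "pref. attachment (AB)": nx.barabasi_albert_graph(1000, 3, seed=0),
}

for name, G in models.items():
    core_share = len(nx.k_core(G, k=2)) / G.number_of_nodes()
    print(name,
          "| components:", nx.number_connected_components(G),
          "| clustering:", round(nx.transitivity(G), 4),
          "| assortativity:", round(nx.degree_assortativity_coefficient(G), 4),
          "| core share:", round(core_share, 4))
```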

    Magnetic field generation in finite beam plasma system

    For finite systems, boundaries can introduce remarkable novel features. A well known example is the Casimir effect [1, 2] observed in quantum electrodynamic systems. In classical systems, too, novel effects associated with finite boundaries have been observed, for example the surface plasmon mode [3] that appears when the plasma has a finite extension. In this work, a novel instability associated with the finite transverse size of a beam flowing through a plasma is shown to exist. This instability leads to distinct characteristic features of the magnetic field that it generates. For example, in contrast to the well known unstable Weibel mode of a beam plasma system, which generates magnetic field at the skin depth scale, this instability generates magnetic field at the scale length of the transverse beam dimension [4]. The existence of this new instability is demonstrated by analytical arguments and by simulations conducted with a variety of Particle-In-Cell (PIC) codes (e.g. OSIRIS, EPOCH, PICPSI). Two-fluid simulations have also been conducted which confirm the observations. Furthermore, laboratory experiments on laser plasma systems also provide evidence of such an instability mechanism at work.
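    The scale contrast drawn above, the electron skin depth for Weibel versus the transverse beam size for the new mode, is easy to make concrete. The back-of-envelope comparison below uses assumed, laser-plasma-like parameters; neither number comes from the paper.

```python
# Back-of-envelope contrast between the two magnetic-field scales mentioned
# above: the electron skin depth c/omega_pe (Weibel) versus the transverse
# beam size (the finite-beam instability). Density and beam width are
# illustrative assumptions, not values from the paper.
import math

eps0, e, m_e, c = 8.854e-12, 1.602e-19, 9.109e-31, 2.998e8  # SI units

n_e = 1e25            # electron density [m^-3], assumed (laser-plasma scale)
beam_width = 10e-6    # transverse beam size [m], assumed

omega_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))  # plasma frequency [rad/s]
skin_depth = c / omega_pe                        # Weibel field scale [m]

print(f"skin depth      : {skin_depth:.3e} m")
print(f"beam width      : {beam_width:.3e} m")
print(f"scale separation: {beam_width / skin_depth:.1f}x")
```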

    Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case

    The analysis in Part I revealed interesting properties of subgradient learning algorithms in the context of stochastic optimization when gradient noise is present. These algorithms are used when the risk functions are non-smooth and involve non-differentiable components. They have long been recognized as slowly converging methods. However, Part I revealed that the rate of convergence becomes linear for stochastic optimization problems, with the error iterate converging at an exponential rate $\alpha^i$ to within an $O(\mu)$-neighborhood of the optimizer, for some $\alpha \in (0,1)$ and small step-size $\mu$. The conclusion was established under weaker assumptions than the prior literature and, moreover, several important problems (such as LASSO, SVM, and Total Variation) were shown to satisfy these weaker assumptions automatically (but not the previously used conditions from the literature). These results revealed that sub-gradient learning methods have more favorable behavior than originally thought when used to enable continuous adaptation and learning. The results of Part I were exclusive to single-agent adaptation. The purpose of the current Part II is to examine the implications of these discoveries when a collection of networked agents employs subgradient learning as its cooperative mechanism. The analysis will show that, despite the coupled dynamics that arise in a networked scenario, the agents are still able to attain linear convergence in the stochastic case; they are also able to reach agreement within $O(\mu)$ of the optimizer.
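    A minimal sketch of the networked setting helps fix ideas: each agent takes a constant step-size subgradient step on a non-smooth local risk and then averages with its neighbors. The combination rule below is plain doubly stochastic averaging over a ring, and the local risk $E\|w - d_k\|_1$ is a stand-in; the paper's diffusion strategy and risk functions may differ. With small constant $\mu$, the error decays geometrically and then hovers in an $O(\mu)$ neighborhood, which is the behavior the analysis establishes.

```python
# Sketch of multi-agent stochastic subgradient learning: each agent takes a
# constant step-size subgradient step on a non-smooth local risk E|w - d|_1
# (d = common target + noise), then averages with its ring neighbors via a
# doubly stochastic matrix. Network, risk, and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_agents, dim, mu, T = 10, 5, 0.01, 4000
w_true = rng.normal(size=dim)                 # common minimizer

# Ring network with self-loops; A is doubly stochastic.
A = np.zeros((n_agents, n_agents))
for k in range(n_agents):
    A[k, k] = A[k, (k + 1) % n_agents] = A[k, (k - 1) % n_agents] = 1 / 3

W = np.zeros((n_agents, dim))                 # one iterate per agent
for t in range(T):
    data = w_true + rng.laplace(size=(n_agents, dim))  # noisy local samples
    subgrad = np.sign(W - data)               # subgradient of |w - d|_1
    W = A @ (W - mu * subgrad)                # adapt, then combine
    if t % 1000 == 0:
        err = np.linalg.norm(W - w_true) / np.sqrt(n_agents)
        print(f"iter {t:5d}  avg error {err:.4f}")
# Error decays geometrically, then hovers in an O(mu) neighborhood of w_true.
```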

    Solar wind collisional heating

    To properly describe heating in weakly collisional turbulent plasmas such as the solar wind, inter-particle collisions should be taken into account. Collisions can convert ordered energy into heat by means of irreversible relaxation towards thermal equilibrium. Recently, Pezzi et al. (Phys. Rev. Lett., vol. 116, 2016, p. 145001) showed that the plasma collisionality is enhanced by the presence of fine structures in velocity space. Here, the analysis is extended by directly comparing the effects of the fully nonlinear Landau operator and a linearized Landau operator. By focusing on the relaxation towards equilibrium of an out-of-equilibrium distribution function in a homogeneous force-free plasma, we point out that retaining nonlinearities in the collisional operator is important for quantifying collisional effects. Although several characteristic times associated with the dissipation of different phase-space structures are recovered with both the nonlinear and the linearized operators, the influence of these times differs in the two cases. With the linearized operator, the recovered characteristic times are systematically larger than with the fully nonlinear operator, suggesting that fine velocity structures are dissipated more slowly if nonlinearities are neglected in the collisional operator.
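    The qualitative point, that finer velocity-space structure relaxes on shorter characteristic times, can be illustrated with a toy linear model: a Lenard-Bernstein-type operator damps the $m$-th Hermite coefficient of $f(v)$ at rate $\nu m$, so each velocity-space "mode" has its own time $\tau_m = 1/(\nu m)$. The snippet below just evaluates that hierarchy; it is a stand-in far simpler than either Landau operator actually compared in the paper, and it cannot capture the nonlinear effects the paper is about.

```python
# Toy illustration of "finer velocity-space structure => faster relaxation"
# with a Lenard-Bernstein-type linear operator, whose m-th Hermite mode
# decays as exp(-nu*m*t). A stand-in model, not either Landau operator.
import numpy as np

nu = 0.1                 # collision frequency (arbitrary units)
m = np.array([1, 10, 30])  # Hermite mode number ~ velocity-space fineness

for t in np.linspace(0, 50, 6):
    c = np.exp(-nu * m * t)  # surviving amplitude of each mode at time t
    print(f"t={t:5.1f}  m=1: {c[0]:.3f}   m=10: {c[1]:.3e}   m=30: {c[2]:.3e}")

# Characteristic times tau_m = 1/(nu*m): sharper structures die faster.
print("tau_m:", dict(zip(m.tolist(), np.round(1 / (nu * m), 2).tolist())))
```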

    Locus model for space-time fabric and quantum indeterminacies

    A simple locus model for the space-time fabric is presented and compared with quantum foam and random walk models. The induced indeterminacies in momentum are calculated, and it is shown that these space-time fabric indeterminacies are, in most cases, negligible compared with the quantum mechanical indeterminacies. This result restricts the possibilities of an experimental observation of the space-time fabric.

    Some Remarks about the Complexity of Epidemics Management

    Recent outbreaks of Ebola, H1N1 and other infectious diseases have shown that the assumptions underlying the established theory of epidemics management are too idealistic. Improving the procedures and organizations involved in fighting epidemics requires extended models of epidemics management. The necessary extensions consist of a representation of the management loop and of the potential frictions influencing that loop. The effects of the non-deterministic frictions can be taken into account by including measures of robustness and risk in the assessment of management options. Thus, besides the increased structural complexity resulting from the model extensions, the computational complexity of the task of epidemics management, interpreted as an optimization problem, is increased as well. This is a serious obstacle for analyzing the model and may require additional pre-processing to simplify the analysis. The paper closes with an outlook discussing some forthcoming problems.