CfAIR2: Near Infrared Light Curves of 94 Type Ia Supernovae
CfAIR2 is a large homogeneously reduced set of near-infrared (NIR) light
curves for Type Ia supernovae (SN Ia) obtained with the 1.3m Peters Automated
InfraRed Imaging TELescope (PAIRITEL). This data set includes 4607 measurements
of 94 SN Ia and 4 additional SN Iax observed from 2005-2011 at the Fred
Lawrence Whipple Observatory on Mount Hopkins, Arizona. CfAIR2 includes JHKs
photometric measurements for 88 normal and 6 spectroscopically peculiar SN Ia
in the nearby universe, with a median redshift of z~0.021 for the normal SN Ia.
CfAIR2 data span the range from -13 days to +127 days from B-band maximum. More
than half of the light curves begin before the time of maximum and the coverage
typically contains ~13-18 epochs of observation, depending on the filter. We
present extensive tests that verify the fidelity of the CfAIR2 data pipeline,
including comparison to the excellent data of the Carnegie Supernova Project.
CfAIR2 contributes to a firm local anchor for supernova cosmology studies in
the NIR. Because SN Ia are more nearly standard candles in the NIR and are less
vulnerable to the vexing problems of extinction by dust, CfAIR2 will help the
supernova cosmology community develop more precise and accurate extragalactic
distance probes to improve our knowledge of cosmological parameters, including
dark energy and its potential time variation.
Comment: 31 pages, 15 figures, 10 tables. Accepted to ApJS. v2 modified to
more closely match journal version
Why is it hard to beat for Longest Common Weakly Increasing Subsequence?
The Longest Common Weakly Increasing Subsequence problem (LCWIS) is a variant
of the classic Longest Common Subsequence problem (LCS). Both problems can be
solved with simple quadratic time algorithms. A recent line of research led to
a number of matching conditional lower bounds for LCS and other related
problems. However, the status of LCWIS remained open.
In this paper we show that LCWIS cannot be solved in strongly subquadratic
time unless the Strong Exponential Time Hypothesis (SETH) is false.
The ideas we developed can also be used to obtain a lower bound based on
the safer assumption of NC-SETH, i.e., a version of SETH stated for NC
circuits instead of the less expressive CNF formulas.
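The simple quadratic-time algorithm the abstract mentions can be sketched as the following dynamic program (our own illustrative code, processing one symbol of the first sequence per pass; the abstract does not specify an implementation):

```python
def lcwis(a, b):
    """Length of the longest common weakly increasing subsequence of a and b,
    in O(len(a) * len(b)) time."""
    m = len(b)
    # dp[j] = length of the best common weakly increasing subsequence of the
    # processed prefix of a and b[:j+1] that ends with b[j]
    dp = [0] * m
    for x in a:
        best = 0  # best dp[k] over earlier k with b[k] <= x (weakly increasing)
        for j in range(m):
            prev = dp[j]  # value before this pass updates index j,
                          # so the same position of a is never used twice
            if b[j] == x:
                dp[j] = max(dp[j], best + 1)
            if b[j] <= x:
                best = max(best, prev)
    return max(dp, default=0)
```

The SETH-based lower bound says that, barring a breakthrough, no algorithm improves on this quadratic running time by a polynomial factor.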
Type Ia supernova Hubble diagram with near-infrared and optical observations
The main goal of this paper is to test whether the NIR peak magnitudes of SNe
Ia could be accurately estimated with only a single observation obtained close
to maximum light, provided the time of B band maximum and the optical stretch
parameter are known. We obtained multi-epoch UBVRI and single-epoch J and H
photometric observations of 16 SNe Ia in the redshift range z=0.037-0.183,
doubling the leverage of the current SN Ia NIR Hubble diagram and the number of
SNe beyond redshift 0.04. This sample was analyzed together with 102 NIR and
458 optical light curves (LCs) of normal SNe Ia from the literature. The
analysis of 45 well-sampled NIR LCs shows that a single template accurately
describes them if its time axis is stretched with the optical stretch
parameter. This allows us to estimate the NIR peak magnitudes even with one
observation obtained within 10 days from B-band maximum. We find that the NIR
Hubble residuals show weak correlation with DM_15 and E(B-V), and for the first
time we report a possible dependence on the J_max-H_max color. The intrinsic
NIR luminosity scatter of SNe Ia is estimated to be around 0.10 mag, which is
smaller than what can be derived for a similarly heterogeneous sample at
optical wavelengths. In conclusion, we find that SNe Ia are at least as good
standard candles in the NIR as in the optical. We show that it is feasible to
extend the NIR SN Ia Hubble diagram to z=0.2 with very modest sampling of the
NIR LCs, if complemented by well-sampled optical LCs. Our results suggest that
the most efficient way to extend the NIR Hubble diagram to high redshift would
be to obtain a single observation close to the NIR maximum. (abridged)
Comment: 39 pages, 15 figures, accepted by A&
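The estimation procedure the abstract describes, i.e., stretching a NIR template's time axis by the optical stretch parameter and reading off the offset from peak at the observed phase, can be sketched as follows. The template values below are made up for illustration; they are not the template fitted in the paper:

```python
import numpy as np

# Hypothetical NIR template: magnitudes below peak versus phase in days
# from B-band maximum (illustrative numbers only, not the paper's fit).
PHASE = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
DELTA_MAG = np.array([0.60, 0.15, 0.00, 0.20, 0.50])

def estimate_peak_mag(m_obs, t_obs, s):
    """Estimate the NIR peak magnitude from a single observation m_obs at
    phase t_obs (days from B-band maximum), given the optical stretch
    parameter s: the template's time axis is stretched by s, so its value
    at the observed phase is looked up at t_obs / s and subtracted."""
    return m_obs - np.interp(t_obs / s, PHASE, DELTA_MAG)
```

An observation at B-band maximum (t_obs = 0) returns the observed magnitude unchanged, since the template offset at peak is zero.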
Lagrangian Data-Driven Reduced Order Modeling of Finite Time Lyapunov Exponents
There are two main strategies for improving the projection-based reduced
order model (ROM) accuracy: (i) improving the ROM, i.e., adding new terms to
the standard ROM; and (ii) improving the ROM basis, i.e., constructing ROM
bases that yield more accurate ROMs. In this paper, we use the latter. We
propose new Lagrangian inner products that we use together with Eulerian and
Lagrangian data to construct new Lagrangian ROMs. We show that the new
Lagrangian ROMs are orders of magnitude more accurate than the standard
Eulerian ROMs, i.e., ROMs that use standard Eulerian inner product and data to
construct the ROM basis. Specifically, for the quasi-geostrophic equations, we
show that the new Lagrangian ROMs are more accurate than the standard Eulerian
ROMs in approximating not only Lagrangian fields (e.g., the finite time
Lyapunov exponent (FTLE)), but also Eulerian fields (e.g., the streamfunction).
We emphasize that the new Lagrangian ROMs do not employ any closure modeling to
model the effect of discarded modes (which is standard procedure for
low-dimensional ROMs of complex nonlinear systems). Thus, the dramatic increase
in the new Lagrangian ROMs' accuracy is entirely due to the novel Lagrangian
inner products used to build the Lagrangian ROM basis.
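The basis construction can be sketched generically as proper orthogonal decomposition (POD) under a user-supplied inner product, via the method of snapshots. This is our own generic sketch: the paper's approach would supply a weight matrix M encoding its novel Lagrangian inner product, built from Lagrangian data, in place of the usual Eulerian one.

```python
import numpy as np

def pod_basis(snapshots, M, r):
    """r-dimensional POD basis under the inner product <u, v> = u^T M v,
    computed by the method of snapshots.  `snapshots` holds one solution
    snapshot per column; the returned columns are M-orthonormal modes."""
    gram = snapshots.T @ M @ snapshots            # snapshot Gram matrix
    vals, vecs = np.linalg.eigh(gram)
    order = np.argsort(vals)[::-1][:r]            # r largest eigenvalues
    basis = snapshots @ vecs[:, order] / np.sqrt(vals[order])
    return basis
```

With M set to the identity this reduces to standard (Eulerian, L^2-like) POD; swapping in a different M changes which flow features the leading modes capture, which is the lever the paper pulls.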
Sketching, Streaming, and Fine-Grained Complexity of (Weighted) LCS
We study sketching and streaming algorithms for the Longest Common Subsequence problem (LCS) on strings of small alphabet size |Sigma|. For the problem of deciding whether the LCS of strings x,y has length at least L, we obtain a sketch size and streaming space usage of O(L^{|Sigma| - 1} log L). We also prove matching unconditional lower bounds.
As an application, we study a variant of LCS where each alphabet symbol is equipped with a weight that is given as input, and the task is to compute a common subsequence of maximum total weight. Using our sketching algorithm, we obtain an O(min{nm, n + m^{|Sigma|}})-time algorithm for this problem, on strings x,y of length n,m, with n >= m. We prove optimality of this running time up to lower order factors, assuming the Strong Exponential Time Hypothesis.
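The baseline the O(min{nm, n + m^{|Sigma|}}) bound is measured against is the classic quadratic LCS dynamic program with weighted matches. A minimal sketch of that baseline (our own code, not the paper's sketching-based algorithm):

```python
def weighted_lcs(x, y, w):
    """Maximum total weight of a common subsequence of x and y, where w maps
    each alphabet symbol to a nonnegative weight.  Classic O(nm) dynamic
    program: a match on symbol c contributes w[c] instead of 1."""
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            if x[i - 1] == y[j - 1]:
                dp[i][j] = max(dp[i][j], dp[i - 1][j - 1] + w[x[i - 1]])
    return dp[n][m]
```

With all weights equal to 1 this recovers ordinary LCS length; the paper's contribution is beating nm when the alphabet is small, and showing that bound tight under SETH.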