Millimeter line observations toward four local galaxies
We present results of millimeter line observations toward four local gas-rich
galaxies (NGC 3079, NGC 4258, NGC 6240 and VII Zw 31) with the IRAM 30 meter
millimeter telescope. More than 33 lines in these four sources were detected,
including the commonly used dense gas tracers (HCN 1-0, HCO+ 1-0, and C2H 1-0, etc.) and their isotopic species. HCN (1-0) and HCO+ (1-0) are detected for the first time in NGC 4258. The optical depths of HCN 1-0 and HCO+ 1-0 in NGC 4258, estimated from the detected isotopic lines, are 4.1 and 2.6, respectively. HC3N, which requires high volume density and high temperature to excite, was detected in NGC 6240. The high HCO+/HCN ratios in NGC 4258 and NGC 6240 imply that this ratio might not be a perfect diagnostic for separating AGN and starburst environments, owing to contamination/combination of both processes. The low HC3N/HCN line ratios (less than 0.15) in NGC 4258 and NGC 6240, together with the non-detection of the HC3N line in NGC 3079 and VII Zw 31, indicate that these four galaxies are HC3N-poor. The variation of the fractional abundance of CN among different types of galaxies is large. Comment: 15 pages, 13 figures; accepted for publication in MNRAS
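The optical-depth estimates quoted above rest on the standard radiative-transfer relation between a main line and its rarer isotopologue: if both lines share the same excitation temperature and beam filling factor, the observed brightness ratio depends only on the main-line optical depth and the assumed isotopic abundance ratio. A minimal Python sketch of that inversion follows; the function name and the abundance and brightness ratios in the example are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def optical_depth_from_isotopic_ratio(line_ratio, abundance_ratio=50.0):
    """Solve for the optical depth tau of the main line, given the observed
    main-to-isotopologue brightness-temperature ratio, assuming both lines
    share the same excitation temperature and beam filling factor:
        T_main / T_iso = (1 - exp(-tau)) / (1 - exp(-tau / R)),
    where R is the assumed isotopic abundance ratio."""
    def residual(tau):
        return (1.0 - np.exp(-tau)) / (1.0 - np.exp(-tau / abundance_ratio)) - line_ratio
    # The ratio runs from R (optically thin) down toward 1 (very optically
    # thick), so the root is bracketed between these two limits.
    return brentq(residual, 1e-6, 100.0)

# Illustrative numbers only: a main-to-isotopologue ratio of ~12 with R = 50
# gives tau ~ 4, i.e. an optically thick main line.
print(optical_depth_from_isotopic_ratio(12.0, abundance_ratio=50.0))
```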
Multivariate varying coefficient model for functional responses
Motivated by recent work studying massive imaging data in the neuroimaging
literature, we propose multivariate varying coefficient models (MVCM) for
modeling the relation between multiple functional responses and a set of
covariates. We develop several statistical inference procedures for MVCM and
systematically study their theoretical properties. We first establish the weak
convergence of the local linear estimate of coefficient functions, as well as
its asymptotic bias and variance, and then we derive asymptotic bias and mean
integrated squared error of smoothed individual functions and their uniform
convergence rate. We establish the uniform convergence rate of the estimated
covariance function of the individual functions and its associated eigenvalue
and eigenfunctions. We propose a global test for linear hypotheses of varying
coefficient functions, and derive its asymptotic distribution under the null
hypothesis. We also propose a simultaneous confidence band for each individual
effect curve. We conduct Monte Carlo simulation to examine the finite-sample
performance of the proposed procedures. We apply MVCM to investigate the
development of white matter diffusivities along the genu tract of the corpus
callosum in a clinical study of neurodevelopment. Comment: Published at http://dx.doi.org/10.1214/12-AOS1045 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
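The local linear estimate of the coefficient functions mentioned above can be sketched as a kernel-weighted least-squares fit in which each coefficient function is approximated linearly around the evaluation point. The Python sketch below assumes functional responses observed on a common grid and uses an Epanechnikov kernel; the function name, the fixed bandwidth, and the small ridge term are illustrative choices, not the estimator exactly as developed in the paper.

```python
import numpy as np

def local_linear_vcm(y, x, grid, s0, h):
    """Local linear estimate of the varying coefficient functions B(s0) in
    y_i(s_j) = x_i' B(s_j) + eps_i(s_j), via kernel-weighted least squares.
    y: (n, m) responses on a common grid; x: (n, p) covariates;
    grid: (m,) observation points; s0: evaluation point; h: bandwidth."""
    n, m = y.shape
    p = x.shape[1]
    u = (grid - s0) / h
    k = np.maximum(0.75 * (1.0 - u ** 2), 0.0)          # Epanechnikov kernel
    # Local linear design: regressors are x_i and x_i * (s_j - s0)
    X = np.repeat(x, m, axis=0)                          # row i*m + j -> x_i
    d = np.tile(grid - s0, n)[:, None]                   # row i*m + j -> s_j - s0
    Z = np.concatenate([X, X * d], axis=1)
    w = np.tile(k, n)                                    # kernel weight per row
    WZ = Z * w[:, None]
    beta = np.linalg.solve(Z.T @ WZ + 1e-10 * np.eye(2 * p), WZ.T @ y.reshape(-1))
    return beta[:p]                                      # intercept block = B(s0)

# Synthetic check: n = 50 curves, m = 40 grid points, p = 2 covariates
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 40)
x = np.column_stack([np.ones(50), rng.normal(size=50)])
B = np.vstack([np.sin(np.pi * grid), grid ** 2])         # true coefficient functions
y = x @ B + 0.1 * rng.normal(size=(50, 40))
print(local_linear_vcm(y, x, grid, s0=0.5, h=0.15))      # roughly [1.0, 0.25]
```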
DeepRebirth: Accelerating Deep Neural Network Execution on Mobile Devices
Deploying deep neural networks on mobile devices is a challenging task.
Current model compression methods, such as matrix decomposition, effectively
reduce the deployed model size, but still cannot satisfy real-time processing
requirements. This paper first shows that the major obstacle is the excessive
execution time of non-tensor layers such as pooling and normalization, which
have no tensor-like trainable parameters. This motivates us to design a novel
acceleration framework, DeepRebirth, which "slims" existing consecutive and
parallel non-tensor and tensor layers. The layer slimming is executed on
different substructures: (a) streamline slimming, which merges consecutive
non-tensor and tensor layers vertically; (b) branch slimming, which merges
non-tensor and tensor branches horizontally. The proposed optimizations
significantly accelerate model execution and also greatly reduce run-time
memory cost, since the slimmed model architecture contains fewer hidden
layers. To minimize accuracy loss, the parameters in the newly generated
layers are learned with layer-wise fine-tuning, based on both theoretical
analysis and empirical verification. In our experiments, DeepRebirth achieves
more than 3x speed-up and 2.5x run-time memory saving on GoogLeNet with only a
0.4% drop in top-5 accuracy on ImageNet. Furthermore, combined with other
model compression techniques, DeepRebirth achieves an average inference time
of 65 ms on the CPU of a Samsung Galaxy S6 with 86.5% top-5 accuracy, 14%
faster than SqueezeNet, which only reaches 80.5% top-5 accuracy. Comment: AAAI 2018
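As an illustration of the streamline-slimming idea of absorbing a consecutive non-tensor layer into the adjacent tensor layer, the sketch below folds a batch-normalization layer into the preceding convolution so the pair runs as a single tensor layer at inference time. This standard conv/batch-norm fusion is only an analogy for the operation described in the abstract; DeepRebirth itself also absorbs other non-tensor layers (e.g. pooling and LRN) and relearns the merged parameters with layer-wise fine-tuning.

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d into the preceding Conv2d for inference:
    y = gamma * (W*x + b - mean) / sqrt(var + eps) + beta
      = (gamma/std) * W * x + (gamma/std) * (b - mean) + beta."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      kernel_size=conv.kernel_size, stride=conv.stride,
                      padding=conv.padding, dilation=conv.dilation,
                      groups=conv.groups, bias=True)
    with torch.no_grad():
        std = torch.sqrt(bn.running_var + bn.eps)
        scale = bn.weight / std
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        conv_bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((conv_bias - bn.running_mean) * scale + bn.bias)
    return fused

# Sanity check: the fused layer reproduces conv -> bn in eval mode
conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
bn.eval()
x = torch.randn(1, 3, 16, 16)
print(torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5))
```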