Study on the feasibility of V/STOL concepts for short haul transport aircraft
Weight standardization, tilt and stopped rotor optimization, and noise sensitivity analysis of V/STOL concepts for short-haul transport aircraft
Multiple-locus heterozygosity, physiology and growth at two different stages in the life cycle of the Chilean oyster Ostrea chilensis
A random sample of 150 individuals from a laboratory-produced cohort of Ostrea chilensis Philippi, 1845 was taken at 10 and 36 mo of age to estimate physiological variables and individual heterozygosity at 4 loci (Lap, Pgi, Pgm and Ca). Juveniles of 10 mo of age showed a mean D value of 0.134 (p > 0.05), and positive correlations of both oyster size (p < 0.05) and oxygen consumption rate (p < 0.05) with multiple-locus heterozygosity (MLH) were found. The K2 value (standardized net growth efficiency) was positively correlated (p < 0.05) with MLH. At 36 mo a heterozygote deficiency was present, with a mean value D = -0.431 (p < 0.05). No relationship between standard dry weight and MLH was found, and the scope for growth was negatively correlated with MLH. The oxygen consumption and excretion rates also increased in larger individuals. The slopes of filtration and excretion rates against MLH were negative but not statistically significant, whereas ingestion and absorption rates decreased significantly (p < 0.05) with increasing heterozygosity. These results suggest that within sexually immature individuals of O. chilensis a positive correlation between growth rate and MLH can be found, while in adults the higher energy allocation to reproduction precludes the detection of this relationship.
Effect of litter birth weight standardization before first suckling on colostrum intake, passive immunization, pre-weaning survival and growth of the piglets
Within-litter variation in birth weight is a relevant factor in pig production. This study aimed at comparing pre-weaning mortality, colostrum intake (CI), passive immunization, and growth of piglets from litters of uniform (UN) or heterogeneous (HET) birth weights. The study included 52 multiparous sows (Large White × Landrace) and their litters. Two types of litters were constituted on the basis of birth weight, namely UN or HET (the control group), using piglets from two to three sows farrowing at approximately the same time. At birth, piglets were weighed, identified, and placed in a box under an IR lamp. At the end of farrowing, piglets were re-weighed and allotted to groups UN or HET (12 per litter), with average weights of 1394 and 1390 g, respectively, and allowed to suckle (time 0). They were re-weighed 24 h later to estimate CI and the sows' colostrum yield. At time 0, the average intra-litter CV (%) in weight of experimental litters was 9.3 ± 0.8 (SEM) in group UN and 27.8 ± 0.7 in group HET (P < 0.001). At 2 days of age, blood samples were taken from the piglets of 11 litters (five UN and six HET) and serum immunoglobulin G (IgG) contents were determined. Mean CI per piglet per litter was similar in both groups, that is, 415 ± 13 g in UN and 395 ± 13 g in HET (P = 0.28), but was less variable in UN litters (CV = 22.4 ± 2 vs 36.0 ± 2%, P < 0.001). The IgG levels at 2 days of age were higher in piglets from UN litters (22.5 ± 0.8 vs 18.4 ± 0.7 g/l; P < 0.001), but the CV of IgG levels did not differ between litter types (P = 0.46). Mortality up to 21 days of age was lower in UN litters (6.4 vs 11.9%, P = 0.03). The BW at 21 days did not differ between litter types (P = 0.25) but was less variable among piglets from UN litters (CV: 17.1 ± 1.3 vs 25.7 ± 1.3%; P = 0.01). These results reveal that CI is less variable and mortality is lower in piglets from litters of UN birth weight, and suggest that genetic improvement to decrease within-litter variation in birth weight could promote more homogeneous CI and thus contribute to reducing piglet mortality.
Is Normalization Indispensable for Multi-domain Federated Learning?
Federated learning (FL) enhances data privacy with collaborative in-situ
training on decentralized clients. Nevertheless, FL encounters challenges due
to non-independent and identically distributed (non-i.i.d.) data, leading to
potential performance degradation and hindered convergence. While prior studies
predominantly addressed the issue of skewed label distribution, our research
addresses a crucial yet frequently overlooked problem known as multi-domain FL.
In this scenario, clients' data originate from diverse domains with distinct
feature distributions, as opposed to label distributions. To address the
multi-domain problem in FL, we propose a novel method called Federated learning
Without normalizations (FedWon). FedWon draws inspiration from the observation
that batch normalization (BN) faces challenges in effectively modeling the
statistics of multiple domains, while alternative normalization techniques
possess their own limitations. In order to address these issues, FedWon
eliminates all normalizations in FL and reparameterizes convolution layers with
scaled weight standardization. Through comprehensive experimentation on four
datasets and four models, our results demonstrate that FedWon surpasses both
FedAvg and the current state-of-the-art method (FedBN) across all experimental
setups, achieving notable improvements of over 10% in certain domains.
Furthermore, FedWon is versatile for both cross-silo and cross-device FL,
exhibiting strong performance even with a batch size as small as 1, thereby
catering to resource-constrained devices. Additionally, FedWon effectively
tackles the challenge of skewed label distribution.
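The reparameterization FedWon builds on can be illustrated in plain NumPy. The sketch below shows scaled weight standardization in the general sense used by normalizer-free networks: each output channel's kernel is standardized over its fan-in and rescaled so that activation variance is roughly preserved. The `gain` and `eps` values, and the assumption of a 4-D `(out_ch, in_ch, kh, kw)` weight layout, are illustrative choices, not details taken from the FedWon paper.

```python
import numpy as np

def scaled_weight_standardization(w, gain=1.0, eps=1e-6):
    """Standardize convolution weights per output channel over the fan-in,
    then scale by 1/sqrt(fan_in) so each standardized filter has unit
    squared norm (a variance-preserving choice, as in normalizer-free nets).

    w: array of shape (out_ch, in_ch, kh, kw).
    """
    out_ch = w.shape[0]
    flat = w.reshape(out_ch, -1)               # one row per output channel
    fan_in = flat.shape[1]                     # in_ch * kh * kw
    mu = flat.mean(axis=1, keepdims=True)      # per-channel mean
    var = flat.var(axis=1, keepdims=True)      # per-channel variance
    w_hat = (flat - mu) / np.sqrt(var * fan_in + eps)
    return (gain * w_hat).reshape(w.shape)

# Each standardized filter now has zero mean and unit squared norm.
w = np.random.default_rng(0).normal(size=(4, 3, 3, 3))
w_std = scaled_weight_standardization(w)
```

Because the statistics are computed from the weights rather than from activation batches, the operation is identical on every client regardless of its local data distribution, which is the property that lets such networks drop batch normalization entirely.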
Kernel Normalized Convolutional Networks
Existing deep convolutional neural network (CNN) architectures frequently
rely upon batch normalization (BatchNorm) to effectively train the model.
BatchNorm significantly improves model performance in centralized training, but
it is unsuitable for federated learning and differential privacy settings. Even
in centralized learning, BatchNorm performs poorly with smaller batch sizes. To
address these limitations, we propose kernel normalization and kernel
normalized convolutional layers, and incorporate them into kernel normalized
convolutional networks (KNConvNets) as the main building blocks. We implement
KNConvNets corresponding to state-of-the-art CNNs such as VGGNets and
ResNets while forgoing the BatchNorm layers. Through extensive experiments, we
show that KNConvNets consistently outperform their batch-, group-, and
layer-normalized counterparts in terms of both accuracy and convergence rate in
centralized, federated, and differentially private learning settings.
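The abstract does not spell out the kernel normalization operation itself. One plausible reading, sketched below purely for intuition, is that the elements inside each kernel-sized window of the input are normalized to zero mean and unit variance before the window is multiplied by the kernel; like weight standardization, this uses no batch statistics, which would explain the suitability for federated and differentially private training. This is an assumption for illustration, not the paper's exact formulation, and all names and shapes here are invented.

```python
import numpy as np

def kernel_normalized_patches(x, kh, kw, eps=1e-5):
    """Hypothetical sketch of kernel normalization: extract every
    kernel-sized window (across all channels) and normalize it to
    zero mean / unit variance, independently of any batch statistics.

    x: input of shape (channels, H, W).
    Returns an array of shape (n_windows, channels * kh * kw).
    """
    c, h, w = x.shape
    patches = []
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            p = x[:, i:i + kh, j:j + kw].ravel()
            patches.append((p - p.mean()) / np.sqrt(p.var() + eps))
    return np.array(patches)

# A convolution over the normalized patches is then a dot product of
# each row with the flattened kernel.
x = np.random.default_rng(1).normal(size=(2, 5, 5))
p = kernel_normalized_patches(x, 3, 3)   # shape (9, 18)
```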
Deep data compression for approximate ultrasonic image formation
In many ultrasonic imaging systems, data acquisition and image formation are
performed on separate computing devices. Data transmission is becoming a
bottleneck, thus, efficient data compression is essential. Compression rates
can be improved by considering the fact that many image formation methods rely
on approximations of wave-matter interactions, and only use the corresponding
part of the data. Tailored data compression could exploit this, but extracting
the useful part of the data efficiently is not always trivial. In this work, we
tackle this problem using deep neural networks, optimized to preserve the image
quality of a particular image formation method. We examine the Delay-And-Sum
(DAS) algorithm, which is used in reflectivity-based ultrasonic imaging.
We propose a novel encoder-decoder architecture with vector quantization and
formulate image formation as a network layer for end-to-end training.
Experiments demonstrate that our proposed data compression, tailored to a
specific image formation method, obtains significantly better results than
compression agnostic to subsequent imaging. We maintain high image quality
at much higher compression rates than the theoretical lossless compression rate
derived from the rank of the linear imaging operator. This demonstrates the
great potential of deep ultrasonic data compression tailored for a specific
image formation method. Comment: IEEE International Ultrasonics Symposium 202
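The DAS image formation used above has a compact core: for each image pixel, sum the received samples taken at that pixel's round-trip (transmit plus receive) delay across all array elements. The sketch below assumes a plane-wave transmit and invented array geometry, sampling rate, and sound speed for illustration; it is a minimal loop-based version, not the paper's implementation.

```python
import numpy as np

C = 1540.0   # assumed speed of sound in tissue [m/s]
FS = 40e6    # assumed sampling rate [Hz]

def das_image(rf, elem_x, grid_x, grid_z, c=C, fs=FS):
    """Delay-And-Sum beamforming for a plane-wave transmit.

    rf:     received echoes, shape (n_elements, n_samples)
    elem_x: lateral element positions [m]
    grid_x, grid_z: lateral/axial pixel coordinates [m]
    """
    n_elem, n_samp = rf.shape
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            tau_tx = z / c                            # plane-wave transmit delay
            for ie, xe in enumerate(elem_x):
                tau_rx = np.hypot(xe - x, z) / c      # echo path back to element
                s = int(round((tau_tx + tau_rx) * fs))
                if 0 <= s < n_samp:                   # sum the delayed samples
                    img[iz, ix] += rf[ie, s]
    return img

# Synthetic check: a single point scatterer at (x=0, z=20 mm) produces a
# unit pulse on each element at its round-trip delay; DAS should focus
# all eight contributions onto the matching pixel.
elem_x = (np.arange(8) - 3.5) * 0.3e-3
rf = np.zeros((8, 4000))
for ie, xe in enumerate(elem_x):
    s = int(round((0.02 / C + np.hypot(xe, 0.02) / C) * FS))
    rf[ie, s] = 1.0
img = das_image(rf, elem_x,
                np.array([-1e-3, 0.0, 1e-3]),
                np.array([0.019, 0.020, 0.021]))
```

Because only the delayed samples ever enter the sum, a compressor trained end-to-end through this layer can discard the parts of the raw data that DAS never reads, which is the intuition behind tailoring compression to the image formation method.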