Large-Scale MIMO versus Network MIMO for Multicell Interference Mitigation
This paper compares two important downlink multicell interference mitigation
techniques, namely, large-scale (LS) multiple-input multiple-output (MIMO) and
network MIMO. We consider a cooperative wireless cellular system operating in
time-division duplex (TDD) mode, wherein each cooperating cluster includes B
base-stations (BSs), each equipped with multiple antennas and scheduling K
single-antenna users. In an LS-MIMO system, each BS employs BK antennas, not
only to serve its scheduled users, but also to null out interference caused to
the other users within the cooperating cluster using zero-forcing (ZF)
beamforming. In a network MIMO system, each BS is equipped with only K
antennas, but interference cancellation is realized by data and channel state
information exchange over the backhaul links and joint downlink transmission
using ZF beamforming. Both systems are able to completely eliminate
intra-cluster interference and to provide the same number of spatial degrees of
freedom per user. Assuming the uplink-downlink channel reciprocity provided by
TDD, both systems are subject to identical channel acquisition overhead during
the uplink pilot transmission stage. Further, the available sum power at each
cluster is fixed and assumed to be equally distributed across the downlink
beams in both systems. Building upon the channel distribution functions and
using tools from stochastic ordering, this paper shows, however, that from a
performance point of view, users experience better quality of service, averaged
over small-scale fading, under an LS-MIMO system than under a network MIMO system.
Numerical simulations for a multicell network reveal that this conclusion also
holds true with a regularized ZF beamforming scheme. Hence, given the likely
lower cost of adding an excess number of antennas at each BS, LS-MIMO could be the
preferred route toward interference mitigation in cellular networks.
Comment: 13 pages, 7 figures; IEEE Journal of Selected Topics in Signal
Processing, Special Issue on Signal Processing for Large-Scale MIMO
Communication
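To make the zero-forcing step above concrete, here is a minimal Python/NumPy sketch of ZF precoding with equal power split across the downlink beams; the user count, antenna count, and power level are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 16        # assumed: 4 scheduled single-antenna users, 16 transmit antennas
P_sum = 1.0         # assumed fixed sum power available to the cluster

# Toy downlink channel: rows are users, columns are transmit antennas.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder (channel pseudo-inverse): each user's beam is orthogonal
# to every other user's channel, so intra-cluster interference is nulled.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0, keepdims=True)   # unit-norm beams
W *= np.sqrt(P_sum / K)                         # equal power per downlink beam

# The effective channel H @ W is diagonal: no intra-cluster interference remains.
print(np.round(np.abs(H @ W), 3))

A regularized ZF variant, as used in the paper's simulations, would instead invert H @ H.conj().T + alpha * np.eye(K) for some regularization weight alpha.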
Family meal environment differentially conditions the prospective association between early childhood screen time and key social relationships in adolescent girls
Background: Despite screen time recommendations, children are increasingly spending time
on electronic devices, making screen time an important risk factor for subsequent social and developmental
outcomes. Sharing meals could offer a way to promote psychosocial development. This study
examines the interaction between family meal environment and early childhood screen time on
key adolescent social relationships. Methods: Participants are 1455 millennial children (49% boys)
from the Quebec Longitudinal Study of Child Development birth cohort. Parents reported on child
screen use at ages 2 and 6 years and family meal environment quality at age 6 years. Parents and
children reported on parent–child relationships and peer victimization experiences, respectively,
at age 13 years. Sex-stratified multiple regressions estimated the associations of screen
time trends, family meal environment quality, and their interaction with later social relationship
outcomes. Results: For girls, when preschool screen time increased, sharing family meals in high-quality environments was associated with more positive and less conflictual relationships with their
mothers, whereas meals shared in low- and moderate-quality environments were associated with
fewer instances of victimization by their peers. Non-linear associations were not significant for boys.
Conclusion: Capitalizing on the family meal environment represents a simple, cost-efficient activity that
can compensate for some long-term risks associated with increased screen use, above and beyond
pre-existing and concurrent individual and family characteristics. Public health initiatives may benefit
from considering family meals as a complementary intervention strategy to screen use guidelines
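As a rough illustration of the sex-stratified regression-with-interaction analysis described above, the sketch below uses Python with statsmodels on synthetic stand-in data; all column names and the simulated effects are hypothetical, not the study's variables or results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (hypothetical variable names).
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "sex": rng.choice(["girl", "boy"], size=n),
    "screen_time_trend": rng.normal(size=n),   # change in screen time, ages 2 to 6
    "meal_env_quality": rng.normal(size=n),    # family meal environment quality at age 6
    "family_ses": rng.normal(size=n),          # example pre-existing confounder
})
df["conflict_13y"] = (0.2 * df["screen_time_trend"]
                      - 0.3 * df["meal_env_quality"]
                      + rng.normal(size=n))

# Sex-stratified model: the screen_time_trend:meal_env_quality term tests whether
# the meal environment moderates the association between screen time and the outcome.
girls = df[df["sex"] == "girl"]
model = smf.ols("conflict_13y ~ screen_time_trend * meal_env_quality + family_ses",
                data=girls).fit()
print(model.params)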
Prospective associations between maternal depressive symptoms during early infancy and growth deficiency from childhood to adolescence
Maternal health represents an important predictor of child development, yet it often goes
unnoticed during pediatric visits. Previous work suggests that maternal mental state affects parenting.
Findings on the relationship between infant exposure to maternal depressive symptoms and physical
growth are conflicting, and body mass index (BMI) has not been rigorously examined across development.
Using a prospective-longitudinal birth cohort of 2120 infants (50.7% boys), we estimated the prospective relationship between maternal depressive symptoms at 5 months postpartum and
later BMI in typically developing children. We hypothesized that maternal depressive symptom
severity would predict later BMI through to adolescence. Mothers self-reported depressive symptoms at 5 months. Child BMI was measured by a trained research assistant at ages 6, 8, 10, 13, and
15 years. We estimated a series of sex-stratified regressions in which BMI was linearly regressed on
maternal symptoms, while controlling for potential pre-existing/concurrent individual and family
confounding factors. Boys born to mothers with more severe depressive symptoms at 5 months postpartum
had a significantly lower BMI than other boys at subsequent ages. There were no such associations
observed for girls. Maternal depressive symptoms were prospectively associated with later BMI
for sons but not daughters, predicting a risk of growth faltering through to adolescence. Health
practitioners should routinely assess maternal psychological functioning during pediatric visits to
optimize parent and child flourishing
Reducing motor variability enhances myoelectric control robustness across untrained limb positions
The limb position effect is a multi-faceted problem, associated with decreased upper-limb prosthesis control acuity following a change in arm position. Factors contributing to this problem can arise from distinct environmental or physiological sources. Despite their differences in origin, the effect of each factor manifests similarly as increased input data variability. This variability can cause incorrect decoding of user intent. Previous research has attempted to address this by better capturing input data variability with data abundance. In this paper, we take an alternative approach and investigate the effect of reducing trial-to-trial variability by improving the consistency of muscle activity through user training. Ten participants underwent 4 days of myoelectric training with either concurrent or delayed feedback in a single arm position. At the end of training, participants completed a zero-feedback retention test in multiple limb positions. In doing so, we tested how well the skill learned in a single limb position generalized to untrained positions. We found that delayed feedback training led to more consistent muscle activity across both the trained and untrained limb positions. Analysis of activation patterns in the delayed feedback group suggests that a structured change in muscle activity occurs across arm positions. Our results demonstrate that myoelectric user training can lead to the retention of motor skills that bring about more robust decoding across untrained limb positions. This work highlights the importance of reducing motor variability with practice, prior to examining the underlying structure of muscle changes associated with limb position
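As a loose illustration of the ideas in this abstract, the Python sketch below (scikit-learn, synthetic data) trains a decoder in one simulated limb position, scores it in an "untrained" position, and reports within-class feature variability; the feature model, offsets, and noise levels are assumptions for illustration, not the study's data or analysis.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_trials, n_channels, n_classes = 40, 8, 4

# Class-specific muscle activity patterns (synthetic stand-ins for EMG features).
centres = rng.normal(size=(n_classes, n_channels))

def simulate_position(offset, spread):
    """Trials from one limb position: class pattern + position offset + trial-to-trial noise."""
    X = np.vstack([centres[c] + offset + rng.normal(scale=spread, size=(n_trials, n_channels))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_trials)
    return X, y

X_train, y_train = simulate_position(offset=0.0, spread=0.3)   # trained position
X_test, y_test = simulate_position(offset=0.5, spread=0.3)     # untrained position

# Trial-to-trial variability: mean within-class standard deviation of the features.
variability = np.mean([X_train[y_train == c].std(axis=0).mean() for c in range(n_classes)])

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(f"within-class variability in trained position: {variability:.2f}")
print(f"decoding accuracy in untrained position: {clf.score(X_test, y_test):.2f}")

Lowering the spread parameter, the stand-in for more consistent muscle activity after training, improves the accuracy obtained in the offset (untrained) position.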
Cystatin C Falsely Underestimated GFR in a Critically Ill Patient with a New Diagnosis of AIDS
Cystatin C has been suggested to be a more accurate glomerular filtration rate (GFR) surrogate than creatinine in patients with acquired immunodeficiency syndrome (AIDS) because it is unaffected by skeletal muscle mass and dietary influences. However, little is known about the utility of this marker for monitoring medications in the critically ill. We describe the case of a 64-year-old female with opportunistic infections associated with a new diagnosis of AIDS. During her course, she experienced neurologic, cardiac, and respiratory failure, yet her renal function remained preserved, as indicated by an eGFR ≥ 120 mL/min and a urine output > 1 mL/kg/hr without diuresis. The patient was treated with nephrotoxic agents; therefore, cystatin C was assessed to determine whether cachexia was resulting in a falsely low serum creatinine. Cystatin C measured 1.50 mg/L, which corresponded to an eGFR of 36 mL/min. Given the >60 mL/min discrepancy, serial 8-hour urine samples were collected and a GFR > 120 mL/min was confirmed. It is unclear why cystatin C was falsely elevated, but we hypothesize that it relates to the proinflammatory state associated with AIDS, opportunistic infections, and corticosteroids. More research is needed before routine use of cystatin C in this setting can be recommended
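For context on how a cystatin C based eGFR like the 36 mL/min above can be produced, here is a small sketch of the CKD-EPI 2012 cystatin C equation; whether this particular equation was used in the case is an assumption, and with the reported values it gives an estimate in the low 40s rather than exactly 36.

def egfr_cystatin_ckdepi_2012(scys_mg_per_l: float, age_years: float, female: bool) -> float:
    """eGFR in mL/min/1.73 m^2 from serum cystatin C (CKD-EPI 2012 cystatin C equation)."""
    ratio = scys_mg_per_l / 0.8
    egfr = 133.0 * min(ratio, 1.0) ** -0.499 * max(ratio, 1.0) ** -1.328 * 0.996 ** age_years
    return egfr * 0.932 if female else egfr

# Case values: cystatin C 1.50 mg/L in a 64-year-old woman.
print(round(egfr_cystatin_ckdepi_2012(1.50, 64, female=True), 1))

Either way, the cystatin C based estimate sits far below the measured GFR of > 120 mL/min, which is the discrepancy the case reports.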
Forced oscillation detection amid communication uncertainties
This article proposes a novel technique for the detection of forced oscillation (FO) in a power system with uncertainty in the measured signals. The impacts of communication uncertainties on measured signals are theoretically investigated based on the mathematical models developed in this article. A data recovery method is proposed and applied to reconstruct signals affected by communication losses. The proposed FO detection under communication uncertainties is evaluated on the modified 14-machine Southeast Australian power system. A rigorous comparative analysis is conducted to validate the effectiveness of the proposed data recovery and FO detection methods
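As a generic illustration (not the article's proposed method), the Python sketch below simulates a measurement with packet loss, fills the gaps by simple interpolation as a stand-in for data recovery, and flags a forced oscillation from the dominant spectral peak; the sampling rate, oscillation frequency, and loss rate are assumptions.

import numpy as np
from scipy.signal import periodogram

# Synthetic PMU-like measurement: ambient noise plus a sustained sinusoidal forcing.
rng = np.random.default_rng(3)
fs = 30.0                                   # samples per second, a typical PMU reporting rate
t = np.arange(0, 60, 1 / fs)
measured = 0.05 * np.sin(2 * np.pi * 0.7 * t) + rng.normal(scale=0.02, size=t.size)

# Model packet loss by dropping samples, then recover them by linear interpolation
# (a simple stand-in for the article's data recovery method, which is not reproduced here).
lost = rng.random(t.size) < 0.1
recovered = np.interp(t, t[~lost], measured[~lost])

# Flag a forced oscillation when a narrowband spectral peak dominates the ambient floor.
f, pxx = periodogram(recovered, fs=fs)
peak = f[np.argmax(pxx)]
print(f"dominant component near {peak:.2f} Hz")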
Derivation and validation of cutoffs for clinical use of cell cycle arrest biomarkers
Background Acute kidney injury (AKI) remains a deadly condition. Tissue inhibitor of metalloproteinases (TIMP)-2 and insulin-like growth factor binding protein (IGFBP)7 are two recently discovered urinary biomarkers for AKI. We now report on the development and diagnostic accuracy of two clinical cutoffs for a test using these markers. Methods We derived cutoffs based on sensitivity and specificity for prediction of Kidney Disease: Improving Global Outcomes Stages 2–3 AKI within 12 h using data from a previously published multicenter cohort (Sapphire). Next, we verified these cutoffs in a new study (Opal) enrolling 154 critically ill adults from six sites in the USA. Results One hundred subjects (14%) in Sapphire and 27 (18%) in Opal met the primary end point. The results of the Opal study replicated those of Sapphire. Relative risks (95% CI) in both studies for subjects testing at ≤0.3 versus >0.3–2 were 4.7 (1.5–16) and 4.4 (2.5–8.7), or 12 (4.2–40) and 18 (10–37) for ≤0.3 versus >2. For the 0.3 cutoff, sensitivity was 89% in both studies, and specificity was 50% and 53%. For 2.0, sensitivity was 42% and 44%, and specificity was 95% and 90%. Conclusions Urinary [TIMP-2]•[IGFBP7] values of 0.3 or greater identify patients at high risk, and those >2 at highest risk, for AKI and provide new information to support clinical decision-making. Clinical Trials Registration ClinicalTrials.gov NCT01209169 (Sapphire) and NCT01846884 (Opal)
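As a trivial sketch of how the two cutoffs translate into risk tiers, the Python snippet below bins a urinary [TIMP-2]•[IGFBP7] value using the 0.3 and 2.0 thresholds reported above; the tier labels are paraphrases of the abstract, not validated clinical categories.

def aki_risk_tier(timp2_igfbp7: float) -> str:
    """Bin a urinary [TIMP-2]*[IGFBP7] result using the 0.3 and 2.0 cutoffs."""
    if timp2_igfbp7 <= 0.3:
        return "lower risk"
    if timp2_igfbp7 <= 2.0:
        return "high risk"
    return "highest risk"

for value in (0.2, 1.1, 3.4):
    print(value, "->", aki_risk_tier(value))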