A Data-Adaptive Targeted Learning Approach of Evaluating Viscoelastic Assay Driven Trauma Treatment Protocols
Estimating the impact of trauma treatment protocols is complicated by the high-dimensional yet finite-sample nature of trauma data collected from observational studies. Viscoelastic assays are highly predictive measures of hemostasis; however, the effectiveness of thromboelastography (TEG)-based treatment protocols has not been statistically evaluated. To conduct robust and reliable estimation with sparse data, we built an estimation "machine" for estimating the causal impacts of candidate variables using the collaborative targeted maximum loss-based estimation (CTMLE) framework. Computational efficiency is achieved by using the scalable version of CTMLE, in which covariates are pre-ordered by summary statistics of their importance before proceeding to the estimation steps. To extend the application of the estimator in practice, we used super learning in combination with CTMLE to flexibly choose the best convex combination of algorithms. By selecting the optimal covariate set in high dimension and relaxing constraints on the choice of pre-ordering algorithms, we are able to construct a robust and data-adaptive model to estimate the parameter of interest. Under this estimation framework, CTMLE outperformed the other estimators considered (IPW, stabilized IPW, AIPW, TMLE) in the simulation study and demonstrated very accurate estimation of the target parameter, the average treatment effect (ATE). Applying CTMLE to the real trauma data, the treatment protocol (using TEG values immediately after injury) showed significant improvement in trauma patients' hemostasis status (control of bleeding) and a decrease in mortality rate at 6 h compared to standard care. The estimation results did not show a significant change in mortality rate at 24 h after arrival.
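The pre-ordering step that makes CTMLE scalable can be sketched in a few lines. Below is a minimal, illustrative Python sketch, not the paper's implementation: covariates are ranked by a cheap univariate importance statistic, and a propensity model is grown along that fixed order, with the prefix length chosen by cross-validated log-loss (a simplified stand-in for CTMLE's cross-validated targeted loss). The function names and toy data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def preorder_covariates(X, a):
    """Rank covariates by a cheap importance summary: absolute
    correlation between each covariate and the treatment indicator."""
    scores = np.abs([np.corrcoef(X[:, j], a)[0, 1] for j in range(X.shape[1])])
    return np.argsort(-np.asarray(scores))  # most important first

def select_cutpoint(X, a, order, max_k=None):
    """Grow the propensity model along the fixed pre-ordering and keep
    the prefix with the best cross-validated log-likelihood. (Full CTMLE
    scores candidates by the cross-validated loss of the *targeted*
    outcome fit; CV log-loss here is a simplified stand-in.)"""
    max_k = max_k or len(order)
    best_k, best_score = 1, -np.inf
    for k in range(1, max_k + 1):
        g = LogisticRegression(max_iter=1000)
        score = cross_val_score(g, X[:, order[:k]], a,
                                scoring="neg_log_loss", cv=5).mean()
        if score > best_score:
            best_k, best_score = k, score
    return order[:best_k]

# Toy data: 500 subjects, 20 covariates, binary treatment.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0] + 0.5 * X[:, 3])))
order = preorder_covariates(X, a)
print("covariates kept for the propensity model:",
      select_cutpoint(X, a, order, max_k=10))
```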
Scalable Content-Based Analysis of Images in Web Archives with TensorFlow and the Archives Unleashed Toolkit
We demonstrate the integration of the Archives Unleashed Toolkit, a scalable platform for exploring web archives, with Google's TensorFlow deep learning toolkit to provide scholars with content-based image analysis capabilities. By applying pretrained deep neural networks for object detection, we are able to extract images of common objects from a 4 TB web archive of GeoCities, which we then compile into browsable collages. This case study illustrates the types of interesting analyses enabled by combining big data and deep learning capabilities. This work was primarily supported by the Natural Sciences and Engineering Research Council of Canada. Additional funding for this project has come from the Andrew W. Mellon Foundation. Our sincerest thanks to the Internet Archive for providing us with the GeoCities web archive.
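The detection step can be approximated with off-the-shelf components. The sketch below, which is illustrative rather than the authors' pipeline, runs a pretrained TF Hub SSD detector over image bytes; the module URL, threshold, and file source are assumptions, and in the actual workflow the images would come from the Archives Unleashed Toolkit's extraction output rather than a local file.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pretrained SSD detector from TF Hub (URL is illustrative; any TF2
# object-detection SavedModel with the same output signature works).
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def detect_objects(jpeg_bytes, score_threshold=0.5):
    """Run the detector on one archived image and return
    (class_id, score) pairs above the confidence threshold."""
    image = tf.io.decode_jpeg(jpeg_bytes, channels=3)
    batch = tf.expand_dims(image, axis=0)  # [1, H, W, 3], uint8
    result = detector(batch)
    scores = result["detection_scores"][0].numpy()
    classes = result["detection_classes"][0].numpy().astype(int)
    keep = scores >= score_threshold
    return list(zip(classes[keep], scores[keep]))

# Hypothetical usage with a locally saved image.
with open("example.jpg", "rb") as f:
    print(detect_objects(f.read()))
```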
Range-Based and Range-Free Localization of Communicating Mobile Stations in a Wireless Network
Localizing communicating mobile devices is an important problem for wireless sensor networks, particularly indoors, where GPS is unusable. Existing localization algorithms fall into two categories: range-based and range-free. Range-based techniques start from an estimate of the distance between the radio transmitter and receiver. For this category of localization systems, we present a state of the art followed by our first metrology results, which will inform future proposals and models. Compared with the range-based principle, the range-free technique is cheaper in hardware because it relies only on the connectivity information tied to radio range. We propose a new range-free algorithm that operates on two types of nodes, classified by the number of anchors within range. Our simulation results show that the proposed algorithm achieves better accuracy than existing methods such as Centroid, CPE, and DV-hop.
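As a concrete example of the range-based principle, distance is commonly recovered from received signal strength by inverting the log-distance path-loss model. The short Python sketch below is illustrative: the reference power at 1 m and the path-loss exponent are environment-dependent constants that metrology measurements like those in the paper would calibrate.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.7):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d)
    to estimate transmitter-receiver distance in metres.
    The two constants are assumed calibration values."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

# Example: a reading of -67 dBm under the assumed calibration.
print(f"estimated range: {rssi_to_distance(-67.0):.1f} m")
```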
SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation
Depth estimation from images serves as the fundamental step of 3D perception for autonomous driving and is an economical alternative to expensive depth sensors like LiDAR. The temporal photometric constraint enables self-supervised depth estimation without labels, further facilitating its application. However, most existing methods predict depth solely from each monocular image and ignore the correlations among the multiple surrounding cameras typically available on modern self-driving vehicles. In this paper, we propose SurroundDepth, a method that incorporates information from multiple surrounding views to predict depth maps across cameras. Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse information from multiple views. We apply cross-view self-attention to efficiently enable global interactions between multi-camera feature maps. Unlike self-supervised monocular depth estimation, we are able to predict real-world scales given the multi-camera extrinsic matrices. To achieve this goal, we adopt two-frame structure-from-motion to extract scale-aware pseudo-depths to pretrain the models. Furthermore, instead of predicting the ego-motion of each individual camera, we estimate a universal ego-motion of the vehicle and transfer it to each view to achieve multi-view ego-motion consistency. In experiments, our method achieves state-of-the-art performance on the
challenging multi-camera depth estimation datasets DDAD and nuScenes.
Comment: Accepted to CoRL 2022. Project page: https://surrounddepth.ivg-research.xyz Code: https://github.com/weiyithu/SurroundDepth
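The core idea of cross-view self-attention is that tokens from all camera feature maps attend to one another jointly, so each location can aggregate evidence from overlapping neighbouring views. The NumPy sketch below is a toy illustration of that mechanism, not the paper's transformer (which uses learned per-layer projections and a more efficient attention scheme); the projection matrices and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_self_attention(feats, W_q, W_k, W_v):
    """feats: list of per-camera feature maps, each [H*W, C].
    Tokens from all cameras are concatenated so attention is computed
    globally across views rather than within each camera alone."""
    tokens = np.concatenate(feats, axis=0)        # [V*H*W, C]
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    fused = attn @ v                              # [V*H*W, C]
    return np.split(fused, len(feats), axis=0)    # back to per-camera maps

# Toy example: 6 cameras, 8x8 feature maps, 32 channels.
rng = np.random.default_rng(0)
C = 32
feats = [rng.normal(size=(64, C)) for _ in range(6)]
W = [rng.normal(size=(C, C)) / np.sqrt(C) for _ in range(3)]
fused = cross_view_self_attention(feats, *W)
print(len(fused), fused[0].shape)  # 6 views of fused (64, 32) features
```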
Improvement of range-free localization technology by a novel DV-hop protocol in wireless sensor networks
Localization is a fundamental issue for many applications in wireless sensor networks. Because it requires no additional ranging devices, range-free localization technology is a cost-effective solution for low-cost indoor and outdoor wireless sensor networks. Among range-free algorithms, DV-hop (Distance Vector hop) has the advantage of being able to localize mobile nodes that have fewer than three neighbour anchors. Based on the original DV-hop algorithm, this paper presents two improved algorithms: Checkout DV-hop and Selective 3-Anchor DV-hop. The Checkout DV-hop algorithm refines the mobile node's position estimate using the nearest anchor, while the Selective 3-Anchor DV-hop algorithm chooses the best three anchors to improve localization accuracy. Then, in order to implement these DV-hop-based algorithms in network scenarios, a novel DV-hop localization protocol is proposed. This new protocol is presented in detail, including the format of the data payloads, the improved collision-reduction method E-CSMA/CA, and the parameters used to decide the end of each DV-hop step. Finally, using our localization protocol, we investigate the performance of typical DV-hop-based algorithms in terms of localization accuracy, mobility, synchronization, and overhead. Simulation results show that the Selective 3-Anchor DV-hop algorithm offers the best performance compared to Checkout DV-hop and the original DV-hop algorithm.
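For context, the classic DV-hop baseline that both improved algorithms build on has three steps: anchors flood hop counts, each anchor converts hops to an average per-hop distance, and the unknown node trilaterates from hop-count-derived distances. The Python sketch below illustrates those steps under assumed toy hop counts; the function name and data are hypothetical.

```python
import numpy as np

def dv_hop_estimate(anchors, hops_between_anchors, hops_to_node):
    """Classic DV-hop position estimate.
    anchors: [K, 2] anchor coordinates.
    hops_between_anchors: [K, K] hop counts between anchors.
    hops_to_node: [K] hop counts from each anchor to the unknown node."""
    K = len(anchors)
    # Step 2: each anchor's average distance per hop.
    dists = np.linalg.norm(anchors[:, None] - anchors[None, :], axis=-1)
    hop_size = np.array([dists[i].sum() / hops_between_anchors[i].sum()
                         for i in range(K)])
    # The node adopts the hop size of its nearest anchor (fewest hops).
    est_dist = hop_size[np.argmin(hops_to_node)] * hops_to_node
    # Step 3: linearised least-squares trilateration against anchor 0.
    x0, d0 = anchors[0], est_dist[0]
    A = 2 * (anchors[1:] - x0)
    b = (d0**2 - est_dist[1:]**2
         + (anchors[1:]**2).sum(axis=1) - (x0**2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.]])
hops_aa = np.array([[0, 4, 4, 6], [4, 0, 6, 4], [4, 6, 0, 4], [6, 4, 4, 0]])
print(dv_hop_estimate(anchors, hops_aa, np.array([2, 3, 3, 4])))
```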
Towards Understanding Treatment Effect Heterogeneity
Understanding treatment effect heterogeneity has become an increasingly important task in various fields. Treatment effect heterogeneity not only adds granularity to the understanding of everyday matters but also assists better-informed decision-making on many scientific frontiers. In biomedical studies, learning treatment effect heterogeneity helps clinicians apply personalized treatments to patient subpopulations with different genetic profiles. Instead of prescribing one drug for all, refined prescription strategies can potentially improve patients' overall welfare. In social science studies, evaluating the treatment effect heterogeneity of candidate policies provides guidance for policymakers implementing future social programs. In technology companies, understanding treatment effect heterogeneity helps decision-makers map market segmentation so that advertising budgets can be strategically allocated to the particular consumer subpopulations in which a new product is most likely to earn profits. This dissertation provides a set of statistical methodologies for understanding treatment effect heterogeneity and is organized into three chapters with three separate aims: (1) estimating treatment effect heterogeneity, (2) confirming treatment effect heterogeneity, and (3) designing adaptive experiments toward learning treatment effect heterogeneity.
Chapter 1 introduces a statistical methodology for estimating treatment effect heterogeneity efficiently. We take a model-free semiparametric perspective and aim to efficiently evaluate the heterogeneous treatment effects of multiple subgroups simultaneously under the one-step targeted maximum-likelihood estimation (TMLE) framework. When the number of subgroups is large, we further expand this line of research by studying a variation of the one-step TMLE that is robust to the presence of small estimated propensity scores in finite samples.

Chapter 2 proposes a statistical methodology for confirming the estimated heterogeneous
treatment effects. Understanding the impact of the most effective treatments on outcome variables is crucial in various disciplines. Due to the widespread winner's curse phenomenon, conventional statistical inference that assumes the top policies are chosen independently of the random sample may lead to overly optimistic evaluations of the best policies. In addition, given the increased availability of large datasets, this issue can be further complicated when researchers include many covariates to estimate the policy or treatment effects in an attempt to control for potential confounders. To address these issues simultaneously, we propose a resampling-based procedure that not only lifts the winner's curse in evaluating the best policies observed in a random sample but is also robust to the presence of many covariates. The proposed inference procedure yields accurate point estimates and valid frequentist confidence intervals that achieve the exact nominal level, as the sample size goes to infinity, for multiple best-policy effect sizes.

Chapter 3 provides an alternative perspective on studying treatment effect heterogeneity. While much of the existing work in this research area has focused on either analyzing
observational data based on untestable causal assumptions or conducting post hoc analyses of existing randomized controlled trial data, little work has gone into designing randomized experiments specifically for uncovering treatment effect heterogeneity. In this chapter, we develop a unified adaptive experimental design framework for better learning treatment effect heterogeneity by efficiently identifying subgroups with enhanced treatment effects from a frequentist viewpoint. The adaptive nature of our framework allows practitioners to sequentially allocate experimental effort in response to the evidence accrued during the experiment. The resulting design framework can not only complement A/B tests in e-commerce but also unify enrichment designs and response-adaptive randomization designs in clinical settings. Our theoretical investigations illustrate the trade-offs between complete randomization and our adaptive experimental algorithms.
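To make the winner's curse of Chapter 2 concrete: when the best of K estimated policy effects is reported, the maximum systematically overshoots the selected policy's true effect. The Python sketch below is a simplified parametric-bootstrap illustration of the selection-bias idea, not the dissertation's procedure (which additionally handles many covariates); the function names, normal approximation, and toy numbers are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_debiased_winner(est_effects, ses, n_boot=5000):
    """Re-draw the K effect estimates from their approximate sampling
    distribution, measure how much the maximum overshoots the effect of
    whichever policy gets selected, and subtract that estimated
    selection bias from the naive winner estimate."""
    K = len(est_effects)
    draws = rng.normal(est_effects, ses, size=(n_boot, K))
    winners = draws.argmax(axis=1)
    bias = (draws.max(axis=1) - est_effects[winners]).mean()
    return est_effects.max() - bias

# Five policies with identical true effect 0: the naive winner looks
# clearly positive, while the corrected estimate shrinks toward 0.
true_effects, ses = np.zeros(5), np.full(5, 0.1)
est = rng.normal(true_effects, ses)
print("naive winner estimate:", est.max())
print("debiased estimate    :", bootstrap_debiased_winner(est, ses))
```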