
    A Bayesian Approach to Manifold Topology Reconstruction

    In this paper, we investigate the problem of statistical reconstruction of piecewise linear manifold topology. Given a noisy, possibly undersampled point cloud from a one- or two-manifold, the algorithm reconstructs an approximate most-likely mesh, in a Bayesian sense, from which the sample might have been taken. We incorporate statistical priors on the object geometry to improve reconstruction quality when additional knowledge about the class of original shapes is available. The priors can be formulated analytically or learned from example geometry with known manifold tessellation. The statistical objective function is approximated by a linear programming / integer programming problem, for which a globally optimal solution is found. We apply the algorithm to a set of 2D and 3D reconstruction examples, demonstrating that statistics-based manifold reconstruction is feasible and still yields plausible results in situations where sampling conditions are violated.
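The LP/IP step can be pictured with a toy version of the underlying combinatorial problem: choose the edge set of minimum total length in which every sample point has manifold-consistent degree. The brute-force sketch below is not the paper's actual formulation; the point coordinates and the degree-2 closed-curve constraint are illustrative assumptions for a tiny 1-manifold (closed curve) example.

```python
# Toy analogue of topology reconstruction as discrete optimization (NOT the
# paper's LP/IP formulation): for a small 2D point cloud sampled from a
# closed curve, pick the edge subset of minimum total length in which every
# vertex has degree exactly 2. Brute force stands in for the integer program.
import itertools, math

points = [(0.0, 0.0), (1.0, 0.1), (1.9, 1.0), (1.0, 2.1), (0.1, 1.0)]
n = len(points)
edges = list(itertools.combinations(range(n), 2))

def length(e):
    (x1, y1), (x2, y2) = points[e[0]], points[e[1]]
    return math.hypot(x2 - x1, y2 - y1)

best, best_cost = None, float("inf")
# A single closed curve through n points uses exactly n edges.
for subset in itertools.combinations(edges, n):
    deg = [0] * n
    for a, b in subset:
        deg[a] += 1
        deg[b] += 1
    if all(d == 2 for d in deg):
        cost = sum(length(e) for e in subset)
        if cost < best_cost:
            best, best_cost = subset, cost

print(sorted(best))   # edge list of the cheapest closed polyline
```

Because the five points are in convex position, the optimum is the pentagon boundary; a prior term on geometry would simply be added to the edge costs.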

    Evaluation of the complexation capacity of the natural waters of the Saguenay River, Canada

    The Saguenay River is a major tributary of the St. Lawrence River, Quebec, Canada. It drains a heavily industrialized region and can be subdivided into two sections: the upper section is rather shallow and carries fresh water, while the lower section is a deep fjord characterized by a thermohalocline at about 25 m. This work aimed at identifying the possible modifications brought about by anthropogenic sources to the complexation capacity of the river's fresh water. Surface water was sampled at five different stations on the same day in the upper section of the river. The samples were filtered on 0.4 µm membranes (pre-cleaned with HNO3). One portion was analyzed directly; the others were fractionated as a function of the nominal molecular mass (NMM) of the dissolved ligands using four gel permeation chromatography (GPC) columns in series, filled with Sephadex G-10, G-15, G-25 and G-50 respectively, with elution by purified 18 MΩ water. The complexation capacity (CC) and critical stability constant (CSC) of the different fractions were characterized using a method based on free Cu2+ back-titration by differential pulse anodic stripping voltammetry (DPASV) and a 1:1 complexation scheme. Because copper gave two unresolved peaks on the tail of the oxygen peak, all polarograms were deconvolved by a PASCAL computer program based on a nonlinear least-squares fit using the Taylor differential correction technique. All compiled results came from the peak centered at −60 mV against an Ag/AgCl reference. By manipulating the usual equations for determining CC and CSC from the free Cu2+ back-titration, we were able to calculate CC by three different routes and CSC by two; wherever enough reliable data were available for each route, all values obtained were concordant. Going downstream, the CC of whole samples decreased from 0.32 to 0.14 µM. At this point, we cannot identify the cause of this decrease, whether simple dilution or the addition of new dissolved metal ions to the stream. Once the samples were fractionated, the measured CC increased with NMM, but the CC normalized per unit of carbon was greater for ligands of small NMM (normalized CC decreased with increasing NMM). The CSC values obtained were all similar, about 5 × 10^7 L mol^-1, except for ligands with NMM between 700 and 1 800 g mol^-1, whose CSC was 27 × 10^7 L mol^-1 by the inverse linearized method.
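For a 1:1 complexation scheme, CC and the stability constant can be read off a straight-line fit of the back-titration data, via the Ruzic-type linearization [Cu]/[CuL] = [Cu]/CC + 1/(K·CC). The sketch below uses synthetic titration data (CC = 0.30 µM, K = 5 × 10^7 L/mol are assumed values) to illustrate the inverse-linearized idea; it is not the authors' PASCAL program.

```python
# Sketch of complexation-capacity (CC) and stability-constant (K) estimation
# by a Ruzic-type linearization of a 1:1 Cu-ligand back-titration:
#   [Cu]/([Cu_T]-[Cu]) = [Cu]/CC + 1/(K*CC)
# so a plot of [Cu]/[CuL] vs [Cu] has slope 1/CC and intercept 1/(K*CC).
import math

CC_true, K = 0.30e-6, 5e7      # mol/L and L/mol (synthetic "true" values)

def free_cu(cu_total):
    # Mass balance + 1:1 equilibrium give a quadratic for free Cu (m >= 0):
    # K*m^2 + (1 + K*(CC - Cu_T))*m - Cu_T = 0
    a, b, c = K, 1 + K * (CC_true - cu_total), -cu_total
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

cu_totals = [i * 0.1e-6 for i in range(1, 11)]          # 0.1 to 1.0 uM added
x = [free_cu(ct) for ct in cu_totals]                   # labile [Cu]
y = [m / (ct - m) for m, ct in zip(x, cu_totals)]       # [Cu]/[CuL]

# Ordinary least squares for slope and intercept of the linearized plot.
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(a * a for a in x)
sxy = sum(a * b for a, b in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

CC_est = 1 / slope              # complexation capacity, mol/L
K_est = slope / intercept       # conditional stability constant, L/mol
print(f"CC = {CC_est * 1e6:.2f} uM, K = {K_est:.2e} L/mol")
```

With exact model data the linearization is exact, so the fit recovers CC and K to floating-point precision; with real DPASV data the scatter of the fitted line limits the precision of both parameters.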


    Spreads in Effective Learning Rates: The Perils of Batch Normalization During Early Training

    Excursions in gradient magnitude pose a persistent challenge when training deep networks. In this paper, we study the early training phases of deep normalized ReLU networks, accounting for the induced scale invariance by examining effective learning rates (LRs). Starting from the well-known fact that batch normalization (BN) leads to exponentially exploding gradients at initialization, we develop an ODE-based model of early training dynamics. Our model predicts that under gradient flow, effective LRs eventually equalize, aligning with empirical findings on warm-up training. Using large LRs is analogous to applying an explicit solver to a stiff nonlinear ODE: it causes overshooting and vanishing gradients in the lower layers after the first step. Achieving overall balance demands careful tuning of LRs, depth, and (optionally) momentum. Our model predicts the formation of spreads in effective LRs, consistent with empirical measurements. Moreover, we observe that large spreads in effective LRs result in training issues concerning accuracy, indicating the importance of controlling these dynamics. To further support a causal relationship, we implement a simple scheduling scheme that prescribes uniform effective LRs across layers and confirm its accuracy benefits.
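For a scale-invariant (BN-normalized) layer, only the weight direction matters, so a step of size η acts on that direction with effective LR η/‖w‖². A minimal sketch of both the per-layer spread and a uniform-effective-LR schedule in the spirit of the one described; the layer norms and target value are made-up numbers.

```python
# Effective learning rates for scale-invariant layers: eta_eff = eta / ||w||^2.
# Layers with small weight norm see a much larger effective step (a "spread").
eta = 0.1
weight_norms = {"layer1": 2.0, "layer2": 1.0, "layer3": 0.5}   # assumed ||w||

eff_lr = {name: eta / norm ** 2 for name, norm in weight_norms.items()}
print(eff_lr)   # spread of effective LRs across layers

# A uniform-effective-LR schedule: rescale each layer's LR by ||w||^2 so that
# every layer sees the same effective step (target is an assumed value).
target = 0.1
per_layer_lr = {name: target * norm ** 2 for name, norm in weight_norms.items()}
check = {name: per_layer_lr[name] / weight_norms[name] ** 2 for name in weight_norms}
# every entry of `check` now equals `target`
```

In practice ‖w‖ changes every step, so such a scheme recomputes the per-layer scaling online rather than once, but the one-step arithmetic is the same.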

    The translation, validity and reliability of the German version of the Fremantle Back Awareness Questionnaire

    Background: The Fremantle Back Awareness Questionnaire (FreBAQ) claims to assess disrupted self-perception of the back. The aim of this study was to develop a German version of the FreBAQ (FreBAQ-G) and assess its test-retest reliability, its known-groups validity and its convergent validity with another purported measure of back perception. Methods: The FreBAQ-G was translated following international guidelines for the transcultural adaptation of questionnaires. Thirty-five patients with non-specific chronic low back pain (CLBP) and 48 healthy participants were recruited. Assessor one administered the FreBAQ-G to each patient with CLBP on two separate days to quantify intra-observer reliability. Assessor two administered the FreBAQ-G to each patient on day 1, and the scores were compared with those obtained by assessor one on day 1 to assess inter-observer reliability. Known-groups validity was quantified by comparing FreBAQ-G scores between patients and healthy controls. To assess convergent validity, patients' FreBAQ-G scores were correlated with their two-point discrimination (TPD) scores. Results: Intra- and inter-observer reliability were both moderate, with ICC(3,1) = 0.88 (95% CI: 0.77 to 0.94) and 0.89 (95% CI: 0.79 to 0.94), respectively. Intra- and inter-observer limits of agreement (LoA) were 6.2 (95% CI: 5.0 to 8.1) and 6.0 (95% CI: 4.8 to 7.8), respectively. The adjusted mean difference between patients and controls was 5.4 (95% CI: 3.0 to 7.8, p < 0.01). Patients' FreBAQ-G scores were not associated with TPD thresholds (Pearson's r = -0.05, p = 0.79). Conclusions: The FreBAQ-G demonstrated a degree of reliability and known-groups validity. Interpretation of patient-level data should be performed with caution because the LoA were substantial. The questionnaire did not demonstrate convergent validity against TPD. Floor effects in some items of the FreBAQ-G may have influenced the validity and reliability results. The clinimetric properties of the FreBAQ-G require further investigation as a simple measure of disrupted self-perception of the back before firm recommendations on its use can be made.
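The reliability figures reported above are ICC(3,1) values, which follow from a two-way ANOVA decomposition of a subjects-by-assessments score table. A minimal sketch with invented test-retest ratings:

```python
# ICC(3,1) from a two-way ANOVA decomposition: rows are subjects, columns
# are the repeated assessments (e.g. test and retest). Ratings are invented.
def icc_3_1(scores):
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # between raters
    ss_err = ss_total - ss_rows - ss_cols                     # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

ratings = [[12, 13], [18, 17], [25, 27], [9, 10], [30, 28]]   # test, retest
print(round(icc_3_1(ratings), 2))   # → 0.98
```

Perfectly repeated ratings give ICC = 1.0; the statistic falls as the residual (within-subject) variance grows relative to the between-subject variance.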

    Sequential Data-Adaptive Bandwidth Selection by Cross-Validation for Nonparametric Prediction

    We consider the problem of bandwidth selection by cross-validation from a sequential point of view in a nonparametric regression model. Bearing in mind that applications often aim at estimation, prediction and change detection simultaneously, we investigate this approach for sequential kernel smoothers in order to base all three tasks on a single statistic. We provide uniform weak laws of large numbers and weak consistency results for the cross-validated bandwidth. Extensions to weakly dependent error terms are discussed as well: the errors may be α-mixing or L2-near-epoch dependent, which guarantees that the uniform convergence of the cross-validation sum and the consistency of the cross-validated bandwidth hold true for a large class of time series. The method is illustrated by analyzing photovoltaic data.
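The cross-validation criterion at the heart of the paper can be illustrated in its simplest batch form: pick the bandwidth that minimizes the leave-one-out prediction error of a Nadaraya-Watson kernel smoother. The data and bandwidth grid below are made up, and the paper's sequential, dependent-error setting is not reproduced.

```python
# Leave-one-out cross-validated bandwidth selection for a Nadaraya-Watson
# smoother with a Gaussian kernel. Data: noisy sine samples (synthetic).
import math

xs = [i / 20 for i in range(21)]
ys = [math.sin(2 * math.pi * x) + 0.1 * ((-1) ** i) for i, x in enumerate(xs)]

def nw_estimate(x0, xs, ys, h):
    # Kernel-weighted average of the responses around x0.
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    s = sum(w)
    return sum(wi * yi for wi, yi in zip(w, ys)) / s if s > 0 else 0.0

def loocv_score(h):
    # Mean squared leave-one-out prediction error for bandwidth h.
    err = 0.0
    for i in range(len(xs)):
        xr, yr = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        err += (ys[i] - nw_estimate(xs[i], xr, yr, h)) ** 2
    return err / len(xs)

grid = [0.02, 0.05, 0.1, 0.2, 0.4]
best_h = min(grid, key=loocv_score)
print(best_h, loocv_score(best_h))
```

The sequential version of the paper updates this criterion as observations arrive instead of recomputing it from scratch, so that the same statistic serves prediction and change detection.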

    Rethinking clinical trials of transcranial direct current stimulation: Participant and assessor blinding is inadequate at intensities of 2mA

    Copyright @ 2012 The Authors. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. The article was made available through the Brunel University Open Access Publishing Fund. Background: Many double-blind clinical trials of transcranial direct current stimulation (tDCS) use stimulus intensities of 2 mA despite the fact that blinding has not been formally validated under these conditions. The aim of this study was to test the assumption that sham 2 mA tDCS achieves effective blinding. Methods: A randomised double-blind crossover trial. 100 tDCS-naïve healthy volunteers were incorrectly advised that they were taking part in a trial of tDCS on word memory. Participants attended two separate sessions. In each session, they completed a word memory task, then received active or sham tDCS (order randomised) at 2 mA stimulation intensity for 20 minutes, and then repeated the word memory task. They then judged whether they believed they had received active stimulation and rated their confidence in that judgement. The blinded assessor noted when red marks were observed at the electrode sites post-stimulation. Results: tDCS at 2 mA was not effectively blinded. That is, participants correctly judged the stimulation condition more often than would be expected by chance at both the first session (kappa level of agreement (κ) = 0.28, 95% confidence interval (CI) 0.09 to 0.47, p = 0.005) and the second session (κ = 0.77, 95% CI 0.64 to 0.90, p < 0.001), indicating inadequate participant blinding. Redness at the reference electrode site was noticed after active stimulation more often than after sham stimulation (session one, κ = 0.512, 95% CI 0.363 to 0.66, p < 0.001; session two, κ = 0.677, 95% CI 0.534 to 0.82), indicating inadequate assessor blinding. Conclusions: Our results suggest that blinding in studies using tDCS at intensities of 2 mA is inadequate. Positive results from such studies should be interpreted with caution. GLM is supported by the National Health & Medical Research Council of Australia, ID 571090.
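The blinding analysis rests on Cohen's kappa: agreement between participants' guesses and the true condition, corrected for the agreement expected by chance. A minimal sketch with invented guess/condition labels:

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance
# agreement). kappa = 0 means chance-level guessing (good blinding);
# kappa near 1 means guesses track the true condition (broken blinding).
def cohen_kappa(a, b):
    n = len(a)
    labels = sorted(set(a) | set(b))
    p_o = sum(x == y for x, y in zip(a, b)) / n                   # observed
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
    return (p_o - p_e) / (1 - p_e)

# Invented data: 8 participants' guesses vs the condition they received.
guesses = ["active", "active", "sham", "active", "sham", "sham", "active", "sham"]
truth   = ["active", "active", "sham", "sham",  "sham", "active", "active", "sham"]
print(round(cohen_kappa(guesses, truth), 2))   # → 0.5
```

A kappa of 0.5 on this toy data would already indicate above-chance unblinding, in line with the κ = 0.28 and 0.77 the study reports.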