33 research outputs found

    Hybrid perovskite characterization and device applications.

    Hybrid perovskites are a group of materials that has had a great impact on scientific research over the past decade, owing to the rapid efficiency gains achieved within a short period of time. Hot casting is one technique that has produced highly efficient and stable solar cells. Electrical transport in lateral device structures made from such films is explored to understand their basic properties and to predict possible device applications. In the dark, the memristive behavior of the film was probed with various experiments, and a unique unipolar memristive response was observed. Based on the experimental results, a model is hypothesized using the concepts of built-in potential, ion motion, and carrier generation in the film. Three-terminal devices showed unique n-type behavior under illumination and ambipolar behavior in the dark. The reversible inert-gas sensing ability of the film is explored through its surface conductivity under additional illumination. Approaches to improving device performance, as well as to understanding the overall behavior of the film itself, are discussed in this thesis.
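    For readers unfamiliar with the term, "unipolar" means that both the SET (high-to-low resistance) and RESET transitions occur at the same voltage polarity. The toy Python sketch below illustrates that switching pattern with a generic two-threshold model; the threshold voltages and resistances are made-up values, and this is not the thesis's hypothesized mechanism of built-in potential, ion motion, and carrier generation.

    ```python
    import numpy as np

    # Toy two-threshold model of unipolar resistive switching (illustrative
    # only): SET and RESET both happen at the same voltage polarity. All
    # values are made up; this is NOT the thesis's hypothesized mechanism.
    V_SET, V_RESET = 2.0, 1.0    # hypothetical switching thresholds (volts)
    R_OFF, R_ON = 1e6, 1e3       # high/low resistance states (ohms)

    def sweep(voltages, state=R_OFF):
        """Return (currents, final state) for a single-polarity voltage sweep."""
        currents = []
        for v in voltages:
            if state == R_OFF and v >= V_SET:
                state = R_ON              # SET: jump to the low-resistance state
            elif state == R_ON and V_RESET <= v < V_SET:
                state = R_OFF             # RESET: same polarity, lower voltage
            currents.append(v / state)
        return currents, state

    # A 0 V -> 3 V -> 0 V sweep traces a single-polarity hysteresis loop
    # (real devices typically also need a current compliance during SET).
    vs = np.concatenate([np.linspace(0, 3, 50), np.linspace(3, 0, 50)])
    currents, final_state = sweep(vs)
    print(f"final state: {final_state:.0f} ohm")
    ```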

    An error term in the Central Limit Theorem for sums of discrete random variables

    We consider sums of independent identically distributed random variables whose distributions have d+1 atoms. Such distributions never admit an Edgeworth expansion of order d, but we show that for almost all parameters the Edgeworth expansion of order d-1 is valid, and the error of the order d-1 Edgeworth expansion is typically of order n^{-d/2}. Comment: To appear in International Mathematics Research Notices.
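    For context, the classical order-r Edgeworth expansion for i.i.d. summands with mean \mu and variance \sigma^2 takes the following standard form (notation is mine; the abstract does not fix any):

    \[
    \mathbb{P}\!\left(\frac{S_n - n\mu}{\sigma\sqrt{n}} \le x\right)
    = \Phi(x) + \sum_{k=1}^{r} \frac{P_k(x)}{n^{k/2}}\,\varphi(x) + o\!\left(n^{-r/2}\right)
    \quad \text{uniformly in } x,
    \]

    where \Phi and \varphi are the standard normal distribution function and density, and each P_k is a polynomial determined by the first k+2 cumulants of the summands. In this notation the result says: with d+1 atoms the expansion fails for r = d, but for almost all atom parameters it holds for r = d-1, with error of true order n^{-d/2}.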

    Higher order asymptotics for large deviations -- Part II

    We obtain asymptotic expansions for the large deviation principle (LDP) for continuous-time stochastic processes with weakly dependent increments. As a key example, we show that additive functionals of solutions of stochastic differential equations (SDEs) satisfying Hörmander's condition on a d-dimensional compact manifold admit such asymptotic expansions of all orders. Comment: 17 pages.
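    As a point of reference, in the simplest i.i.d. setting such higher-order large deviation asymptotics (going back to Bahadur and Rao) take the following shape; the notation is schematic and mine, and the paper's setting of weakly dependent continuous-time increments is considerably more general:

    \[
    \mathbb{P}\!\left(\frac{S_n}{n} \ge a\right)
    = \frac{e^{-n I(a)}}{\lambda_a \sigma_a \sqrt{2\pi n}}
    \left(1 + \frac{c_1(a)}{n} + \cdots + \frac{c_r(a)}{n^r} + o\!\left(n^{-r}\right)\right),
    \]

    where I is the rate function, \lambda_a solves \Lambda'(\lambda_a) = a for the cumulant generating function \Lambda, \sigma_a^2 = \Lambda''(\lambda_a), and the coefficients c_k(a) are determined by higher derivatives of \Lambda at \lambda_a.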

    Higher order asymptotics for the Central Limit Theorem and Large Deviation Principles

    First, we present results that extend the classical theory of Edgeworth expansions to independent identically distributed non-lattice discrete random variables. We consider sums of independent identically distributed random variables whose distributions have (d+1) atoms and show that such distributions never admit an Edgeworth expansion of order d, but for almost all parameters the Edgeworth expansion of order (d-1) is valid; the error of the order (d-1) expansion is typically O(n^{-d/2}), but the O(n^{-d/2}) terms exhibit wild oscillations. Next, going a step further, we introduce a general theory of Edgeworth expansions for weakly dependent random variables. This gives us higher order asymptotics for the Central Limit Theorem for strongly ergodic Markov chains and for piecewise expanding maps. In addition, alternative versions of asymptotic expansions are introduced in order to estimate errors when the classical expansions fail to hold. As applications, we obtain Local Limit Theorems and a Moderate Deviation Principle. Finally, we introduce asymptotic expansions for large deviations. For sufficiently regular weakly dependent random variables, we obtain higher order asymptotics (similar to Edgeworth expansions) for Large Deviation Principles. In particular, we obtain asymptotic expansions for Cramér's classical Large Deviation Principle for independent identically distributed random variables, and for the Large Deviation Principle for strongly ergodic Markov chains.
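    For orientation, Cramér's classical Large Deviation Principle, the i.i.d. baseline that the final part of the thesis refines, states (in standard notation, not fixed by the abstract) that for a > \mathbb{E}[X_1],

    \[
    \lim_{n\to\infty} \frac{1}{n} \log \mathbb{P}\!\left(\frac{S_n}{n} \ge a\right) = -I(a),
    \qquad
    I(a) = \sup_{t \in \mathbb{R}} \left( t a - \log \mathbb{E}\, e^{t X_1} \right).
    \]

    The expansions described above refine this log-scale statement to higher-order asymptotics of the probability itself, analogous to how Edgeworth expansions refine the Central Limit Theorem.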

    Sensory Mapping of Lumbar Facet Joint Pain: A feasibility study

    Acknowledgements: The authors would like to thank Dr Jeremy Weinbren, Consultant Anaesthetist, The Hillingdon Hospitals NHS Foundation Trust, for his statistical advice on this paper. Funding: The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: K.F. received a John Snow Anaesthesia Intercalated BSc bursary. No funding was obtained for the running costs of the project. Peer reviewed. Postprint.

    End-To-End Data-Dependent Routing in Multi-Path Neural Networks

    Neural networks are known to give better performance with increased depth due to their ability to learn more abstract features. Although deepening networks is well established, there is still room for more efficient feature extraction within a layer, which would reduce the need for mere parameter increments. The conventional way of widening networks, adding more filters to each layer, introduces a quadratic increase in the number of parameters. Having multiple parallel convolutional/dense operations in each layer solves this problem, but without any context-dependent allocation of resources among these operations, the parallel computations tend to learn similar features, making the widening less effective. We therefore propose multi-path neural networks with data-dependent resource allocation among the parallel computations within each layer, which also lets an input be routed end-to-end through these parallel paths. To do this, we first introduce a cross-prediction-based routing algorithm between parallel tensors of subsequent layers. Second, we further reduce the routing overhead by introducing feature-dependent cross-connections between parallel tensors of successive layers. Our multi-path networks show superior performance to existing widening and adaptive feature extraction methods, and even to ensembles and deeper networks of similar complexity, on image recognition tasks.
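    A minimal PyTorch sketch of the feature-dependent cross-connection idea: parallel tensors from one layer are mixed with input-dependent softmax weights before feeding each path of the next layer. The module name, gating design, and pooling choice here are my own simplifications for illustration, not the paper's exact cross-prediction algorithm.

    ```python
    import torch
    import torch.nn as nn

    class GatedCrossConnection(nn.Module):
        """Sketch: mix P parallel tensors with feature-dependent weights
        before the next layer (a simplification, not the paper's method)."""
        def __init__(self, num_paths, channels):
            super().__init__()
            # One tiny gate per outgoing path, fed by globally pooled features.
            self.gates = nn.ModuleList(
                [nn.Linear(num_paths * channels, num_paths) for _ in range(num_paths)]
            )

        def forward(self, paths):                  # paths: list of (B, C, H, W)
            pooled = torch.cat([p.mean(dim=(2, 3)) for p in paths], dim=1)  # (B, P*C)
            stacked = torch.stack(paths, dim=1)    # (B, P, C, H, W)
            outputs = []
            for gate in self.gates:
                w = torch.softmax(gate(pooled), dim=1)            # (B, P) routing weights
                mixed = (w[:, :, None, None, None] * stacked).sum(dim=1)
                outputs.append(mixed)              # data-dependent mixture per path
            return outputs

    # Usage: route between two parallel conv paths of 64 channels each.
    cross = GatedCrossConnection(num_paths=2, channels=64)
    x1, x2 = torch.randn(8, 64, 32, 32), torch.randn(8, 64, 32, 32)
    y1, y2 = cross([x1, x2])
    ```

    Because the routing weights depend on the pooled input features, different inputs can favor different parallel paths, which is what discourages the paths from learning redundant features.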

    Neural Mixture Models with Expectation-Maximization for End-to-end Deep Clustering

    Any clustering algorithm must simultaneously learn to model the clusters and to allocate data to those clusters in the absence of labels. Mixture-model-based methods model clusters with pre-defined statistical distributions and allocate data to clusters based on the cluster likelihoods, iteratively refining the distribution parameters and membership assignments via the Expectation-Maximization (EM) algorithm. However, the cluster representability of such hand-designed distributions, which employ a limited number of parameters, is not adequate for most real-world clustering tasks. In this paper, we realize mixture-model-based clustering with a neural network in which the final-layer neurons, with the aid of an additional transformation, approximate cluster distribution outputs; the network parameters serve as the parameters of those distributions. The result is an elegant, far more general representation of clusters than a restricted mixture of hand-designed distributions. We train the network end-to-end via batch-wise EM iterations in which the forward pass acts as the E-step and the backward pass acts as the M-step. In image clustering, the mixture-based EM objective can be used as the clustering objective alongside existing representation learning methods. In particular, we show that fusing mixture-EM optimization with consistency optimization improves on the performance of consistency optimization alone. Our trained networks outperform single-stage deep clustering methods that still depend on k-means, with unsupervised classification accuracies of 63.8% on STL10, 58% on CIFAR10, 25.9% on CIFAR100, and 98.9% on MNIST.
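    A minimal PyTorch sketch of one batch-wise EM iteration in this spirit, where the forward pass plays the E-step and the backward pass the M-step. The architecture, the log-softmax head, and all hyperparameters here are illustrative assumptions, not the paper's actual model.

    ```python
    import torch
    import torch.nn as nn

    # Sketch of batch-wise neural EM clustering (my simplification, not the
    # paper's exact architecture): the final layer emits per-cluster
    # log-likelihood scores for each input.
    class NeuralMixture(nn.Module):
        def __init__(self, dim_in, num_clusters):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim_in, 128), nn.ReLU(),
                nn.Linear(128, num_clusters),      # cluster "distribution" outputs
            )

        def forward(self, x):
            return torch.log_softmax(self.net(x), dim=1)   # roughly log p(k | x)

    model = NeuralMixture(dim_in=784, num_clusters=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(256, 784)                      # one unlabeled batch

    log_probs = model(x)                           # forward pass
    # E-step: responsibilities from current parameters (held fixed, no gradient).
    with torch.no_grad():
        resp = log_probs.exp()
    # M-step: backward pass maximizes the expected complete-data log-likelihood.
    loss = -(resp * log_probs).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    ```

    The key design choice the abstract describes is that the network weights themselves play the role of the mixture's distribution parameters, so the gradient update replaces the closed-form M-step of classical EM.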