
    Programming matrix optics into Mathematica

    The various non-linear transformations incurred by the rays in an optical system can be modelled by matrix products up to any desired order of approximation. Mathematica software has been used to find the appropriate matrix coefficients for the straight-path transformation and for the transformations induced by conical surfaces, both direction change and position offset. The same software package was programmed to model optical systems to seventh order. A Petzval lens was used to exemplify the modelling power of the program.
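
The first-order (paraxial) case underlying this expansion can be sketched with ordinary 2×2 ray-transfer matrices; the paper's contribution is carrying the expansion to seventh order, which is well beyond this minimal sketch. The focal length and distances below are illustrative, not from the paper.

```python
# Paraxial (first-order) ray optics: a ray is a pair (height y, angle u), and
# each element acts on it as a 2x2 matrix. This is only the linear part of the
# expansion that the abstract carries to seventh order.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def propagate(d):          # straight path of length d
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):          # thin lens of focal length f
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

# Object 2f in front of a thin lens, image plane 2f behind it:
f = 1.0
system = matmul(propagate(2 * f), matmul(thin_lens(f), propagate(2 * f)))
# B = system[0][1] == 0 means the two planes are conjugate (an image forms),
# and A = system[0][0] == -1 is the inverted, unit magnification.
```

Composing more surfaces is just more matrix products, which is what makes the approach easy to program symbolically.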

    Repurposing cancer drugs, batimastat and marimastat, to inhibit the activity of a group I metalloprotease from the venom of the Western Diamondback rattlesnake, Crotalus atrox

    Snakebite envenomation causes over 140,000 deaths every year, predominantly in developing countries. As a result, it is one of the most lethal neglected tropical diseases. It is associated with an incredibly complex pathophysiology due to the vast number of unique toxins/proteins found in the venoms of diverse snake species found worldwide. Here, we report the purification and functional characteristics of a group I metalloprotease (CAMP-2) from the venom of the western diamondback rattlesnake, Crotalus atrox. Its sensitivity to matrix metalloprotease inhibitors (batimastat and marimastat) was established using specific in vitro experiments and in silico molecular docking analysis. CAMP-2 shows high sequence homology to atroxase from the venom of Crotalus atrox and exhibits collagenolytic, fibrinogenolytic and mild haemolytic activities. It exerts a mild inhibitory effect on agonist-induced platelet aggregation in the absence of plasma proteins. Its collagenolytic activity was completely inhibited by batimastat and marimastat. Zinc chloride also inhibits the collagenolytic activity of CAMP-2 by around 75% at 50 µM, while it is partially potentiated by calcium chloride. Molecular docking studies demonstrate that batimastat and marimastat are able to bind strongly to the active site residues of CAMP-2. This study demonstrates the impact of matrix metalloprotease inhibitors in modulating the activities of a purified group I metalloprotease in comparison to the whole venom. By improving our understanding of snake venom metalloproteases and their sensitivity to small molecule inhibitors, we can begin to develop novel and improved treatment strategies for snakebites

    Validating secure and reliable IP/MPLS communications for current differential protection

    Current differential protection has stringent real-time communications requirements and it is critical that protection traffic is transmitted securely, i.e., by using appropriate data authentication and encryption methods. This paper demonstrates that real-time encryption of protection traffic in IP/MPLS-based communications networks is possible with negligible impact on performance and system operation. It is also shown how the impact of jitter and asymmetrical delay in real communications networks can be eliminated. These results will provide confidence to power utilities that modern IP/MPLS infrastructure can securely and reliably cater for even the most demanding applications
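
The sensitivity to asymmetrical delay mentioned above follows directly from the differential principle itself, which can be illustrated with a toy phasor calculation; the current magnitude, frequency and channel skew below are illustrative and not taken from the paper.

```python
import cmath
import math

# Current differential protection compares phasors measured at the two ends of
# a line: in a healthy (through-load) condition they sum to ~0. If the two
# communication channel delays are asymmetric, the remote phasor is effectively
# rotated by omega * skew, creating a spurious differential current.

f = 50.0                      # system frequency, Hz
i_local = complex(1000.0, 0)  # A, current entering the protected zone
i_remote = -i_local           # same current leaving the zone

def differential(i1, i2, skew_s=0.0):
    # channel asymmetry rotates the remote phasor by 2*pi*f*skew radians
    return abs(i1 + i2 * cmath.exp(1j * 2 * math.pi * f * skew_s))

aligned = differential(i_local, i_remote)         # ~0 A: no spurious operate
skewed = differential(i_local, i_remote, 0.001)   # 1 ms of asymmetry
# skewed == 2*|I|*sin(omega*skew/2), roughly 313 A of spurious differential
# current here, which is why delay asymmetry must be eliminated or compensated.
```

Even a millisecond of uncompensated asymmetry therefore produces a differential current of a significant fraction of load current, motivating the jitter and delay-symmetry validation described in the abstract.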

    Wide angle near-field diffraction and Wigner distribution

    Free-space propagation can be described as a shearing of the Wigner distribution function in the spatial coordinate; this shearing is linear in the paraxial approximation but assumes a more complex shape for wide-angle propagation. Integration in the frequency domain allows the determination of near-field diffraction, leading to the well-known Fresnel diffraction when small angles are considered and allowing exact prediction of wide-angle diffraction. The authors use this technique to demonstrate evanescent wave formation and diffraction elimination for very small apertures
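
The small-angle (Fresnel) limit mentioned above is easy to compute numerically; a minimal sketch, with illustrative parameters not taken from the paper, is the on-axis field behind a circular aperture, where a closed form exists to check the numerical integral against.

```python
import cmath
import math

# On-axis Fresnel (paraxial) diffraction behind a circular aperture of radius a,
# illuminated by a unit-amplitude plane wave, via direct numerical integration.
# The closed-form on-axis result, I = 4*sin^2(k*a^2/(4*z)), provides a check.

lam = 0.5e-6            # wavelength, m (illustrative)
a = 0.5e-3              # aperture radius, m
z = 1.0                 # propagation distance, m
k = 2 * math.pi / lam
alpha = k / (2 * z)

# U(0) = (k / (i z)) * integral_0^a exp(i * alpha * r^2) * r dr  (midpoint rule)
N = 2000
dr = a / N
integral = sum(cmath.exp(1j * alpha * ((n + 0.5) * dr) ** 2) * (n + 0.5) * dr
               for n in range(N)) * dr
U = (k / (1j * z)) * integral
I_num = abs(U) ** 2                                  # relative to incident
I_exact = 4 * math.sin(k * a ** 2 / (4 * z)) ** 2    # closed-form on-axis value
```

The wide-angle, non-paraxial regime that the paper actually addresses replaces this quadratic-phase kernel with the exact one, which is where the Wigner-function shearing picture becomes non-linear.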

    Decision trees and forests: a probabilistic perspective

    Decision trees and ensembles of decision trees are very popular in machine learning and often achieve state-of-the-art performance on black-box prediction tasks. However, popular variants such as C4.5, CART, boosted trees and random forests lack a probabilistic interpretation since they usually just specify an algorithm for training a model. We take a probabilistic approach where we cast the decision tree structures and the parameters associated with the nodes of a decision tree as a probabilistic model; given labeled examples, we can train the probabilistic model using a variety of approaches (Bayesian learning, maximum likelihood, etc.). The probabilistic approach allows us to encode prior assumptions about tree structures and share statistical strength between node parameters; furthermore, it offers a principled mechanism to obtain probabilistic predictions, which is crucial for applications where uncertainty quantification is important. Existing work on Bayesian decision trees relies on Markov chain Monte Carlo, which can be computationally slow and suffer from poor mixing. We propose a novel sequential Monte Carlo algorithm that computes a particle approximation to the posterior over trees in a top-down fashion. We also propose a novel sampler for Bayesian additive regression trees by combining the above top-down particle filtering algorithm with the Particle Gibbs (Andrieu et al., 2010) framework. Finally, we propose Mondrian forests (MFs), a computationally efficient hybrid solution that is competitive with non-probabilistic counterparts in terms of speed and accuracy, but additionally produces well-calibrated uncertainty estimates. MFs use the Mondrian process (Roy and Teh, 2009) as the randomization mechanism and hierarchically smooth the node parameters within each tree (using a hierarchical probabilistic model and approximate Bayesian updates), but combine the trees in a non-Bayesian fashion. MFs can be grown in an incremental/online fashion and, remarkably, the distribution of online MFs is the same as that of batch MFs
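
The benefit of smoothing node parameters hierarchically can be illustrated with a deliberately simplified stand-in for the scheme used in Mondrian forests: shrink a leaf's class frequencies toward its parent's distribution. The counts and the strength `gamma` below are hypothetical, and this is not the exact MF update.

```python
# A pure leaf's maximum-likelihood estimate is an overconfident [1.0, 0.0].
# Shrinking the leaf counts toward the parent's class distribution (a simple
# stand-in for the hierarchical smoothing used in Mondrian forests) instead
# yields calibrated, non-degenerate probabilities.

def smoothed_probs(counts, parent_probs, gamma=2.0):
    # gamma controls how strongly small leaves fall back on the parent
    n = sum(counts)
    return [(c + gamma * p) / (n + gamma) for c, p in zip(counts, parent_probs)]

parent = [0.5, 0.5]        # class distribution at the parent node
leaf_counts = [3, 0]       # a small, pure leaf
probs = smoothed_probs(leaf_counts, parent)   # [0.8, 0.2] rather than [1.0, 0.0]
```

Leaves with many observations are barely affected, while sparse leaves borrow statistical strength from their ancestors, which is what produces well-calibrated uncertainty estimates.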

    A UK wide cohort study describing management and outcomes for infants with surgical Necrotising Enterocolitis

    The Royal College of Surgeons has proposed using outcomes from necrotising enterocolitis (NEC) surgery for revalidation of neonatal surgeons. The aim of this study was therefore to calculate the number of infants in the UK/Ireland with surgical NEC and describe outcomes that could be used for national benchmarking and counselling of parents. A prospective nationwide cohort study of every infant requiring surgical intervention for NEC in the UK was conducted between 01/03/13 and 28/02/14. The primary outcome was mortality at 28 days. Secondary outcomes included discharge, post-operative complication, and TPN requirement. 236 infants were included, 43 (18%) of whom died, and eight (3%) of whom were discharged prior to 28 days post decision to intervene surgically. Sixty (27%) of the infants who underwent laparotomy experienced a complication, and 67 (35%) of those who were alive at 28 days were parenteral nutrition free. Following multivariable modelling, presence of a non-cardiac congenital anomaly (aOR 5.17, 95% CI 1.9-14.1), abdominal wall erythema or discolouration at presentation (aOR 2.51, 95% CI 1.23-5.1), diagnosis of single intestinal perforation at laparotomy (aOR 3.1, 95% CI 1.05-9.3), and necessity to perform a clip and drop procedure (aOR 30, 95% CI 3.9-237) were associated with increased 28-day mortality. These results can be used for national benchmarking and counselling of parents
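
The adjusted odds ratios above come from multivariable models, but the unadjusted analogue, an odds ratio from a 2×2 table with a Woolf (log-scale) 95% confidence interval, is easy to sketch. The counts below are entirely hypothetical and are not the study's data.

```python
import math

# Unadjusted odds ratio with a 95% Woolf (log-scale) confidence interval from
# a 2x2 table. The paper's aORs come from multivariable models; this is only
# the single-factor analogue, with hypothetical counts.

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a: exposed cases, b: exposed non-cases,
    # c: unexposed cases, d: unexposed non-cases
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 20, 5, 50)   # OR = 5.0, CI roughly (1.5, 16.5)
```

Wide intervals such as the study's aOR 30 (95% CI 3.9-237) typically reflect the small cell counts that this formula penalises through the 1/n terms in the standard error.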

    Non-negative matrix factorization for parameter estimation in hidden Markov models

    Hidden Markov models are well known in the analysis of random processes that exhibit temporal or spatial structure, and they have been successfully applied in a wide variety of domains, including speech recognition, musical scores, handwriting, and bioinformatics. We present a novel algorithm for estimating the parameters of a hidden Markov model through the application of a non-negative matrix factorization to the joint probability distribution of two consecutive observations. We start with the discrete observation model and extend the results to the continuous observation model through a non-parametric approach based on kernel density estimation. In both cases, we present results on a toy example and compare the performance with the Baum-Welch algorithm. ©2010 IEEE
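
The core step, factorizing a joint distribution of consecutive observations with NMF, can be sketched with standard Lee-Seung multiplicative updates on a small synthetic matrix. This is only the factorization step under illustrative data; recovering the actual HMM parameters from the factors is the paper's contribution and is not reproduced here.

```python
import random

# Factorize a synthetic joint matrix J[i][j] ~ P(x_t = i, x_{t+1} = j) as
# J ~ W H with inner dimension K = number of hidden states, using Lee-Seung
# multiplicative updates (which never increase the squared-error objective).

random.seed(0)
K, N = 2, 4      # hidden states, observed symbols (illustrative sizes)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Synthetic rank-K non-negative joint distribution, normalized to sum to 1.
W0 = [[random.random() for _ in range(K)] for _ in range(N)]
H0 = [[random.random() for _ in range(N)] for _ in range(K)]
J = matmul(W0, H0)
s = sum(sum(row) for row in J)
J = [[v / s for v in row] for row in J]

def frob_err(W, H):
    R = matmul(W, H)
    return sum((J[i][j] - R[i][j]) ** 2 for i in range(N) for j in range(N))

W = [[random.random() for _ in range(K)] for _ in range(N)]
H = [[random.random() for _ in range(N)] for _ in range(K)]
err0 = frob_err(W, H)
for _ in range(300):
    WH = matmul(W, H)
    num, den = matmul(transpose(W), J), matmul(transpose(W), WH)
    H = [[H[k][j] * num[k][j] / (den[k][j] + 1e-12) for j in range(N)]
         for k in range(K)]
    WH = matmul(W, H)
    num, den = matmul(J, transpose(H)), matmul(WH, transpose(H))
    W = [[W[i][k] * num[i][k] / (den[i][k] + 1e-12) for k in range(K)]
         for i in range(N)]
err = frob_err(W, H)   # monotonically non-increasing across updates
```

In practice J would be estimated empirically from pairs of consecutive observations rather than constructed synthetically.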

    The Mondrian Kernel

    We introduce the Mondrian kernel, a fast random-feature approximation to the Laplace kernel. It is suitable for both batch and online learning, and admits a fast kernel-width-selection procedure as the random features can be re-used efficiently for all kernel widths. The features are constructed by sampling trees via a Mondrian process [Roy and Teh, 2009], and we highlight the connection to Mondrian forests [Lakshminarayanan et al., 2014], where trees are also sampled via a Mondrian process, but fit independently. This link provides a new insight into the relationship between kernel methods and random forests
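
The connection to the Laplace kernel is easiest to see in one dimension, where a Mondrian process with lifetime λ reduces to Poisson-distributed cuts and two points land in the same cell with probability exp(-λ|x-y|), i.e. exactly the Laplace kernel. The following Monte Carlo sketch (with illustrative parameters) checks that averaging a same-cell indicator over many sampled partitions recovers the kernel value.

```python
import math
import random

# In 1-D, a Mondrian process with lifetime lam on [0, L] amounts to a Poisson
# process of cut locations. Two points share a cell of a sampled partition with
# probability exp(-lam * |x - y|): the Laplace kernel. Each sampled partition
# contributes one random feature; averaging the same-cell indicator over many
# partitions approximates the kernel.

random.seed(0)
lam, L = 2.0, 1.0
x, y = 0.1, 0.4

def poisson(mean):
    # Knuth's algorithm; fine for small means
    thresh, k, p = math.exp(-mean), 0, 1.0
    while p > thresh:
        k += 1
        p *= random.random()
    return k - 1

def same_cell(x, y):
    cuts = [random.uniform(0, L) for _ in range(poisson(lam * L))]
    lo, hi = min(x, y), max(x, y)
    return not any(lo < c < hi for c in cuts)

M = 4000                               # number of sampled partitions ("trees")
estimate = sum(same_cell(x, y) for _ in range(M)) / M
exact = math.exp(-lam * abs(x - y))    # Laplace kernel value
```

The re-usability across kernel widths that the abstract mentions comes from the fact that partitions for smaller lifetimes are prefixes of those for larger ones, a property this sketch does not exploit.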

    Do deep generative models know what they don't know?

    A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed as robust to such mistaken confidence, as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus our analysis on flow-based generative models in particular, since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flow models to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood
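
The abstract's explanation, that the likelihood gap is driven by the location and variance of the data, already appears in the simplest possible density model. In this toy sketch (synthetic 1-D data, not the paper's experiments), a Gaussian fitted to higher-variance "in-distribution" data assigns higher average log-likelihood to a lower-variance "out-of-distribution" set concentrated near its mode.

```python
import math
import random

# Toy reproduction of the phenomenon: fit a single Gaussian density to
# higher-variance "in-distribution" data; a lower-variance set sitting near the
# mode then receives HIGHER average log-likelihood, mirroring the CIFAR-10 vs
# SVHN observation. Both datasets here are synthetic.

random.seed(0)
in_dist = [random.gauss(0.0, 2.0) for _ in range(4000)]   # training data
ood = [random.gauss(0.0, 0.5) for _ in range(4000)]       # "novel" inputs

mu = sum(in_dist) / len(in_dist)
var = sum((v - mu) ** 2 for v in in_dist) / len(in_dist)  # fitted parameters

def avg_loglik(xs):
    return sum(-0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
               for v in xs) / len(xs)

ll_in, ll_ood = avg_loglik(in_dist), avg_loglik(ood)
# ll_ood > ll_in: a likelihood threshold would "accept" the OOD inputs.
```

The same geometry, a density whose high-likelihood region contains the OOD data, is what the paper analyses for constant-volume flows.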

    Hybrid models with deep and invertible features

    We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow). An attractive property of our model is that both p(features), the density of the features, and p(targets|features), the predictive distribution, can be computed exactly in a single feed-forward pass. We show that our hybrid model, despite the invertibility constraints, achieves similar accuracy to purely predictive models. Yet the generative component remains a good model of the input features despite the hybrid optimization objective. This offers additional capabilities such as detection of out-of-distribution inputs and enabling semi-supervised learning. The availability of the exact joint density p(targets, features) also allows us to compute many quantities readily, making our hybrid model a useful building block for downstream applications of probabilistic deep learning
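
The structure described above, an exact p(features) via the change of variables formula plus a predictive head on the same features, can be sketched in one dimension with an affine "flow" and a logistic classifier. All parameters below are hypothetical, and the real model uses a deep normalizing flow rather than a single affine map.

```python
import math

# Minimal sketch of the hybrid structure: an invertible (here 1-D affine) map
# z = (x - a) / b gives an exact density p(x) by change of variables, and a
# predictive head on the same feature z gives p(y|x); their product is the
# exact joint p(y, x), computed in a single forward pass.

a, b = 1.0, 2.0    # flow parameters (invertible since b != 0)
w, c = 1.5, -0.2   # logistic head on the feature z

def forward(x, y):
    z = (x - a) / b                                    # invertible feature
    # log p(x) = log N(z; 0, 1) - log|b|   (change of variables)
    log_px = -0.5 * z * z - 0.5 * math.log(2 * math.pi) - math.log(b)
    p_y1 = 1.0 / (1.0 + math.exp(-(w * z + c)))        # p(y = 1 | x)
    log_py_x = math.log(p_y1 if y == 1 else 1.0 - p_y1)
    return log_px + log_py_x                           # exact log p(y, x)

# Sanity check: log p(x) implied by the flow equals the analytic N(x; a, b^2).
x0 = 2.0
z0 = (x0 - a) / b
log_px_flow = forward(x0, 1) - math.log(1.0 / (1.0 + math.exp(-(w * z0 + c))))
log_px_exact = (-0.5 * ((x0 - a) / b) ** 2
                - 0.5 * math.log(2 * math.pi * b * b))
```

Summing the joint over y recovers p(x) exactly, which is what enables the out-of-distribution detection and semi-supervised uses the abstract mentions.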