
    Incorporating interactive 3-dimensional graphics in astronomy research papers

    Most research data collections created or used by astronomers are intrinsically multi-dimensional. In contrast, all visual representations of data presented within research papers are exclusively 2-dimensional. We present a resolution of this dichotomy that uses a novel technique for embedding 3-dimensional (3-d) visualisations of astronomy data sets in electronic-format research papers. Our technique uses the latest Adobe Portable Document Format extensions together with a new version of the S2PLOT programming library. The 3-d models can be easily rotated and explored by the reader and, in some cases, modified. We demonstrate example applications of this technique, including: 3-d figures exhibiting subtle structure in redshift catalogues, colour-magnitude diagrams and halo merger trees; 3-d isosurface and volume renderings of cosmological simulations; and 3-d models of instructional diagrams and instrument designs.
    Comment: 18 pages, 7 figures, submitted to New Astronomy. For the paper with 3-dimensional embedded figures, see http://astronomy.swin.edu.au/s2plot/3dpd
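    The embedding pipeline itself (S2PLOT plus Adobe's PDF 3-d extensions) is not reproduced here, but the data-preparation step is easy to illustrate. The Python sketch below converts a toy redshift catalogue into a 3-d point cloud in Wavefront OBJ format, which external converters can turn into U3D for embedding in a PDF; the (ra, dec, z) column layout, the file name and the crude low-redshift distance approximation are all assumptions for illustration.

    ```python
    # Hedged sketch: convert a toy redshift catalogue into a 3-d point cloud.
    # The column layout (ra_deg, dec_deg, z), the output file name and the
    # low-redshift distance approximation r ~ (c/H0) * z are assumptions;
    # the paper itself uses the S2PLOT library, which is not reproduced here.
    import math

    def catalogue_to_obj(rows, path, c_over_h0=3000.0):
        """Write (ra, dec, z) rows as XYZ vertices in a Wavefront OBJ file."""
        with open(path, "w") as f:
            for ra_deg, dec_deg, z in rows:
                ra, dec = math.radians(ra_deg), math.radians(dec_deg)
                r = c_over_h0 * z  # crude comoving-like distance, Mpc/h
                x = r * math.cos(dec) * math.cos(ra)
                y = r * math.cos(dec) * math.sin(ra)
                zc = r * math.sin(dec)
                f.write(f"v {x:.4f} {y:.4f} {zc:.4f}\n")

    # Toy usage with three made-up galaxies; catalogue.obj can then be
    # converted to U3D by external tools for embedding in a PDF figure.
    catalogue_to_obj([(150.1, 2.2, 0.05), (150.3, 2.1, 0.07),
                      (149.9, 2.4, 0.06)], "catalogue.obj")
    ```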

    Word Searching in Scene Image and Video Frame in Multi-Script Scenario using Dynamic Shape Coding

    Retrieval of text information from natural scene images and video frames is a challenging task due to inherent problems such as complex character shapes, low resolution, and background noise. Available OCR systems often fail to retrieve such information from scene/video frames. Keyword spotting, an alternative way to retrieve information, performs efficient text searching in such scenarios. However, current word spotting techniques for scene/video images are script-specific, and they have mainly been developed for Latin script. This paper presents a novel word spotting framework using dynamic shape coding for text retrieval in natural scene images and video frames. The framework is designed to search for a query keyword across multiple scripts with the help of on-the-fly, script-wise keyword generation for the corresponding script. We use a two-stage word spotting approach based on Hidden Markov Models (HMMs) to detect the translated keyword in a given text line after identifying the script of the line. A novel unsupervised dynamic shape coding scheme is used to group characters of similar shape, which avoids confusion and improves text alignment. Next, the hypothesis locations are verified to improve retrieval performance. To evaluate the proposed system for keyword search in natural scene images and video frames, we consider two popular Indic scripts, Bangla (Bengali) and Devanagari, along with English. Inspired by the zone-wise recognition approach in Indic scripts [1], zone-wise text information is used to improve traditional word spotting performance in Indic scripts. For our experiments, we consider a dataset consisting of scene images and video frames in English, Bangla and Devanagari scripts. The results obtained show the effectiveness of our proposed word spotting approach.
    Comment: Multimedia Tools and Applications, Springer
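    The paper learns its character groupings unsupervised; the toy Python sketch below only illustrates the underlying shape-coding idea with hand-picked (assumed) confusable-glyph groups: map easily confused characters to a shared code, then match the shape-coded query against shape-coded recognition output.

    ```python
    # Toy illustration of shape coding for word spotting: glyphs that are
    # easily confused by a recogniser are mapped to a shared code before
    # matching. These confusion groups are invented examples; the paper
    # learns its groupings with an unsupervised dynamic shape coding scheme.
    SHAPE_GROUPS = [set("O0Q"), set("Il1|")]  # assumed confusable glyph sets

    CODEBOOK = {ch: f"<{gid}>"
                for gid, group in enumerate(SHAPE_GROUPS) for ch in group}

    def shape_encode(text):
        """Replace each character with its shape-class code, or keep it."""
        return "".join(CODEBOOK.get(ch, ch) for ch in text)

    def spot(query, recognised_lines):
        """Return lines whose shape-coded text contains the coded query."""
        q = shape_encode(query)
        return [line for line in recognised_lines if q in shape_encode(line)]

    # "OIL" matches "0IL" because O/0 and I/l/1 share shape codes.
    print(spot("OIL", ["MOTOR 0IL COMPANY", "WATER WORKS"]))
    ```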

    Applications of Artificial Intelligence to Cryptography

    This paper considers some recent advances in the field of Cryptography using Artificial Intelligence (AI). It specifically considers the applications of Machine Learning (ML) and Evolutionary Computing (EC) to analyze and encrypt data. A short overview is given of Artificial Neural Networks (ANNs) and the principles of Deep Learning using deep ANNs. In this context, the paper considers: (i) the implementation of EC and ANNs for generating unique and unclonable ciphers; (ii) ML strategies for detecting the genuine randomness (or otherwise) of finite binary strings for applications in cryptanalysis. The aim of the paper is to provide an overview of how AI can be applied to encrypt data and to undertake cryptanalysis of such data and other data types, in order to assess the cryptographic strength of an encryption algorithm, e.g. to detect patterns in intercepted data streams that are signatures of encrypted data. This includes some of the authors’ prior contributions to the field, which are referenced throughout. Applications are presented that include the authentication of high-value documents such as bank notes with a smartphone. This involves using the antenna of a smartphone to read (in the near field) a flexible radio-frequency tag that couples to an integrated circuit with a non-programmable coprocessor. The coprocessor retains ultra-strong encrypted information generated using EC that can be decrypted online, thereby validating the authenticity of the document through the Internet of Things with a smartphone. The application of optical authentication methods using a smartphone and optical ciphers is also briefly explored.
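    As a hedged illustration of item (ii), the Python sketch below derives a few classical statistics (bit frequency, run rate, empirical entropy) from binary strings and fits an off-the-shelf classifier to separate random-like strings from biased ones; the features, toy data and classifier choice are assumptions, not the authors' pipeline.

    ```python
    # Hedged sketch of item (ii): derive simple statistics from binary
    # strings and fit a classifier to separate random-like (encrypted-
    # looking) strings from biased (structured) ones. Features, toy data
    # and classifier choice are illustrative assumptions.
    import math
    import random
    from sklearn.linear_model import LogisticRegression

    def features(bits):
        n = len(bits)
        p1 = bits.count("1") / n                   # bit frequency
        runs = 1 + sum(bits[i] != bits[i - 1] for i in range(1, n))
        entropy = -sum(p * math.log2(p) for p in (p1, 1 - p1) if p > 0)
        return [p1, runs / n, entropy]

    random.seed(0)
    random_like = ["".join(random.choice("01") for _ in range(256))
                   for _ in range(200)]
    biased = ["".join(random.choice("0001") for _ in range(256))
              for _ in range(200)]

    X = [features(s) for s in random_like + biased]
    y = [1] * len(random_like) + [0] * len(biased)  # 1 = random-like
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([features(random_like[0]), features(biased[0])]))
    ```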

    Contextualized risk mitigation based on geological proxies in alluvial diamond mining using geostatistical techniques

    A thesis submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfillment of the requirements for the degree of Doctor of Philosophy, Johannesburg, 2016.

    Quantifying risk in the absence of hard data presents a significant challenge. Onshore mining of the diamondiferous linear beach deposit along the south-western coast of Namibia has been ongoing for more than 80 years. A historical delineation campaign from the 1930s to the 1960s used coast-perpendicular trenches spaced 500 m apart, comprising a total of 26 000 individual samples, to identify 6 onshore raised beaches. These linear beaches extend offshore and are successfully mined in water depths greater than 30 m. There is, however, a roughly 4 km wide submerged coast-parallel strip adjacent to the mostly mined-out onshore beaches for which no real hard data is available at present. The submerged beaches within this strip hold great potential for being highly diamondiferous, but to date hard data is not available to quantify or validate this potential. The question is how to obtain sufficient hard data, within the techno-economic constraints, to enable a resource with an acceptable level of confidence to be developed. The work presented in this thesis illustrates how virtual orebodies (VOBs) are created based on geological proxies in order to have a basis to assess and rank different sampling and drilling strategies.

    Overview of 4 papers:
    Paper I demonstrates the challenge of obtaining a realistic variogram that can be used in variogram-based geostatistical simulations. Simulated annealing is used to unfold the coastline and improve the detectable variography for a number of the beaches.
    Paper II shows how expert-opinion interpretation is used to supplement the sparse data utilised to create an indicator simulation of the presence and absence of diamondiferous gravel. When only the sparse data is used, the resultant simulation is unsuitable as a VOB upon which drilling strategies can be assessed.
    Paper III outlines how expert-opinion hand sketches are used to create a VOB. The composite probability map based on geological proxies is adjusted using a grade profile based on adjacent onshore data before it is seeded with stones and used as a VOB for strategy testing.
    Paper IV illustrates how the Nachman model, based on a Negative Binomial Distribution (NBD), is used to predict a minimum background grade by considering only the zero proportions (Zp) of the grade data.

    Conclusions and future work: In the realm of creating spatial simulations that can serve as VOBs, it is very difficult to quantify uncertainty when no hard data is available. In the absence of hard data, geological proxies and expert opinion are the only inputs that can be used to create VOBs. These VOBs are then used as a basis for evaluating and ranking different sampling and drilling strategies under techno-economic constraints. VOBs must be updated and reviewed as hard data becomes available, after which sampling strategies should be reassessed. During early-stage exploration projects, the Zp of sample results can be used to predict a minimum background grade and to rank different targets for further sampling and valuation. The research highlights the possibility that multi-point statistics (MPS) can be used; higher-order MPS should be further investigated as an additional method for creating VOBs upon which sampling strategies can be assessed.
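    The zero-proportion idea behind Paper IV can be made concrete with a small sketch. Under an NBD with mean m and dispersion k, the probability of a barren sample is P(0) = (k/(k+m))^k, so the observed zero proportion alone yields a mean grade estimate. The dispersion value and example figures below are illustrative assumptions; the thesis itself applies the Nachman model.

    ```python
    # Sketch of the zero-proportion (Zp) idea behind Paper IV. Under a
    # Negative Binomial Distribution with mean m and dispersion k, the
    # probability of a barren sample is P(0) = (k / (k + m)) ** k.
    # Inverting this predicts a mean (minimum background) grade from Zp
    # alone. The dispersion k and the example Zp are assumptions; the
    # thesis itself applies the Nachman model.
    def mean_from_zero_proportion(zp, k):
        """Invert the NBD zero term: m = k * (zp ** (-1 / k) - 1)."""
        return k * (zp ** (-1.0 / k) - 1.0)

    # Toy example: 85% of early-stage samples are barren, assumed k = 0.3.
    print(mean_from_zero_proportion(0.85, 0.3))  # ~0.22 stones per unit
    ```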

    Bayesian Estimation of the Multifractality Parameter for Image Texture Using a Whittle Approximation

    Texture characterization is a central element in many image processing applications. Multifractal analysis is a useful signal and image processing tool, yet the accurate estimation of multifractal parameters for image texture remains a challenge. This is due mainly to the fact that current estimation procedures consist of performing linear regressions across frequency scales of the 2D dyadic wavelet transform, for which only a few such scales are computable for images. The strongly non-Gaussian nature of multifractal processes, combined with their complicated dependence structure, makes it difficult to develop suitable models for parameter estimation. Here, we propose a Bayesian procedure that addresses the difficulties in the estimation of the multifractality parameter. The originality of the procedure is threefold: the construction of a generic semiparametric statistical model for the logarithm of wavelet leaders; the formulation of Bayesian estimators that are associated with this model and with the set of parameter values admitted by multifractal theory; and the exploitation of a suitable Whittle approximation within the Bayesian model, which enables the otherwise infeasible evaluation of the posterior distribution associated with the model. Performance is assessed numerically for several 2D multifractal processes, for several image sizes and a large range of process parameters. The procedure yields significant benefits over current benchmark estimators in terms of estimation performance and ability to discriminate between the two most commonly used classes of multifractal process models. The gains in performance are particularly pronounced for small image sizes, notably enabling for the first time the analysis of image patches as small as 64 × 64 pixels.
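    The paper's semiparametric model for log wavelet leaders is not reproduced here, but the role of the Whittle approximation is easy to sketch: replace the exact Gaussian likelihood with a sum over Fourier frequencies involving the periodogram and a model spectral density. In the Python sketch below, the power-law spectrum is an assumed stand-in for the paper's model.

    ```python
    # Generic sketch of a 2D Whittle log-likelihood of the kind the paper
    # exploits to make Bayesian inference tractable: the exact Gaussian
    # likelihood is replaced by a sum over Fourier frequencies involving
    # the periodogram and a model spectral density. The power-law spectrum
    # f = sigma2 * |w|^(-alpha) is an assumed stand-in for the paper's
    # semiparametric model of log wavelet leaders.
    import numpy as np

    def whittle_loglik(image, alpha, sigma2):
        n0, n1 = image.shape
        periodogram = np.abs(np.fft.fft2(image)) ** 2 / (n0 * n1)
        w0 = np.fft.fftfreq(n0)[:, None]
        w1 = np.fft.fftfreq(n1)[None, :]
        radial = np.sqrt(w0 ** 2 + w1 ** 2)
        mask = radial > 0                      # drop the zero frequency
        f = sigma2 * radial[mask] ** (-alpha)  # assumed spectral density
        return -np.sum(np.log(f) + periodogram[mask] / f)

    rng = np.random.default_rng(1)
    patch = rng.standard_normal((64, 64))      # e.g. a 64 x 64 image patch
    print(whittle_loglik(patch, alpha=1.0, sigma2=1.0))
    ```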