22 research outputs found

    Geometry of discrete and continuous bounded surfaces

    Get PDF
    We work on reconstructing discrete and continuous surfaces with boundaries using length constraints. First, for a bounded discrete surface, we discuss the rigidity and the number of embeddings in three-dimensional space, modulo rigid transformations, for given real edge lengths. Our work mainly considers the maximal number of embeddings of rigid graphs in three-dimensional space for specific geometries (annulus, strip). We modify a commonly used semi-algebraic, geometric formulation based on Bézout's theorem, applied to the Euclidean distance equations corresponding to the edge lengths. We suggest a simple way to construct a rigid graph whose number of embeddings admits a finite upper bound. We also implement a generalization of counting embeddings of graphs by segmenting them into multiple rigid graphs in d-dimensional space. Our computational methodology uses vector and matrix operations and works best with a relatively small number of points.
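
    A schematic statement of the underlying constraint system and the classical Bézout-type bound may help fix ideas; the notation below is generic rigidity-theory notation, not the thesis's own formulation.

        % Edge-length constraints for an embedding p : V -> R^3 of a graph G = (V, E)
        \|p_i - p_j\|^2 = d_{ij}^2 , \qquad \{i, j\} \in E .
        % Each constraint is quadratic in the coordinates of p.  For a graph that is
        % minimally rigid in R^3 (|E| = 3|V| - 6 once the six rigid-motion degrees of
        % freedom are removed), Bezout's theorem bounds the number of isolated complex
        % solutions, and hence of embeddings modulo rigid transformations, by
        \#\{\text{embeddings}\} \;\le\; 2^{|E|} \;=\; 2^{\,3|V| - 6} .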

    Copula-based Multimodal Data Fusion for Inference with Dependent Observations

    Get PDF
    Fusing heterogeneous data from multiple modalities for inference problems has been an attractive and important topic in recent years. There are several challenges in multi-modal fusion, such as data heterogeneity and data correlation. In this dissertation, we investigate inference problems with heterogeneous modalities by taking into account nonlinear cross-modal dependence. We apply copula-based methodology to characterize this dependence. In distributed detection, the goal often is to minimize the probability of detection error at the fusion center (FC) based on a fixed number of observations collected by the sensors. We design optimal detection algorithms at the FC using a regular vine copula based fusion rule. The regular vine copula is an extremely flexible and powerful graphical model used to characterize complex dependence among multiple modalities. The proposed approaches are theoretically justified and are computationally efficient for sensor networks with a large number of sensors. With heterogeneous streaming data, the fusion methods applied for processing data streams should be fast enough to keep up with the high arrival rates of incoming data and, at the same time, provide solutions for inference problems (detection, classification, or estimation) with high accuracy. We propose a novel parallel platform, C-Storm (Copula-based Storm), by marrying copula-based dependence modeling, for highly accurate inference, with Storm, a highly regarded parallel computing platform for fast stream data processing. The efficacy of C-Storm is demonstrated. In this dissertation, we consider not only decision-level fusion but also fusion with heterogeneous high-level features. We investigate a supervised classification problem by fusing dependent high-level features extracted from multiple deep neural network (DNN) classifiers. We employ the regular vine copula to fuse these high-level features. The efficacy of the combination of model-based methods and deep learning is demonstrated. Besides fixed-sample-size (FSS) inference problems, we study a distributed sequential detection problem with random sample size. The aim of the distributed sequential detection problem in a non-Bayesian framework is to minimize the average detection time while satisfying pre-specified constraints on the probabilities of false alarm and missed detection. We design local memoryless truncated sequential tests and propose a copula-based sequential test at the FC. We show that, by suitably designing the local thresholds and the truncation window, the local probabilities of false alarm and missed detection of the proposed local decision rules satisfy the pre-specified error probabilities. We also show the asymptotic optimality and time efficiency of the proposed distributed sequential scheme. In large-scale sensor networks, we consider a collaborative distributed estimation problem with statistically dependent sensor observations, where there is no FC. To achieve greater sensor transmission and estimation efficiencies, we propose a two-step cluster-based collaborative distributed estimation scheme. In the first step, sensors form dependence-driven clusters such that sensors in the same cluster are dependent while sensors from different clusters are independent, and perform copula-based maximum a posteriori probability (MAP) estimation via intra-cluster collaboration. In the second step, the estimates generated in the first step are shared via inter-cluster collaboration to reach an average consensus. The efficacy of the proposed scheme is justified.
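
    To make the copula-based fusion idea concrete, the sketch below applies Sklar's theorem to a two-sensor detection problem: the fused log-likelihood ratio is the sum of the marginal log-likelihood ratios plus a copula correction for cross-sensor dependence. It uses a bivariate Gaussian copula as a simple stand-in for the regular vine copulas of the dissertation, and the marginal distributions, correlations, and threshold are illustrative assumptions only.

        # Minimal sketch (not the dissertation's code): copula-based fusion of two
        # dependent sensor observations for binary hypothesis testing.
        import numpy as np
        from scipy.stats import norm, multivariate_normal

        def gaussian_copula_logpdf(u, rho):
            """Log-density of a bivariate Gaussian copula with correlation rho."""
            z = norm.ppf(u)                               # probit transform of the CDF values
            cov = np.array([[1.0, rho], [rho, 1.0]])
            joint = multivariate_normal(mean=[0.0, 0.0], cov=cov).logpdf(z)
            return joint - norm.logpdf(z).sum()           # divide out standard-normal marginals

        def fused_llr(x, rho1=0.6, rho0=0.1):
            """Fused log-likelihood ratio: marginal LLRs plus the log copula-density ratio."""
            f1, f0 = norm(loc=1.0), norm(loc=0.0)         # illustrative marginals under H1 / H0
            marginal_llr = (f1.logpdf(x) - f0.logpdf(x)).sum()
            u1, u0 = f1.cdf(x), f0.cdf(x)                 # probability integral transforms
            copula_llr = gaussian_copula_logpdf(u1, rho1) - gaussian_copula_logpdf(u0, rho0)
            return marginal_llr + copula_llr

        x = np.array([0.8, 1.1])                          # one observation per sensor
        print("decide H1" if fused_llr(x) > 0.0 else "decide H0")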

    Smarandache Near-rings

    Full text link
    Generally, in any human field, a Smarandache structure on a set A means a weak structure W on A such that there exists a proper subset B contained in A which is embedded with a stronger structure S. These types of structures occur in our everyday life, which is why we study them in this book. Thus, as a particular case: A Near-ring is a non-empty set N together with two binary operations '+' and '.' such that (N, +) is a group (not necessarily abelian), (N, .) is a semigroup, and for all a, b, c belonging to N we have (a + b) . c = a . c + b . c. A Near-field is a non-empty set P together with two binary operations '+' and '.' such that (P, +) is a group (not necessarily abelian), (P \ {0}, .) is a group, and for all a, b, c belonging to P we have (a + b) . c = a . c + b . c. A Smarandache Near-ring is a near-ring N which has a proper subset P contained in N, where P is a near-field (with respect to the same binary operations on N). Comment: 200 pages, 50 tables, 20 figures.
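
    The same definitions, restated in displayed form for readability (a paraphrase of the passage above, not additional material from the book):

        % Near-ring: (N, +, .) with (N, +) a group (not necessarily abelian),
        % (N, .) a semigroup, and the right distributive law
        (a + b) \cdot c \;=\; a \cdot c + b \cdot c \qquad \text{for all } a, b, c \in N .
        % Near-field: (P, +, .) with (P, +) a group (not necessarily abelian),
        % (P \setminus \{0\}, \cdot) a group, and the same right distributive law on P.
        % Smarandache near-ring: a near-ring N containing a proper subset P \subset N
        % that is a near-field under the binary operations inherited from N.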

    New Challenges in Neutrosophic Theory and Applications

    Get PDF
    Neutrosophic theory has representatives on all continents and, therefore, it can be said to be a universal theory. On the other hand, according to the three volumes of “The Encyclopedia of Neutrosophic Researchers” (2016, 2018, 2019), plus numerous others not yet included in the Encyclopedia book series, about 1200 researchers from 73 countries have applied both the neutrosophic theory and method. Neutrosophic theory was founded by Professor Florentin Smarandache in 1998; it constitutes a further generalization of fuzzy and intuitionistic fuzzy theories. The key distinction between the neutrosophic set/logic and other types of sets/logics lies in the introduction of the degree of indeterminacy/neutrality (I) as an independent component in the neutrosophic set. Thus, neutrosophic theory involves the degree of membership-truth (T), the degree of indeterminacy (I), and the degree of non-membership-falsehood (F). In recent years, neutrosophic sets, logic, measure, probability and statistics, precalculus and calculus, etc., have been extended and applied in various fields, such as communication, management, and information technology. We believe that this book serves as useful guidance for learning about the current progress in neutrosophic theories. In total, 22 studies are presented, reflecting the thematic vision of the call. The contents of each study included in the volume are briefly described as follows. The first contribution, authored by Wadei Al-Omeri and Saeid Jafari, addresses the concept of generalized neutrosophic pre-closed sets and generalized neutrosophic pre-open sets in neutrosophic topological spaces. In the article “Design of Fuzzy Sampling Plan Using the Birnbaum-Saunders Distribution”, the authors Muhammad Zahir Khan, Muhammad Farid Khan, Muhammad Aslam, and Abdur Razzaque Mughal discuss the use of the probability distribution function of the Birnbaum–Saunders distribution as the proportion of defective items and the acceptance probability in a fuzzy environment. Further, the authors Derya Bakbak, Vakkas Uluçay, and Memet Şahin present the “Neutrosophic Soft Expert Multiset and Their Application to Multiple Criteria Decision Making”, together with several operations defined for them and their important algebraic properties. In “Neutrosophic Multigroups and Applications”, Vakkas Uluçay and Memet Şahin propose an algebraic structure on neutrosophic multisets called neutrosophic multigroups, deriving their basic properties and giving some applications to group theory. Changxing Fan, Jun Ye, Sheng Feng, En Fan, and Keli Hu introduce the “Multi-Criteria Decision-Making Method Using Heronian Mean Operators under a Bipolar Neutrosophic Environment” and test the effectiveness of their new methods. Another decision-making study, addressing an everyday issue of prioritization in industrial development, is given in “Neutrosophic Cubic Einstein Hybrid Geometric Aggregation Operators with Application in Prioritization Using Multiple Attribute Decision-Making Method”, written by Khaleed Alhazaymeh, Muhammad Gulistan, Majid Khan, and Seifedine Kadry.
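
    As a brief illustration of the independence of the three degrees mentioned above (standard single-valued neutrosophic notation, not drawn from any particular chapter of the volume):

        % A single-valued neutrosophic set A assigns to each element x three independent degrees
        x \;\mapsto\; \big( T_A(x),\; I_A(x),\; F_A(x) \big), \qquad T_A(x),\, I_A(x),\, F_A(x) \in [0, 1],
        % and because the degrees are independent, the only constraint on their sum is
        0 \;\le\; T_A(x) + I_A(x) + F_A(x) \;\le\; 3 .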

    Optimization of the holographic process for imaging and lithography

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 272-297). Since their invention in 1948 by Dennis Gabor, holograms have proven to be important components of a variety of optical systems, and their implementation in new fields and methods is expected to continue growing. Their ability to encode 3D optical fields on a 2D plane opened the possibility of novel applications for imaging and lithography. In the traditional form, holograms are produced by the interference of a reference wave and an object wave, recording the phase and amplitude of the complex field. The holographic process has been extended to include different recording materials and methods. The increasing demand for holographic-based systems is accompanied by a need for efficient optimization tools designed to maximize the performance of the optical system. In this thesis, a variety of multi-domain optimization tools designed to improve the performance of holographic optical systems are proposed. These tools are designed to be robust, computationally efficient, and sufficiently general to be applied to the design of various holographic systems. All the major forms of holographic elements are studied: computer-generated holograms, thin and thick conventional holograms, numerically simulated holograms, and digital holograms. Novel holographic optical systems for imaging and lithography are proposed. In the case of lithography, a high-resolution system based on Fresnel-domain computer-generated holograms (CGHs) is presented. The holograms are numerically designed using a reduced-complexity hybrid optimization algorithm (HOA) based on genetic algorithms (GAs) and the modified error reduction (MER) method. The algorithm is efficiently implemented on a graphics processing unit. Simulations as well as experimental results for CGHs fabricated using electron-beam lithography are presented. A method for extending the system's depth of focus is proposed. The HOA is extended for the design and optimization of multispectral CGHs applied to high-efficiency solar concentration and spectral splitting. A second lithographic system based on optically recorded total internal reflection (TIR) holograms is studied. A comparative analysis between scalar and vector diffraction theories for the modeling and simulation of the system is performed. A complete numerical model of the system is developed, including the photoresist response and first-order models for shrinkage of the holographic emulsion. A novel block-stitching algorithm is introduced for the calculation of large diffraction patterns, which allows current computational limitations of memory and processing time to be overcome. The numerical model is used to optimize the system's performance as well as to redesign the mask to account for potential fabrication errors. The simulation results are compared to experimentally measured data. In the case of imaging, a segmented-aperture thin imager based on holographically corrected gradient-index (GRIN) lenses is proposed. The compound system is constrained to a maximum thickness of 5 mm and utilizes an optically recorded hologram for correcting high-order optical aberrations of the GRIN lens array. The imager is analyzed using system and information theories. A multi-domain optimization approach based on GAs is implemented to maximize the system's channel capacity and hence improve the information extraction or encoding process. A decoding or reconstruction strategy is implemented using a super-resolution algorithm. Experimental results for the optimization of the hologram's recording process and the tomographic measurement of the system's space-variant point spread function are presented. A second imaging system for the measurement of complex fluid flows by tracking micron-sized particles using digital holography is studied. A stochastic theoretical model based on a stability metric, similar to the channel capacity of a Gaussian channel, is presented and used to optimize the system. The theoretical model is first derived for the extreme case of point-source particles using Rayleigh scattering and scalar diffraction theory formulations. The model is then extended to account for particles of variable sizes using Mie theory for the scattering of homogeneous dielectric spherical particles. The influence and statistics of the particle-density-dependent cross-talk noise are studied. Simulation and experimental results for finding the optimum particle density based on the stability metric are presented. For all the studied systems, a sensitivity analysis is performed to predict and assist in the correction of potential fabrication or calibration errors. By José Antonio Domínguez-Caballero. Ph.D.
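
    For the CGH design step, the sketch below shows a plain error-reduction (Gerchberg-Saxton-style) loop in the Fourier domain. It is only an illustration of the iterative principle: the thesis combines genetic algorithms with a modified error reduction step in the Fresnel domain, and the toy target pattern here is an arbitrary placeholder.

        # Minimal sketch of a plain error-reduction loop for phase-only CGH design
        # (an illustration of the iterative principle, not the thesis's HOA/MER code).
        import numpy as np

        def error_reduction_cgh(target_amplitude, n_iter=200, seed=0):
            """Iterate between hologram and image planes, enforcing a phase-only hologram
            and the desired amplitude in the image plane."""
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0.0, 2.0 * np.pi, size=target_amplitude.shape)
            for _ in range(n_iter):
                hologram = np.exp(1j * phase)                            # phase-only constraint
                image = np.fft.fft2(hologram)                            # propagate to the image plane
                image = target_amplitude * np.exp(1j * np.angle(image))  # impose the target amplitude
                phase = np.angle(np.fft.ifft2(image))                    # back-propagate, keep the phase
            return phase

        # Toy target: a bright square on a dark background.
        target = np.zeros((128, 128))
        target[48:80, 48:80] = 1.0
        cgh_phase = error_reduction_cgh(target)
        reconstruction = np.abs(np.fft.fft2(np.exp(1j * cgh_phase)))
        print("normalised residual:",
              np.linalg.norm(reconstruction / reconstruction.max() - target) / np.linalg.norm(target))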

    Full Issue

    Get PDF

    Large-scale inference in the focally damaged human brain

    Get PDF
    Clinical outcomes in focal brain injury reflect the interactions between two distinct anatomically distributed patterns: the functional organisation of the brain and the structural distribution of injury. The challenge of understanding the functional architecture of the brain is familiar; that of understanding the lesion architecture is barely acknowledged. Yet models of the functional consequences of focal injury are critically dependent on our knowledge of both. The studies described in this thesis seek to show how machine-learning-enabled, high-dimensional multivariate analysis powered by large-scale data can enhance our ability to model the relation between focal brain injury and clinical outcomes across an array of modelling applications. All studies are conducted on the largest internationally available set of MR imaging data of focal brain injury in the context of acute stroke (N=1333) and employ kernel machines as the principal modelling architecture. First, I examine lesion-deficit prediction, quantifying the ceiling on achievable predictive fidelity for high-dimensional and low-dimensional models and demonstrating the former to be substantially higher than the latter. Second, I determine the marginal value of adding unlabelled imaging data to predictive models within a semi-supervised framework, quantifying the benefit of assembling unlabelled collections of clinical imaging. Third, I compare high- and low-dimensional approaches to modelling response to therapy in two contexts: quantifying the effect of treatment at the population level (therapeutic inference) and predicting the optimal treatment for an individual patient (prescriptive inference). I demonstrate the superiority of the high-dimensional approach in both settings.
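
    As an illustration of the kernel-machine architecture named above, the sketch below fits a generic kernel regression to synthetic lesion data; the feature construction, sample size, and hyperparameters are placeholders, not those of the thesis, which models high-dimensional lesion representations from the N=1333 stroke cohort.

        # Minimal sketch of a kernel-machine lesion-deficit model on synthetic data
        # (purely illustrative; not the thesis's models or data).
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        n_patients, n_voxels = 300, 2000
        lesions = (rng.random((n_patients, n_voxels)) < 0.02).astype(float)  # sparse binary lesion maps
        critical = rng.choice(n_voxels, size=50, replace=False)              # hypothetical eloquent voxels
        deficit = lesions[:, critical].sum(axis=1) + 0.5 * rng.standard_normal(n_patients)

        model = SVR(kernel="rbf", C=10.0, gamma="scale")                     # high-dimensional kernel machine
        scores = cross_val_score(model, lesions, deficit, cv=5, scoring="r2")
        print("cross-validated R^2:", round(scores.mean(), 3))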

    Smarandache near-rings

    Get PDF
    The main concern of this book is the study of Smarandache analogue properties of near-rings and Smarandache near-rings; as such, it does not promise to cover all concepts or the proofs of all results.

    A Search for WIMP Dark Matter using an Optimized Chi-square Technique on the Final Data from the Cryogenic Dark Matter Search Experiment (CDMS II).

    Get PDF
    During the last two decades, cosmology has become a precision observational science thanks (in part) to the incredible number of experiments performed to better understand the composition of the universe. The large amount of data accumulated strongly indicates that the bulk of the universe's matter is in the form of nonbaryonic matter that does not interact electromagnetically. Combined evidence from the dynamics of galaxies and galaxy clusters confirms that most of the mass in the universe is not composed of any known form of matter. Measurements of the cosmic microwave background, big bang nucleosynthesis, and many other experiments indicate that ∼80% of the matter in the universe is dark, non-relativistic, and cold. The dark matter resides in the halos surrounding galaxies, galaxy clusters, and other large-scale structures. Weakly Interacting Massive Particles (WIMPs) are a well-motivated class of dark matter candidates that arise naturally in supersymmetric extensions of the Standard Model of particle physics and can be produced as non-relativistic thermal relics in the early universe with about the right density to account for the missing mass. The Cryogenic Dark Matter Search (CDMS) experiment seeks to directly detect the keV-scale energy deposited by WIMPs in the galactic halo when they scatter from nuclei in crystalline detectors made of germanium and silicon. These detectors, called Z-sensitive Ionization and Phonon (ZIP) detectors, are operated at ∼45 mK and simultaneously measure the ionization and the (athermal) phonons produced by particle interactions. The ratio of ionization and phonon energies allows discrimination of a low rate of nuclear recoils (expected for WIMPs) from an overwhelming rate of electron recoils (expected for most backgrounds). Phonon-pulse shape and timing enable further suppression of lower-rate interactions at the detector surfaces. This dissertation describes the results of a WIMP search using CDMS II data sets accumulated at the Soudan Underground Laboratory in Minnesota. Results from the original analysis of these data were published in 2009; two events were observed in the signal region with an expected leakage of 0.9 events. Further investigation revealed an issue with the ionization-pulse reconstruction algorithm, leading to a software upgrade and a subsequent reanalysis of the data. As part of the reanalysis, I developed an advanced discrimination technique to better distinguish (potential) signal events from backgrounds using a 5-dimensional chi-square method. This data analysis technique combines the event information recorded for each WIMP-search event to derive a background-discrimination parameter capable of reducing the expected background to less than one event, while maintaining high efficiency for signal events. Furthermore, optimizing the cut positions of this 5-dimensional chi-square parameter for the 14 viable germanium detectors yields an improved expected sensitivity to WIMP interactions relative to previous CDMS results. This dissertation describes my improved (and optimized) discrimination technique and the results obtained from a blind application to the reanalyzed CDMS II WIMP-search data. This analysis achieved the best expected sensitivity of the three techniques developed for the reanalysis and was therefore chosen as the primary timing analysis, whose limit will be quoted in a publication currently in preparation. A total raw exposure of 612.17 kg-days is analyzed in this work. No candidate events were observed, and a corresponding upper limit on the WIMP-nucleon scattering cross section as a function of WIMP mass is derived. These data set a 90% confidence-level upper limit on the spin-independent WIMP-nucleon elastic-scattering cross section of 3.19 × 10⁻⁴⁴ cm² for a WIMP mass of 60 GeV/c². Combining this result with all previous CDMS II data gives an upper limit of 1.96 × 10⁻⁴⁴ cm² for a WIMP mass of 60 GeV/c² (a factor of 2 better than the original analysis). At the time of writing, the WIMP-search results obtained with the reanalyzed CDMS II data constitute the second most stringent limits on WIMP-nucleon scattering, after XENON100, excluding previously unexplored parameter space. Interesting parameter space is excluded for the WIMP-nucleon cross section as a function of WIMP mass under standard assumptions; the parameter space favored by interpretations of excess low-energy events and annual modulation in other experiments as low-mass WIMP signals is partially excluded for DAMA/LIBRA and CoGeNT.
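
    To illustrate the kind of discrimination parameter described above, the sketch below builds a Mahalanobis-style chi-square from a 5-dimensional event feature vector and cuts on the difference between electron-recoil and nuclear-recoil chi-squares. The feature choices, template values, and cut threshold are placeholders for illustration, not CDMS II quantities.

        # Minimal sketch of a 5-dimensional chi-square discrimination parameter
        # (illustrative placeholders only; not the CDMS II templates or cut values).
        import numpy as np

        def chi_square(features, template_mean, template_cov):
            """Chi-square of an event's feature vector against a recoil-population template."""
            diff = features - template_mean
            return float(diff @ np.linalg.solve(template_cov, diff))

        def discrimination_parameter(features, nr_mean, nr_cov, er_mean, er_cov):
            """Chi-square difference: large positive values look nuclear-recoil (signal) like,
            large negative values look electron-recoil (background) like."""
            return chi_square(features, er_mean, er_cov) - chi_square(features, nr_mean, nr_cov)

        # Placeholder 5-D templates (e.g. ionization yield plus four phonon-timing quantities).
        nr_mean = np.array([0.3, 5.0, 12.0, 0.8, 1.5]); nr_cov = np.diag([0.01, 1.0, 4.0, 0.05, 0.2])
        er_mean = np.array([1.0, 9.0, 20.0, 1.4, 2.5]); er_cov = np.diag([0.02, 2.0, 6.0, 0.08, 0.3])

        event = np.array([0.35, 5.5, 13.0, 0.85, 1.6])   # one hypothetical WIMP-search event
        delta_chi2 = discrimination_parameter(event, nr_mean, nr_cov, er_mean, er_cov)
        print("signal-like" if delta_chi2 > 25.0 else "background-like", round(delta_chi2, 1))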