42 research outputs found

    Construction of weakly CUD sequences for MCMC sampling

    In Markov chain Monte Carlo (MCMC) sampling, considerable thought goes into constructing random transitions. But those transitions are almost always driven by a simulated IID sequence. Recently it has been shown that replacing an IID sequence by a weakly completely uniformly distributed (WCUD) sequence leads to consistent estimation in finite state spaces. Unfortunately, few WCUD sequences are known. This paper gives general methods for proving that a sequence is WCUD, shows that some specific sequences are WCUD, and shows that certain operations on WCUD sequences yield new WCUD sequences. A numerical example on a 42-dimensional continuous Gibbs sampler found that some WCUD input sequences produced variance reductions ranging from tens to hundreds for posterior means of the parameters, compared to IID inputs. Comment: Published at http://dx.doi.org/10.1214/07-EJS162 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org).
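    As a rough illustration of where such a driver sequence enters the algorithm (not the paper's construction), the sketch below is a minimal random-walk Metropolis sampler whose uniforms come from a pluggable generator. The Weyl-sequence driver is only a placeholder non-IID sequence, not a proven WCUD sequence.

```python
import numpy as np

def metropolis(n, driver, x0=0.0, step=1.0):
    """Random-walk Metropolis for a standard normal target.

    `driver` yields the uniforms that drive the chain; with an IID
    generator this is ordinary MCMC, and the paper's point is that a
    (W)CUD sequence can be substituted here instead.
    """
    log_pi = lambda x: -0.5 * x * x              # log density, up to a constant
    xs = np.empty(n)
    x = x0
    for i in range(n):
        u_prop, u_acc = next(driver), next(driver)
        y = x + step * (2.0 * u_prop - 1.0)      # proposal from the first uniform
        if u_acc < np.exp(log_pi(y) - log_pi(x)):  # accept with the second
            x = y
        xs[i] = x
    return xs

def weyl_driver(alpha=np.sqrt(2.0)):
    """Placeholder driver: the Weyl sequence {n*alpha mod 1}. This is
    NOT one of the paper's WCUD constructions, just a simple non-IID
    stand-in to show the interface."""
    u = 0.0
    while True:
        u = (u + alpha) % 1.0
        yield u

def iid_driver(seed=0):
    """Ordinary IID uniforms, for comparison."""
    rng = np.random.default_rng(seed)
    while True:
        yield rng.random()

samples = metropolis(10_000, weyl_driver())
```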

    Martin Gardner and His Influence on Recreational Math

    Recreational mathematics is a relatively new field in the world of mathematics. Though sometimes dismissed as frivolous because its practitioners need no advanced knowledge of the subject, recreational mathematics is a perfect gateway for people to experience the joy of logically establishing a solution. Martin Gardner recognized that this pattern of proving solutions to questions is how mathematics progresses. From his childhood on, Gardner greatly influenced the mathematical world. Although not a mathematician, he inspired many to pursue careers and make advancements in mathematics during his 25-year career with Scientific American. He encouraged novices to expand their knowledge, informed professionals about developments in computer science, and established proofs of his own.

    Aspects of algorithms and dynamics of cellular paradigms

    Cellular paradigms, like Cellular Neural Networks (CNNs) and Cellular Automata (CA), are an excellent tool for performing computation, since they are equivalent to a universal Turing machine. The introduction of the Cellular Neural Network Universal Machine (CNN-UM) has made it possible to develop hardware whose computational core works according to the principles of cellular paradigms; such hardware has found application in a number of fields throughout the last decade. Nevertheless, there are still many open questions about how to define algorithms for a CNN-UM and how to study the dynamics of Cellular Automata. This dissertation tackles both problems: first, we prove that it is possible to bound the space of all algorithms of the CNN-UM and explore it through genetic techniques; second, we explain the fundamentals of the nonlinear-dynamics perspective on CA (according to Chua's definition) and illustrate how this technique has allowed us to find novel results.
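    The dissertation's contributions concern CNN-UM algorithm spaces and Chua's nonlinear characterisation of CA, neither of which is reproduced here. As a minimal, generic illustration of cellular computation only, the sketch below evolves a one-dimensional elementary cellular automaton; rule 110 is a classic example of a Turing-universal rule.

```python
import numpy as np

def eca_step(state, rule=110):
    """One synchronous update of a 1-D elementary cellular automaton
    with periodic boundaries. Each cell's next value is looked up from
    the 8-entry Wolfram rule table using its 3-cell neighbourhood."""
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    idx = (left << 2) | (state << 1) | right   # neighbourhood code 0..7
    table = (rule >> np.arange(8)) & 1         # bit i of `rule` = output for code i
    return table[idx]

# Evolve a single seed cell for a few steps.
state = np.zeros(64, dtype=np.int64)
state[32] = 1
for _ in range(32):
    state = eca_step(state)
```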

    A privacy preserving framework for cyber-physical systems and its integration in real world applications

    A cyber-physical system (CPS) comprises a network of processing- and communication-capable sensors and actuators that are pervasively embedded in the physical world. These intelligent computing elements achieve tight combination and coordination between logic processing and physical resources. It is envisioned that CPS will have great economic and societal impact and alter the quality of life much as the Internet has. This dissertation focuses on the privacy issues in current and future CPS applications. As thousands of intelligent devices become deeply embedded in human societies, system operations may disclose sensitive information if no privacy-preserving mechanism is designed. This dissertation identifies data privacy and location privacy as representatives for investigating privacy problems in CPS. Data content privacy is infringed if the adversary can determine, or partially determine, the meaning of transmitted or stored data. Location privacy, on the other hand, is the secrecy of the association between a sensed object and a specific location, the disclosure of which may endanger the sensed object. Location privacy may be compromised by an adversary through hop-by-hop traceback along the reverse direction of the message routing path. This dissertation proposes a public key based access control scheme to protect data content privacy. Recent advances in efficient public key schemes, such as ECC, have already shown the feasibility of using public key schemes on low-power devices, including sensor motes. In this dissertation, an efficient public key security primitive, WM-ECC, has been implemented for TelosB and MICAz, the two major hardware platforms in current sensor networks. WM-ECC achieves the best performance among the academic implementations. Based on WM-ECC, this dissertation has designed various security schemes, including pairwise key establishment, user access control and a false data filtering mechanism, to protect data content privacy. The experiments presented in this dissertation show that the proposed schemes are practical for real world applications. To protect location privacy, this dissertation considers two adversary models. For the first model, in which an adversary has limited radio detection capability, privacy-aware routing schemes are designed to slow down the adversary's traceback progress. Through theoretical analysis, this dissertation shows how to maximize the adversary's traceback time given a power consumption budget for message routing. Based on the theoretical results, this dissertation also proposes a simple and practical weighted random stride (WRS) routing scheme. The second model assumes a more powerful adversary that is able to monitor all radio communications in the network. This dissertation proposes a random schedule scheme, sketched after this abstract, in which each node transmits in a certain time slot of each period so that the adversary cannot profile differences in communication patterns among the nodes. Finally, this dissertation integrates the proposed privacy preserving framework into Snoogle, a sensor-node-based search engine for the physical world. Snoogle allows people to search for physical objects in their vicinity. The previously proposed privacy preserving schemes are applied in this application to achieve flexible and resilient privacy preservation.
In addition to security and privacy, Snoogle also incorporates a number of energy-saving and communication-compression techniques that are carefully designed for systems composed of low-cost, low-power embedded devices. The evaluation study comprises real world experiments on a prototype Snoogle system and scalability simulations.
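    As a toy illustration of the random schedule idea described above (the dissertation's actual scheme, with its handling of slot collisions, dummy traffic and resynchronisation, is not reproduced here), each node can be assigned one uniformly random transmission slot per period, so every node presents the same one-message-per-period pattern to a global eavesdropper.

```python
import random

def make_schedule(node_ids, period, seed=42):
    """Assign every node one transmission slot per period, chosen
    uniformly at random. Because each node sends exactly once per
    period, an observer sees identical traffic patterns everywhere."""
    rng = random.Random(seed)
    return {node: rng.randrange(period) for node in node_ids}

def transmits(node, t, schedule, period):
    """A node sends (real or dummy) traffic exactly in its own slot."""
    return t % period == schedule[node]

schedule = make_schedule(range(10), period=100)
active_now = [n for n in range(10) if transmits(n, t=37, schedule=schedule, period=100)]
```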

    Recent results and open problems on CIS Graphs


    Efficient Anonymous Biometric Matching in Privacy-Aware Environments

    Video surveillance is an important tool in security and environmental monitoring; however, the widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have recently been proposed to automatically redact images of selected individuals in surveillance video for protection. To identify these individuals for protection, the most reliable approach is to use biometric signals, as they are immutable and highly discriminative. If misused, these same characteristics of biometrics can seriously defeat the goal of privacy protection. In this dissertation, an Anonymous Biometric Access Control (ABAC) procedure is proposed based on biometric signals for privacy-aware video surveillance. The ABAC procedure uses Secure Multi-party Computation (SMC) based protocols to verify membership of an incoming individual without knowing his/her true identity. To make SMC-based protocols scalable to large biometric databases, I introduce the k-Anonymous Quantization (kAQ) framework to provide an effective and secure tradeoff between privacy and complexity. kAQ limits the system's knowledge of the incoming individual to k maximally dissimilar candidates in the database, where k is a design parameter that controls the complexity-privacy tradeoff. The relationship between biometric similarity and privacy is experimentally validated using a twin iris database. The effectiveness of the entire system is demonstrated on a public iris biometric database. To provide protected subjects with full access to their privacy information in the video surveillance system, I develop a novel privacy information management system that allows subjects to access their information via the same biometric signals used for ABAC. The system is composed of two encrypted-domain protocols: the privacy information encryption protocol encrypts the original video records using the iris pattern acquired during the ABAC procedure; the privacy information retrieval protocol allows the video records to be anonymously retrieved through a GC-based iris pattern matching process. Experimental results on a public iris biometric database demonstrate the validity of my framework.
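    As a loose illustration of the "k maximally dissimilar candidates" idea only (the actual kAQ framework is a quantization scheme and is not reproduced here), a greedy farthest-point selection over normalised Hamming distances between binary iris codes might look like the following; the function name and the greedy criterion are this sketch's assumptions.

```python
import numpy as np

def k_dissimilar(query, database, k):
    """Greedily pick k mutually dissimilar rows of `database` (a 2-D
    binary array of iris codes), starting from the row farthest from
    `query`. Assumes k <= len(database). Dissimilarity is the
    normalised Hamming distance."""
    dist = lambda a, b: float(np.mean(a != b))
    chosen = [int(np.argmax([dist(query, row) for row in database]))]
    while len(chosen) < k:
        # Next pick: the row whose nearest already-chosen row is farthest.
        scores = [min(dist(database[i], database[j]) for j in chosen)
                  if i not in chosen else -1.0
                  for i in range(len(database))]
        chosen.append(int(np.argmax(scores)))
    return chosen

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(50, 256))   # 50 toy 256-bit iris codes
candidates = k_dissimilar(db[0], db, k=5)
```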

    Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms

    This thesis presents design and implementation approaches for parallel algorithms of computer algebra. We use algorithmic skeletons as well as further approaches, like data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for some special parallel loops that we call 'repeated computation with a possibility of premature termination'. We introduce in this thesis a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms our arithmetic provides a generic parallelisation approach. The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language. Moreover, it allows us to refrain from using two different languages, one for the implementation and one for the interface, in our implementation of computer algebra algorithms. Further, this thesis presents methods for evaluation and estimation of parallel execution times. We partition the parallel execution time into two components. One of them accounts for the quality of the parallelisation; we call it the 'parallel penalty'. The other is the sequential execution time. For the estimation, we predict both components separately, using statistical methods. This enables very confident estimations, while using drastically fewer measurement points than other methods. We have applied both our evaluation and estimation approaches to the parallel programs presented in this thesis, and we have also used existing estimation methods. We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen's matrix multiplication algorithm and the fast Fourier transform; the latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Especially for our implementation of Strassen's algorithm, we have designed and implemented a divide and conquer skeleton based on actors. For the parallel fast Fourier transform, not only did we use new divide and conquer skeletons, but we also developed a map-and-transpose skeleton that enables good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs. This thesis presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, like parMap, farm and workpool, with a premature termination property. We use this to implement the so-called 'parallel repeated computation', a special form of a speculative parallel loop. We have implemented two probabilistic primality tests, the Rabin-Miller test and the Jacobi sum test, and parallelised both with our approach. We analysed the task distribution and stated fitting configurations for the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel. Subsequently, we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements.
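    For reference, a sequential Karatsuba multiplication is sketched below in Python; the three recursive sub-products are exactly the calls a divide and conquer skeleton would evaluate in parallel (the thesis does this in Eden, not Python).

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers: one n-bit
    product is reduced to three products of roughly n/2 bits."""
    if x < 256 or y < 256:                  # small base case: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)     # split both operands at bit n
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)                   # three half-size multiplications;
    c = karatsuba(xl, yl)                   # these are the parallelisable tasks
    b = karatsuba(xh + xl, yh + yl) - a - c # cross term via one extra product
    return (a << (2 * n)) + (b << n) + c

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```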
The parallelisation of the Jacobi sum test and our generic parallelisation scheme for repeated computation are original contributions. The data parallel arithmetic was defined not only for integers, which is already known, but also for rationals. We handle the common factors of the numerator or denominator of a fraction with the modulus in a novel manner; this is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we have parallelised determinant computation using Gaussian elimination. As always, we have performed task distribution analysis and estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables parallelisation of entire classes of computer algebra algorithms. Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisation of computer algebra algorithms.
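    The rational multiple-residue arithmetic itself is the thesis's contribution and is not reproduced here, but the classical integer residue arithmetic it builds on is easy to sketch: an integer is represented by its residues modulo pairwise coprime moduli, arithmetic proceeds independently per modulus (hence data parallel), and the result is recovered by Chinese remaindering. The moduli below are arbitrary example primes.

```python
from math import prod

MODULI = (10007, 10009, 10037, 10039)   # pairwise coprime example moduli

def to_residues(x):
    """Represent an integer by its residue vector; arithmetic then
    proceeds componentwise, one independent task per modulus."""
    return tuple(x % m for m in MODULI)

def mul(a, b):
    """Componentwise product: each modulus can be handled in parallel."""
    return tuple(ai * bi % m for ai, bi, m in zip(a, b, MODULI))

def from_residues(r):
    """Chinese-remainder reconstruction, valid while the true value
    stays below the product of the moduli. The thesis's extension to
    rationals whose numerator or denominator shares factors with a
    modulus is not attempted here."""
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)    # Mi^{-1} mod m (Python 3.8+)
    return x % M

assert from_residues(mul(to_residues(1234), to_residues(5678))) == 1234 * 5678
```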