51 research outputs found

    Raspberry Pi Technology

    Get PDF

    Proceedings of the XIII Global Optimization Workshop: GOW'16

    Get PDF
    [Excerpt] Preface: Past Global Optimization Workshops have been held in Sopron (1985 and 1990), Szeged (WGO, 1995), Florence (GO’99, 1999), Hanmer Springs (Let’s GO, 2001), Santorini (Frontiers in GO, 2003), San José (GO’05, 2005), Mykonos (AGO’07, 2007), Skukuza (SAGO’08, 2008), Toulouse (TOGO’10, 2010), Natal (NAGO’12, 2012) and Málaga (MAGO’14, 2014) with the aim of stimulating discussion between senior and junior researchers on the topic of Global Optimization. In 2016, the XIII Global Optimization Workshop (GOW’16) takes place in Braga and is organized by three researchers from the University of Minho. Two of them belong to the Systems Engineering and Operational Research Group of the Algoritmi Research Centre and the other to the Statistics, Applied Probability and Operational Research Group of the Centre of Mathematics. The event received more than 50 submissions from 15 countries in Europe, South America and North America. We want to express our gratitude to the invited speaker Panos Pardalos for accepting the invitation and sharing his expertise, helping us to meet the workshop objectives. GOW’16 would not have been possible without the valuable contributions of the authors and the International Scientific Committee members. We thank you all. This proceedings book intends to present an overview of the topics that will be addressed in the workshop, with the goal of contributing to interesting and fruitful discussions between the authors and participants. After the event, high-quality papers can be submitted to a special issue of the Journal of Global Optimization dedicated to the workshop. [...]

    Programming with Numerical Uncertainties

    Get PDF
    Numerical software, common in scientific computing or embedded systems, inevitably uses an approximation of the real arithmetic in which most algorithms are designed. In many domains, roundoff errors are not the only source of inaccuracy: measurement and truncation errors further increase the uncertainty of the computed results. Adequate tools are needed to help users select suitable approximations (data types and algorithms) which satisfy their accuracy requirements, especially for safety-critical applications. Determining that a computation produces accurate results is challenging. Roundoff errors and error propagation depend on the ranges of variables in complex and non-obvious ways; even determining these ranges accurately for nonlinear programs poses a significant challenge. In numerical loops, roundoff errors grow, in general, unboundedly. Finally, due to numerical errors, the control flow in the finite-precision implementation may diverge from the ideal real-valued one by taking a different branch and produce a result that is far off from the expected one. In this thesis, we present techniques and tools for automated and sound analysis, verification and synthesis of numerical programs. We focus on numerical errors due to roundoff from floating-point and fixed-point arithmetic, external input uncertainties, or truncation errors. Our work uses interval or affine arithmetic together with Satisfiability Modulo Theories (SMT) technology, as well as analytical properties of the underlying mathematical problems. This combination of techniques enables us to compute sound and yet accurate error bounds for nonlinear computations, determine closed-form symbolic invariants for unbounded loops, and quantify the effects of discontinuities on numerical errors. We can furthermore certify the results of self-correcting iterative algorithms. Accuracy usually comes at the expense of resource efficiency: more precise data types need more time, space and energy. We propose a programming model where the scientist writes his or her numerical program in a real-valued specification language with explicit error annotations. It is then the task of our verifying compiler to select a suitable floating-point or fixed-point data type which guarantees the needed accuracy. Sometimes accuracy can be gained by simply re-arranging the non-associative finite-precision computation. We present a scalable technique that searches for a better evaluation order and show that the gains can be substantial. We have implemented all our techniques and evaluated them on a number of benchmarks from scientific computing and embedded systems, with promising results.
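
    To make the roundoff-tracking idea concrete, here is a minimal sketch that pairs a plain interval range analysis with a first-order roundoff accumulator. It is an illustration only, under a simplified "one unit roundoff per operation" error model; all names (Interval, EPS) are ours, and unlike the thesis's sound analyses it uses neither directed rounding on the interval endpoints nor SMT-based range refinement.

        import sys

        EPS = sys.float_info.epsilon / 2  # unit roundoff for IEEE-754 binary64

        class Interval:
            """Range of a real-valued result plus accumulated worst-case roundoff."""
            def __init__(self, lo, hi, err=0.0):
                self.lo, self.hi, self.err = lo, hi, err

            def _mag(self):
                return max(abs(self.lo), abs(self.hi))

            def __add__(self, other):
                lo, hi = self.lo + other.lo, self.hi + other.hi
                # propagated input errors + fresh roundoff of this addition
                err = self.err + other.err + EPS * max(abs(lo), abs(hi))
                return Interval(lo, hi, err)

            def __mul__(self, other):
                ps = [self.lo * other.lo, self.lo * other.hi,
                      self.hi * other.lo, self.hi * other.hi]
                lo, hi = min(ps), max(ps)
                # first-order propagation |x|*err_y + |y|*err_x, plus new roundoff
                err = (self.err * other._mag() + other.err * self._mag()
                       + EPS * max(abs(lo), abs(hi)))
                return Interval(lo, hi, err)

        # x in [1, 2], y in [0, 1]: bound the range and roundoff of x*y + x
        x = Interval(1.0, 2.0)
        y = Interval(0.0, 1.0)
        r = x * y + x
        print(f"range [{r.lo}, {r.hi}], |roundoff| <= {r.err:.2e}")

    Interval ranges feed the error bounds (larger magnitudes mean larger roundoff), which is why accurate range analysis is the crux of the problem described above.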

    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Get PDF
    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research, and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low-noise results can be achieved using a very small number of samples, which is important to minimize the rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions. My second main contribution is to present one of the first techniques to take the triple product of lighting, visibility and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
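
    The core variance-reduction idea, importance sampling, fits in a few lines: concentrate samples where the integrand is large and divide each sample by its sampling density. The sketch below estimates the integral of sin(x) over [0, pi] (exactly 2) with uniform versus tent-shaped sampling; the integrand, the triangular pdf, and the sample count are illustrative assumptions, not the dissertation's product-sampling machinery.

        import math, random

        def f(x):
            return math.sin(x)  # integrand; the exact integral over [0, pi] is 2

        def tent_pdf(x):
            # triangular density on [0, pi], peaked at pi/2, roughly shaped like f
            return (4.0 / math.pi ** 2) * min(x, math.pi - x)

        def sample_tent():
            # inverse-CDF sampling of the triangular density;
            # clamp u away from {0, 1} so the pdf at the sample is never zero
            u = min(max(random.random(), 1e-9), 1.0 - 1e-9)
            if u <= 0.5:
                return math.pi * math.sqrt(u / 2.0)
            return math.pi * (1.0 - math.sqrt((1.0 - u) / 2.0))

        N = 10_000
        # uniform sampling: p(x) = 1/pi, so each sample contributes pi * f(x)
        uniform = sum(math.pi * f(random.uniform(0.0, math.pi)) for _ in range(N)) / N
        # importance sampling: samples cluster where f is large; weight by 1/pdf
        xs = [sample_tent() for _ in range(N)]
        importance = sum(f(x) / tent_pdf(x) for x in xs) / N
        print(f"uniform:    {uniform:.4f}")
        print(f"importance: {importance:.4f}   (exact value: 2)")

    Both estimators are unbiased; the closer the sampling density matches the integrand (here, the product of lighting and reflectance), the lower the variance for the same sample count.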

    Rigorous solution techniques for numerical constraint satisfaction problems

    Get PDF
    A constraint satisfaction problem (e.g., a system of equations and inequalities) consists of a finite set of constraints specifying which value combinations from given variable domains are admitted. It is called numerical if its variable domains are continuous. Such problems arise in many applications, but form a difficult problem class since they are NP-hard. Solving a constraint satisfaction problem is to find one or more value combinations satisfying all its constraints. Numerical computations on floating-point numbers in computers often suffer from rounding errors. The rigorous control of rounding errors during numerical computations is highly desired in many applications because it would benefit the quality and reliability of the decisions based on the solutions found by the computations. Various aspects of rigorous numerical computations in solving constraint satisfaction problems are addressed in this thesis: search, constraint propagation, combination of inclusion techniques, and post-processing. The solution of a constraint satisfaction problem is essentially performed by a search. In this thesis, we propose a new complete search technique (i.e., one that can find all solutions within a predetermined tolerance) for numerical constraint satisfaction problems. This technique is general and can be used in place of branching steps in most branch-and-prune methods. Moreover, it speeds up the most recent general search strategy (often by an order of magnitude) and provides a concise representation of solutions. To make a constraint satisfaction problem easier to solve, a major approach in the constraint programming¹ field, called constraint propagation, is often used to reduce the variable domains (by discarding redundant value combinations from the domains). Based on directed acyclic graphs, we propose a new constraint propagation technique and a method for coordinating constraint propagation and search. More importantly, we propose a novel generic scheme for combining multiple inclusion techniques² in numerical constraint propagation. This scheme brings the strengths of various techniques from different fields into the constraint propagation framework. To illustrate the flexibility and efficiency of the generic scheme, we build on it and devise several specific combination strategies for rigorous numerical constraint propagation using interval constraint propagation, interval arithmetic, affine arithmetic, and linear programming. Our experiments show that the new propagation techniques outperform previously available methods by 1 to 4 orders of magnitude or more in speed. We also propose several post-processing techniques for the representation of continuums of solutions. Based on connectedness, they group each cluster of connected solution subsets into a larger subset, thus providing additional grouping information. Potentially, these techniques enable interval-based solution techniques to serve as alternatives to bounding-volume techniques in applications such as collision detection and interactive graphics.
    __________________________________________________
    ¹ Constraint programming is an approach to programming that relies on both reasoning and computing.
    ² An inclusion technique encloses a set of interest within computable enclosures; it is also called an enclosure technique.
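
    The branch-and-prune loop at the heart of such solvers can be sketched in a few lines. The toy below finds all solutions of x² + x − 1 = 0 on [0, 2] by interval evaluation (prune boxes whose image excludes 0, bisect the rest); it is a minimal illustration only: real solvers interleave constraint propagation (interval constraint propagation, affine arithmetic, LP relaxations) with the search, and a rigorous implementation would also round interval endpoints outward, which plain Python floats do not.

        def f_range(lo, hi):
            """Interval evaluation of f(x) = x^2 + x - 1 over [lo, hi]."""
            sq = [lo * lo, lo * hi, hi * hi]
            sq_lo = 0.0 if lo < 0.0 < hi else min(sq)  # x^2 over an interval
            sq_hi = max(sq)
            return sq_lo + lo - 1.0, sq_hi + hi - 1.0

        def branch_and_prune(lo, hi, tol=1e-6):
            boxes, stack = [], [(lo, hi)]
            while stack:
                lo, hi = stack.pop()
                f_lo, f_hi = f_range(lo, hi)
                if f_lo > 0.0 or f_hi < 0.0:   # 0 not in f([lo, hi]): prune
                    continue
                if hi - lo < tol:              # small enough: report the box
                    boxes.append((lo, hi))
                    continue
                mid = (lo + hi) / 2.0          # branch: bisect the domain
                stack += [(lo, mid), (mid, hi)]
            return boxes

        # exact root on [0, 2] is (sqrt(5) - 1) / 2 ≈ 0.6180339887
        print(branch_and_prune(0.0, 2.0))

    The returned boxes form a verified enclosure of the solution set; the clustering of adjacent boxes is exactly what the post-processing techniques mentioned above are designed to condense.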

    Computer Aided Verification

    Get PDF
    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, and industrial applications.

    Boundary Algebra: A Simpler Approach to Boolean Algebra and the Sentential Connectives

    Get PDF
    Boundary algebra [BA] is an algebra and a simplified notation for Spencer-Brown’s (1969) primary algebra. The syntax of the primary arithmetic [PA] consists of two atoms, () and the blank page, concatenation, and enclosure between ‘(’ and ‘)’, denoting the primitive notion of distinction. Inserting letters denoting, indifferently, the presence or absence of () into a PA formula yields a BA formula. The BA axioms are A1: ()() = (), and A2: “(()) [abbreviated ‘⊥’] may be written or erased at will,” implying (⊥) = (). Repeated application of A1 and A2 simplifies any PA formula to either () or ⊥. The basis for BA is B1: abc = bca (concatenation commutes and associates); B2: ⊥a = a (BA has a lower bound, ⊥); B3: (a)a = () (BA is a complemented lattice); and B4: (ba)a = (b)a (implying that BA is a distributive lattice). BA has two intended models: (1) the Boolean algebra 2 with base set B = {(), ⊥}, such that () ⇔ 1 [dually 0], (a) ⇔ a′, and ab ⇔ a∪b [a∩b]; and (2) sentential logic, such that () ⇔ true [false], (a) ⇔ ~a, and ab ⇔ a∨b [a∧b]. BA is a self-dual notation, facilitates a calculational style of proof, and simplifies clausal reasoning and Quine’s truth value analysis. BA resembles C.S. Peirce’s graphical logic, the symbolic logics of Leibniz and W.E. Johnson, the 2 notation of Byrne (1946), and the Boolean term schemata of Quine (1982).
    Keywords: boundary algebra; boundary logic; primary algebra; primary arithmetic; Boolean algebra; calculation proof; G. Spencer-Brown; C.S. Peirce; existential graphs
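
    As a worked illustration of the arithmetic (ours, not the paper's), the reduction below applies A1 and A2 to a small PA formula; in LaTeX:

        % Illustrative reduction of a primary-arithmetic formula to ()
        \begin{align*}
        \big(()()\big)\,()
          &= \big(()\big)\,()  && \text{A1: } ()() = () \\
          &= \bot\,()          && \text{abbreviation: } (()) = \bot \\
          &= ()                && \text{A2: erase } (()) \text{ at will}
        \end{align*}

    Reading () as true and concatenation as disjunction, this computes ~(true ∨ true) ∨ true = true, matching the sentential-logic model above.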

    Computational structures for design democratization

    Get PDF
    Thesis (S.M. in Architecture Studies)--Massachusetts Institute of Technology, Dept. of Architecture, June 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 107-112). The vision to engage non-architects in the design of their habitat through the mediation of computer aids dates back to the early computational era (1960s-1970s) and is currently being recast under cyber-cultural and technological influences. The computational tools enabling this architectural do-it-yourself-ism have been traditionally conceptualized as mediating "infrastructures": neutral and non-defining control systems, which ensure the validity of the designs produced by the non-expert users without distorting their personal hypotheses. Through a critical comparative analysis of two basal computational systems for design "democratization," as discussed in Yona Friedman's and Nicholas Negroponte's early 1970s writings, this thesis illustrates that the "infrastructure" metaphor was engendered in and still resides within a positivist paradigm of design, allowing for little freedom or intuition on behalf of the user. Rather than denouncing the internal contradictions of the "structure for freedom" model, this thesis inquires into the computational structures of Friedman's and Negroponte's proto-computational proposals in order to identify and critique the assumptions which underpin their optimism about the non-paternalistic character of their control systems. By exposing the discursive role of the internal workings of the two systems in their authors' arguments, along with their cultural and historical biases, this research aims to problematize inherited approaches to computational tools for user empowerment in design which persist to the present, and to hint at new programmatic agendas. by Theodora Vardouli. S.M. in Architecture Studies.