
    One-Step or Two-Step Optimization and the Overfitting Phenomenon: A Case Study on Time Series Classification

    For the last few decades, optimization has been developing at a fast rate. Bio-inspired optimization algorithms are metaheuristics inspired by nature. These algorithms have been applied to solve different problems in engineering, economics, and other domains. Bio-inspired algorithms have also been applied in different branches of information technology, such as networking and software engineering. Time series data mining is a field of information technology that has its share of these applications too. In previous work we showed how bio-inspired algorithms such as genetic algorithms and differential evolution can be used to find the locations of the breakpoints used in the symbolic aggregate approximation (SAX) representation of time series, and in another work we showed how particle swarm optimization, one of the best-known bio-inspired algorithms, can be used to assign weights to the different segments of the SAX representation. In this paper we present, in two different approaches, a new meta-optimization process that produces optimal locations of the breakpoints in addition to optimal weights of the segments. The time series classification experiments we conducted show an interesting example of how the overfitting phenomenon, a frequently encountered problem in data mining that occurs when a model fits the training set too closely, can interfere in the optimization process and hide the superior performance of an optimization algorithm.
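    As background for readers unfamiliar with the representation the abstract optimizes, the breakpoints mentioned above discretize segment means into symbols. A minimal Python sketch of plain SAX, not the authors' code; the segment count and breakpoint values here are illustrative:

    ```python
    import numpy as np

    def sax(series, n_segments, breakpoints):
        """Discretize a time series into symbol indices via Piecewise
        Aggregate Approximation (PAA) followed by breakpoint lookup.
        Assumes len(series) is divisible by n_segments."""
        series = np.asarray(series, dtype=float)
        # z-normalize so breakpoints on the standard-normal scale apply
        series = (series - series.mean()) / series.std()
        # PAA: mean of each of n_segments equal-width chunks
        paa = series.reshape(n_segments, -1).mean(axis=1)
        # map each segment mean to a symbol index via the breakpoints;
        # the papers summarized above optimize these breakpoint locations
        return np.digitize(paa, breakpoints)
    ```

    For an alphabet of size 3, the standard-normal breakpoints are roughly -0.43 and 0.43; the optimization described in the abstract searches for better, data-dependent locations.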

    MPI vs OpenMP: A Case Study on Generating the Mandelbrot Set

    Nowadays, some of the most popular tools for parallel programming are Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). It is of interest to compare these tools on the same kinds of problems, because they take different approaches to inter-task communication. This work attempts to contribute to this goal by running trials on a centralized shared-memory architecture for problems with an entirely parallel solution. The selected case study was the parallel computation of the Mandelbrot set. Trials were conducted for different iteration limits, numbers of processors, and C++ implementation variants. The results show better performance in the case of OpenMP.
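    The paper's trials are in C++ with MPI and OpenMP; as a minimal illustration of why this problem is "entirely parallel", here is a Python sketch of the escape-time iteration, the per-pixel kernel that both variants would distribute. Function names, the viewport, and the iteration limit are all illustrative, not the paper's:

    ```python
    def escape_time(c, max_iter=100):
        """Iterations before z_{n+1} = z_n^2 + c escapes |z| > 2;
        returns max_iter for points presumed inside the set."""
        z = 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:
                return n
        return max_iter

    def mandelbrot_grid(width, height, max_iter=100):
        """Every pixel is independent of every other pixel, so rows can
        be farmed out to MPI ranks or OpenMP threads with no inter-task
        communication; here it is computed serially for clarity."""
        return [[escape_time(complex(-2.0 + 3.0 * x / width,
                                     -1.5 + 3.0 * y / height), max_iter)
                 for x in range(width)]
                for y in range(height)]
    ```

    Because no pixel depends on another, the only communication cost is gathering results, which is where the MPI and OpenMP variants compared in the paper differ.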

    Comparative Evaluation of Action Recognition Methods via Riemannian Manifolds, Fisher Vectors and GMMs: Ideal and Challenging Conditions

    We present a comparative evaluation of various techniques for action recognition while keeping as many variables as possible controlled. We employ two categories of Riemannian manifolds: symmetric positive definite matrices and linear subspaces. For both categories we use their corresponding nearest-neighbour classifiers, kernels, and recent kernelised sparse representations. We compare against traditional action recognition techniques based on Gaussian mixture models and Fisher vectors (FVs). We evaluate these action recognition techniques under ideal conditions, as well as their sensitivity under more challenging conditions (variations in scale and translation). Despite recent advancements in handling manifolds, manifold-based techniques obtain the lowest performance, and their kernel representations are more unstable under challenging conditions. The FV approach obtains the highest accuracy under ideal conditions and also deals best with moderate scale and translation changes.
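    One of the manifold-based pipelines compared above is nearest-neighbour classification of symmetric positive definite (SPD) matrices. A minimal sketch under the log-Euclidean metric; this is one common choice of metric for SPD matrices, not necessarily the one the paper settles on, and the function names are ours:

    ```python
    import numpy as np

    def spd_logm(M):
        """Matrix logarithm of a symmetric positive definite matrix via
        eigendecomposition (valid because an SPD matrix has a real
        orthogonal eigenbasis and strictly positive eigenvalues)."""
        w, V = np.linalg.eigh(M)
        return (V * np.log(w)) @ V.T

    def log_euclidean_dist(A, B):
        """Log-Euclidean distance: Frobenius norm of the difference of
        the matrix logarithms, i.e. a flat metric after mapping the
        curved SPD manifold into a vector space."""
        return np.linalg.norm(spd_logm(A) - spd_logm(B), 'fro')

    def nn_classify(query, gallery, labels):
        """1-NN on the SPD manifold: label of the closest gallery item."""
        d = [log_euclidean_dist(query, G) for G in gallery]
        return labels[int(np.argmin(d))]
    ```

    In an action-recognition setting the SPD matrices would typically be covariance descriptors of per-frame features; the kernelised variants evaluated in the paper replace the distance with a kernel built from it.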

    Object Recognition in Computer Vision via Feature Matching with the SIFT Algorithm. Case Study: Detection of Simple Skin Diseases

    Human vision can do amazing things such as recognizing people or objects, navigating around obstacles, recognizing the mood of a scene, and imagining stories. To mimic human vision, a computer requires a sensor that functions like the human eye and a program that processes the data from that sensor. Computer vision is the science that uses image processing to make decisions based on images obtained from sensors; in other words, computer vision aims to build an intelligent machine that can "see". Computer vision can be used to detect skin diseases, for example Shingles (Herpes Zoster), Hives (Urticaria), Psoriasis, Eczema, Rosacea, Cold Sores (Fever Blisters), Rash, Razor Bumps, Skin Tags, Acne, Athlete's Foot, Moles, Age or Liver Spots, Pityriasis Rosea, Melasma (Pregnancy Mask), Warts, and Seborrheic Keratoses. The Prewitt, Sobel, Roberts, and Canny operators are used to detect the edges of one or more objects. The results are then matched against edge-detected images from a database to determine the type of disease, using the Scale Invariant Feature Transform (SIFT) algorithm. The skin disease detection expert system is implemented in C++ with the MS Visual Studio 2010 IDE and the OpenCV 2.4 library. Keywords— computer vision, edge detection, SIFT algorithm, skin disease
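    The edge-detection step can be illustrated with the Sobel operator. The system described above uses OpenCV in C++; this is a self-contained NumPy sketch of the same 3x3 convolution, with a hypothetical function name and naive loops for clarity rather than speed:

    ```python
    import numpy as np

    def sobel_magnitude(img):
        """Approximate gradient magnitude with the 3x3 Sobel operator.
        img: 2-D float array; border pixels are left at zero."""
        kx = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)  # horizontal gradient
        ky = kx.T                                  # vertical gradient
        h, w = img.shape
        gx = np.zeros_like(img, dtype=float)
        gy = np.zeros_like(img, dtype=float)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                patch = img[y - 1:y + 2, x - 1:x + 2]
                gx[y, x] = np.sum(kx * patch)
                gy[y, x] = np.sum(ky * patch)
        # edge strength = Euclidean norm of the two gradient components
        return np.hypot(gx, gy)
    ```

    Thresholding this magnitude yields the edge map that would then be fed to SIFT descriptor matching against the disease image database.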

    Foundations of programming languages

    I must confess that I have co-authored a book bearing the same title [LMW89] and the first compiler-design book treating the same three language paradigms as the book under review [WS10]. Thus, I might appear a bit biased, in particular since neither book is cited. The purpose of this book, according to the author, is to introduce the reader to three styles of programming languages by using them to implement a non-trivial programming language. He begins to realize this purpose with Chapter 2, on syntax, including bits on grammars and automata and on parsing and lexing tools. Chapter 3 introduces a virtual machine, JCoCo, tailored towards implementing Python, and shows how a non-trivial subset of Python is translated into JCoCo bytecode. Chapter 4 describes an implementation of JCoCo in Java, intended to teach essentials of object-oriented programming. Chapter 5 introduces functional programming with a short excursion into the lambda calculus, normal forms, and reduction orders. SML is used as an example of a functional language; the author, however, mixes up referential transparency with a case of non-terminating recursion. SML is explained by going through the language features and by implementing a prefix calculator. Chapter 6 describes an SML compiler by giving bytecode sequences for SML language constructs in the same way as was done in Chapter 3. Chapter 7 introduces Prolog; defining the important notion of unification to be simply a list of substitutions for variables is only half the truth. As applications of Prolog, the author shows how to do parsing and type inference. All chapters are accompanied by examples and exercises with solutions. The strengths of the book lie in the massive amount of quite interesting and relevant material in Chapters 2 to 8. The weakness of the book is the colloquial and often imprecise writing style.
The author is not only imprecise and sometimes plainly wrong; he also fails to assign the right weight to concepts of different relevance. Consider Section 1.4.3, which introduces virtual machines: he gives the same weight to the concept of compiling source code to the intermediate code of a virtual machine, to the organizational issue of whether the generated virtual-machine code is stored externally as in Java or kept internal as in Python, and to the facts that the Java compiler is called javac and that the names of bytecode files end in a .class extension. The book has a lot of colorful graphics to explain concepts. However, there is no legend defining the semantics of the graphical elements, e.g. arrows; in fact, they do not have a consistent meaning, which reduces the explanatory value of the figures. Another complaint concerns the author's treatment of the history of computing. His knowledge appears to be quite cursory, and he does not seem to have spent much effort on improving it. He surprises the reader by viewing the Norwegian mathematicians Abel and Lie as forefathers of computer science. Many others, mathematicians, logicians, and philosophers, could serve this role better. What about Ramon Llull and Gottfried Wilhelm Leibniz? Why doesn't he mention Ada Lovelace as the first programmer? The Austrian-American logician Kurt Gödel is missing from his treatment of undecidability. Konrad Zuse is not mentioned as the implementor of the first programmable, fully automatic digital computer. The first relevant machine in the class of stack-based architectures, the Burroughs B5000, an Algol 60 machine introduced in 1961, is not mentioned. Instead, he assigns this pioneering role to Hewlett-Packard mainframes, said to have appeared in the 1960s; however, the HP 3000 was only introduced in 1972. The invention of object-oriented languages is first attributed to Wirth and Stroustrup.
Fortunately, some pages later, SIMULA 67 and its designers Ole-Johan Dahl and Kristen Nygaard are given some credit. At the end of the book there is a short bibliography. Five of the fifteen entries point to sources for photographs of programming-language pioneers, one to an interview, and one to a press release. An enthusiastic reader is left alone in the search for further reading material.