
    Predicting the Critical Number of Layers for Hierarchical Support Vector Regression

    Hierarchical support vector regression (HSVR) models a function from data as a linear combination of SVR models at a range of scales, starting at a coarse scale and moving to finer scales as the hierarchy continues. In the original formulation of HSVR, there were no rules for choosing the depth of the model. In this paper, we observe in a number of models a phase transition in the training error: the error remains relatively constant as layers are added until a critical scale is passed, at which point the training error drops close to zero and remains nearly constant as further layers are added. We introduce a method to predict this critical scale a priori, basing the prediction on the support of either the Fourier transform of the data or the Dynamic Mode Decomposition (DMD) spectrum. This allows us to determine the required number of layers prior to training any models.
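    As a rough illustration of the idea only (not the paper's exact rule), the sketch below estimates the critical scale from the support of the data's Fourier transform and converts it into a layer count; the magnitude threshold, the scale ratio between layers and the mapping from spectral support to depth are assumptions made here for the example.

        import numpy as np

        def predict_layers(y, dx, magnitude_tol=1e-3, scale_ratio=2.0):
            """Estimate an HSVR depth from the support of the data's Fourier transform."""
            spectrum = np.abs(np.fft.rfft(y))
            freqs = np.fft.rfftfreq(len(y), d=dx)
            # Support of the spectrum: highest frequency with a non-negligible magnitude.
            f_max = freqs[spectrum > magnitude_tol * spectrum.max()].max()
            finest_scale = 1.0 / f_max            # finest scale needed to resolve the data
            coarse_scale = len(y) * dx            # coarsest scale: the domain length
            # Each layer refines the scale by scale_ratio, so count the refinements needed.
            return max(int(np.ceil(np.log(coarse_scale / finest_scale) / np.log(scale_ratio))) + 1, 1)

        # Example: a signal whose highest significant frequency is 8 Hz, sampled at 1 kHz.
        t = np.arange(0.0, 1.0, 0.001)
        y = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)
        print(predict_layers(y, dx=0.001))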

    Scenario Approach for Parametric Markov Models

    In this paper, we propose an approximating framework for analyzing parametric Markov models. Instead of computing complex rational functions encoding the reachability probability and the reward values of the parametric model, we exploit the scenario approach to synthesize a relatively simple polynomial approximation. The approximation is probably approximately correct (PAC), meaning that, with high confidence, the approximating function is close to the actual function within an allowable error. With the PAC approximations, one can check properties of the parametric Markov models. We show that the scenario approach can also be used to check PRCTL properties directly, without synthesizing the polynomial first. We have implemented our algorithm in a prototype tool and conducted thorough experiments. The experimental results demonstrate that our tool is able to compute polynomials for more benchmarks than state-of-the-art tools such as PRISM and Storm, confirming the efficacy of our PAC-based synthesis.
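    A minimal sketch of the flavour of such a scenario-based fit, with the model checker replaced by a stand-in function: parameter values are sampled, the reachability probability is evaluated per sample, and a polynomial is fitted by least squares, with the number of scenarios taken from a generic scenario-approach style bound. The bound, the tolerances and the stand-in reachability function are illustrative assumptions, not the tool's actual implementation.

        import numpy as np

        def pac_sample_size(eps, beta, n_coeffs):
            # Generic scenario-approach style bound on the number of sampled parameter
            # valuations for tolerance eps and confidence 1 - beta (illustrative form).
            return int(np.ceil((2.0 / eps) * (np.log(1.0 / beta) + n_coeffs)))

        def reach_probability(p):
            # Stand-in for a per-sample call to a probabilistic model checker with the
            # parameter instantiated to p; a smooth made-up function is used here.
            return 0.2 + 0.6 * p - 0.3 * p ** 2

        degree, eps, beta = 3, 0.05, 1e-3
        rng = np.random.default_rng(0)
        n = pac_sample_size(eps, beta, degree + 1)
        samples = rng.uniform(0.0, 1.0, size=n)                       # sampled parameter values
        values = np.array([reach_probability(p) for p in samples])    # evaluated per sample
        poly = np.poly1d(np.polyfit(samples, values, degree))         # simple polynomial fit
        print(f"{n} scenarios; approximate reachability at p=0.4: {poly(0.4):.3f}")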

    Cooperative Navigation for Mixed Human–Robot Teams Using Haptic Feedback

    In this paper, we present a novel cooperative navigation control for human–robot teams. Assuming that a human wants to reach a final location in a large environment with the help of a mobile robot, the robot must steer the human from the initial to the target position. The challenges posed by cooperative human–robot navigation are typically addressed by using haptic feedback via physical interaction. In contrast, in this paper we describe a different approach, in which the human–robot interaction is achieved via wearable vibrotactile armbands. In the proposed work, the subject is free to decide her/his own pace. A warning vibrational signal is generated by the haptic armbands when the robot detects a large deviation with respect to the desired pose. The proposed method was evaluated in a large indoor environment, where 15 blindfolded human subjects were asked to follow the haptic cues provided by the robot. The participants had to reach a target area while avoiding static and dynamic obstacles. Experimental results revealed that the blindfolded subjects were able to avoid the obstacles and safely reach the target in all of the performed trials. A comparison is provided between the results obtained with blindfolded users and experiments performed with sighted people.
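    The feedback rule described above can be pictured with a small sketch: compare the subject's heading with the bearing to the next waypoint chosen by the robot, and vibrate the armband on the side to turn towards once the deviation exceeds a threshold. The threshold value, the left/right convention and the pose representation are assumptions for illustration, not the controller used in the study.

        import numpy as np

        def haptic_cue(position, heading, waypoint, deviation_threshold=0.35):
            """Decide which armband (if any) should vibrate, given pose and next waypoint."""
            desired = np.arctan2(waypoint[1] - position[1], waypoint[0] - position[0])
            # Heading error wrapped to [-pi, pi].
            error = np.arctan2(np.sin(desired - heading), np.cos(desired - heading))
            if abs(error) < deviation_threshold:
                return "none"                            # roughly on course, stay silent
            return "left" if error > 0 else "right"      # vibrate on the side to turn towards

        print(haptic_cue(position=(0.0, 0.0), heading=0.0, waypoint=(2.0, 2.0)))  # -> "left"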

    Hardware-conscious query processing for the many-core era

    Exploiting the opportunities given by modern hardware for accelerating query processing is no trivial task. Many DBMS and also DSMS from past decades are based on fundamentals that have changed over time; for example, today's servers with terabytes of main memory capacity allow spilling data to disk to be avoided entirely, which some time ago paved the way for main-memory databases. One of the recent trends in hardware is many-core processors with hundreds of logical cores on a single CPU, providing a high degree of parallelism through multithreading as well as vectorized instructions (SIMD). Their demand for memory bandwidth has led to the further development of high-bandwidth memory (HBM) to overcome the memory wall. However, many-core CPUs as well as HBM have many pitfalls that can easily nullify any performance gain. In this work, we explore the many-core architecture along with HBM for database and data stream query processing. We demonstrate that a hardware-conscious cost model with a calibration approach allows reliable performance prediction of various query operations. Based on this information, we derive an adaptive partitioning and merging strategy for stream query parallelization, as well as an ideal parameter configuration for one of the most common tasks in the history of DBMS: join processing. However, not all operations and applications can exploit a many-core processor or HBM. Stream queries optimized for low latency and quick individual responses usually benefit little from additional bandwidth and also suffer from penalties such as the low clock frequencies of many-core CPUs. Shared data structures between cores additionally lead to problems with cache coherence as well as high contention. Based on our insights, we give a rule of thumb for which data structures are suitable for parallelization with a focus on HBM usage. In addition, different parallelization schemes and synchronization techniques are evaluated, using a multiway stream join operation as an example.
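    A minimal sketch of what a calibrated, hardware-conscious cost model can look like: one micro-benchmark run yields a per-tuple CPU cost, a separately measured memory bandwidth covers the traffic term, and the two are combined to predict operator runtimes. All constants, the max-of-compute-and-memory combination and the operator are illustrative assumptions, not the thesis's actual model.

        MEMORY_BANDWIDTH = 90e9   # bytes/s, e.g. measured once with a STREAM-like micro-benchmark

        def calibrate_cpu_cost(measured_runtime_s, tuples, bytes_moved):
            # Subtract the memory share of a small benchmark run to get CPU seconds per tuple.
            return (measured_runtime_s - bytes_moved / MEMORY_BANDWIDTH) / tuples

        def predict_runtime(tuples, bytes_per_tuple, cpu_cost_per_tuple, threads=1):
            # Runtime is whichever resource dominates, assuming compute and traffic overlap.
            compute = tuples * cpu_cost_per_tuple / threads
            memory = tuples * bytes_per_tuple / MEMORY_BANDWIDTH
            return max(compute, memory)

        # Calibrate on a small run, then predict a larger pass of the same operator on 64 threads.
        cpu_cost = calibrate_cpu_cost(measured_runtime_s=0.012, tuples=1_000_000, bytes_moved=16_000_000)
        print(f"{predict_runtime(100_000_000, 16, cpu_cost, threads=64):.3f} s")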

    A criteria based function for reconstructing low-sampling trajectories as a tool for analytics

    Mobile applications equipped with Global Positioning Systems have generated a huge quantity of location data with sampling uncertainty that must be handled and analyzed. These location data can be ordered in time to represent trajectories of moving objects. The data warehouse approach based on spatio-temporal data can help with this task. For this reason, we address the problem of the personalized, criteria-based reconstruction of low-sampling trajectories over a graph, so that movement criteria can be included as a dimension in a trajectory data warehouse solution and analytical tasks can be carried out over moving objects and the environment in which they move.
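    A minimal sketch of the criteria-based idea: the road network is a weighted graph, each segment carries several movement criteria, and the trajectory between two consecutive low-sampled fixes is reconstructed as the cheapest path under a chosen blend of those criteria. The toy graph, the criteria and the weights are assumptions for illustration only.

        import networkx as nx

        # Toy road graph; each segment carries several movement criteria.
        G = nx.DiGraph()
        G.add_edge("A", "B", length=120, time=15, turn=0)
        G.add_edge("B", "C", length=80,  time=20, turn=5)
        G.add_edge("A", "C", length=260, time=22, turn=0)

        def criteria_weight(u, v, data, w_len=0.4, w_time=0.4, w_turn=0.2):
            # Weighted combination of the criteria attached to a segment.
            return w_len * data["length"] + w_time * data["time"] + w_turn * data["turn"]

        # Reconstruct the trajectory between two low-sampled fixes A and C.
        print(nx.shortest_path(G, "A", "C", weight=criteria_weight))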

    Goal-Oriented Detection and Removal of Quality Deficiencies in Software Systems, Using Evolvability as an Example

    The evolvability of software systems is one of the key issues when considering their long-term quality. Continuous changes and extensions of these systems are necessary to adjust them to new or changing requirements. However, the changes often cause quality deficiencies, which lead to an increase in complexity or an architectural decay. Quality deficiencies within the specification or the architecture in particular can heavily impair a software system. To counteract this, a method is developed in this work to support the analysis of a quality goal in order to identify the quality deficiencies which hinder the achievement of that goal. Both the detection and the removal of quality deficiencies are accomplished in a systematic way: the method integrates the rule-based detection of these quality deficiencies with their removal by reengineering activities. The detection of quality deficiencies is performed by means of measurable quality attributes which are derived from a quality goal, such as evolvability. In order to demonstrate the practicability of the method, the quality goal evolvability is taken as an example. This work shows how a software system can be evaluated with regard to evolvability based on structural dependencies and which reengineering activities will improve the system in the direction of this quality goal. To evaluate the method, it was applied within an industrial case study. By analyzing the given software system, a large number of different quality deficiencies were detected. Afterwards, the system's evolvability was improved substantially by the reengineering activities proposed by the method.
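    One way to picture a rule-based check on structural dependencies (the concrete rule below is an assumption, not taken from the thesis's rule catalogue): flag dependency cycles between modules as deficiencies that hinder evolvability and attach a proposed reengineering activity to each finding.

        import networkx as nx

        # Toy module dependency graph with one cyclic dependency.
        dependencies = nx.DiGraph([
            ("billing", "customer"),
            ("customer", "billing"),
            ("customer", "core"),
            ("reporting", "core"),
        ])

        findings = []
        for cycle in nx.simple_cycles(dependencies):
            findings.append({
                "deficiency": "dependency cycle: " + " -> ".join(cycle + [cycle[0]]),
                "reengineering": "break the cycle, e.g. by introducing an interface or moving code",
            })

        for f in findings:
            print(f["deficiency"], "|", f["reengineering"])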

    A Methodological Proposal for Calculating Turn Penalties in Accessibility Models

    This Master's thesis seeks to develop a methodology for calculating the turn penalties used in accessibility models and, more generally, in transport models, given the use of shortest-path algorithms that include turn penalties and restrictions when computing travel times on the road network. Among these models is the global mean accessibility, which has been applied to issues such as urban and transport planning in Manizales (Colombia) and in cities around the world. In Manizales, the turn penalties and restrictions used in accessibility models have so far been determined subjectively, so no value has been calculated with a scientific method. Therefore, turn penalties and restrictions for Manizales are calculated by quantifying the turn times of vehicles at several road intersections, which were chosen from a prioritization analysis and recorded on video. From these data, the average left- and right-turn times, i.e., the turn penalties for Manizales, are obtained for use in the accessibility models calculated for the city and in transport models in general. The penalties calculated with this methodology are compared with the penalties used in previous studies by means of the savings gradient, which allows us to quantify the differences this value produces and its importance in transport models, including accessibility.
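    The quantification step reduces to averaging the measured turn durations per movement type, as in the small sketch below; the numbers are made up for illustration.

        import statistics

        # Turn durations (seconds) measured from the intersection videos; values are made up.
        left_turn_times_s = [6.2, 7.8, 5.9, 8.4, 7.1]
        right_turn_times_s = [3.1, 2.8, 3.6, 3.3, 2.9]

        # The turn penalties are the mean durations, later attached to turning movements
        # in the shortest-path model.
        left_penalty = statistics.mean(left_turn_times_s)
        right_penalty = statistics.mean(right_turn_times_s)
        print(f"left-turn penalty: {left_penalty:.1f} s, right-turn penalty: {right_penalty:.1f} s")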

    A dependency-aware, context-independent code search infrastructure

    Over the last decade many code search engines and recommendation systems have been developed, both in academia and industry, to try to improve the component discovery step in the software reuse process. Key examples include Krugle, Koders, Portfolio, Merobase, Sourcerer, Strathcona and SENTRE. However, the recall and precision of this current generation of code search tools are limited by their inability to cope effectively with the structural dependencies between code units. This lack of “dependency awareness” manifests itself in three main ways. First, it limits the kinds of search queries that users can define and thus the precision and local recall of dependency-aware searches (giving rise to large numbers of false positives and false negatives). Second, it reduces the global recall of the component harvesting process by limiting the range of dependency-containing software components that can be used to populate the search repository. Third, it significantly reduces the performance of the retrieval process for dependency-aware searches. This thesis lays the foundation for a new generation of dependency-aware code search engines that addresses these problems by designing and prototyping a new kind of software search platform. Inspired by the Merobase code search engine, this platform contains three main innovations: an enhanced, dependency-aware query language which allows traditional Merobase interface-based searches to be extended with dependency requirements; a new “context-independent” crawling infrastructure which can recognize dependencies between code units even when their context (e.g. project) is unknown; and a new graph-based database integrated with a full-text search engine and optimized to store code modules and their dependencies efficiently. After describing the background to, and state of the art in, the field of code search engines and information retrieval, the thesis motivates the aforementioned innovations and explains how they are realized in the DAISI (Dependency-Aware, context-Independent code Search Infrastructure) prototype using Lucene and Neo4J. DAISI is then used to demonstrate the advantages of the developed technology in a range of examples.
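    A minimal sketch of the graph-database side only, assuming a locally reachable Neo4j instance; the node label, the DEPENDS_ON relationship and the Cypher queries are illustrative, not DAISI's actual schema or query language.

        from neo4j import GraphDatabase

        # Code modules become nodes, structural dependencies become relationships, so a
        # search can combine name matching with dependency requirements.
        driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

        with driver.session() as session:
            session.run(
                "MERGE (a:Module {name: $a}) "
                "MERGE (b:Module {name: $b}) "
                "MERGE (a)-[:DEPENDS_ON]->(b)",
                a="Stack", b="LinkedList",
            )
            # Dependency-aware lookup: modules whose name matches the query AND which
            # depend on a LinkedList module.
            result = session.run(
                "MATCH (m:Module)-[:DEPENDS_ON]->(:Module {name: 'LinkedList'}) "
                "WHERE m.name CONTAINS $q RETURN m.name",
                q="Stack",
            )
            print([record["m.name"] for record in result])

        driver.close()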