    Haptic Rendering Based on RBF Approximation from Dynamically Updated Data

    In this paper, we present an extension of our previous research on haptic rendering based on interpolation from precomputed data. The technique employs radial-basis function (RBF) interpolation to achieve an accurate approximation of the force response; however, it assumes that the data used by the interpolation method are generated on the fly during the haptic interaction. The issue caused by updating the RBF coefficients during the interaction is analyzed, and a force-response smoothing strategy is proposed.
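
    As a rough sketch of the core idea (not the paper's implementation), SciPy's RBFInterpolator can fit a vector-valued force field from sampled probe positions; the kernel choice, smoothing value, and placeholder data below are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)

    # Sampled data: probe positions in the workspace -> precomputed force responses.
    positions = rng.uniform(-1.0, 1.0, size=(50, 3))
    forces = np.sin(3.0 * positions)  # placeholder force field

    # Fit a vector-valued RBF interpolant; smoothing > 0 relaxes exact interpolation,
    # one simple way to damp the force jumps that refitting can introduce.
    rbf = RBFInterpolator(positions, forces, kernel="thin_plate_spline", smoothing=1e-3)
    print(rbf(np.array([[0.1, -0.2, 0.3]])))  # force at the current probe position

    # Samples arriving on the fly require refitting the RBF coefficients; without
    # smoothing, the rendered force can jump discontinuously at this point.
    new_pos = np.array([[0.15, -0.25, 0.35]])
    positions = np.vstack([positions, new_pos])
    forces = np.vstack([forces, np.sin(3.0 * new_pos)])
    rbf = RBFInterpolator(positions, forces, kernel="thin_plate_spline", smoothing=1e-3)
    ```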

    Crossover Method for Interactive Genetic Algorithms to Estimate Multimodal Preferences

    We apply an interactive genetic algorithm (iGA) to generate product recommendations. iGAs search for a single optimum point based on a user's Kansei through interaction between the user and the machine. However, especially in the domain of product recommendations, there may be numerous optimum points. The purpose of this study is therefore to develop a new iGA crossover method that concurrently searches for multiple optimum points corresponding to multiple user preferences. The proposed method estimates the locations of the optimum areas with a clustering method and then searches for the maximum values of each area with a probabilistic model. To confirm the effectiveness of this method, two experiments were performed. In the first, a pseudo-user operated an experimental system implementing the proposed and conventional methods, and the solutions obtained were evaluated against a set of pseudo multiple preferences; this experiment showed that when there are multiple preferences, the proposed method searches faster and more diversely than the conventional one. The second was a subjective experiment, which showed that the proposed method was able to search concurrently for more preferences when subjects had multiple preferences.
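
    The abstract does not specify the clustering method or the probabilistic model, so the following is only a hedged sketch of the general pattern it describes: cluster the individuals a user preferred to locate candidate optimum areas, then sample offspring from a per-cluster probabilistic model. K-means and diagonal Gaussians are stand-in choices, not the paper's estimator:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_crossover(selected, n_clusters=3, n_offspring=12, rng=None):
        """Sketch of a clustering-based crossover: group preferred individuals
        into clusters (candidate optimum areas), then sample new individuals
        from a Gaussian fitted to each cluster (a simple probabilistic model)."""
        rng = rng or np.random.default_rng()
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(selected)
        offspring = []
        for k in range(n_clusters):
            members = selected[labels == k]
            if len(members) == 0:
                continue
            mean = members.mean(axis=0)
            # Diagonal covariance keeps the sketch simple and robust for tiny clusters.
            std = members.std(axis=0) + 1e-6
            offspring.append(rng.normal(mean, std,
                                        size=(n_offspring // n_clusters, selected.shape[1])))
        return np.vstack(offspring)

    # Example: 20 "liked" design vectors in a 5-dimensional feature space.
    selected = np.random.default_rng(1).normal(size=(20, 5))
    children = cluster_crossover(selected)
    print(children.shape)
    ```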

    Processing mesh animations: from static to dynamic geometry and back

    Static triangle meshes are the representation of choice for artificial objects as well as for digital replicas of real objects, and they have proven to be a solid foundation for further processing. Although their discrete approximation of reality may seem to be a downside, the opposite is in fact true: the approximation of the real object's shape remains the same even if we deliberately change the vertex positions in the mesh, which allows us to optimize the mesh in this way. Given modern acquisition methods, such a step is always beneficial, often even required, prior to further processing of the acquired triangle mesh. We therefore present a general framework for optimizing surface meshes with respect to various target criteria; because of the simplicity and efficiency of the setup, it can be adapted to a variety of applications.

    Although this framework was initially designed for single static meshes, applying it to a set of meshes is straightforward. For example, we convert a set of meshes into compatible ones and use them as a basis for creating dynamic geometry. Consequently, we propose an interpolation method that produces visually plausible results even when the compatible input meshes differ by large rotations. The method can be applied to any number of input vertex configurations, and thanks to a hierarchical scheme, the approach is fast and can be used for very large meshes.

    Furthermore, we consider the opposite direction. Given an animation sequence, we propose a pre-processing algorithm that considerably reduces the number of meshes required to describe the sequence, yielding a compact representation. Our method is based on a clustering and classification approach that automatically finds the most prominent meshes of the sequence; the original meshes can then be expressed as linear combinations of these few representative meshes with only small approximation errors. Finally, we investigate the shape space spanned by those few meshes and show how to apply different interpolation schemes to create other shape spaces that are not based on vertex coordinates. We conclude with a careful analysis of these shape spaces and their usability for a compact representation of an animation sequence.
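
    The clustering and classification step that selects the representative meshes is beyond this sketch, but the reconstruction step the abstract describes, expressing each animation frame as a linear combination of a few representative meshes, reduces to a least-squares solve; all data below is synthetic:

    ```python
    import numpy as np

    def reconstruct_weights(frames, representatives):
        """Express each animation frame (flattened vertex array) as a linear
        combination of representative meshes via least squares.
        frames: (F, 3V), representatives: (R, 3V). Returns (F, R) weights."""
        W, *_ = np.linalg.lstsq(representatives.T, frames.T, rcond=None)
        return W.T

    # Toy example: 100 frames of a 500-vertex mesh, compressed to 4 representatives.
    rng = np.random.default_rng(2)
    reps = rng.normal(size=(4, 1500))      # 4 representative meshes
    true_w = rng.uniform(size=(100, 4))
    frames = true_w @ reps                 # synthetic animation frames
    W = reconstruct_weights(frames, reps)
    recon = W @ reps
    print(np.abs(recon - frames).max())    # near zero: frames lie in the span
    ```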

    Consciousness in Cognitive Architectures. A Principled Analysis of RCS, Soar and ACT-R

    This report analyses the applicability of the principles of consciousness developed in the ASys project to three of the most relevant cognitive architectures. This is done in relation to their applicability to building integrated control systems and their support for general mechanisms of real-time consciousness.

    To analyse these architectures, the ASys Framework is employed. This is a conceptual framework based on an extension of the General Systems Theory (GST) for cognitive autonomous systems.

    General qualitative evaluation criteria for cognitive architectures are established based upon: a) requirements for a cognitive architecture, b) the theoretical framework based on the GST, and c) core design principles for integrated cognitive conscious control systems.

    Support Vector Methods for Higher-Level Event Extraction in Point Data

    Phenomena occur both in space and time. Correspondingly, the ability to model spatiotemporal behavior translates into the ability to model phenomena as they occur in reality. Given the complexity inherent in integrating spatial and temporal dimensions, however, the establishment of computational methods for spatiotemporal analysis has proven relatively elusive. Nonetheless, one method, the spatiotemporal helix, has emerged from the field of video processing. Designed to efficiently summarize and query the deformation and movement of spatiotemporal events, the spatiotemporal helix has been demonstrated to be capable of describing and differentiating the evolution of hurricanes from sequences of images. Being derived from image data, the representations of events for which the spatiotemporal helix was originally created appear in areal form (e.g., a hurricane covering several square miles is represented by groups of pixels).

    Many sources of spatiotemporal data, however, are not in areal form and instead appear as points. Examples of spatiotemporal point data include those from an epidemiologist recording the time and location of cases of disease and environmental observations collected by a geosensor at the point of its location. As points, these data cannot be directly incorporated into the spatiotemporal helix for analysis. However, with the analytic potential for clouds of point data limited, phenomena represented by point data are often described in terms of events. Defined as change units localized in space and time, the concept of events allows for analysis at multiple levels. For instance, lower-level events refer to occurrences of interest described by single data streams at point locations (e.g., an individual case of a certain disease or a significant change in chemical concentration in the environment), while higher-level events describe occurrences of interest derived from aggregations of lower-level events and are frequently described in areal form (e.g., a disease cluster or a pollution cloud). Considering that these higher-level events appear in areal form, they could potentially be incorporated into the spatiotemporal helix. With deformation being an important element of spatiotemporal analysis, however, at the crux of a process for spatiotemporal analysis based on point data would be the accurate translation of lower-level event points into representations of higher-level areal events. A limitation of current techniques for the derivation of higher-level events is that they impose an a priori bias regarding the shape of higher-level events (e.g., elliptical, convex, linear), which could limit the description of the deformation of higher-level events over time.

    The objective of this research is to propose two kernel methods, support vector clustering (SVC) and support vector machines (SVMs), as means for translating lower-level event points into higher-level event areas that follow the distribution of lower-level points. SVC is suggested for the derivation of higher-level events arising in point process data, while SVMs are explored for their potential with scalar field data (i.e., spatially continuous real-valued data). Developed in the field of machine learning to solve complex non-linear problems, both of these methods are capable of producing highly non-linear representations of higher-level events that may be more suitable than existing methods for spatiotemporal analysis of deformation.

    To introduce these methods, this thesis is organized so that a context for them is first established through a description of existing techniques. This discussion leads to a technical explanation of the mechanics of SVC and SVMs and to the implementation of each kernel method on simulated datasets. Results from these simulations inform discussion regarding the application potential of SVC and SVMs.
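
    Full support vector clustering also includes a cluster-labeling step, so the following is only a sketch of its first stage, the support vector domain description, here approximated with scikit-learn's one-class SVM: an RBF-kernel boundary encloses the lower-level event points and is rasterized into an areal higher-level event without an a priori shape assumption. The points, gamma, and nu values are illustrative:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    # Lower-level event points (e.g., geocoded disease cases).
    rng = np.random.default_rng(3)
    points = np.vstack([rng.normal([0, 0], 0.3, (60, 2)),
                        rng.normal([2, 1], 0.2, (40, 2))])

    # Domain-description step: an RBF-kernel one-class SVM learns a minimal
    # enclosing region; nu bounds the fraction of points allowed outside it.
    svm = OneClassSVM(kernel="rbf", gamma=2.0, nu=0.05).fit(points)

    # Rasterize the learned region: grid cells with a non-negative decision
    # value form the higher-level event area, which follows the point
    # distribution rather than a preset (e.g., elliptical) shape.
    xx, yy = np.meshgrid(np.linspace(-1, 3, 200), np.linspace(-1, 2, 150))
    grid = np.c_[xx.ravel(), yy.ravel()]
    inside = (svm.decision_function(grid) >= 0).reshape(xx.shape)
    print(f"event area covers {inside.mean():.1%} of the window")
    ```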