12 research outputs found

    Example-based control of human motion

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 41-43). In human motion control applications, the mapping between a control specification and an appropriate target motion often defies an explicit encoding. This thesis presents a method that allows such a mapping to be defined by example, given that the control specification is recorded motion. The method begins by building a database of semantically meaningful instances of the mapping, each of which is represented by synchronized segments of control and target motion. A dynamic programming algorithm can then be used to interpret an input control specification in terms of mapping instances. This interpretation induces a sequence of target segments from the database, which is concatenated to create the appropriate target motion. The method is evaluated on two examples of indirect control. In the first, it is used to synthesize a walking human character that follows a sampled trajectory. In the second, it is used to generate a synthetic partner for a dancer whose motion is acquired through motion capture. by Eugene Hsu. S.M.
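    The dynamic programming step described above lends itself to a short illustration. The sketch below (Python) assumes a database of mapping instances given as (control segment, target segment) pairs and hypothetical match_cost and transition_cost functions; it is a minimal reading of the approach, not the thesis's actual implementation. It picks one instance per input control segment by minimizing the matching cost plus a transition cost between consecutive target segments, then concatenates the chosen target segments.

        # Viterbi-style selection of database instances; all names are illustrative.
        import numpy as np

        def select_instances(control_segments, instances, match_cost, transition_cost):
            """instances: list of (control_segment, target_segment) pairs."""
            n, m = len(control_segments), len(instances)
            cost = np.full((n, m), np.inf)
            back = np.zeros((n, m), dtype=int)
            # Cost of explaining the first input segment with each database instance.
            for j, (ctrl, _) in enumerate(instances):
                cost[0, j] = match_cost(control_segments[0], ctrl)
            # Fill the table: best predecessor for every (segment, instance) pair.
            for i in range(1, n):
                for j, (ctrl, _) in enumerate(instances):
                    prev = cost[i - 1] + np.array(
                        [transition_cost(instances[k][1], instances[j][1]) for k in range(m)])
                    back[i, j] = int(np.argmin(prev))
                    cost[i, j] = match_cost(control_segments[i], ctrl) + prev[back[i, j]]
            # Backtrack and concatenate the corresponding target segments.
            path = [int(np.argmin(cost[-1]))]
            for i in range(n - 1, 0, -1):
                path.append(back[i, path[-1]])
            path.reverse()
            return np.concatenate([instances[j][1] for j in path])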

    Knowledge discovery from trajectories

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. As a rapidly growing study area, knowledge discovery from trajectories has attracted more and more researchers from different backgrounds. Until now, however, there has been no theoretical framework that gives researchers a systematic view of the ongoing research. The complexity of spatial and temporal information, and of their combination, produces numerous spatio-temporal patterns. In addition, a given pattern may well have different definitions and mining methodologies for researchers from different backgrounds, such as Geographic Information Science, Data Mining, Databases, and Computational Geometry. How can these patterns be defined systematically, so that the whole community can make better use of previous research? This paper tackles this challenge in three steps. First, the input trajectory data is classified; second, a taxonomy of spatio-temporal patterns is developed from a data mining point of view; lastly, the spatio-temporal patterns that have appeared in previous publications are discussed and placed into the theoretical framework. In this way, researchers can easily find the methodology needed to mine a specific pattern within this framework, and the algorithms that still need to be developed can be identified for further research. Under the guidance of this framework, an application to a real dataset from the Starkey Project is performed. Two questions are answered by applying data mining algorithms: first, where the elk prefer to stay within their range, and second, whether there are corridors among these regions of interest.
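    The first of the two questions posed for the Starkey data is the kind that an off-the-shelf density-based clustering step can illustrate. The sketch below (Python with scikit-learn) finds candidate regions of interest from elk point locations; the file name, column names, and parameter values are assumptions for illustration, not those used in the paper.

        # Density-based clustering of point locations to find regions of interest.
        import pandas as pd
        from sklearn.cluster import DBSCAN

        points = pd.read_csv("starkey_elk_locations.csv")   # hypothetical file with x, y columns
        coords = points[["x", "y"]].to_numpy()

        # eps is the neighbourhood radius in coordinate units (e.g. metres).
        points["region"] = DBSCAN(eps=500, min_samples=50).fit_predict(coords)

        # Label -1 marks noise; every other label is a candidate region of interest.
        for region_id, group in points[points["region"] >= 0].groupby("region"):
            print(region_id, len(group), group[["x", "y"]].mean().round(1).tolist())

    Corridors between the labelled regions could then be sought by examining trajectory segments that connect one region to another.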

    Spatio-temporal clustering of natural hazards

    Natural hazards are inherently spatio-temporal processes. Spatio-temporal clustering methodologies applied to natural hazard data can help distinguish clustering patterns that not only identify point-event dense regions and time periods, but also characterise the hazardous process. In Chapter 2, spatio-temporal clustering methodologies applicable to point event and trajectory datasets representative of natural hazards are reviewed by critically examining 143 scientific publications from various fields of study. These methodologies include clustering measures that are either (i) global (providing a single quantitative measure of the degree of clustering in the dataset) or (ii) local (i.e. assigning individual point events to a cluster). A common application and analysis framework combining global and local measures for point event data is proposed. Within this framework, K-function analysis is selected as the global measure, and a space-time scan statistic, with kernel density estimation as an aiding methodology, as the local measure. For trajectories, a density-based local clustering measure, Trajectory-OPTICS, is selected. In Chapter 3, to assess the performance of the methodology framework, real-world natural hazard data and synthetic datasets, either representative of natural hazards or used as performance benchmarks, are presented and characterised. A point event dataset of 12,521 lightning strikes recorded on 1 July 2015 over the UK is selected, where a severe three-storm system crossed the region with different convective modes. It is also used, together with a dataset of 77,252 lightning strikes recorded on 28 June 2012 over the UK, as a case study to characterise and model lightning strikes as point events produced by a moving source. Each source has a set number of point events, an initiation point in space and time, a movement speed, a direction, an inter-event time distribution and a spatial spread distribution. Movement speed, inter-event time and spatial spread distributions are characterised based on the two case studies. Inter-event time values range from below 0.01 s to over 100 s for individual storms from both case studies. A least-squares plane fit in the spatio-temporal domain estimates a range of representative movement speed values of 47–60 km h⁻¹ for the first and 66–111 km h⁻¹ for the second case study. Based on these values, single-storm (Model 3) and three-storm (Model 4) models are generated to form a simulation study of point event datasets representing various physical lightning characteristics, each with three variations in their movement speed and spatial spread input parameters. For trajectories, the Atlantic hurricane database (HURDAT2) is used to select a real-world dataset of 316 hurricanes. Homogeneous and clustered trajectory datasets are generated as benchmarks for Trajectory-OPTICS. In Chapter 4, the clustering methodology framework identified in Chapter 2 is applied to all the real-world and synthetic datasets presented in Chapter 3. K-function analysis results are used to inform the range of bandwidth values for the kernel density estimation. A leave-one-out estimator is used to find the optimal values. A threshold on the probability density values from the kernel density estimation is imposed to identify high probability density space-time volumes. These volumes are used as centroids for applying the scan statistic as a local clustering measure. The elliptic scan statistic is unable to identify individual lightning strike clusters within the same storm source for storm sources with small temporal separation (Model 4). Chapter 5 extends the elliptic scan statistic by including an ‘Inclination height’ parameter as the temporal distance between the major axis points of the ellipse basis. With a detailed selection of input parameter ranges, the inclined elliptic scan statistic is applied to Model 4 and its variations and is able to identify point event clusters produced by a moving source, with the point events assigned to each cluster originating from the same storm source.
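    The least-squares plane fit mentioned above has a compact numerical form. The sketch below (Python) is one plausible reading of it rather than the thesis's code: fitting t = a·x + b·y + c to strike coordinates (in km) and times (in hours) gives a plane whose gradient magnitude is the reciprocal of the storm speed, so speed = 1/√(a² + b²) in km h⁻¹ and (a, b) points along the direction of propagation.

        import numpy as np

        def storm_motion(x_km, y_km, t_hours):
            """Least-squares plane fit t = a*x + b*y + c over strike locations and times."""
            A = np.column_stack([x_km, y_km, np.ones_like(x_km)])
            (a, b, c), *_ = np.linalg.lstsq(A, t_hours, rcond=None)
            slowness = np.hypot(a, b)                # hours per km along the motion direction
            speed = 1.0 / slowness                   # km per hour
            direction = np.array([a, b]) / slowness  # unit vector of propagation
            return speed, direction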

    Privacy preserving distributed spatio-temporal data mining

    Time-stamped location information is regarded as spatio-temporal data due to its time and space dimensions and, by its nature, is highly vulnerable to misuse. Privacy issues related to the collection, use and distribution of individuals’ location information are the main obstacles impeding knowledge discovery in spatio-temporal data. Suppressing identifiers from the data does not suffice, since movement trajectories can easily be linked to individuals using publicly available information such as home or work addresses. Yet another solution could be to employ existing privacy preserving data mining techniques. However, these techniques are not suitable, since the time-stamped location observations of an object are not plain, independent attributes of that object. Therefore, new privacy preserving data mining techniques are required to handle spatio-temporal data specifically. In this thesis, we propose a privacy preserving data mining technique and two preprocessing steps for data mining related to privacy preservation in spatio-temporal datasets: (1) distributed clustering, (2) centralized anonymization and (3) distributed anonymization. We also provide security and efficiency analyses of our algorithms, which show that, under reasonable conditions, achieving privacy preservation with minimal sensitive information leakage is possible for data mining purposes.
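    The linkage risk noted above (that suppressing identifiers does not prevent re-identification) can be made concrete with a small sketch. The example below (Python with pandas) infers a likely home cell for each pseudonymous trace as its most visited night-time grid cell and joins it against a public address list; the file layout and column names are assumptions for illustration only, not data or methods from the thesis.

        import pandas as pd

        traces = pd.read_csv("pseudonymous_traces.csv")   # columns: pseudo_id, timestamp, cell
        addresses = pd.read_csv("public_addresses.csv")   # columns: name, cell

        traces["hour"] = pd.to_datetime(traces["timestamp"]).dt.hour
        night = traces[(traces["hour"] >= 22) | (traces["hour"] < 6)]

        # Most visited night-time cell per pseudonym is a likely home location.
        homes = (night.groupby("pseudo_id")["cell"]
                      .agg(lambda cells: cells.mode().iloc[0])
                      .reset_index())

        # Joining on the home cell links pseudonyms to names despite identifier suppression.
        reidentified = homes.merge(addresses, on="cell", how="inner")
        print(reidentified[["pseudo_id", "name"]])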

    Mastering the Spatio-Temporal Knowledge Discovery Process

    The thesis addresses a topic of great importance: a framework for mining positioning data collected by personal mobile devices. The main contribution of this thesis is the creation of a theoretical and practical framework to manage the complex knowledge discovery process on mobility data. The creation of such a framework requires integrating very different aspects of the process, each with its own assumptions and requirements. The result is a homogeneous system that makes it possible to exploit the power of all the components with the flexibility of a database, for example a new way to use an ontology for automatic reasoning on trajectory data. Furthermore, two extensions are devised, developed, and then integrated into the system to confirm its extensibility: an innovative way to reconstruct trajectories that accounts for the uncertainty of the path followed, and a location prediction algorithm called WhereNext. Another important contribution of the thesis is the experimentation on a real case study on the analysis of mobility data, which demonstrates the usefulness of the system for a mobility manager who is provided with a knowledge discovery framework.
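    Location prediction of the kind handled by the WhereNext extension can be illustrated with a much simpler stand-in. The sketch below (Python) counts which region most often follows a short sequence of visited regions in past trajectories and uses that to predict the next region; it shows the flavour of the task only and is not the published WhereNext algorithm.

        from collections import defaultdict

        def build_model(trajectories, order=2):
            """Count how often each region follows a sequence of `order` regions."""
            counts = defaultdict(lambda: defaultdict(int))
            for regions in trajectories:
                for i in range(len(regions) - order):
                    counts[tuple(regions[i:i + order])][regions[i + order]] += 1
            return counts

        def predict_next(model, recent, order=2):
            """Most frequent successor of the last `order` visited regions, or None."""
            successors = model.get(tuple(recent[-order:]))
            return max(successors, key=successors.get) if successors else None

        # Regions could be points of interest or grid cells visited in order.
        model = build_model([["A", "B", "C"], ["A", "B", "D"], ["B", "C", "D"], ["A", "B", "C"]])
        print(predict_next(model, ["A", "B"]))   # -> "C"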

    Event impact analysis for time series

    Time series arise in a variety of application domains—whenever data points are recorded over time and stored for subsequent analysis. A critical question is whether the occurrence of events like natural disasters, technical faults, or political interventions leads to changes in a time series, for example, temporary deviations from its typical behavior. The vast majority of existing research on this topic focuses on the specific impact of a single event on a time series, while methods to generically capture the impact of a recurring event are scarce. In this thesis, we fill this gap by introducing a novel framework for event impact analysis in the case of randomly recurring events. We develop a statistical perspective on the problem and provide a generic notion of event impacts based on a statistical independence relation. The main problem we address is that of establishing the presence of event impacts in stationary time series using statistical independence tests. Tests for event impacts should be generic, powerful, and computationally efficient. We develop two algorithmic test strategies for event impacts that satisfy these properties. The first is based on coincidences between events and peaks in the time series, while the second is based on multiple marginal associations. We also discuss a selection of follow-up questions, including ways to measure, model and visualize event impacts, and the relationship between event impact analysis and anomaly detection in time series. Finally, we provide a first method to study event impacts in nonstationary time series. We evaluate our methodological contributions on several real-world datasets and study their performance within large-scale simulation studies.
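    The coincidence-based test strategy described above can be sketched in a few lines. The version below (Python) is only an illustration of the idea under assumed conventions, not the thesis's actual statistic: it counts how often an event is followed within a short window by a peak of the series and compares the count against a permutation null obtained by re-placing the events at random.

        import numpy as np

        def coincidences(event_idx, peak_idx, window):
            """Number of events followed by at least one peak within `window` steps."""
            peaks = np.asarray(peak_idx)
            return sum(np.any((peaks >= e) & (peaks <= e + window)) for e in event_idx)

        def coincidence_test(series, event_idx, window=5, n_perm=1000, seed=0):
            rng = np.random.default_rng(seed)
            series = np.asarray(series, dtype=float)
            # Peaks: local maxima above the 90th percentile (an illustrative choice).
            thr = np.quantile(series, 0.9)
            peak_idx = np.where((series >= thr)
                                & (series >= np.roll(series, 1))
                                & (series >= np.roll(series, -1)))[0]
            observed = coincidences(event_idx, peak_idx, window)
            null = [coincidences(rng.choice(len(series), size=len(event_idx), replace=False),
                                 peak_idx, window)
                    for _ in range(n_perm)]
            # One-sided p-value: chance of at least as many coincidences under the null.
            p_value = (1 + sum(c >= observed for c in null)) / (n_perm + 1)
            return observed, p_value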

    New directions in the analysis of movement patterns in space and time
