Does Dimensionality Reduction improve the Quality of Motion Interpolation?
In recent years nonlinear dimensionality reduction has frequently
been suggested for the modelling of high-dimensional motion data.
While it is intuitively plausible to use dimensionality reduction to recover
low dimensional manifolds which compactly represent a given set of movements,
there is a lack of critical investigation into the quality of resulting
representations, in particular with respect to generalisability. Furthermore,
it is unclear how consistently particular methods can achieve good results.
Here we use a set of robotic motion data for which we know the ground
truth to evaluate a range of nonlinear dimensionality reduction methods
with respect to the quality of motion interpolation. We show that results
are extremely sensitive to parameter settings and data set used, but
that dimensionality reduction can potentially improve the quality of linear motion interpolation, in particular in the presence of noise.
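The kind of pipeline evaluated above can be sketched with ordinary linear PCA standing in for the nonlinear methods the paper compares; the motion data, dimensions, and noise level below are invented for illustration:

```python
import numpy as np

# Minimal sketch (not the paper's method): interpolate two poses in a
# PCA-reduced space instead of directly in joint space. Synthetic
# "motion" data: high-dimensional poses driven by one latent angle.
rng = np.random.default_rng(0)
t = np.linspace(0, np.pi, 200)
W = rng.normal(size=(2, 30))                 # random linear embedding
X = np.c_[np.sin(t), np.cos(t)] @ W          # 200 poses, 30 joint dims
X += rng.normal(scale=0.01, size=X.shape)    # sensor noise

# PCA via SVD on centred data
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
V = Vt[:2]                                   # top-2 principal axes

def interpolate(a, b, alpha):
    """Linear interpolation in the 2-D latent space, then reconstruct."""
    za, zb = (a - mu) @ V.T, (b - mu) @ V.T
    return ((1 - alpha) * za + alpha * zb) @ V + mu

mid = interpolate(X[0], X[100], 0.5)
print(mid.shape)   # (30,)
```

Comparing this latent-space midpoint against the direct joint-space average `(X[0] + X[100]) / 2` is the kind of quality comparison the abstract describes.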
Anytime Point-Based Approximations for Large POMDPs
The Partially Observable Markov Decision Process has long been recognized as
a rich framework for real-world planning and control problems, especially in
robotics. However, exact solutions in this framework are typically
computationally intractable for all but the smallest problems. A well-known
technique for speeding up POMDP solving involves performing value backups at
specific belief points, rather than over the entire belief simplex. The
efficiency of this approach, however, depends greatly on the selection of
points. This paper presents a set of novel techniques for selecting informative
belief points which work well in practice. The point selection procedure is
combined with point-based value backups to form an effective anytime POMDP
algorithm called Point-Based Value Iteration (PBVI). The first aim of this
paper is to introduce this algorithm and present a theoretical analysis
justifying the choice of belief selection technique. The second aim of this
paper is to provide a thorough empirical comparison between PBVI and other
state-of-the-art POMDP methods, in particular the Perseus algorithm, in an
effort to highlight their similarities and differences. Evaluation is performed
using both standard POMDP domains and realistic robotic tasks.
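The point-based backup at the heart of PBVI can be sketched as follows; the tiny two-state POMDP below is invented for illustration, not one of the paper's benchmark domains:

```python
import numpy as np

# Hedged sketch of one point-based value backup (the core PBVI step):
# for each sampled belief, build the best one-step alpha-vector.
S, A, O, gamma = 2, 2, 2, 0.95
T = np.array([[[0.9, 0.1], [0.1, 0.9]],      # T[a, s, s']
              [[0.5, 0.5], [0.5, 0.5]]])
Z = np.array([[[0.8, 0.2], [0.2, 0.8]],      # Z[a, s', o]
              [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])       # R[a, s]

def point_based_backup(beliefs, Gamma):
    """One synchronous backup over a finite belief set."""
    new_Gamma = []
    for b in beliefs:
        best_val, best_alpha = -np.inf, None
        for a in range(A):
            alpha_a = R[a].copy()
            for o in range(O):
                # g[i, s] = sum_s' T[a,s,s'] Z[a,s',o] Gamma[i,s']
                g = Gamma @ (T[a] * Z[a][:, o]).T
                alpha_a = alpha_a + gamma * g[np.argmax(g @ b)]
            if alpha_a @ b > best_val:
                best_val, best_alpha = alpha_a @ b, alpha_a
        new_Gamma.append(best_alpha)
    return np.array(new_Gamma)

beliefs = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
Gamma = np.zeros((1, S))                     # start from the zero vector
for _ in range(30):
    Gamma = point_based_backup(beliefs, Gamma)
print(np.round(Gamma @ beliefs.T, 2))        # values at the belief points
```

The belief-point selection strategies the paper analyses decide which beliefs go into `beliefs`; the backup itself is unchanged.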
Multilinear motion synthesis with level-of-detail controls
Interactive animation systems often use a level-of-detail (LOD) control to reduce the computational cost by eliminating unperceivable details of the scene. Most methods employ a multiresolutional representation of animation and geometrical data, and adaptively change the accuracy level according to the importance of each character. Multilinear analysis provides the efficient representation of multidimensional and multimodal data, including human motion data, based on statistical data correlations. This paper proposes a LOD control method of motion synthesis with a multilinear model. Our method first extracts a small number of principal components of motion samples by analyzing three-mode correlations among joints, time, and samples using high-order singular value decomposition. A new motion is synthesized by interpolating the reduced components using geostatistics, where the prediction accuracy of the resulting motion is controlled by adaptively decreasing the data dimensionality. We introduce a hybrid algorithm to optimize the reduction size and computational time according to the distance from the camera while maintaining visual quality. Our method provides a practical tool for creating an interactive animation of many characters while ensuring accurate and flexible controls at a modest level of computational cost.
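The three-mode (joints x time x samples) reduction step described above can be sketched with a plain HOSVD in NumPy; the array sizes and ranks below are invented, and the geostatistical interpolation and LOD control are omitted:

```python
import numpy as np

# Sketch of a high-order SVD over a joints x frames x samples motion tensor.
rng = np.random.default_rng(0)
X = rng.normal(size=(15, 60, 8))             # joints, frames, motion samples

def unfold(T, mode):
    """Matricize the tensor along one mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Mode-wise factor matrices from the SVD of each unfolding
ranks = (4, 10, 3)
U = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
     for m, r in enumerate(ranks)]

# Core tensor: project X onto the three reduced bases
core = X
for m, Um in enumerate(U):
    core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1),
                       0, m)
print(core.shape)   # (4, 10, 3)
```

Choosing smaller ranks for distant characters is the essence of the LOD control the paper describes: the same decomposition, truncated more aggressively.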
Finding Approximate POMDP solutions Through Belief Compression
Standard value function approaches to finding policies for Partially
Observable Markov Decision Processes (POMDPs) are generally considered to be
intractable for large models. The intractability of these algorithms is to a
large extent a consequence of computing an exact, optimal policy over the
entire belief space. However, in real-world POMDP problems, computing the
optimal policy for the full belief space is often unnecessary for good control
even for problems with complicated policy classes. The beliefs experienced by
the controller often lie near a structured, low-dimensional subspace embedded
in the high-dimensional belief space. Finding a good approximation to the
optimal value function for only this subspace can be much easier than computing
the full value function. We introduce a new method for solving large-scale
POMDPs by reducing the dimensionality of the belief space. We use Exponential
family Principal Components Analysis (Collins, Dasgupta and Schapire, 2002) to
represent sparse, high-dimensional belief spaces using small sets of learned
features of the belief state. We then plan only in terms of the low-dimensional
belief features. By planning in this low-dimensional space, we can find
policies for POMDP models that are orders of magnitude larger than models that
can be handled by conventional techniques. We demonstrate the use of this
algorithm on a synthetic problem and on mobile robot navigation tasks.
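The belief-compression idea can be sketched with a minimal E-PCA-style factorization under a Poisson link, a simplified stand-in for the cited Collins, Dasgupta and Schapire formulation; the beliefs, ranks, and step sizes here are synthetic:

```python
import numpy as np

# Beliefs B (states x samples) are approximated by exp(U @ Z), which keeps
# reconstructions non-negative and handles sparsity better than linear PCA.
rng = np.random.default_rng(0)
n_states, n_beliefs, k = 20, 50, 3

# Sparse synthetic beliefs concentrated on a few states each
B = rng.dirichlet(np.full(n_states, 0.05), size=n_beliefs).T

U = 0.1 * rng.normal(size=(n_states, k))
Z = 0.1 * rng.normal(size=(k, n_beliefs))
lr = 0.01
for _ in range(2000):
    G = np.exp(U @ Z) - B            # gradient of the Poisson loss wrt U @ Z
    U, Z = U - lr * G @ Z.T, Z - lr * U.T @ G

B_hat = np.exp(U @ Z)
print(float(np.abs(B_hat - B).mean()))   # mean absolute reconstruction error
```

Planning then happens over the k-dimensional codes in `Z` rather than the full belief simplex, which is what makes the large models tractable.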
Color image quality measures and retrieval
The focus of this dissertation is mainly on color images, especially images with lossy compression. Issues related to color quantization, color correction, color image retrieval and color image quality evaluation are addressed. A no-reference color image quality index is proposed. A novel color correction method applied to low bit-rate JPEG images is developed. A novel method for content-based image retrieval based upon combined feature vectors of shape, texture, and color similarities has been suggested. In addition, an image-specific color reduction method has been introduced, which allows a 24-bit JPEG image to be shown on an 8-bit color monitor with a 256-color display. The reduction in download and decode time mainly comes from the smart encoder incorporating the proposed color reduction method after the color space conversion stage. To summarize, the methods that have been developed can be divided into two categories: one is visual representation, and the other is image quality measure.
Three algorithms are designed for visual representation:
(1) An image-based visual representation for color correction on low bit-rate JPEG images. Previous studies on color correction have focused mainly on color image calibration among devices. Little attention was paid to compressed images, whose color distortion is evident in low bit-rate JPEG images. In this dissertation, a lookup table algorithm is designed based on the loss of PSNR at different compression ratios.
(2) A feature-based representation for content-based image retrieval. It is a concatenated vector of color, shape, and texture features from region of interest (ROI).
(3) An image-specific 256-color (8-bit) reproduction for color reduction from 16 million colors (24 bits). By inserting the proposed color reduction method into a JPEG encoder, the image size, and hence the transmission time, can be further reduced. This smart encoder also enables its decoder to spend less time decoding.
Three algorithms are designed for image quality measure (IQM):
(1) A referenced IQM based upon image representation in a very low dimension. Previous studies on IQMs operate in high-dimensional domains, including the spatial and frequency domains. In this dissertation, a low-dimensional IQM based on random projection is designed, which preserves the accuracy of the high-dimensional IQM.
(2) A no-reference image blurring metric. Based on the edge gradient, the degree of image blur can be measured.
(3) A no-reference color IQM based upon colorfulness, contrast and sharpness.
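The gradient-based blur measure in item (2) can be sketched as follows; this is an illustrative score in the same spirit, not the dissertation's exact metric, and the test image is synthetic:

```python
import numpy as np

def blur_score(img):
    """Average gradient magnitude; it drops as an image is blurred."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).mean())

def box_blur(img, k=5):
    """Simple k x k box filter via averaging shifted crops."""
    h, w = img.shape
    return sum(img[i:h - k + 1 + i, j:w - k + 1 + j]
               for i in range(k) for j in range(k)) / k**2

rng = np.random.default_rng(0)
sharp = (rng.random((64, 64)) > 0.5).astype(float)   # high-contrast test image
print(blur_score(sharp) > blur_score(box_blur(sharp)))  # True: blur lowers the score
```

Being no-reference, such a score needs only the image itself, which is what makes it usable on already-compressed JPEGs.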
A POD-selective inverse distance weighting method for fast parametrized shape morphing
Efficient shape morphing techniques play a crucial role in the approximation of partial differential equations defined in parametrized domains, such as for fluid-structure interaction or shape optimization problems. In this paper, we focus on inverse distance weighting (IDW) interpolation techniques, where a reference domain is morphed into a deformed one via the displacement of a set of control points. We aim at reducing the computational burden characterizing a standard IDW approach without significantly compromising the accuracy. To this aim, first we propose an improvement of IDW based on a geometric criterion that automatically selects a subset of the original set of control points. Then, we combine this new approach with a dimensionality reduction technique based on a proper orthogonal decomposition of the set of admissible displacements. This choice further reduces computational costs. We verify the performance of the new IDW techniques on several tests by investigating the trade-off reached in terms of accuracy and efficiency.
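The baseline IDW morphing that the paper improves on can be sketched as follows; the 2-D points and power p = 2 are toy choices, and the selective control-point criterion and POD step are omitted:

```python
import numpy as np

def idw_morph(nodes, ctrl, disp, p=2, eps=1e-12):
    """Move each mesh node by a distance-weighted average of the
    control-point displacements (plain inverse distance weighting)."""
    d = np.linalg.norm(nodes[:, None, :] - ctrl[None, :, :], axis=2)
    w = 1.0 / (d**p + eps)               # eps guards nodes on a control point
    w /= w.sum(axis=1, keepdims=True)
    return nodes + w @ disp

nodes = np.array([[0.50, 0.50], [0.25, 0.25]])
ctrl = np.array([[0.0, 0.0], [1.0, 1.0]])
disp = np.array([[0.0, 0.0], [0.1, 0.0]])   # move one control point in x
morphed = idw_morph(nodes, ctrl, disp)
print(morphed)
```

The cost of the weight matrix grows with the number of control points, which is exactly why selecting a subset of them, as the paper proposes, pays off.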
Analysis of Human Motion Data for Vehicle Ingress Discomfort Evaluation
The ease of entering a vehicle, known as ingress, is one of the important ergonomic factors that car manufacturers consider during the process of vehicle design. This has motivated vehicle manufacturers to focus on assessing and improving ingress discomfort. With the rapid advancement in human motion capture and computer simulation technologies, one of the promising means to evaluate vehicle ingress discomfort is through analyzing human motion data. For this purpose, this dissertation will focus on proposing methods that analyze human motion data to evaluate vehicle ingress discomfort. The first part of this dissertation proposes a method for identifying and analyzing human motion variation patterns. The method uses a high-order array to represent human motion data and utilizes the Uncorrelated Multilinear Principal Component Analysis (UMPCA) method to identify variation patterns in human motion. The proposed method is capable of preserving the original spatiotemporal correlation structure of human motion data and provides better feature extraction than Principal Component Analysis (PCA). The method is applied to the ingress motion data to show its effectiveness in automatically detecting important motion variation patterns. The second part of this dissertation proposes a method for modeling the relationship between ingress motion and ingress discomfort ratings. The method presents a modeling framework that predicts subjective responses using human motion trajectories. The framework integrates curve alignment and data dimension reduction methods into the prediction model development. A case study is shown to demonstrate that human motion prediction models are more effective than simpler, more common ingress discomfort prediction models. The third part of this dissertation proposes a method for statistical hypothesis testing and sample size calculation for comparing ingress discomfort proportions of different vehicle designs. 
A dual-bootstrap method is proposed to estimate the standard deviation of ingress discomfort proportions estimated using a human motion prediction model. The proposed method is capable of separating the two sources of variation: the modeling variance, which results from the uncertainty in the estimated prediction models, and the sampling variance, which arises due to the randomness in the prediction dataset. The effectiveness of the proposed method is demonstrated through an ingress case study. The research presented in this dissertation is applicable beyond the analysis of ingress motion data; it can be applied to many fields where human motion data is available. At a broader level, the research presented can be useful in the analysis of functional data of many types, with particular applicability to multi-channel time-series data.
Ph.D. Engineering (Manufacturing), University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/116903/1/Thesis_Hadi_Final_Version.pd
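The dual-bootstrap split can be sketched generically: resample the training set to capture modeling variance, and resample the prediction set to capture sampling variance. The data and the least-squares threshold model below are synthetic stand-ins, not the dissertation's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y):
    """Least-squares scorer thresholded into a binary 'discomfort' label."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    return lambda Xn: (Xn @ w > 0.5)

Xtr = rng.normal(size=(200, 3))
ytr = (Xtr @ [0.6, 0.3, 0.1] + 0.2 * rng.normal(size=200) > 0).astype(float)
Xpred = rng.normal(size=(150, 3))
B = 200

# Modeling variance: refit on bootstrap training samples, fixed prediction set
model_props = []
for _ in range(B):
    i = rng.integers(0, len(Xtr), len(Xtr))
    model_props.append(fit(Xtr[i], ytr[i])(Xpred).mean())

# Sampling variance: fixed model, bootstrap the prediction set
pred = fit(Xtr, ytr)(Xpred)
samp_props = [pred[rng.integers(0, len(pred), len(pred))].mean()
              for _ in range(B)]

print(np.std(model_props), np.std(samp_props))
```

Keeping the two standard deviations separate is what allows the sample-size calculation the abstract mentions to target the dominant source of uncertainty.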
Big-Data-Driven Materials Science and its FAIR Data Infrastructure
This chapter addresses the fourth paradigm of materials research -- big-data
driven materials science. Its concepts and state-of-the-art are described, and
its challenges and chances are discussed. For furthering the field, Open Data
and an all-embracing sharing, an efficient data infrastructure, and the rich
ecosystem of computer codes used in the community are of critical importance.
For shaping this fourth paradigm and contributing to the development or
discovery of improved and novel materials, data must be what is now called FAIR
-- Findable, Accessible, Interoperable and Re-purposable/Re-usable. This sets
the stage for advances of methods from artificial intelligence that operate on
large data sets to find trends and patterns that cannot be obtained from
individual calculations and not even directly from high-throughput studies.
Recent progress is reviewed and demonstrated, and the chapter is concluded by a
forward-looking perspective, addressing important, not yet solved challenges.
Comment: submitted to the Handbook of Materials Modeling (eds. S. Yip and W. Andreoni), Springer 2018/201