
    Adding dynamics to sketch-based character animations

    Cartoonists and animators often use lines of action to emphasize dynamics in character poses. In this paper, we propose a physically-based model to simulate the motion of the line of action, leading to rich motion from simple drawings. Our method is decomposed into three steps. Based on user-provided strokes, we forward simulate 2D elastic motion. To ensure continuity across keyframes, we re-target the forward simulations to the drawn strokes. Finally, we synthesize a 3D character motion matching the dynamic line. The fact that the line can move freely like an elastic band raises new questions about its relationship to the body over time: the line may move faster and leave body parts behind, or it may slide slowly towards other body parts for support. We conjecture that the artist seeks to maximize how much of the line the character's body fills, while respecting basic realism constraints such as balance. Based on these insights, we provide a method that synthesizes 3D character motion, given discontinuously constrained body parts that are specified by the user at key moments.
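    A minimal sketch of the forward-simulation step described above, treating the drawn stroke as a damped 2D mass-spring chain integrated with semi-implicit Euler. The function name, the pinning of the first stroke point, and all parameter values are illustrative assumptions, not the authors' implementation:

    import math

    def simulate_elastic_line(points, steps=200, dt=0.01,
                              stiffness=400.0, damping=4.0, gravity=-9.8):
        """Forward-simulate a 2D 'line of action' as a damped mass-spring
        chain (unit masses, semi-implicit Euler). The first stroke point
        is pinned, loosely standing in for the anchored character root."""
        pos = [list(p) for p in points]
        vel = [[0.0, 0.0] for _ in points]
        rest = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
        frames = []
        for _ in range(steps):
            force = [[0.0, gravity] for _ in pos]
            for i, r in enumerate(rest):
                dx = pos[i + 1][0] - pos[i][0]
                dy = pos[i + 1][1] - pos[i][1]
                d = math.hypot(dx, dy) or 1e-9
                f = stiffness * (d - r)              # Hooke's law along the segment
                fx, fy = f * dx / d, f * dy / d
                force[i][0] += fx; force[i][1] += fy
                force[i + 1][0] -= fx; force[i + 1][1] -= fy
            for i in range(1, len(pos)):             # index 0 stays pinned
                vel[i][0] += dt * (force[i][0] - damping * vel[i][0])
                vel[i][1] += dt * (force[i][1] - damping * vel[i][1])
                pos[i][0] += dt * vel[i][0]
                pos[i][1] += dt * vel[i][1]
            frames.append([tuple(p) for p in pos])
        return frames

    # An initially bent stroke sags and oscillates like an elastic band.
    stroke = [(0.0, 0.0), (0.2, 0.5), (0.5, 0.9), (1.0, 1.0)]
    print(simulate_elastic_line(stroke)[-1])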

    Applying MDL to Learning Best Model Granularity

    The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on its granularity, for example the choice of precision of the parameters: too high a precision generally leads to modeling accidental noise, while too low a precision may conflate models that should be distinguished. This precision is often determined ad hoc. In MDL, the best model is the one that most compresses a two-part code of the data set; this embodies "Occam's Razor." In two quite different experimental settings, the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment, the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic matching, using multiple prototypes per character, the optimal prediction rate is achieved at the value of the learned parameter (the length of the sampling interval) considered most likely by MDL, which coincides with the best value found experimentally. In the second experiment, the task is to model a robot arm with two degrees of freedom using a three-layer feed-forward neural network, where we need to determine the number of nodes in the hidden layer giving the best modeling performance. The optimal model (the one that extrapolates best on unseen examples) occurs at the number of hidden nodes considered most likely by MDL, which again coincides with the best value found experimentally.
    Comment: LaTeX, 32 pages, 5 figures. To appear in the Artificial Intelligence journal.
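    To make the two-part coding idea concrete, here is a toy granularity selection in the same spirit: choosing the number of histogram bins for a data set by minimizing model bits plus data bits. The histogram setting, the log2(n+1)-bits-per-count model cost, and the fixed coding resolution are simplified stand-ins, not the paper's sampling-interval or hidden-node experiments:

    import math, random

    def mdl_bin_count(data, lo, hi, candidates, resolution=1.0 / 512):
        """Return the bin count minimizing the two-part code length
        L(model) + L(data | model); data are coded to a fixed resolution."""
        n = len(data)
        best_bits, best_k = float("inf"), None
        for k in candidates:
            width = (hi - lo) / k
            counts = [0] * k
            for x in data:
                idx = min(max(int((x - lo) / width), 0), k - 1)
                counts[idx] += 1
            model_bits = k * math.log2(n + 1)        # cost of describing the k bin counts
            data_bits = sum(-c * math.log2(c * resolution / (n * width))
                            for c in counts if c)    # cost of the points given the model
            if model_bits + data_bits < best_bits:
                best_bits, best_k = model_bits + data_bits, k
        return best_k

    random.seed(0)
    sample = [random.gauss(0.0, 1.0) for _ in range(2000)]
    print("MDL-selected bin count:", mdl_bin_count(sample, -4.0, 4.0, range(2, 64)))

    Finer bins fit the data better (fewer data bits) but cost more model bits; the MDL-selected granularity sits where the total is smallest.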

    An analysis of the use of graphics for information retrieval

    Several research groups have addressed the problem of retrieving vector graphics. This work, however, has focused either on domain-dependent areas or on very simple graphics languages. Here we take a fresh look at the issue of graphics retrieval in general, and in particular at the tasks which retrieval systems must support. The paper presents a series of case studies that explored the needs of professionals, in the hope that these needs can help direct future graphics IR research. Suggested modelling techniques for some of the graphic collections are also presented.

    Methods and apparatus employing vibratory energy for wrenching (Patent)

    An ultrasonic wrench for applying vibratory energy to mechanical fasteners.

    Rectangular Layouts and Contact Graphs

    Contact graphs of isothetic rectangles unify many concepts from applications including VLSI and architectural design, computational geometry, and GIS. Minimizing the area of their corresponding rectangular layouts is a key problem. We study the area-optimization problem and show that it is NP-hard to find a minimum-area rectangular layout of a given contact graph. We present O(n)-time algorithms that construct O(n^2)-area rectangular layouts for general contact graphs and O(n log n)-area rectangular layouts for trees. (For trees, this is an O(log n)-approximation algorithm.) We also present an infinite family of graphs (resp., trees) that require Ω(n^2) (resp., Ω(n log n)) area. We derive these results by presenting a new characterization of graphs that admit rectangular layouts, using the related concept of rectangular duals. A corollary to our results relates the class of graphs that admit rectangular layouts to rectangle-of-influence drawings.
    Comment: 28 pages, 13 figures, 55 references, 1 appendix.
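    As a point of reference for the area bounds above, the following toy constructs a trivial contact layout for a tree: each node becomes a unit-height rectangle with its children's columns packed directly beneath it, so every parent-child pair shares a horizontal contact. This naive slicing uses O(n * depth), i.e. O(n^2), area in the worst case; it is a baseline illustration only, not the paper's O(n log n) construction:

    def layout_tree(tree, root, x=0.0, width=None, y=0.0, gap=0.01):
        """Return {node: (x, y, w, h)} rectangles for a rooted tree given
        as {node: [children]}. Small horizontal gaps keep sibling
        rectangles from touching, so contacts match tree edges only."""
        def size(v):
            return 1 + sum(size(c) for c in tree.get(v, ()))
        if width is None:
            width = float(size(root))
        rects = {root: (x, y, width, 1.0)}
        children = tree.get(root, [])
        if children:
            usable = width - gap * (len(children) - 1)
            cx = x
            for child in children:
                w = usable * size(child) / (size(root) - 1)
                rects.update(layout_tree(tree, child, cx, w, y + 1.0, gap))
                cx += w + gap
        return rects

    # A path graph forces depth n, hence Theta(n^2) area for this method.
    tree = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
    for node, rect in sorted(layout_tree(tree, "a").items()):
        print(node, rect)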

    Electron Cloud Buildup Characterization Using Shielded Pickup Measurements and Custom Modeling Code at CESRTA

    The Cornell Electron Storage Ring Test Accelerator (CESRTA) experimental program includes investigations into electron cloud buildup, applying various mitigation techniques in custom vacuum chambers. Among these are two 1.1-m-long sections located symmetrically in the east and west arc regions. These chambers are equipped with pickup detectors shielded against the direct beam-induced signal; they detect cloud electrons migrating through an 18-mm-diameter pattern of small holes in the top of the chamber. A digitizing oscilloscope is used to record the signals, providing time-resolved information on cloud development. Carbon-coated, TiN-coated, and uncoated aluminum chambers have been tested. Electron and positron beams of 2.1, 4.0, and 5.3 GeV with a variety of bunch populations and spacings in steps of 4 and 14 ns have been used. Here we report on results from the ECLOUD modeling code which highlight the sensitivity of these measurements to the physical phenomena determining cloud buildup, such as the azimuthal and energy distributions of photoelectron production and the secondary-yield parameters, including the true-secondary, re-diffused, and elastic yield values.
    Comment: Presented at ECLOUD'12: Joint INFN-CERN-EuCARD-AccNet Workshop on Electron-Cloud Effects, La Biodola, Isola d'Elba, Italy, 5-9 June 2012; CERN-2013-002, pp. 241-25
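    A toy bunch-by-bunch buildup model in the spirit of such simulations, though far simpler than ECLOUD: each bunch passage seeds photoelectrons and multiplies the existing cloud by an effective secondary-emission yield, while electrons are lost to the chamber wall between bunches. All parameter values are illustrative assumptions, not CESRTA measurements or ECLOUD inputs:

    def cloud_buildup(n_bunches=60, spacing_ns=14, photoelectrons=1.0e6,
                      sey_eff=1.3, decay_per_ns=0.02, saturation=5.0e8):
        """Track a single cloud-density number through a bunch train:
        seeding plus multipacting gain at each passage, exponential wall
        losses between passages, and a space-charge saturation cap."""
        survival = (1.0 - decay_per_ns) ** spacing_ns
        density, history = 0.0, []
        for _ in range(n_bunches):
            density = density * sey_eff + photoelectrons  # gain + photoelectron seeding
            density = min(density, saturation)            # space charge limits growth
            density *= survival                           # absorption at the wall
            history.append(density)
        return history

    for spacing in (4, 14):
        trace = cloud_buildup(spacing_ns=spacing)
        print(f"{spacing:2d} ns spacing -> {trace[-1]:.3e} electrons at train end")

    Even this crude recursion reproduces the qualitative behavior the measurements probe: closely spaced bunches drive the cloud to saturation, while wider spacings let wall losses hold it near a low equilibrium.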

    Improving Loss Estimation for Woodframe Buildings. Volume 2: Appendices

    This report documents Tasks 4.1 and 4.5 of the CUREE-Caltech Woodframe Project. It presents a theoretical and empirical methodology for creating probabilistic relationships between seismic shaking severity and physical damage and loss, for buildings in general and for woodframe buildings in particular. The methodology, called assembly-based vulnerability (ABV), is illustrated for 19 specific woodframe buildings of varying age, size, configuration, construction quality, and retrofit and redesign condition. The study employs variations on four basic floorplans, called index buildings: a small house, a large house, a townhouse, and an apartment building. The resulting seismic vulnerability functions give the probability distribution of repair cost as a function of instrumental ground-motion severity. These vulnerability functions are useful by themselves, and are also transformed into seismic fragility functions compatible with the HAZUS software. The methods and data employed here use well-accepted structural engineering techniques, laboratory test data and computer programs produced by Element 1 of the CUREE-Caltech Woodframe Project, other recently published research, and standard construction cost-estimating methods. While based on such well-established principles, this report represents a substantially new contribution to the field of earthquake loss estimation. Its methodology is notable in that it calculates detailed structural response using nonlinear time-history structural analysis, as opposed to the simplifying assumptions required by nonlinear pushover methods. It models physical damage at the level of individual building assemblies (individual windows, segments of wall, etc.) for which detailed laboratory testing is available, as opposed to two or three broad component categories that cannot be directly tested. And it explicitly models uncertainty in ground motion, structural response, component damageability, and contractor costs. Consequently, a very detailed, verifiable, probabilistic picture of physical performance and repair cost is produced, capable of informing a variety of decisions regarding seismic retrofit, code development, code enforcement, performance-based design for above-code applications, and insurance practices.
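    A minimal assembly-based-vulnerability-style Monte Carlo sketch of the loss pipeline described above: sample structural response given a ground-motion intensity, sample each assembly's damage from a lognormal fragility, then sum uncertain repair costs. The demand model, fragility medians, dispersions, and unit costs below are made-up placeholders, not CUREE-Caltech Woodframe Project data:

    import math, random

    def abv_repair_cost(intensity, n_sims=10000, seed=1):
        """Return (mean, median) simulated repair cost for a toy building
        made of repeated assemblies, each with a lognormal drift capacity."""
        rng = random.Random(seed)
        # (assembly, median drift capacity, dispersion, repair cost in $)
        assemblies = [("window", 0.010, 0.40, 800.0),
                      ("wall segment", 0.015, 0.35, 2500.0),
                      ("stucco panel", 0.008, 0.50, 1200.0)] * 10
        losses = []
        for _ in range(n_sims):
            # Lognormal peak-drift demand given the intensity measure.
            drift = 0.002 * intensity * math.exp(rng.gauss(0.0, 0.3))
            total = 0.0
            for _name, median, beta, cost in assemblies:
                capacity = median * math.exp(rng.gauss(0.0, beta))
                if drift > capacity:                            # assembly damaged
                    total += cost * math.exp(rng.gauss(0.0, 0.2))  # contractor cost uncertainty
            losses.append(total)
        losses.sort()
        return sum(losses) / n_sims, losses[n_sims // 2]

    mean_cost, median_cost = abv_repair_cost(intensity=5.0)
    print(f"mean repair cost ${mean_cost:,.0f}; median ${median_cost:,.0f}")

    Sweeping the intensity argument traces out a toy vulnerability function of the kind the report derives per index building.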

    Seismic Performance of Steel Pipe Pile to Cap Beam Moment Resisting Connections

    INE/AUTC 13.0