Quad Meshing
Triangle meshes have been nearly ubiquitous in computer graphics, and a large body of data structures and geometry processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi-regular ones, have advantages for many applications, and significant progress has been made in quadrilateral mesh generation and processing over the last several years. In this State of the Art Report, we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrization, and remeshing.
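Several of the surveyed topics, in particular surface analysis and mesh quality, rest on per-element quality metrics. As a hedged illustration (the metric choice and the function below are mine, not taken from the report), the scaled Jacobian of a planar quadrilateral is one common such measure:

```python
import numpy as np

def scaled_jacobian(quad):
    """Minimum scaled Jacobian over the four corners of a planar quad.

    quad: (4, 2) array of corner coordinates in CCW order.
    Returns a value in [-1, 1]; 1 for a perfect square, <= 0 if inverted.
    """
    quad = np.asarray(quad, dtype=float)
    vals = []
    for i in range(4):
        # Edge vectors leaving corner i toward its two neighbours.
        e1 = quad[(i + 1) % 4] - quad[i]
        e2 = quad[(i - 1) % 4] - quad[i]
        cross = e1[0] * e2[1] - e1[1] * e2[0]
        vals.append(cross / (np.linalg.norm(e1) * np.linalg.norm(e2)))
    return min(vals)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(scaled_jacobian(square))  # 1.0 for the unit square
```

A remesher or adaptive-refinement pass can threshold this value per element to decide which quads need improvement.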
Frustrated two dimensional quantum magnets
We overview the physical effects of exchange frustration and quantum spin fluctuations in (quasi-) two-dimensional (2D) quantum magnets with square, rectangular and triangular structure. Our discussion is based on a frustrated exchange model and its generalizations. These models are closely related and allow tuning between different phases, magnetically ordered as well as more exotic nonmagnetic quantum phases, by changing only one or two control parameters. We survey ground-state properties such as magnetization, saturation fields, ordered moment and structure factor across the full phase diagram, as obtained from numerical exact-diagonalization computations and analytical linear spin-wave theory. We also review finite-temperature properties such as susceptibility, specific heat and the magnetocaloric effect using the finite-temperature Lanczos method. This method is a powerful tool for determining exchange parameters and g-factors from experimental results. We focus mostly on the observable physical frustration effects in magnetic phases, where plenty of quasi-2D material examples exist, to identify the influence of quantum fluctuations on magnetism.
Comment: 78 pages, 54 figures
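The exact-diagonalization computations referred to above work with the full many-body Hilbert space of a small cluster. As a toy illustration only (a 4-site spin-1/2 Heisenberg ring, far smaller than the lattices surveyed here), the Hamiltonian can be assembled from single-site spin operators and diagonalized directly:

```python
import numpy as np

# Spin-1/2 operators S = sigma / 2.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site cluster."""
    mats = [op if j == i else I2 for j in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_ring(n, J=1.0):
    """H = J * sum_i S_i . S_{i+1} with periodic boundaries."""
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n):
        j = (i + 1) % n
        for s in (sx, sy, sz):
            H += J * site_op(s, i, n) @ site_op(s, j, n)
    return H

E = np.linalg.eigvalsh(heisenberg_ring(4))
print(E[0])  # E0/J = -2 for the 4-site ring
```

The full spectrum obtained this way also feeds finite-temperature quantities; the Lanczos method mentioned in the abstract replaces the dense diagonalization for clusters too large to store in full.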
Vector field processing on triangle meshes
While scalar fields on surfaces have been staples of geometry processing, the use of tangent vector fields has steadily grown in geometry processing over the last two decades: they are crucial to encoding directions and sizing on surfaces as commonly required in tasks such as texture synthesis, non-photorealistic rendering, digital grooming, and meshing. There are, however, a variety of discrete representations of tangent vector fields on triangle meshes, and each approach offers different tradeoffs among simplicity, efficiency, and accuracy depending on the targeted application.
This course reviews the three main families of discretizations used to design computational tools for vector field processing on triangle meshes: face-based, edge-based, and vertex-based representations. In the process of reviewing the computational tools offered by these representations, we go over a large body of recent developments in vector field processing in the area of discrete differential geometry. We also discuss the theoretical and practical limitations of each type of discretization, and cover increasingly-common extensions such as n-direction and n-vector fields.
While the course will focus on explaining the key approaches to practical encoding (including data structures) and manipulation (including discrete operators) of finite-dimensional vector fields, important differential geometric notions will also be covered: as often in Discrete Differential Geometry, the discrete picture will be used to illustrate deep continuous concepts such as covariant derivatives, metric connections, or Bochner Laplacians.
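The face-based family above can be made concrete with the smallest possible example: the intrinsic gradient of a per-vertex scalar field, which is constant over each triangle and therefore yields one face-based tangent vector per face. A sketch in my own notation (not code from the course):

```python
import numpy as np

def face_gradient(p, u):
    """Constant gradient of a linear scalar field on one triangle.

    p: (3, 3) array of vertex positions, u: (3,) vertex values.
    Returns a tangent vector lying in the triangle's plane,
    i.e. one sample of a face-based vector field.
    """
    p, u = np.asarray(p, float), np.asarray(u, float)
    n = np.cross(p[1] - p[0], p[2] - p[0])
    area2 = np.linalg.norm(n)  # twice the triangle area
    n /= area2                 # unit normal
    grad = np.zeros(3)
    for i in range(3):
        # Edge opposite vertex i, oriented counter-clockwise.
        e = p[(i + 2) % 3] - p[(i + 1) % 3]
        grad += u[i] * np.cross(n, e)
    return grad / area2

tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(face_gradient(tri, [0, 1, 0]))  # expected [1. 0. 0.], the gradient of u = x
```

Edge-based and vertex-based representations store the same kind of directional information at different mesh elements, trading this per-face simplicity for other properties discussed in the course.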
Computer-Aided Geometry Modeling
Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.
Rapid Finite Fault Inversion for Megathrust Earthquakes
The largest earthquakes take place at subduction zones, and their
devastating impact in populated regions is often exacerbated by
their ability to excite powerful tsunamis. Today, we understand
that large subduction earthquakes, known as megathrust events,
are caused by the sudden release of elastic strain energy stored
at the plate boundaries where a localized, previously locked,
section of the megathrust ruptures. The rupture process can
propagate over hundreds of kilometres and slip on the fault can
be tens of meters. Using ground motion data to image the
spatio-temporal spread of slip over the fault surface is known as
finite fault inversion (FFI). Over the past decade FFI has become
almost routine, so that results produced by different groups are
available within several days or even hours after a large event.
However, these results typically require manual processing of the
data, and are not accompanied by appraisals of uncertainty. My
PhD research has focused on obtaining slip models for such events
in near real time. I divided my analysis into three main projects
that are discussed in this thesis.
First, I evaluated the performance of a long period seismic wave,
the W-phase, which arrives between P and S waves, in a classic
FFI scheme for the Maule (2010, Mw = 8.8) and Tohoku (2011, Mw =
9.1) events. I found that, despite its long period, the W-phase
can resolve first-order features of the rupture for both events.
Since the W-phase is not very sensitive to 3D structure, the
processing of data for the W-phase is generally simpler than it
is for the body and surface waves that are commonly used for FFI.
In addition, the W-phase is fast and can be obtained soon after
the arrival of the P-wave.
Second, I improved the classic inversion scheme to increase its
robustness and rigour for rapid inversions. The most remarkable
aspects of this inversion approach are that the faulting surface
is constrained to follow the 3D subducting slab geometry and that
the smoothness of the rupture is objectively determined. I used
this approach for the recent Illapel event (2015, Mw = 8.3) and
showed that a meaningful preliminary model can be obtained within
25 minutes of rupture onset. A refined solution can be obtained
within 1 hour of the origin time, which is still useful for the
management of the disaster.
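In a generic linear finite fault setting, "objectively determined" smoothness corresponds to choosing the regularization weight from the data rather than by hand. The sketch below uses a Tikhonov-regularized least-squares problem with a discrepancy-principle selection rule; the forward operator G, the noise level, and the selection rule are all illustrative assumptions, not the actual scheme of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Illustrative smoothing forward operator (a stand-in for Green's functions).
x = np.linspace(0, 1, n)
G = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.01)
m_true = np.exp(-((x - 0.5) / 0.1) ** 2)  # smooth synthetic "slip" model
sigma = 0.01
d = G @ m_true + sigma * rng.standard_normal(n)

# Second-difference roughness operator L penalizes rough slip models.
L = np.diff(np.eye(n), 2, axis=0)

def solve(lam):
    """Minimize ||G m - d||^2 + lam^2 ||L m||^2 via a stacked system."""
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Discrepancy principle: take the largest smoothing weight whose
# residual drops to the expected noise level sqrt(n) * sigma.
target = np.sqrt(n) * sigma
for lam in np.logspace(1, -6, 50):
    m = solve(lam)
    if np.linalg.norm(G @ m - d) <= target:
        break
print(f"lambda = {lam:.2e}, model error = {np.linalg.norm(m - m_true):.3f}")
```

The point of the data-driven rule is that no analyst has to tune the smoothing weight during the minutes after an event, which is what makes the scheme usable in near real time.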
Finally, I have developed a novel linearized inversion method
that allows slip uncertainties to be estimated during rapid
finite fault inversion. This is an intrinsically complex problem
as normally positivity constraints are imposed on finite fault
models to ensure well behaved solutions. Uncertainties are
typically unavailable for FFI results, but they can be crucial
for meaningful interpretation of the slip models. To estimate
them, I follow a probabilistic Bayesian framework while avoiding
computationally demanding Bayesian sampling. Instead, by
using a coordinate transformation, the posterior distribution is
approximated and obtained by linearized inversion. This inversion
scheme was tested employing both simulated and real W-phase data,
showing that meaningful uncertainty estimates can be inferred.
A comparison with Bayesian sampling is also performed, suggesting
that the error of approximating the posterior is small. Including
uncertainty estimates in early finite fault models will reduce
the risk of working with misleading solutions.
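A common way to realize the coordinate-transformation idea sketched above (my illustrative choice; the thesis may use a different transformation) is to invert for z = log(m), so positivity of the slip m = exp(z) is automatic, and to obtain an approximate Gaussian posterior in z by Gauss-Newton linearization:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 10
G = rng.standard_normal((n, p))              # toy forward operator
m_true = np.exp(rng.standard_normal(p) * 0.3)  # strictly positive "slip"
sigma = 0.05
d = G @ m_true + sigma * rng.standard_normal(n)

# Work in z = log(m) so that m = exp(z) > 0 holds by construction.
z = np.zeros(p)       # start at m = 1
prior_var = 10.0      # broad Gaussian prior on z
for _ in range(50):   # Gauss-Newton iterations toward the MAP point
    m = np.exp(z)
    J = G * m[None, :]  # Jacobian d(G m)/dz = G diag(m)
    A = J.T @ J / sigma**2 + np.eye(p) / prior_var
    g = J.T @ (d - G @ m) / sigma**2 - z / prior_var
    z = z + np.linalg.solve(A, g)

# Linearized posterior covariance in z; its diagonal gives
# per-parameter uncertainty that respects positivity of m.
cov_z = np.linalg.inv(A)
print(np.exp(z))                 # posterior-mode slip estimate
print(np.sqrt(np.diag(cov_z)))   # approximate log-space std devs
```

The closed-form covariance is what replaces expensive posterior sampling: a single extra linear solve yields uncertainty bars alongside the rapid slip estimate.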
The rigour, objectivity and robustness of the inversion
techniques devised in this thesis can be a valuable contribution
to the FFI community. Since I have utilized mostly open source
software and a desktop computer to carry out this research, the
tools I have developed can be easily used for early warning in
most seismic observatories. I believe that, when facing such
disastrous events, the methods developed here can be important to
assist authorities with emergency response.
A framework for hull form reverse engineering and geometry integration into numerical simulations
The thesis presents a reverse engineering and CAD integration framework specific to ship hull forms. The reverse engineering part proposes three alternative reconstruction approaches, namely curve network reconstruction, direct surface fitting, and triangulated surface reconstruction. The CAD integration part includes surface healing, region identification, and domain preparation strategies, which are used to adapt the CAD model to downstream application requirements. In general, the developed framework bridges point clouds and CAD models, obtained from IGES and STL files, into downstream applications.
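Of the three reconstruction routes, direct surface fitting is the simplest to sketch: a least-squares fit of a low-order polynomial height field to scanned hull points. The quadratic basis and the synthetic point cloud below are my illustrative choices, not the thesis' actual fitting machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "scanned" points on a gently curved hull patch z = f(x, y).
x, y = rng.uniform(0, 1, (2, 200))
z = 0.5 * x**2 - 0.3 * x * y + 0.1 * y + 0.01 * rng.standard_normal(200)

# Direct surface fitting: least squares on a quadratic polynomial basis.
B = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
coef, *_ = np.linalg.lstsq(B, z, rcond=None)
print(coef)  # close to [0, 0, 0.1, -0.3, 0.5, 0]
```

A production pipeline would fit B-spline or NURBS patches instead of a single polynomial, but the normal-equations structure of the fit is the same.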
Feature detection algorithms in computed images
The problem of sensing a medium by several sensors and retrieving
interesting features is a very general one. The basic framework of the
problem is generally the same for applications ranging from MRI,
tomography and radar SAR imaging to subsurface imaging, even though the
data acquisition processes, sensing geometries and sensed properties are
different. In this thesis we introduce a new perspective on the
problem of remote sensing and information retrieval by studying the
problem of subsurface imaging using GPR and seismic sensors.
We have shown that if the sensed medium is sparse in some domain then it can be imaged using many fewer measurements than required by the standard methods. This leads to much lower data acquisition times and better images representing the medium. We have used the ideas from Compressive Sensing, which show that a small number of random measurements about a signal is sufficient to completely characterize it, if the signal is sparse or compressible in some domain. Although we have applied our ideas to the subsurface imaging problem, our results are general and can be extended to other remote sensing applications.
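The claim above, that a sparse medium can be recovered from far fewer random measurements than standard sampling requires, can be reproduced in a few lines. Orthogonal matching pursuit serves here as the reconstruction step (one standard solver among several, not necessarily the one used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 50, 4  # signal length, measurements, sparsity
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)  # k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x                                     # m << n random projections

# Orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then re-fit on the chosen support.
S, r = [], y.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef
print(np.linalg.norm(x_hat - x))  # ~0: exact recovery from 50 of 128 samples
```

The same mechanics carry over to imaging: the columns of A become the physical measurement kernels of the GPR or seismic array, and the sparsifying domain replaces the identity basis used here.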
A second objective in remote sensing is information retrieval
which involves searching for important features in the computed image of
the medium. In this thesis we focus on detecting buried structures like
pipes, and tunnels in computed GPR or seismic images. The problem of
finding these structures under high clutter and noise conditions, and
finding them faster than standard shape-detection methods such as the
Hough transform, is analyzed.
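For reference, the standard Hough transform used as the baseline accumulates votes over a discretized parameter space. The generic line-detection variant below (not the thesis' GPR-specific detector) shows the brute-force voting cost that the faster methods aim to avoid:

```python
import numpy as np

def hough_lines(img, n_theta=180):
    """Vote-based line detection: each nonzero pixel votes for every
    (rho, theta) line passing through it."""
    h, w = img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# A horizontal row of pixels at y = 5.
img = np.zeros((20, 20), dtype=bool)
img[5, :] = True
acc, thetas, diag = hough_lines(img)
r, t = np.unravel_index(np.argmax(acc), acc.shape)
print(r - diag, np.degrees(thetas[t]))  # rho = 5, theta near 90 degrees
```

Every pixel votes in every theta bin, so the cost grows with image size times parameter resolution; detectors for parameterized shapes such as GPR hyperbolas pay an even larger, higher-dimensional accumulator.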
One of the most important contributions of this thesis is a single
framework in which the sensing and information retrieval stages are
unified using compressive sensing. Instead of taking many standard
measurements to compute the image of the medium and then searching for
the necessary information in that image, a much smaller number of
measurements is taken as random projections. The
data acquisition and information retrieval stages are unified by using a
data model dictionary that connects the information to the sensor data.
Ph.D. Committee Chair: McClellan, James H.; Committee Members: Romberg, Justin K.; Scott, Waymond R. Jr.; Vela, Patricio A.; Vidakovic, Bran