128 research outputs found
3D object reconstruction from line drawings.
Cao Liangliang. Thesis (M.Phil.), Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 64-69). Abstracts in English and Chinese.
Contents:
Chapter 1  Introduction and Related Work
  1.1  Reconstruction from Single Line Drawings and the Applications
  1.2  Optimization-based Reconstruction
  1.3  Other Reconstruction Methods
    1.3.1  Line Labeling and Algebraic Methods
    1.3.2  CAD Reconstruction
    1.3.3  Modelling from Images
  1.4  Finding Faces of Line Drawings
  1.5  Generalized Cylinder
  1.6  Research Problems and Our Contribution
    1.6.1  A New Criteria
    1.6.2  Recover Objects from Line Drawings without Hidden Lines
    1.6.3  Reconstruction of Curved Objects
    1.6.4  Planar Limbs Assumption and the Derived Models
Chapter 2  A New Criteria for Reconstruction
  2.1  Introduction
  2.2  Human Visual Perception and the Symmetry Measure
  2.3  Reconstruction Based on Symmetry and Planarity
    2.3.1  Finding Faces
    2.3.2  Constraint of Planarity
    2.3.3  Objective Function
    2.3.4  Reconstruction Algorithm
  2.4  Experimental Results
  2.5  Summary
Chapter 3  Line Drawings without Hidden Lines: Inference and Reconstruction
  3.1  Introduction
  3.2  Terminology
  3.3  Theoretical Inference of the Hidden Topological Structure
    3.3.1  Assumptions
    3.3.2  Finding the Degrees and Ranks
    3.3.3  Constraints for the Inference
  3.4  An Algorithm to Recover the Hidden Topological Structure
    3.4.1  Outline of the Algorithm
    3.4.2  Constructing the Initial Hidden Structure
    3.4.3  Reducing Initial Hidden Structure
    3.4.4  Selecting the Most Plausible Structure
  3.5  Reconstruction of 3D Objects
  3.6  Experimental Results
  3.7  Summary
Chapter 4  Curved Objects Reconstruction from 2D Line Drawings
  4.1  Introduction
  4.2  Related Work
    4.2.1  Face Identification
    4.2.2  3D Reconstruction of Planar Objects
  4.3  Reconstruction of Curved Objects
    4.3.1  Transformation of Line Drawings
    4.3.2  Finding 3D Bezier Curves
    4.3.3  Bezier Surface Patches and Boundaries
    4.3.4  Generating Bezier Surface Patches
  4.4  Results
  4.5  Summary
Chapter 5  Planar Limbs and Degen Generalized Cylinders
  5.1  Introduction
  5.2  Planar Limbs and View Directions
  5.3  DGCs in Homogeneous Coordinates
    5.3.1  Homogeneous Coordinates
    5.3.2  Degen Surfaces
    5.3.3  DGCs
  5.4  Properties of DGCs
  5.5  Potential Applications
    5.5.1  Recovery of DGC Descriptions
    5.5.2  Deformable DGCs
  5.6  Summary
Chapter 6  Conclusion and Future Work
Bibliography
E-commerce for the metal removal industry
The popularity of outsourced fabrication introduces a problem: an inevitable loss of data as information is translated from design to fabrication or from one system to another. Incomplete information delivered to the outsourcing facility, together with inefficient communication between design and fabrication, causes substantial economic losses through late product delivery or poor product quality. To overcome these data-transfer problems and to improve communication between the design and fabrication sides, a design and manufacturing methodology for custom machined parts in E-Commerce is suggested and implemented in this dissertation. This methodology is based on the idea of a Clean Interface, like the Mead-Conway approach for VLSI chip fabrication [MEAD81].
Essential design information for fabricating parts properly with NC (Numerically Controlled) milling machines is expressed in machining/manufacturing features, fabrication-friendly terminology, and is represented by a new language called NCML (Numerical Control Markup Language). NCML is based on XML (Extensible Markup Language), the document-processing standard proposed by the World Wide Web Consortium (W3C). NCML is designed to include the minimum requisite information necessary for the manufacturer to produce the product. The designer specifies the part in NCML, which overcomes the geographical separation between design and manufacturing and minimizes unnecessary interactions caused by lack of information.
To demonstrate the feasibility of custom machined part fabrication and E-Commerce with NCML, three software systems are implemented: FACILE/Design, FACILE/Fabricate, and E-Mill. FACILE is a prototype CAD/CAM system developed to verify NCML's feasibility as an Electronic Data Interchange (EDI) format. FACILE/Design is based on manufacturing features such as holes, contours, and pockets; it can be used to create geometric models, verify the design, and create NCML files. The NCML file is imported by FACILE/Fabricate and turned into G-codes by applying appropriate cutting conditions. Simplified machining simulation and cost estimation tools using NCML inputs are also developed as examples of NCML applications that can support design and manufacturing activities. To demonstrate how NCML could be used in a web-based application, an E-Business model called E-Mill has been implemented. E-Mill is a marketplace for machined parts whose data is encoded in NCML. To make E-Mill a feasible E-Commerce model, two-way communication based on NCML data and the visualization of 3D geometric models in the Virtual Reality Modeling Language (VRML) are combined with a competitive matchmaking mechanism.
In this dissertation, a whole system based on NCML bridges the gap between design and manufacturing. As part of the validation process for the new system, the pros and cons of NCML design features are discussed, and the cost estimation system is calibrated against real cutting results.
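A feature-based part description of the kind NCML encodes can be sketched with standard XML tooling. The element and attribute names below (part, stock, hole, pocket, and their dimensions) are illustrative assumptions, not the actual NCML schema from the dissertation:

```python
# A minimal sketch of a feature-based machined-part description in the
# spirit of NCML. All element/attribute names are hypothetical.
import xml.etree.ElementTree as ET

def build_part() -> ET.Element:
    """Describe a plate with one drilled hole and one milled pocket
    as manufacturing features rather than raw geometry."""
    part = ET.Element("part", name="cover-plate", units="mm")
    stock = ET.SubElement(part, "stock")
    ET.SubElement(stock, "block", length="100", width="60", height="10")
    features = ET.SubElement(part, "features")
    ET.SubElement(features, "hole", x="20", y="30", diameter="6", depth="10")
    ET.SubElement(features, "pocket", x="50", y="20", length="40",
                  width="25", depth="4")
    return part

xml_text = ET.tostring(build_part(), encoding="unicode")
print(xml_text)
```

Because the features name machining operations directly (a hole of a given diameter and depth, a pocket of given extents), a receiving system can map each element to a toolpath and cutting conditions without reverse-engineering the geometry.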
LSST Science Book, Version 2.0
A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with a field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen-second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects at both low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy.
Comment: 596 pages. Also available at full resolution at http://www.lsst.org/lsst/sciboo
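The quoted survey parameters imply the scale of the observing program. A back-of-the-envelope check, ignoring field overlaps and dithering:

```python
# Rough implications of the survey parameters quoted in the abstract.
survey_area_deg2 = 20_000    # total footprint south of +15 deg
fov_deg2 = 9.6               # camera field of view per pointing
visits_per_pointing = 2000   # each pointing imaged 2000 times
exposure_s = 15.0            # fifteen-second exposures

# Number of distinct pointings needed to tile the footprint (no overlap).
pointings = survey_area_deg2 / fov_deg2

# Cumulative open-shutter time accumulated on each pointing over ten years.
open_shutter_per_pointing_h = visits_per_pointing * exposure_s / 3600.0

print(f"pointings needed: {pointings:.0f}")                      # ~2083
print(f"open-shutter time per pointing: "
      f"{open_shutter_per_pointing_h:.1f} h")                    # ~8.3 h
```

The roughly eight hours of accumulated exposure per field is what pushes the co-added point-source depth to r~27.5 from individual fifteen-second visits.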
Enhancing 3D Autonomous Navigation Through Obstacle Fields: Homogeneous Localisation and Mapping, with Obstacle-Aware Trajectory Optimisation
Small flying robots have numerous potential applications, from quadrotors for search and rescue, infrastructure inspection, and package delivery to free-flying satellites for assistance activities inside a space station. To enable these applications, a key challenge is autonomous navigation in 3D, near obstacles, on a power-, mass- and computation-constrained platform. This challenge requires a robot to perform localisation, mapping, dynamics-aware trajectory planning, and control. The current state-of-the-art uses separate algorithms for each component. Here, the aim is a more homogeneous approach in the search for improved efficiency and capability. First, an algorithm is described that performs Simultaneous Localisation And Mapping (SLAM) with a physical 3D map representation that can also represent obstacles for trajectory planning: Non-Uniform Rational B-Spline (NURBS) surfaces. Termed NURBSLAM, this algorithm is shown to combine the typically separate tasks of localisation and obstacle mapping. Second, a trajectory optimisation algorithm is presented that produces dynamically optimal trajectories with direct consideration of obstacles, providing a middle ground between path planners and trajectory smoothers. Called the Admissible Subspace TRajectory Optimiser (ASTRO), the algorithm can produce trajectories that are easier to track than the state-of-the-art for flight near obstacles, as shown in flight tests with quadrotors. For quadrotors to track trajectories, a critical component is the differential flatness transformation that links position and attitude controllers. Existing singularities in this transformation are analysed, and solutions are proposed and demonstrated in flight tests. Finally, NURBSLAM and ASTRO are brought together and tested against the state-of-the-art in a novel simulation environment to prove the concept that a single 3D representation can be used for localisation, mapping, and planning.
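The map primitive underlying NURBSLAM is the NURBS surface. The rational B-spline machinery is easiest to see in curve form; the following is a generic sketch of NURBS evaluation (Cox-de Boor basis recursion plus rational weighting), not code from the thesis:

```python
# Generic NURBS curve evaluation: Cox-de Boor basis plus rational weights.
import math

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, degree, knots, ctrl, weights):
    """Evaluate a NURBS curve: weight-blended basis over control points."""
    num = [0.0] * len(ctrl[0])
    den = 0.0
    for i, (pt, w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, degree, u, knots) * w
        den += b
        num = [n + b * c for n, c in zip(num, pt)]
    return [n / den for n in num]

# A quarter circle: exactly representable with rational weights but not
# with a polynomial B-spline -- the reason NURBS suit curved obstacles.
knots = [0, 0, 0, 1, 1, 1]
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, math.sqrt(2) / 2, 1.0]
x, y = nurbs_point(0.5, 2, knots, ctrl, weights)
print(x, y, math.hypot(x, y))  # the point lies on the unit circle
```

A surface version blends a 2D grid of weighted control points with a product of two such basis functions; the same compact parameterisation then serves both as the SLAM map and as the obstacle representation for planning.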
Computational methods for the analysis of functional 4D-CT chest images.
Medical imaging is an important technology that has been used intensively in recent decades for disease diagnosis and monitoring as well as for assessing treatment effectiveness. Medical images provide a very large amount of valuable information, far more than radiologists and physicians can exploit manually. The design of computer-aided diagnosis (CAD) systems, which can serve as assistive tools for the medical community, is therefore of great importance. This dissertation deals with the development of a complete CAD system for lung cancer, which remains the leading cause of cancer-related death in the USA: in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer), approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. Treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, will have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that improves the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissue may be affected and suffer decreased functionality as a side effect of radiation therapy.
This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions, together with registration of consecutive respiratory phases to estimate elasticity, ventilation, and texture features, provides discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy from injured lung tissue in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for potential lung injuries stemming from the radiation therapy. After segmentation of the VOI, a lung registration framework performs the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that functionality features can be accurately extracted for the lung fields. The registration framework also helps in the evaluation and gated control of radiotherapy through motion-estimation analysis before and after the therapy dose.
Finally, radiation-induced lung injury detection is introduced, combining the two preceding image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately captures the texture of healthy and injured lung tissue by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are computed from the deformation fields obtained by the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissue using the Jacobian of the deformation field, and the tissue elasticity using strain components calculated from the gradient of the deformation field. Finally, these features are combined in a classification model to detect injured parts of the lung at an early stage and enable earlier intervention.
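The ventilation measure described above follows a standard construction: for a deformation x → x + u(x), the local volume change is the determinant of the Jacobian I + ∇u. A generic finite-difference sketch (the synthetic displacement field below is illustrative, not data from the work):

```python
# Ventilation from a displacement field: det(I + grad u) per voxel.
# J > 1 indicates local expansion (inhalation), J < 1 local compression.
import numpy as np

def jacobian_determinant(u):
    """u: displacement field of shape (3, nz, ny, nx).
    Returns det(I + grad u) estimated with finite differences."""
    grads = [np.gradient(u[c]) for c in range(3)]  # grads[c][a] = du_c/dx_a
    J = np.empty(u.shape[1:] + (3, 3))
    for c in range(3):
        for a in range(3):
            J[..., c, a] = grads[c][a]
    J += np.eye(3)
    return np.linalg.det(J)

# Synthetic linear expansion: 10% along z and 20% along y, so the exact
# answer is J = 1.1 * 1.2 = 1.32 everywhere.
nz, ny, nx = 8, 8, 8
z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx),
                      indexing="ij")
u = np.stack([0.1 * z, 0.2 * y, np.zeros_like(x, dtype=float)])
det = jacobian_determinant(u)
print(det[4, 4, 4])  # -> 1.32, i.e. 32% local volume gain
```

The strain-based elasticity features come from the same gradient tensor: its symmetric part gives the strain components, so one pass over ∇u yields both descriptor families.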
Aerial Vehicles
This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.
Forecasting CO2 Sequestration with Enhanced Oil Recovery
The aim of carbon capture, utilization, and storage (CCUS) is to reduce the amount of CO2 released into the atmosphere and to mitigate its effects on climate change. Over the years, naturally occurring CO2 sources have been utilized in enhanced oil recovery (EOR) projects in the United States. This presents an opportunity to supplement, and gradually replace, the high demand for natural CO2 with anthropogenic sources. There are also incentives for operators to store anthropogenic CO2 in partially depleted reservoirs, in addition to the revenue from incremental oil production: wider availability of anthropogenic sources, emission reductions to meet regulatory requirements, tax incentives in some jurisdictions, and favorable public relations. The United States Department of Energy has sponsored several Regional Carbon Sequestration Partnerships (RCSPs) through its Carbon Storage program, which have conducted field demonstrations of both EOR and saline aquifer storage. Various research efforts have been made in reservoir characterization; monitoring, verification, and accounting; simulation; and risk assessment to ascertain the long-term storage potential of the subject storage complex. This book is a collection of lessons learned through the RCSP program within the Southwest Region of the United States. Its scope includes site characterization, storage modeling, monitoring, reporting, and verification (MRV), risk assessment, and international case studies.
Unveiling the Universe with emerging cosmological probes
The detection of the accelerated expansion of the Universe has been one of the major breakthroughs in modern cosmology. Several cosmological probes (Cosmic Microwave Background, Type Ia Supernovae, Baryon Acoustic Oscillations) have been studied in depth to better understand the nature of the mechanism driving this acceleration, and they are currently being pushed to their limits, obtaining remarkable constraints that have allowed us to shape the standard cosmological model. In parallel, however, the percent-level precision achieved has recently revealed apparent tensions between measurements obtained from different methods. These tensions either indicate unaccounted-for systematic effects or point toward new physics. Following the development of CMB, SNe, and BAO cosmology, it is critical to extend our selection of cosmological probes. Novel probes can be exploited to validate results, control or mitigate systematic effects, and, most importantly, increase the accuracy and robustness of our results. This review is meant to provide a state-of-the-art benchmark of the latest advances in emerging "beyond-standard" cosmological probes. We present how several different methods can become a key resource for observational cosmology. In particular, we review cosmic chronometers, quasars, gamma-ray bursts, standard sirens, lensing time delay with galaxies and clusters, cosmic voids, neutral hydrogen intensity mapping, surface brightness fluctuations, stellar ages of the oldest objects, secular redshift drift, and clustering of standard candles. The review describes the method, systematics, and results of each probe in a homogeneous way, giving the reader a clear picture of the available innovative methods that have been introduced in recent years and how to apply them. The review also discusses the potential synergies and complementarities between the various probes, exploring how they will contribute to the future of modern cosmology.
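Among the probes listed, cosmic chronometers rest on the standard relation H(z) = -1/(1+z) dz/dt: differential ages of passively evolving galaxies give dt for a known redshift step dz. A toy sketch with illustrative numbers (not measurements from the review):

```python
# Cosmic-chronometer estimate of H(z) from a differential galaxy age.
# Unit conversion built from standard constants: 1 Mpc in km, 1 Gyr in s.
KM_PER_MPC = 3.0857e19
S_PER_GYR = 3.1557e16

def hubble_from_chronometers(z_mean, dz, dt_gyr):
    """H(z) in km/s/Mpc from a redshift step dz spanning dt_gyr Gyr.
    Age decreases as z increases, which fixes the sign convention."""
    h_per_gyr = dz / dt_gyr / (1.0 + z_mean)   # H(z) in 1/Gyr
    return h_per_gyr / S_PER_GYR * KM_PER_MPC  # convert to km/s/Mpc

# Toy input: galaxy samples at z = 0.4 and z = 0.5 differing by ~1 Gyr
# in mean age, evaluated at the midpoint redshift.
print(hubble_from_chronometers(z_mean=0.45, dz=0.1, dt_gyr=1.0))
```

The appeal of the method, as the review emphasizes for the emerging probes generally, is that it measures H(z) directly from differential ages rather than through an integrated distance, giving a systematically independent cross-check on the standard probes.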
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available