6,305 research outputs found

    Automatic Reverse Engineering: Creating computer-aided design (CAD) models from multi-view images

    Full text link
    Generation of computer-aided design (CAD) models from multi-view images may be useful in many practical applications. To date, this problem is usually solved with an intermediate point-cloud reconstruction and involves manual work to create the final CAD models. In this contribution, we present a novel network for an automated reverse engineering task. Our network architecture combines three distinct stages: a convolutional neural network as the encoder stage, a multi-view pooling stage, and a transformer-based CAD sequence generator. The model is trained and evaluated on a large number of simulated input images, and extensive optimization of model architectures and hyper-parameters is performed. A proof of concept is demonstrated by successfully reconstructing a number of valid CAD models from simulated test image data. Various accuracy metrics are calculated and compared to a state-of-the-art point-based network. Finally, a real-world test is conducted, supplying the network with actual photographs of two three-dimensional test objects. It is shown that some of the capabilities of our network can be transferred to this domain, even though the training incorporates purely synthetic data. However, to date, the feasible model complexity is still limited to basic shapes. Comment: Presented at GCPR 202
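
    A minimal sketch of the three-stage architecture described above (shared convolutional encoder, multi-view pooling, transformer-based CAD sequence decoder), assuming PyTorch; layer sizes, the token vocabulary, and the max-pooling choice are illustrative assumptions, not the authors' implementation.

        import torch
        import torch.nn as nn

        class MultiViewCADGenerator(nn.Module):
            """Encode each view, pool across views, decode a CAD command sequence."""
            def __init__(self, d_model=256, vocab_size=128):
                super().__init__()
                # Stage 1: convolutional encoder shared by all views
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, d_model))
                # Stage 3: transformer decoder emitting CAD-sequence tokens
                self.token_emb = nn.Embedding(vocab_size, d_model)
                self.decoder = nn.TransformerDecoder(
                    nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
                    num_layers=2)
                self.head = nn.Linear(d_model, vocab_size)

            def forward(self, views, tokens):
                # views: (B, V, 3, H, W); tokens: (B, T) previously generated CAD tokens
                B, V = views.shape[:2]
                feats = self.encoder(views.flatten(0, 1)).view(B, V, -1)
                # Stage 2: multi-view pooling (element-wise max over the view axis)
                memory = feats.max(dim=1, keepdim=True).values
                # causal mask so each position only attends to earlier CAD tokens
                T = tokens.size(1)
                mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
                out = self.decoder(self.token_emb(tokens), memory, tgt_mask=mask)
                return self.head(out)   # (B, T, vocab_size) logits over CAD tokens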

    Integrated Laboratory Demonstrations of Multi-Object Adaptive Optics on a Simulated 10-Meter Telescope at Visible Wavelengths

    Full text link
    One important frontier for astronomical adaptive optics (AO) involves methods such as Multi-Object AO and Multi-Conjugate AO that have the potential to give a significantly larger field of view than conventional AO techniques. A second key emphasis over the next decade will be to push astronomical AO to visible wavelengths. We have conducted the first laboratory simulations of wide-field, laser guide star adaptive optics at visible wavelengths on a 10-meter-class telescope. These experiments, utilizing the UCO/Lick Observatory's Multi-Object / Laser Tomography Adaptive Optics (MOAO/LTAO) testbed, demonstrate new techniques in wavefront sensing and control that are crucial to future on-sky MOAO systems. We (1) test and confirm the feasibility of highly accurate atmospheric tomography with laser guide stars, (2) demonstrate key innovations allowing open-loop operation of Shack-Hartmann wavefront sensors (with errors of ~30 nm) as will be needed for MOAO, and (3) build a complete error budget model describing system performance. The AO system maintains a performance of 32.4% Strehl on-axis, with 24.5% and 22.6% at 10" and 15", respectively, at a science wavelength of 710 nm (R-band) over the equivalent of 0.8 seconds of simulation. The MOAO-corrected field of view is ~25 times larger in area than that limited by anisoplanatism at R-band. Our error budget is composed of terms verified through independent, empirical experiments. Error terms arising from calibration inaccuracies and optical drift are comparable in magnitude to traditional terms like fitting error and tomographic error. This makes a strong case for implementing additional calibration facilities in future AO systems, including accelerometers on powered optics, 3D turbulators, telescope and LGS simulators, and external calibration ports for deformable mirrors. Comment: 29 pages, 11 figures, submitted to PAS
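
    For context, such AO error budgets are conventionally combined by adding independent RMS wavefront-error terms in quadrature and converting the total to a Strehl ratio with the Marechal approximation; the sketch below illustrates only that standard arithmetic, with placeholder numbers rather than the paper's measured terms.

        import math

        def strehl_from_budget(error_terms_nm, wavelength_nm=710.0):
            """Add independent RMS wavefront errors in quadrature and apply the
            Marechal approximation S = exp(-(2*pi*sigma/lambda)**2)."""
            sigma = math.sqrt(sum(e ** 2 for e in error_terms_nm))   # total RMS error [nm]
            return math.exp(-(2 * math.pi * sigma / wavelength_nm) ** 2)

        # Placeholder terms (fitting, tomographic, calibration, open-loop WFS, ...):
        print(strehl_from_budget([80, 65, 55, 30]))   # Strehl implied by these example terms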

    Smooth(er) Stellar Mass Maps in CANDELS: Constraints on the Longevity of Clumps in High-redshift Star-forming Galaxies

    Get PDF
    We perform a detailed analysis of the resolved colors and stellar populations of a complete sample of 323 star-forming galaxies at 0.5 < z < 1.5, and 326 star-forming galaxies at 1.5 < z < 2.5 in the ERS and CANDELS-Deep region of GOODS-South. Galaxies were selected to be more massive than 10^10 Msun and have specific star formation rates above 1/t_H. We model the 7-band optical ACS + near-IR WFC3 spectral energy distributions of individual bins of pixels, accounting simultaneously for the galaxy-integrated photometric constraints available over a longer wavelength range. We analyze variations in rest-frame color, stellar surface mass density, age, and extinction as a function of galactocentric radius and local surface brightness/density, and measure structural parameters on luminosity and stellar mass maps. We find evidence for redder colors, older stellar ages, and increased dust extinction in the nuclei of galaxies. Big star-forming clumps seen in star formation tracers are less prominent or even invisible on the inferred stellar mass distributions. Off-center clumps contribute up to ~20% to the integrated SFR, but only 7% or less to the integrated mass of all massive star-forming galaxies at z ~ 1 and z ~ 2, with the fractional contributions being a decreasing function of the wavelength used to select the clumps. The stellar mass profiles tend to have smaller sizes and M20 coefficients, and higher concentration and Gini coefficients than the light distribution. Our results are consistent with an inside-out disk growth scenario with brief (100 - 200 Myr) episodic local enhancements in star formation superposed on the underlying disk. Alternatively, the young ages of off-center clumps may signal inward clump migration, provided this happens efficiently on the order of an orbital timescale. Comment: Accepted by The Astrophysical Journal, 27 pages, 1 table, 16 figures
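
    For reference, the Gini coefficient quoted for the mass and light maps is the standard non-parametric morphology statistic; a minimal sketch of how it is typically computed from pixel values follows (a generic illustration, not the authors' pipeline).

        import numpy as np

        def gini(pixel_values):
            """Gini coefficient of a flux or stellar-mass map (Lotz et al. 2004 form):
            0 for a perfectly uniform map, approaching 1 when one pixel holds everything."""
            x = np.sort(np.abs(np.asarray(pixel_values, dtype=float).ravel()))
            n = x.size
            i = np.arange(1, n + 1)
            return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))

        print(gini(np.ones(100)))                            # 0.0 for a uniform map
        print(gini(np.concatenate([np.zeros(99), [1.0]])))   # ~1.0 for a single bright pixel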

    Statistical structures for internet-scale data management

    Get PDF
    Efficient query processing in traditional database management systems relies on statistics on base data. For centralized systems, there is a rich body of research results on such statistics, from simple aggregates to more elaborate synopses such as sketches and histograms. For Internet-scale distributed systems, on the other hand, statistics management still poses major challenges. With the work in this paper we aim to endow peer-to-peer data management over structured overlays with the power associated with such statistical information, with emphasis on meeting the scalability challenge. To this end, we first contribute efficient, accurate, and decentralized algorithms that can compute key aggregates such as Count, CountDistinct, Sum, and Average. We show how to construct several types of histograms, such as simple Equi-Width, Average-Shifted Equi-Width, and Equi-Depth histograms. We present a full-fledged open-source implementation of these tools for distributed statistical synopses, and report on a comprehensive experimental evaluation of our contributions in terms of efficiency, accuracy, and scalability.
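
    As an illustration of the kind of decentralized aggregate targeted above, the sketch below simulates push-sum gossip averaging among peers; this is a generic textbook technique shown for context, not necessarily the algorithm proposed in the paper.

        import random

        def gossip_average(values, rounds=50, seed=0):
            """Simulate push-sum gossip: each peer keeps (sum, weight), sends half of
            its state to a random partner each round, and estimates the global
            Average as sum/weight -- no central coordinator required."""
            rng = random.Random(seed)
            state = [[v, 1.0] for v in values]               # (sum, weight) per peer
            n = len(state)
            for _ in range(rounds):
                for i in range(n):
                    j = rng.randrange(n)                     # random gossip partner
                    half = [state[i][0] / 2, state[i][1] / 2]
                    state[i] = half
                    state[j] = [state[j][0] + half[0], state[j][1] + half[1]]
            return [s / w for s, w in state]                 # each peer's local estimate

        print(gossip_average([10, 20, 30, 40]))              # every estimate approaches 25.0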

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    Get PDF
    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. This remains a significant challenge, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. However, research on robot-centric collaboration has gained momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. Because autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification with an unordered list of goal predicates, and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was much preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
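
    A minimal sketch of what a task specification as an unordered list of goal predicates might look like for such a Blocks World pick-and-place task, assuming Python; the predicate and object names are hypothetical and not taken from the thesis.

        from typing import FrozenSet, NamedTuple

        class Predicate(NamedTuple):
            name: str
            args: tuple

        # Goal: red block stacked on blue block, blue block at the 'bin' location.
        # The specification is a set, so the order of predicates does not matter.
        goal: FrozenSet[Predicate] = frozenset({
            Predicate("on", ("red_block", "blue_block")),
            Predicate("at", ("blue_block", "bin")),
        })

        def goal_satisfied(world_state: FrozenSet[Predicate]) -> bool:
            """The task is complete once every goal predicate holds in the world."""
            return goal <= world_state

        observed = frozenset({Predicate("on", ("red_block", "blue_block")),
                              Predicate("at", ("blue_block", "bin")),
                              Predicate("clear", ("red_block",))})
        print(goal_satisfied(observed))   # True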

    Template-based reverse engineering of parametric CAD models from point clouds

    Get PDF
    Even though many Reverse Engineering techniques exist to reconstruct real objects in 3D, very few are able to deal directly and efficiently with the reconstruction of editable CAD models of assemblies of mechanical parts that can be used in the stages of Product Development Processes (PDP). In the absence of suitable segmentation tools, these approaches struggle to identify and reconstruct the different parts that make up the assembly. This thesis aims to develop a new Reverse Engineering technique for the reconstruction of editable CAD models of assemblies of mechanical parts. The originality lies in the use of a Simulated Annealing-based fitting and optimization process that leverages a two-level filtering able to capture and manage the boundaries of the parts' geometries inside the overall point cloud, allowing for interface detection and local fitting of a part template to the point cloud. The proposed method uses various types of data (e.g. point clouds, and CAD models possibly stored in a database together with the associated best parameter configurations for the fitting process). The approach is modular and integrates a sensitivity analysis to characterize the impact of variations of the parameters of a CAD model on the evolution of the deviation between the CAD model itself and the point cloud to be fitted. The evaluation of the proposed approach is performed using both real scanned point clouds and virtually generated as-scanned point clouds, which incorporate several artifacts that could appear with a real scanner. Results cover several Industry 4.0-related application scenarios, ranging from the global fitting of a single part to the update of a complete Digital Mock-Up embedding assembly constraints. The proposed approach shows good capacity to help maintain the coherence between a product/system and its digital twin.
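
    A minimal sketch of simulated-annealing fitting of a parametric template to a point cloud, in the spirit of the approach described above; the template (a z-aligned cylinder with unknown radius and height), the deviation measure, and the cooling schedule are all illustrative assumptions rather than the thesis method.

        import math, random

        def deviation(params, points):
            """RMS distance of the points from a z-aligned cylinder parameterised by
            (radius, height) -- a stand-in for the real template/point-cloud deviation."""
            r, h = params
            d2 = 0.0
            for x, y, z in points:
                radial = abs(math.hypot(x, y) - r)    # distance to the lateral surface
                axial = max(0.0, abs(z) - h / 2)      # distance beyond the end caps
                d2 += radial ** 2 + axial ** 2
            return math.sqrt(d2 / len(points))

        def anneal(points, start=(1.0, 1.0), t0=1.0, cooling=0.995, steps=4000, seed=0):
            """Fit the template parameters by simulated annealing."""
            rng = random.Random(seed)
            best = cur = start
            best_e = cur_e = deviation(cur, points)
            t = t0
            for _ in range(steps):
                cand = tuple(max(1e-3, p + rng.gauss(0, 0.05)) for p in cur)
                e = deviation(cand, points)
                # always accept improvements; accept worse moves with Boltzmann probability
                if e < cur_e or rng.random() < math.exp((cur_e - e) / t):
                    cur, cur_e = cand, e
                    if e < best_e:
                        best, best_e = cand, e
                t *= cooling
            return best, best_e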

    A Systematic Approach to Constructing Families of Incremental Topology Control Algorithms Using Graph Transformation

    Full text link
    In the communication systems domain, constructing and maintaining network topologies via topology control (TC) algorithms is an important cross-cutting research area. Network topologies are usually modeled using attributed graphs whose nodes and edges represent the network nodes and their interconnecting links. A key requirement of TC algorithms is to fulfill certain consistency and optimization properties to ensure a high quality of service. Still, few attempts have been made to constructively integrate these properties into the development process of TC algorithms. Furthermore, even though many TC algorithms share substantial parts (such as structural patterns or tie-breaking strategies), few works systematically leverage these commonalities and differences of TC algorithms. In previous work, we addressed the constructive integration of consistency properties into the development process. We outlined a constructive, model-driven methodology for designing individual TC algorithms: valid and high-quality topologies are characterized using declarative graph constraints, and TC algorithms are specified using programmed graph transformation. We applied a well-known static analysis technique to refine a given TC algorithm such that the resulting algorithm preserves the specified graph constraints. In this paper, we extend our constructive methodology by generalizing it to support the specification of families of TC algorithms. To show the feasibility of our approach, we re-engineer six existing TC algorithms and develop e-kTC, a novel energy-efficient variant of the TC algorithm kTC. Finally, we evaluate a subset of the specified TC algorithms using a new tool integration of the graph transformation tool eMoflon and the Simonstrator network simulation framework. Comment: Corresponds to the accepted manuscript
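
    For context, the kTC rule referenced above is usually summarized as: in every triangle of the topology, inactivate the longest edge if it is at least k times as long as the triangle's shortest edge. A minimal sketch under that reading, using a plain edge-weight map rather than the paper's graph-transformation specification:

        from itertools import combinations

        def ktc(nodes, weight, k=1.2):
            """Return the edges kept by the kTC rule; `weight` maps a frozenset node
            pair to its link weight (e.g. distance or required transmission power)."""
            edges = set(weight)
            removed = set()
            for a, b, c in combinations(nodes, 3):
                tri = [frozenset(p) for p in ((a, b), (b, c), (a, c))]
                if all(e in edges for e in tri):
                    tri.sort(key=lambda e: weight[e])
                    # drop the triangle's longest edge if it dominates the shortest one
                    if weight[tri[2]] >= k * weight[tri[0]]:
                        removed.add(tri[2])
            return edges - removed

        w = {frozenset(("u", "v")): 1.0, frozenset(("v", "w")): 1.1, frozenset(("u", "w")): 2.0}
        print(ktc(["u", "v", "w"], w, k=1.2))   # the long u-w edge is inactivated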

    Painting with Turbulence

    Get PDF
    Inspired by a study that identified a strong similarity between Vincent van Gogh's and Jackson Pollock's painting techniques, this thesis explores the interplay between science and art, specifically the unpredictable behaviors in turbulent flows and aesthetic concepts in painting. It utilizes data from a GPU-based air flow simulation, and presents a framework for artists to visualize the chaotic property changes in turbulent flows and create paintings with turbulence data. While the creation of individual brushstrokes is procedural and driven by simulation, artists are able to exercise their aesthetic judgments at various stages during the creation of a painting. A short animation demonstrates the potential results from this framework.
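
    A minimal sketch of the basic mechanism of simulation-driven brushstrokes: seed a stroke and advect its control points through a velocity field. The analytic swirl below merely stands in for the GPU air-flow simulation data and is not part of the thesis framework.

        import math

        def velocity(x, y, t):
            """Toy swirling velocity field standing in for simulated turbulence data."""
            return (-math.sin(math.pi * y) * math.cos(t),
                    math.sin(math.pi * x) * math.cos(t))

        def advect_stroke(seed, steps=50, dt=0.02):
            """Trace one procedural brushstroke by integrating a seed point through
            the flow; the resulting polyline becomes the stroke's control path."""
            x, y = seed
            path = [(x, y)]
            for i in range(steps):
                u, v = velocity(x, y, i * dt)
                x, y = x + dt * u, y + dt * v
                path.append((x, y))
            return path

        stroke = advect_stroke((0.3, 0.4))
        print(len(stroke), stroke[-1])   # 51 control points; width/colour styled by the artist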

    Developing virtual reality applications: The design and evaluation of virtual reality development tools for novice users.

    Get PDF
    Developing applications for Virtual Reality (VR) systems is difficult because of the specialized hardware required, the complexity of VR software, and the technical expertise needed to use both together. We have developed tools and applications that support the authoring of virtual reality applications. The tools support development of VR applications based on common requirements of the hardware and architecture used in VR systems. We developed support for animations, geometry morphs, deformable geometry, advanced particle systems, importing digital assets, embedding a scripting language virtual machine, sound library wrappers, video library wrappers, and physics library wrappers for the OpenSG framework. The KabalaEngine was developed to use the supporting libraries previously mentioned in a clustered VR system using OpenSG's clustering capabilities. The KabalaEngine has an expert graphical user interface that can be used for developing virtual environments. Finally, we developed a graphical user interface for novice users of the KabalaEngine. We found that users of the KabalaEngine were able to use the interface to produce three different complex virtual environments with 10-15 different 3D objects arranged in a meaningful way in fifty minutes.