
    Automatic light source placement for maximum visual information recovery

    The definitive version is available at http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2007.00944.x/abstract. The automatic selection of good viewing parameters is a very complex problem. In most cases, the notion of "good" depends strongly on the concrete application. Moreover, even when an intuitive definition of a good view is available, it is often difficult to establish a measure that puts it into practice. Commonly, two kinds of viewing parameters must be set: camera parameters (position and orientation) and lighting parameters (the number of light sources, their positions, and possibly the orientation of the spot). The former determine how much of the geometry is captured, while the latter influence how much of it is revealed (i.e. illuminated) to the user. Unfortunately, ensuring that certain parts of a scene are lit does not guarantee that their details are communicated to the user, as the amount of illumination might be too low or too high. In this paper we define a metric to calculate the amount of information relative to an object that is effectively communicated to the user given a fixed camera position. This measure is based on an information-theoretic concept, the Shannon entropy, and is applied to the problem of automatically selecting light positions so as to adequately illuminate an object. To validate the results, we carried out a user experiment, which also helped us to explore other related measures.
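The entropy idea behind such a measure can be illustrated with a toy sketch (hypothetical function and binning choices; the paper's actual metric is defined on rendered images of the object, which this ignores):

```python
import math
from collections import Counter

def illumination_entropy(pixels, bins=8):
    """Shannon entropy (bits) of a quantized luminance histogram.

    Intuition only: a rendering whose luminance values spread over many
    levels has high entropy (more detail potentially communicated),
    while a uniformly dark or saturated one has entropy zero.
    """
    counts = Counter(min(int(p * bins), bins - 1) for p in pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = [1.0] * 64                  # fully saturated: no information
varied = [i / 64 for i in range(64)]  # luminance spans all 8 bins
```

A light position would then be scored by rendering the object and evaluating this entropy, keeping the position with the highest score.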

    Adaptive dynamic path re-planning RRT algorithms with game theory for UAVs

    The main aim of this paper is to describe an adaptive re-planning algorithm, based on an RRT and game theory, that produces an efficient, collision-free, obstacle-adaptive mission path planner for Search and Rescue (SAR) missions. It provides UAV autopilots and flight computers with the capability to autonomously avoid static obstacles and No Fly Zones (NFZs) through dynamic adaptive path re-planning. The methods and algorithms produce optimal collision-free paths and can be integrated into a decision aid tool and UAV autopilots.
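The RRT at the core of such a planner can be sketched minimally in 2-D (illustrative only; the paper's adaptive re-planner and its game-theoretic layer are not reproduced, and obstacles/NFZs are simplified to circles):

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, iters=4000, seed=0):
    """Minimal RRT: grow a tree from `start` toward random samples,
    rejecting edges that land inside a circular obstacle (cx, cy, r).
    Returns a start-to-goal path, or None if none is found in time."""
    rng = random.Random(seed)
    parent = {start: None}

    def free(p):
        return all(math.dist(p, (cx, cy)) > r for cx, cy, r in obstacles)

    for _ in range(iters):
        # goal bias: 10% of samples pull the tree toward the goal
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(parent, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not free(new):
            continue
        parent[new] = near
        if math.dist(new, goal) < step:  # close enough: walk back the tree
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None
```

Re-planning then amounts to re-running (or locally repairing) this search whenever a new obstacle or NFZ invalidates the current path.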

    An intuitive control space for material appearance

    Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this paper, we present an intuitive control space for predictable editing of captured BRDF data, which allows for artistic creation of plausible novel material appearances, bypassing the difficulty of acquiring novel samples. We first synthesize novel materials, extending the existing MERL dataset up to 400 mathematically valid BRDFs. We then design a large-scale experiment, gathering 56,000 subjective ratings on the high-level perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals mapping the perceptual attributes to an underlying PCA-based representation of BRDFs. We show that our functionals are excellent predictors of the perceived attributes of appearance. Our control space enables many applications, including intuitive material editing of a wide range of visual properties, guidance for gamut mapping, analysis of the correlation between perceptual attributes, and novel appearance similarity metrics. Moreover, our methodology can be used to derive functionals applicable to classic analytic BRDF representations. We release our code and dataset publicly, in order to support and encourage further research in this direction.
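The radial-basis-function machinery used for such functionals can be sketched in one dimension (a toy stand-in: the paper maps multi-dimensional perceptual ratings to a PCA space, and the kernel width and solver here are illustrative choices):

```python
import math

def rbf_fit(xs, ys, eps=1.0):
    """Fit a 1-D Gaussian RBF interpolant through (xs, ys).

    Solves the interpolation system Phi w = y, where
    Phi[i][j] = exp(-(eps*(xs[i]-xs[j]))**2), by Gaussian
    elimination with partial pivoting. Returns a callable f(x).
    """
    n = len(xs)
    phi = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs] for xi in xs]
    a = [row[:] + [y] for row, y in zip(phi, ys)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):  # back-substitution
        w[r] = (a[r][n] - sum(a[r][c] * w[c] for c in range(r + 1, n))) / a[r][r]
    return lambda x: sum(wi * math.exp(-(eps * (x - xi)) ** 2)
                         for wi, xi in zip(w, xs))
```

Because the Gaussian kernel matrix is positive definite for distinct centers, the interpolant reproduces the training ratings exactly and smoothly interpolates between them.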

    Ground-Based 1U CubeSat Robotic Assembly Demonstration

    Key gaps limiting in-space assembly of small satellites are (1) the lack of standardization of electromechanical CubeSat components for compatibility with commercial robotic assembly hardware, and (2) the need to test and modify commercial robotic assembly hardware to make it suitable for space operation. Working toward gap (1), we have demonstrated ground-based robotic assembly of a 1U CubeSat using modular components and Commercial-Off-The-Shelf (COTS) robot arms without humans in the loop. Two 16 in x 7 in x 7 in dexterous robot arms, weighing 2 kg each, are shown to work together to grasp and assemble CubeSat components into a 1U CubeSat. Addressing gap (2), we examine solutions for adapting power-efficient COTS robot arms to assemble highly capable CubeSats. Challenges with overheated motors and positioning errors were encountered and resolved, yielding lessons on thermal and power considerations. We find that COTS robot arms with sustained throughput and processing efficiency have the potential to be cost-effective for future space missions. The two robot arms assembled a 1U CubeSat prototype in less than eight minutes.

    Fast LIDAR-based Road Detection Using Fully Convolutional Neural Networks

    In this work, a deep learning approach has been developed to carry out road detection using only LIDAR data. Starting from an unstructured point cloud, top-view images encoding several basic statistics such as mean elevation and density are generated. By considering a top-view representation, road detection is reduced to a single-scale problem that can be addressed with a simple and fast fully convolutional neural network (FCN). The FCN is specifically designed for the task of pixel-wise semantic segmentation by combining a large receptive field with high-resolution feature maps. The proposed system achieves excellent performance and is among the top-performing algorithms on the KITTI road benchmark. Its fast inference makes it particularly suitable for real-time applications.
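The top-view encoding step can be sketched as a simple binning pass (illustrative names and grid extents; the paper's exact statistics, cell size, and coordinate conventions may differ):

```python
def top_view_stats(points, cell=0.5, x_range=(0.0, 4.0), y_range=(0.0, 4.0)):
    """Bin LIDAR points (x, y, z) into a top-view grid and record,
    per cell, the point density and the mean elevation. Cells with
    no points get a mean elevation of 0.0."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    density = [[0] * nx for _ in range(ny)]
    elev_sum = [[0.0] * nx for _ in range(ny)]
    for x, y, z in points:
        i = int((y - y_range[0]) / cell)  # row index from y
        j = int((x - x_range[0]) / cell)  # column index from x
        if 0 <= i < ny and 0 <= j < nx:
            density[i][j] += 1
            elev_sum[i][j] += z
    mean_elev = [[elev_sum[i][j] / density[i][j] if density[i][j] else 0.0
                  for j in range(nx)] for i in range(ny)]
    return density, mean_elev
```

Each statistic becomes one channel of the image fed to the FCN, so the segmentation operates on fixed-size 2-D inputs rather than on the raw point cloud.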

    Apply Active Learning in Short-term Data-driven Building Energy Modeling

    In the United States, the buildings sector accounts for about 41% of primary energy consumption. Building control and operation strategies have a great impact on building energy efficiency and on the development of building-grid integration. For better building control, and for buildings to be better integrated with grid operation, high-fidelity building energy forecasting models that can be used for short-term and real-time operation are in great need. With the wide adoption of building automation systems (BAS) and the Internet of Things (IoT), massive measurements from sensors and other sources are continuously collected, providing data on equipment and building operations. This creates a great opportunity for data-driven building energy modeling. However, the data-driven approach depends heavily on data, and the collected operation data often have limited applicability (termed "bias" in this paper) because most building operation data are generated under limited operational modes, limited weather conditions, and very limited setpoints (often one or two fixed values, such as a constant zone temperature setpoint). For nonlinear systems, a data-driven model trained on biased data has poor scalability (when used for a different building) and extensibility (when used under different weather and operating conditions). This impedes the development of data-driven forecasting models as well as model-based control in buildings. The design of tasks that aim to describe or explain the variation of information under conditions hypothesized to reflect that variation is termed active learning in machine learning. Its purpose is to choose or generate informative training data, either to counteract data bias or to reduce labeling cost (when running experiments in a building is too expensive). Research on applying active learning to building energy modeling is relatively unexplored.
    Among the few existing studies, most consider only a single operational setpoint, which is impractical for most real buildings, where multiple setpoints in chillers, air handling units, and air-conditioning terminals are used for building operation and control. Moreover, disturbances, especially weather and occupancy, are in most cases not considered. In this research, a nonlinear fractional factorial design combined with a block design is applied as the active learning strategy to generate a building operation (setpoint) schedule. The data generated under this operation schedule are used as training data for building energy modeling. The testbed is a virtual DOE reference large-size office building with hierarchical setpoints: zone temperature setpoint, supply-air static pressure setpoint, and chiller leaving-water temperature setpoint. D-Optimal is used as the nonlinear fractional factorial design algorithm, and its parameters are further compared and discussed. At the same time, the block design divides different weather and occupancy conditions into four blocks, and a D-Optimal design is applied within each block, so that the disturbances are taken into consideration. Results show that, compared with normal operation data and data generated by a full factorial design, the proposed active learning method increases model accuracy in the validation and testing periods, indicating its effectiveness at improving model generalization.
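The D-optimal idea (choose design points that maximize the determinant of the information matrix X^T X) can be sketched with a greedy selection (a simple stand-in; the paper's algorithm, candidate set, and model terms are not specified here):

```python
def d_optimal_greedy(candidates, k):
    """Greedily pick k rows from `candidates` (each a tuple of model
    terms, e.g. (1, x) for intercept + slope) so that each addition
    maximizes det(X^T X) of the chosen design matrix X."""
    def det(m):  # Laplace expansion; fine for the small matrices here
        n = len(m)
        if n == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j] *
                   det([row[:j] + row[j + 1:] for row in m[1:]])
                   for j in range(n))

    def xtx(rows):
        p = len(rows[0])
        return [[sum(r[i] * r[j] for r in rows) for j in range(p)]
                for i in range(p)]

    chosen, remaining = [], list(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda r: det(xtx(chosen + [r])))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

For a linear model with candidate setpoints at 0, 0.5, 1, and 2, the greedy criterion picks the extremes (0 and 2), which matches the classic D-optimal preference for spread-out, informative design points over clustered ones.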

    VoGE: A Differentiable Volume Renderer using Gaussian Ellipsoids for Analysis-by-Synthesis

    Differentiable rendering allows the application of computer graphics to vision tasks, e.g. object pose and shape fitting, via analysis-by-synthesis, where gradients at occluded regions are important when inverting the rendering process. To obtain those gradients, state-of-the-art (SoTA) differentiable renderers use rasterization to collect a set of nearest components for each pixel and aggregate them based on the viewing distance. In this paper, we propose VoGE, which uses ray tracing to capture the nearest components with their volume density distributions on the rays and aggregates them via an integral of the volume densities based on Gaussian ellipsoids, which brings more efficient and stable gradients. To render efficiently via VoGE, we propose an approximate closed-form solution for the volume density aggregation and a coarse-to-fine rendering strategy. Finally, we provide a CUDA implementation of VoGE, which gives a competitive rendering speed in comparison to PyTorch3D. Quantitative and qualitative experimental results show that VoGE outperforms SoTA counterparts when applied to various vision tasks, e.g., object pose estimation, shape/texture fitting, and occlusion reasoning. The VoGE library and demos are available at https://github.com/Angtian/VoGE.
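The closed-form aggregation can be illustrated for the simplest case of isotropic Gaussian blobs (a sketch, not VoGE's actual ellipsoidal kernels or API): the density of a Gaussian along a ray is itself a 1-D Gaussian in the ray parameter, so its integral has an analytic form.

```python
import math

def ray_gaussian_transmittance(centers, weights, sigma, origin, direction):
    """Integrate the density of isotropic Gaussian blobs along a ray
    (unit `direction`) in closed form, over the whole line, and return
    the transmittance exp(-tau). For a blob at perpendicular distance
    d from the ray, the line integral of
    w * exp(-|p - c|^2 / (2 sigma^2)) is
    w * exp(-d^2 / (2 sigma^2)) * sigma * sqrt(2 pi)."""
    tau = 0.0
    for c, w in zip(centers, weights):
        oc = [ci - oi for ci, oi in zip(c, origin)]
        t0 = sum(a * b for a, b in zip(oc, direction))  # closest approach
        d2 = sum(a * a for a in oc) - t0 * t0           # perp. distance^2
        tau += w * math.exp(-d2 / (2 * sigma ** 2)) * sigma * math.sqrt(2 * math.pi)
    return math.exp(-tau)
```

A ray passing through a blob's center accumulates the full optical depth and is strongly attenuated, while a ray that grazes past it is barely attenuated; crucially, the attenuation varies smoothly with the blob position, which is what yields stable gradients for analysis-by-synthesis.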