
    Intelligent sampling for the measurement of structured surfaces

    Uniform sampling in metrology has known drawbacks, such as coherent spectral aliasing and a lack of efficiency in terms of measuring time and data storage. The need for intelligent sampling strategies has been highlighted over recent years, particularly where the measurement of structured surfaces is concerned. Most of the present research on intelligent sampling has focused on dimensional metrology using coordinate-measuring machines, with little reported in the area of surface metrology. In the research reported here, potential intelligent sampling strategies for surface topography measurement of structured surfaces are investigated using numerical simulation and experimental verification. The methods include the jittered uniform method, low-discrepancy pattern sampling and several adaptive methods which originate from computer graphics, coordinate metrology and previous research by the authors. By combining the use of advanced reconstruction methods and feature-based characterization techniques, the measurement performance of the sampling methods is studied using case studies. The advantages, stability and feasibility of these techniques for practical measurements are discussed.
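    The jittered uniform and low-discrepancy patterns referred to above are standard constructions rather than anything specific to this paper; as a rough illustration, a minimal Python sketch of both on the unit square (not the authors' implementation) could look like the following.

```python
import numpy as np

def jittered_uniform(n_per_axis, rng=None):
    """Jittered uniform sampling on the unit square: one random point
    inside each cell of a regular n x n grid."""
    rng = np.random.default_rng() if rng is None else rng
    edges = np.arange(n_per_axis) / n_per_axis
    cell = 1.0 / n_per_axis
    xs = edges[:, None] + rng.uniform(0, cell, (n_per_axis, n_per_axis))
    ys = edges[None, :] + rng.uniform(0, cell, (n_per_axis, n_per_axis))
    return np.column_stack([xs.ravel(), ys.ravel()])

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def halton_2d(n):
    """Low-discrepancy 2-D Halton pattern using bases 2 and 3."""
    return np.array([[halton(i, 2), halton(i, 3)] for i in range(1, n + 1)])

if __name__ == "__main__":
    pts_jit = jittered_uniform(16)   # 256 jittered samples
    pts_hal = halton_2d(256)         # 256 low-discrepancy samples
    print(pts_jit.shape, pts_hal.shape)
```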

    Change-point model on nonhomogeneous Poisson processes with application in copy number profiling by next-generation DNA sequencing

    We propose a flexible change-point model for inhomogeneous Poisson processes, which arise naturally from next-generation DNA sequencing, and derive score and generalized likelihood statistics for shifts in intensity functions. We construct a modified Bayesian information criterion (mBIC) to guide model selection, and point-wise approximate Bayesian confidence intervals for assessing the confidence in the segmentation. The model is applied to DNA copy number profiling with sequencing data and evaluated on simulated spike-in and real data sets. (Published in the Annals of Applied Statistics, http://www.imstat.org/aoas/, by the Institute of Mathematical Statistics, http://www.imstat.org/; DOI: http://dx.doi.org/10.1214/11-AOAS517.)
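    As a rough baseline for the kind of statistic involved (not the paper's score statistic or mBIC procedure), the sketch below scans a sequence of Poisson counts for a single shift in intensity with a generalized likelihood ratio and applies a simple BIC-style penalty; the function names and the exact penalty form are illustrative assumptions.

```python
import numpy as np

def pois_loglik(counts):
    """Poisson log-likelihood of a segment at its MLE rate,
    dropping the data-only term sum(log(x_i!))."""
    s, m = counts.sum(), len(counts)
    if s == 0:
        return 0.0
    return s * np.log(s / m) - s

def single_changepoint_glr(counts):
    """Scan every split point and return the one maximising the
    generalized likelihood ratio for a shift in Poisson intensity."""
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    ll0 = pois_loglik(counts)
    best_t, best_glr = None, -np.inf
    for t in range(1, n):
        glr = pois_loglik(counts[:t]) + pois_loglik(counts[t:]) - ll0
        if glr > best_glr:
            best_t, best_glr = t, glr
    # A simple BIC-style penalty decides whether the split is kept;
    # the paper's mBIC penalty differs in its details.
    keep = 2.0 * best_glr > 2.0 * np.log(n)
    return best_t, best_glr, keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.poisson(3, 200), rng.poisson(6, 200)])
    print(single_changepoint_glr(x))
```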

    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations. (20 pages, 30 figures; submitted to IEEE Transactions on Robotics.)

    A practical multirobot localization system

    We present a fast and precise vision-based software intended for multiple robot localization. The core component of the software is a novel and efficient algorithm for black and white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which allows the expected localization precision, area of coverage, and processing speed to be estimated from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make the source code public at http://purl.org/robotics/whycon, so it can be used as an enabling technology for various mobile robotic problems.
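    The claim of millimetre precision can be sanity-checked with simple pinhole-camera geometry; the following back-of-envelope sketch is an assumption-laden approximation, not the model from the paper, and all parameter values are made up for illustration.

```python
# Back-of-envelope estimate (not the paper's model): with a pinhole camera
# looking straight down at a ground plane, one pixel spans roughly Z / f_px
# metres, so a detector with sub-pixel precision `subpixel` (in pixels)
# localizes to about subpixel * Z / f_px metres.

def ground_sample_distance(z_m, f_px):
    """Metres covered by one pixel at camera-to-plane distance z_m."""
    return z_m / f_px

def expected_precision(z_m, f_px, subpixel=0.1):
    """Approximate planar localization error in metres."""
    return subpixel * ground_sample_distance(z_m, f_px)

def coverage(z_m, f_px, width_px, height_px):
    """Approximate observed area (width, height) in metres."""
    gsd = ground_sample_distance(z_m, f_px)
    return width_px * gsd, height_px * gsd

if __name__ == "__main__":
    # e.g. a 1280x1024 camera with f ~ 1000 px mounted 3 m above the arena
    print(expected_precision(3.0, 1000.0))    # ~0.0003 m, i.e. sub-millimetre
    print(coverage(3.0, 1000.0, 1280, 1024))  # ~3.8 m x 3.1 m field of view
```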

    Object-Based Greenhouse Mapping Using Very High Resolution Satellite Data and Landsat 8 Time Series

    Greenhouse mapping through remote sensing has received extensive attention over the last decades. In this article, the goal is to map greenhouses through the combined use of very high resolution satellite data (WorldView-2) and Landsat 8 Operational Land Imager (OLI) time series within the context of object-based image analysis (OBIA) and decision tree classification. WorldView-2 data were mainly used to segment the study area, focusing on individual greenhouses. Basic spectral information, spectral and vegetation indices, textural features, seasonal statistics and a spectral metric (Moment Distance Index, MDI) derived from the Landsat 8 time series and/or WorldView-2 imagery were computed on the previously segmented image objects. In order to test its temporal stability, the same approach was applied for two different years, 2014 and 2015. In both years, MDI emerged as the most important feature for detecting greenhouses. Moreover, the threshold value of this spectral metric turned out to be extremely stable for both Landsat 8 and WorldView-2 imagery. A simple decision tree, always using the same threshold values for features from the Landsat 8 time series and WorldView-2, was finally proposed. Overall accuracies of 93.0% and 93.3% and kappa coefficients of 0.856 and 0.861 were attained for the 2014 and 2015 datasets, respectively.
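    For readers unfamiliar with the Moment Distance Index, the sketch below follows one published formulation of the metric (sums of Euclidean distances of the spectral curve from its left and right pivots) and wraps it in a single-threshold rule of the kind the decision tree is built from; band positions are taken as integer indices rather than wavelengths, and the threshold and example spectrum are placeholders, not values from this study.

```python
import numpy as np

def moment_distance_index(reflectance):
    """One published formulation of the Moment Distance Index (MDI):
    sum of distances of the spectral curve from the right pivot minus
    the sum of distances from the left pivot."""
    rho = np.asarray(reflectance, dtype=float)
    idx = np.arange(len(rho))                # band positions (indices)
    lp, rp = idx[0], idx[-1]                 # left / right pivots
    md_lp = np.sqrt(rho**2 + (idx - lp) ** 2).sum()
    md_rp = np.sqrt(rho**2 + (rp - idx) ** 2).sum()
    return md_rp - md_lp

def classify_object(mdi_value, threshold):
    """Single-split rule of the kind the paper's decision tree uses;
    the threshold and the direction of the split are illustrative only."""
    return "greenhouse" if mdi_value > threshold else "other"

if __name__ == "__main__":
    # hypothetical mean reflectance of one segmented object over 8 bands
    obj_spectrum = [0.12, 0.15, 0.18, 0.21, 0.35, 0.33, 0.30, 0.28]
    print(classify_object(moment_distance_index(obj_spectrum), threshold=5.0))
```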

    EQUIPMENT TO ADDRESS INFRASTRUCTURE AND HUMAN RESOURCE CHALLENGES FOR RADIOTHERAPY IN LOW-RESOURCE SETTINGS

    Millions of people in low- and middle-income countries (LMICs) are without access to radiation therapy, and as population growth in these regions increases and lifestyle factors associated with cancer become more prevalent, the cancer burden will only rise. There are a multitude of reasons for this lack of access, but two recurring themes are the lack of affordable and reliable teletherapy units and insufficient properly trained staff to deliver high-quality care. The purpose of this work was to investigate two proposed efforts to improve access to radiotherapy in low-resource areas: an upright radiotherapy chair (to facilitate low-cost treatment devices) and a fully automated treatment planning strategy. A fixed-beam patient treatment device would reduce the upfront and ongoing cost of teletherapy machines. The enabling technology for such a device is the immobilization chair. A rotating seated patient not only allows for a low-cost fixed treatment machine but also has dosimetric and comfort advantages. We examined the inter- and intra-fraction setup reproducibility and showed that both are less than 3 mm, similar to reports for the supine position. The head-and-neck treatment site, one of the most challenging for treatment planning, greatly benefits from the use of advanced treatment planning strategies. These strategies, however, require time-consuming normal-tissue and target contouring and complex plan optimization. An automated treatment planning approach could reduce the number of additional medical physicists (the primary treatment planners) needed in LMICs by up to half. We used in-house algorithms, including multi-atlas contouring and quality assurance checks, combined with tools in the Eclipse Treatment Planning System®, to automate every step of the treatment planning process for head-and-neck cancers. Requiring only the patient CT scan, patient details including dose and fractionation, and contours of the gross tumor volume, high-quality treatment plans can be created in less than 40 minutes.

    Challenges in imaging and predictive modeling of rhizosphere processes

    Background: Plant-soil interaction is central to human food production and ecosystem function. Thus, it is essential not only to understand these interactions, but also to develop predictive mathematical models that can be used to assess how climate and soil management practices will affect them. Scope: In this paper we review current developments in structural and chemical imaging of rhizosphere processes within the context of multiscale, image-based mathematical modeling. We outline areas that need more research and areas which would benefit from more detailed understanding. Conclusions: We conclude that the combination of structural and chemical imaging with modeling is an incredibly powerful tool which is fundamental for understanding how plant roots interact with soil. We emphasize the need for more researchers to be attracted to this area, which is so fertile for future discoveries. Finally, model building must go hand in hand with experiments. In particular, there is a real need to integrate rhizosphere structural and chemical imaging with modeling to better understand rhizosphere processes, leading to models which explicitly account for pore-scale processes.

    Analysis of full disc Ca II K spectroheliograms. II. Towards an accurate assessment of long-term variations in plage areas

    Reconstructions of past irradiance variations require suitable data on solar activity. The longest direct proxy is the sunspot number, and it has been most widely employed for this purpose. These data, however, only provide information on the surface magnetic field emerging in sunspots, while a suitable proxy for the evolution of the bright magnetic features, specifically faculae/plage and network, is missing. This information can potentially be extracted from historical full-disc observations in the Ca II K line. We have analysed over 100,000 historical images from 8 digitised photographic archives of the Arcetri, Kodaikanal, McMath-Hulbert, Meudon, Mitaka, Mt Wilson, Schauinsland, and Wendelstein observatories, as well as one archive of modern observations from the Rome/PSPT. The analysed data cover the period 1893--2018. We first performed careful photometric calibration and compensation for the centre-to-limb variation, and then segmented the images to identify plage regions. This processing has been applied consistently to both historical and modern observations. The plage series derived from different archives are generally in good agreement with each other. However, there are also clear deviations that most likely hint at intrinsic differences in the data and their digitisation. We show that accurate image processing significantly reduces errors in the plage area estimates. Accurate photometric calibration also allows precise plage identification on images from different archives without the need to arbitrarily adjust the segmentation parameters. Finally, by comparing the plage area series from the various records, we found conversion laws between them. This allowed us to produce a preliminary composite of the plage areas obtained from all the datasets studied here. This is a first step towards an accurate assessment of the long-term variation of plage regions. (30 pages, 22 figures; accepted in A&A.)
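    The processing chain described here (photometric calibration, centre-to-limb compensation, segmentation) can be illustrated at toy scale; the sketch below, which is not the authors' pipeline, estimates a quiet-Sun radial profile by annular medians and thresholds the resulting contrast image to flag plage. The bin count and contrast threshold are arbitrary placeholders.

```python
import numpy as np

def clv_profile(img, disc_mask, cx, cy, r_disc, n_bins=50):
    """Median intensity in annular bins of normalised radius: a crude
    estimate of the quiet-Sun centre-to-limb variation (CLV)."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - cx, yy - cy) / r_disc
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    profile = np.zeros(n_bins)
    for i in range(n_bins):
        sel = disc_mask & (r >= edges[i]) & (r < edges[i + 1])
        profile[i] = np.median(img[sel]) if sel.any() else np.nan
    return r, edges, profile

def plage_mask(img, disc_mask, cx, cy, r_disc, contrast_thresh=0.35):
    """Flag pixels whose contrast above the local CLV exceeds a fixed
    threshold; the threshold value here is illustrative only."""
    r, edges, profile = clv_profile(img, disc_mask, cx, cy, r_disc)
    bins = np.clip(np.digitize(r, edges) - 1, 0, len(profile) - 1)
    contrast = img / profile[bins] - 1.0
    return disc_mask & (contrast > contrast_thresh)

if __name__ == "__main__":
    # synthetic 512x512 "disc" with a quadratic darkening law, no real plage
    yy, xx = np.indices((512, 512))
    r = np.hypot(xx - 256, yy - 256)
    disc = r < 240
    img = np.where(disc, 1.0 - 0.4 * (r / 240) ** 2, 0.0)
    mask = plage_mask(img, disc, 256, 256, 240)
    print("plage fraction:", mask.sum() / disc.sum())
```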

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available