    Global Wheat Head Detection (GWHD) dataset: a large and diverse dataset of high resolution RGB labelled images to develop and benchmark wheat head detection methods

    Detection of wheat heads is an important task that allows estimation of pertinent traits, including head population density and head characteristics such as sanitary state, size, maturity stage and the presence of awns. Several studies have developed methods for wheat head detection from high-resolution RGB imagery. These methods are based on computer vision and machine learning and are generally calibrated and validated on limited datasets. However, variability in observational conditions, genotypic differences, development stages and head orientation represents a challenge for computer vision. Further, possible blurring due to motion or wind and overlap between heads in dense populations make this task even more complex. Through a joint international collaborative effort, we have built a large, diverse and well-labelled dataset, the Global Wheat Head Detection (GWHD) dataset. It contains 4,700 high-resolution RGB images and 190,000 labelled wheat heads collected from several countries around the world, at different growth stages and with a wide range of genotypes. Guidelines for image acquisition, for associating minimum metadata to respect FAIR principles and for consistent head labelling are proposed for developing new head detection datasets. The GWHD dataset is publicly available at http://www.global-wheat.com/ and is aimed at developing and benchmarking methods for wheat head detection. Comment: 16 pages, 7 figures, dataset paper
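    One trait the abstract mentions, head population density, reduces to counting labelled boxes per image. As a minimal sketch of that computation, assuming a hypothetical one-bounding-box-per-row CSV layout (the dataset's actual schema may differ):

```python
import csv
import io
from collections import Counter

# Illustrative sample only: the column names and one-box-per-row layout
# are assumptions for this sketch, not the dataset's documented schema.
SAMPLE = """image_id,xmin,ymin,width,height
img_001,10,12,34,40
img_001,55,60,30,28
img_002,5,8,25,31
"""

def heads_per_image(csv_text):
    """Count labelled wheat heads (one bounding box per row) per image."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[row["image_id"]] += 1
    return dict(counts)
```

    Dividing each count by the known ground area covered by an image would then give the head population density that detection methods are benchmarked against.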

    Analysis of Locally Coupled 3D Manipulation Mappings Based on Mobile Device Motion

    We examine a class of techniques for 3D object manipulation on mobile devices, in which the device's physical motion is applied to 3D objects displayed on the device itself. This "local coupling" between input and display creates specific challenges compared to manipulation techniques designed for monitor-based or immersive virtual environments. Our work focuses specifically on the mapping between device motion and object motion. We review existing manipulation techniques and introduce a formal description of the main mappings under a common notation. Based on this notation, we analyze these mappings and their properties in order to answer crucial usability questions. We first investigate how the 3D objects should move on the screen, since the screen also moves with the mobile device during manipulation. We then investigate the effects of a limited range of manipulation and present a number of solutions to overcome this constraint. This work provides a theoretical framework to better understand the properties of locally-coupled 3D manipulation mappings based on mobile device motion.
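    The kind of mapping analysed here can be illustrated with a small sketch: a locally coupled rotation mapping in which the device's per-frame rotation delta is applied to the object, optionally amplified by a gain to work around a limited physical range of motion. The quaternion helpers and the gain formulation below are illustrative assumptions, not the paper's actual notation.

```python
import math

# Minimal quaternion helpers, (w, x, y, z) convention.

def quat_from_axis_angle(axis, angle):
    """Unit quaternion rotating by `angle` radians about unit `axis`."""
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product a * b (apply b, then a)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def amplified_mapping(object_q, device_delta_q, gain):
    """Apply the device's per-frame rotation delta to the object.

    gain = 1 gives a 1:1 locally coupled mapping; gain > 1 lets a small
    physical rotation cover a larger virtual rotation, one possible way
    to compensate for a limited range of manipulation.
    """
    # Scale the delta's rotation angle by `gain`, keeping its axis.
    w, x, y, z = device_delta_q
    angle = 2 * math.acos(max(-1.0, min(1.0, w)))
    n = math.sqrt(x * x + y * y + z * z)
    if n < 1e-9:
        return object_q  # no rotation this frame
    axis = (x / n, y / n, z / n)
    scaled = quat_from_axis_angle(axis, gain * angle)
    return quat_mul(scaled, object_q)
```

    With this formulation, a 10-degree physical device rotation under a gain of 3 rotates the displayed object by 30 degrees about the same axis.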

    Sitting and standing performance in a total population of children with cerebral palsy: a cross-sectional study

    Background: Knowledge of sitting and standing performance in a total population of children with cerebral palsy (CP) is of interest for health care planning and for prediction of future ability in the individual child. In 1994, a register and a health care programme for children with CP in southern Sweden were initiated. In the programme, information on how the child usually sits, stands, stands up and sits down, together with the use of support or assistive devices, is recorded annually.
    Methods: A cross-sectional study was performed, analysing the most recent report on all children with CP born 1990-2005 and living in southern Sweden during 2008. All 562 children (326 boys, 236 girls) aged 3-18 years were included in the study. The degree of independence and the use of support or assistive devices to sit, stand, stand up and sit down were analysed in relation to the Gross Motor Function Classification System (GMFCS), CP subtype and age.
    Results: A majority of the children used standard chairs (57%), could stand independently (62%) and could stand up (62%) and sit down (63%) without external support. Adaptive seating was used by 42%; external support was used by 31% to stand, by 19% to stand up and by 18% to sit down. The use of adaptive seating and assistive devices increased with GMFCS level (p < 0.001), and there were differences between CP subtypes (p < 0.001). The use of support was more frequent in preschool children aged 3-6 (p < 0.001).
    Conclusion: About 60% of children with CP, aged 3-18, use standard chairs, stand, stand up and sit down without external support. Adding those using adaptive seating and external support, 99% of the children could sit, 96% could stand, 81% could stand up from a sitting position and 81% could sit down from a standing position. The GMFCS is a good predictor of sitting and standing performance.

    Effects of timber prices, ownership objectives, and owner characteristics on timber supply


    From movement to models: a tribute to professor Alan G. Hannam

    This tribute article to Professor Alan G. Hannam is based on 7 presentations given in his honor at the July 1, 2008 symposium celebrating 3 "giants" in orofacial neuroscience: Professors B. J. Sessle, J. P. Lund, and A. G. Hannam. This tribute to Hannam's outstanding career draws examples from his 40-year academic career and spans topics from human evolution to complex modeling of the craniomandibular system. The first presentation, by W. Hylander, provides a plausible answer to the functional and evolutionary significance of canine reduction in hominins. The second presentation, by A. McMillan, describes research activities in the field of healthy aging, including findings that intensity-modulated radiotherapy improves the health condition and quality of life of people with nasopharyngeal carcinoma in comparison to conventional radiotherapy. The developments in dental imaging are summarized in the third paper, by E. Lam, and an overview of bite force magnitude and direction during clenching is given in the fourth paper, by M. Watanabe. The last 3 contributions, by G. Langenbach, I. Stavness, and C. Peck, deal with bone remodeling as well as masticatory system modeling, which was Hannam's main research interest in recent years. These contributions show the considerable advancements made in the last decade under Hannam's drive, in particular the development of an interactive model comprising not only the masticatory system but also the upper airways. The article closes with a commentary from Professor Hannam.

    GyroSuite: General-Purpose Interactions for Handheld Perspective Corrected Displays

    Handheld Perspective-Corrected Displays (HPCDs) are physical objects that have a notable volume and that display a virtual 3D scene on their entire surface. Being handheld, they create the illusion of holding the scene in a physical container (the display). This has strong benefits for the intuitiveness of 3D interaction: manipulating objects of the virtual scene amounts to physically manipulating the display. HPCDs have so far been limited to technical demonstrators and experimental tools built to assess their merits. However, they show great potential as interactive systems for actual 3D applications. This requires that novel interactions be created to go beyond object manipulation and to offer general-purpose services such as menu command selection and continuous parameter control. Working with a two-handed spherical HPCD, we report on the design and informal evaluations of various interaction techniques for distant object selection, scene scaling, menu interaction and continuous parameter control. In particular, our design leverages the efficient two-handed control of the rotations of the display. We demonstrate how some of these techniques can be assembled into a self-contained anatomy learning application. Novice participants used the application in a qualitative user experiment; most used it effortlessly without any training or explanations.