861 research outputs found

    An Eigenpoint Based Multiscale Method for Validating Quantitative Remote Sensing Products

    This letter first proposes the eigenpoint concept for quantitative remote sensing products (QRSPs) after discussing the eigenhomogeneity and eigenaccuracy of land surface variables. The eigenpoints are located according to the à trous wavelet planes of the QRSP. Based on these concepts, this letter proposes an eigenpoint-based multiscale method for validating QRSPs. The basic idea is that QRSPs at coarse scales are validated by validating their eigenpoints against the QRSP at a fine scale, and the fine-scale QRSP is in turn validated using observation data at the ground-based eigenpoints at the instrument scale. The ground-based eigenpoints derived from the forecasted QRSP can serve as observation positions when the satellites pass over the study area. Experimental results demonstrate that the proposed method saves manpower and time compared with the ideal scanning method, and that simultaneous observation at these eigenpoints is satisfactory in terms of efficiency and accuracy.
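The à trous wavelet planes mentioned above can be illustrated with a short sketch: each decomposition level smooths the image with a progressively dilated kernel, and the wavelet "plane" at that level is the difference between successive smoothings. The B3-spline kernel, periodic border handling, and level count here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# B3-spline kernel commonly used for the à trous (stationary) transform.
B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def atrous_smooth(img, level):
    """Separable smoothing with taps dilated by 2**level (periodic border)."""
    step = 2 ** level
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k, w in enumerate(B3):
            acc += w * np.roll(out, (k - 2) * step, axis=axis)
        out = acc
    return out

def atrous_planes(img, n_levels=3):
    """Return wavelet planes w_1..w_n and the final smooth residual."""
    planes, current = [], img.astype(float)
    for j in range(n_levels):
        smooth = atrous_smooth(current, j)
        planes.append(current - smooth)  # detail at scale 2**j
        current = smooth
    return planes, current
```

By construction the planes telescope, so the original image is recovered exactly as the sum of all planes plus the residual.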

    YOLO-FaceV2: A Scale and Occlusion Aware Face Detector

    In recent years, face detection algorithms based on deep learning have made great progress. These algorithms can generally be divided into two categories: two-stage detectors such as Faster R-CNN and one-stage detectors such as YOLO. Because of their better balance between accuracy and speed, one-stage detectors have been widely used in many applications. In this paper, we propose a real-time face detector based on the one-stage detector YOLOv5, named YOLO-FaceV2. We design a Receptive Field Enhancement (RFE) module to enlarge the receptive field for small faces, and use the NWD loss to compensate for the sensitivity of IoU to the location deviation of tiny objects. For face occlusion, we present an attention module named SEAM and introduce the Repulsion Loss to address it. Moreover, we use a weighting function, Slide, to mitigate the imbalance between easy and hard samples, and use information about the effective receptive field to design the anchors. Experimental results on the WiderFace dataset show that our face detector outperforms YOLO and its variants on all of the easy, medium, and hard subsets. Source code is available at https://github.com/Krasjet-Yu/YOLO-FaceV
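The NWD loss the abstract refers to builds on the Normalized Wasserstein Distance idea: model each box as a 2-D Gaussian and compare boxes via the Wasserstein-2 distance, which remains informative even when tiny boxes have zero IoU. A hedged sketch of that similarity measure follows; the normalizing constant `C` is a dataset-dependent assumption here, not a value taken from this paper.

```python
import math

def nwd(box_a, box_b, C=12.8):
    """Normalized Wasserstein similarity between boxes (cx, cy, w, h).

    Each box is modeled as a Gaussian N([cx, cy], diag(w/2, h/2)^2); the
    squared W2 distance between such Gaussians has a closed form.
    Returns a value in (0, 1], with 1 for identical boxes.
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w2_sq = ((ax - bx) ** 2 + (ay - by) ** 2
             + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / C)
```

Unlike IoU, this stays smoothly nonzero for disjoint boxes, which is why it is less sensitive to small location deviations of tiny objects.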

    DifferSketching: How Differently Do People Sketch 3D Objects?

    Multiple sketch datasets have been proposed to understand how people draw 3D objects. However, such datasets are often of small scale and cover a small set of objects or categories. In addition, these datasets contain freehand sketches mostly from expert users, making it difficult to compare the drawings of expert and novice users, while such comparisons are critical in informing more effective sketch-based interfaces for either user group. These observations motivate us to analyze how differently people with and without adequate drawing skills sketch 3D objects. We invited 70 novice users and 38 expert users to sketch 136 3D objects, which were presented as 362 images rendered from multiple views. This leads to a new dataset of 3,620 freehand multi-view sketches, which are registered with their corresponding 3D objects under certain views. Our dataset is an order of magnitude larger than existing datasets. We analyze the collected data at three levels, i.e., sketch-level, stroke-level, and pixel-level, under both spatial and temporal characteristics, and within and across groups of creators. We found that the drawings by professionals and novices show significant differences at the stroke level, both intrinsically and extrinsically. We demonstrate the usefulness of our dataset in two applications: (i) freehand-style sketch synthesis, and (ii) posing it as a potential benchmark for sketch-based 3D reconstruction. Our dataset and code are available at https://chufengxiao.github.io/DifferSketching/. Comment: SIGGRAPH Asia 2022 (Journal Track).

    Modeling and control of a bedside cable-driven lower-limb rehabilitation robot for bedridden individuals

    Individuals with acute neurological or limb-related disorders may be temporarily bedridden and unable to visit physical therapy departments. Because space in inpatient wards is limited, rehabilitation training for these patients can only be performed manually by therapists. This paper proposes a bedside cable-driven lower-limb rehabilitation robot based on sling exercise therapy theory. The robot can actively drive hip and knee motion at the bedside using flexible cables linked to the knee and ankle joints. A human–cable coupling controller was designed to improve the stability of the human–machine coupling system. The controller dynamically adjusts the impedance coefficient of the cable driving force based on impedance identification of the human lower-limb joints, thus realizing stable motion of the human body. Experiments with five participants showed that the cable-driven rehabilitation robot effectively improved the maximum flexion of the hip and knee joints, reaching 85° and 90°, respectively. The mean annulus width of the knee joint trajectory was reduced by 63.84%, and the mean oscillation of the ankle joint was decreased by 56.47%, demonstrating that human joint impedance identification for cable-driven control can effectively stabilize the motion of the human–cable coupling system.
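The impedance-adaptive force law described above can be sketched as a virtual spring-damper whose gains are scaled by the identified joint stiffness, so a stiffer subject is tracked with a stiffer controller. All gains, names, and the adaptation rule below are illustrative assumptions, not the paper's actual controller.

```python
def cable_force(x, x_ref, v, v_ref, k_joint,
                k_base=50.0, b_base=5.0, alpha=0.5):
    """Commanded cable force along the cable axis (illustrative).

    x, v         : measured joint position (m) and velocity (m/s)
    x_ref, v_ref : reference trajectory
    k_joint      : identified human joint stiffness (N/m), used to
                   adapt the virtual spring-damper gains
    """
    k = k_base + alpha * k_joint          # stiffness adaptation
    b = b_base + alpha * 0.1 * k_joint    # damping adaptation
    return k * (x_ref - x) + b * (v_ref - v)
```

With zero tracking error the commanded force is zero, and a positive position error pulls the limb toward the reference, which is the stabilizing behavior the abstract attributes to the impedance adjustment.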